
> To reiterate: scanning your device is not a privacy risk, but copying files from your device without any notice is definitely a privacy issue.

Not a lawyer, but I believe this part about legality is inaccurate, because they aren’t copying your photos without notice. The feature is not harvesting suspect photos from a device, it is attaching data to all photos before they are uploaded to Apple’s servers. If you’re not using iCloud Photos, the feature will not be activated. Furthermore, they’re not knowingly transferring CSAM, because the system is designed only to notify them when a certain “threshold” of suspect images has been crossed.

In this way it's identical in practice to what Google and Facebook are already doing with photos that end up on their servers; Apple just runs the check before the upload instead of after. I certainly have reservations about their technique here, but this argument doesn't add up to me.



This basic implementation fact has been misrepresented over and over and over again. Does anyone read anymore? I’m starting to get really concerned. The hacker community is where I’ve turned to be more informed, away from the clickbait. But I’m being let down.


Agreed - so disappointing.

The idea that standard moderation steps are a felony is such a stretch. Almost all the major players have folks doing content screening and management - and yes, this may involve the provider transmitting, copying, etc., images that are then flagged and moderated away.

The idea that this is a felony is ridiculous.

The other piece is that folks are making a lot of assumptions about how this works, then claiming things are felonies.

Does it not strain credibility slightly that Apple, with its team of lawyers, has decided to commit CSAM felonies instead of blocking CSAM? And the government is going to bust them for this? Really? They are doing what the government wants and using automation to drive down both the number of images someone will look at and what might even get transferred to Apple's servers in the first place.


Does the law have a moderation carve out? There are plenty of laws that have what's called 'strict liability' where your intent doesn't matter.

I'm not suggesting that this is absolutely positively a situation where strict liability exists and that moderation isn't allowed. But the idea that "hey, we're trying to do the right thing here" will be honored in court is... not obvious.


If we investigated this author, and they do in fact run a photo service, we would inevitably find that, unless they are incompetent, they have to moderate content, either blind or based on flags.

So if Apple is going to jail for child porn because they moderate and report content after flagging (reporting is normally actually required), then this article's writer should be going to jail as well - I guarantee his service stores, forwards, and otherwise handles CSAM content.

My complaint is just that HN used to focus on stuff where folks didn't always jump to worst-case arguments (i.e., Apple is guilty of child porn and is committing felonies) without at least allowing that Apple MAY have given this a tiny bit of thought.

It's just tiresome to wade through. It's a mashup of claims, ranging from "they are blocking too much" and "they are the evil government's henchperson" to "they are breaking the law and going to jail on felony child porn charges."

I get that it generates interaction (here I am), but it's annoying after a while. Clickbait sells though, no question, so things like "One Bad Apple" are probably going to keep coming at us.


Well, there is a difference. As stated in the article, the author doesn't expect to see CP or CSAM: "We are not 'knowingly' seeing it since it makes up less than 0.06% of the uploads ... We do not intentionally look for CP."

Whereas Apple is moderating the suspected images, so they intentionally look for CP (which, according to the author and his lawyer, is a crime).


This is such a pathetic interpretation. All flagging systems (which is how moderation works - Facebook does not manually review every photo posted) alert the company that there may be a problem. Moderators do their thing. They EXPECT to see bad content based on these flags. Smaller places may act on complaints.

The idea that this makes them guilty of felony child porn charges is so ridiculous and offensive.

Facebook (with Insta) alone is dealing with 20 million photos a year -

https://www.businessinsider.com/facebook-instagram-report-20...

This lawyer is an absolute idiot.

How about we ask the actual folks involved in this (NCMEC) what they think about Apple's "felonies"? Maybe they have some experts?

Oh wait - the folks actually dealing with this, the people who have to handle all this crap, are writing letters THANKING Apple for helping reduce the spread of it.

So - we have a big company like Apple (with a ton of folks looking at this sort of thing). We have the National Center for Missing and Exploited Children looking at this. And we are being told - by some guy who will not even name the attorney and law firm reaching this opinion - that Apple is committing child porn felonies.

Does no one see how making these types of horribly supported explosive claims just trashes discourse? Apple are child pornographers! So and so is horrible for X.

Can folks dial it back a TINY bit - or is the outrage factory the only thing running these days?


Yeah, as usual I'm worried that the people who claim others don't read are the ones not reading (or not comprehending) what the author is trying to say. To me it seems like moderation in general is fine. What Apple is doing here is that after they receive a flag that a certain threshold has been crossed, they manually review the material. The author states that no one should do that, i.e., the law explicitly prohibits anyone from even trying to verify. If you suspect CP, you have to forward it to NCMEC and be done with it.

I 100% understand why Apple doesn't want automatic forwarding - they're clearly worried about false positives. I also think Apple has competent lawyers. It's entirely possible that the author's and their lawyer's interpretation is wrong.

Point is - the author isn't trying to say moderation is illegal.


The whole thing rests on whether Apple knows that the content is CSAM or not. And they don’t. The author gets this fundamentally wrong. They do not know whether it is a match or not when the voucher is created. The process does, but they don’t. They know when the system detects a threshold number of matches in the account, and they can then verify the matches.

Additionally, we already know they consulted with NCMEC on this because of the internal memos that leaked the other day, both from Apple leadership and a letter NCMEC sent congratulating them on their new system. If you think they haven’t evaluated the legality of what they’re doing, you’re just wrong.


What does "manual review" mean then and how are those images reported?


Before: you would upload images to iCloud Photos. Apple can access your images in iCloud Photos, but it does not.

Now: You upload images to iCloud Photos. When doing so, your device also uploads a separate safety voucher for the image. If there are enough vouchers for CSAM matched images in your library, Apple gains the ability to access the data in the vouchers for images matching CSAM. One of the data elements in the voucher is an “image derivative” (probably a thumbnail) which is manually reviewed. If the image derivative also looks like CSAM, Apple files a report with NCMEC’s CyberTip line. Apple can (for now) access the image you stored in iCloud, but it does not. All the data it needs is in the safety voucher.

A lot of words have been spilled on this topic, yet I'd be surprised if a majority of people are even aware of these basic facts about how the system operates.


Thank you for this explanation. Much more helpful than any of the lengthy articles I've read to date.

I think Apple has botched the rollout of this change by failing to explain clearly how it works. As a result, rumors and misunderstandings have proliferated instead.


I'm not sure your "before" is entirely correct. Apple has admitted to scanning iCloud photos, so they are already accessing them at some point.

https://digit.fyi/apple-admits-scanning-photos-uploaded-to-i...


The "before" is entirely correct. Only iCloud Mail was previously scanned for CSAM. As a sanity check: it's not plausible that Apple generated only O(100) referrals to CyberTip annually if it were scanning all of iCloud Photos. Other services of similar scale generate O(1M) referrals.


> access the data in the vouchers for images matching CSAM. One of the data elements in the voucher is an “image derivative” (probably a thumbnail)

So the author of the article is technically correct: Apple intentionally uploads CP to their servers for manual review, which is explicitly forbidden by law.

He even describes the issue with thumbnails.


It is exceedingly unlikely that a system developed with NCMEC’s support and a Fortune 5 legal team somehow fails to comply with the most obviously relevant laws.


I'd say: A trillion-dollar company and a government agency can do whatever they feel like, and laws be damned :)


As I understand it:

When you choose to upload your images to iCloud (which currently happens without end-to-end encryption), your phone generates some form of encrypted ticket. In the future, the images will be encrypted, with a backdoor key encoded in the tickets.

If Apple receives enough images that were considered a match, the tickets become decryptable (I think I saw Shamir's Secret Sharing mentioned for this step). Right now, Apple doesn't need that because they have the unencrypted images; in a future scheme, decrypting these tickets would allow them to decrypt your images.

(I've simplified a bit, I believe there's a second layer that they claim will only give them access to the offending images. I have not studied their approach deeply.)
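
For anyone curious, here is a minimal toy of that threshold idea (Shamir-style t-of-n secret sharing over a prime field) in Python. Purely illustrative - real systems use vetted libraries and a more elaborate construction:

  # A key becomes recoverable only once `threshold` shares (matches) exist.
  import random

  P = 2**127 - 1  # a Mersenne prime; fine for a toy field

  def make_shares(secret: int, threshold: int, n: int) -> list[tuple[int, int]]:
      # Random polynomial of degree threshold-1 with the secret as constant term.
      coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
      f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
      return [(x, f(x)) for x in range(1, n + 1)]

  def recover(shares: list[tuple[int, int]]) -> int:
      # Lagrange interpolation at x = 0 yields the constant term (the secret).
      secret = 0
      for i, (xi, yi) in enumerate(shares):
          num = den = 1
          for j, (xj, _) in enumerate(shares):
              if i != j:
                  num = num * -xj % P
                  den = den * (xi - xj) % P
          secret = (secret + yi * num * pow(den, -1, P)) % P
      return secret

  key = 123456789
  shares = make_shares(key, threshold=10, n=50)  # one share per matching voucher
  assert recover(shares[:10]) == key             # ten shares suffice
  assert recover(shares[:9]) != key              # nine reveal nothing useful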


These are not “claims.” The process by which they get access to only the safety vouchers for images matching CSAM is private set intersection and comes with a cryptographic proof.

In no step of the proposal does Apple access the images you store in iCloud. All access is through the associated data in the safety voucher. This design allows Apple to switch iCloud storage to end to end encrypted with no protocol changes.
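
For reference, the PSI primitive itself is easy to demo. A toy Diffie-Hellman-style PSI in Python - illustrative only, since in Apple's deployment the roles are reversed (the server, not the client, learns matches) and the vouchers carry encrypted payloads:

  # Each side learns only the intersection, never the other's full set.
  import hashlib, secrets

  P = 2**127 - 1  # toy prime modulus; real protocols use elliptic-curve groups

  def h(item: str) -> int:
      return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

  client_items = ["img_aaa", "img_bbb"]   # kept in order to map results back
  server_items = {"img_bbb", "img_ccc"}

  a = secrets.randbelow(P - 2) + 1  # client's secret exponent
  b = secrets.randbelow(P - 2) + 1  # server's secret exponent

  to_server = [pow(h(x), a, P) for x in client_items]       # client -> server
  doubled = [pow(c, b, P) for c in to_server]               # server -> client
  server_blinded = {pow(h(y), b, P) for y in server_items}  # server -> client

  # h(x)^(ab) == h(y)^(ba) exactly when the underlying items match.
  server_doubled = {pow(s, a, P) for s in server_blinded}
  matches = [x for x, d in zip(client_items, doubled) if d in server_doubled]
  print(matches)  # ['img_bbb']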


The private set intersection is part of the protocol to shield Apple (and their database providers) from accountability, not to protect the user's privacy.

They could instead send the list of hashes to the device (which they already must trust to faithfully compute the local hash) and just let the device report when there are hits. It would be much more CPU- and bandwidth-efficient, too.

The PSI serves the purpose that, if Apple starts sending out hashes for popular lawful images connected to particular religions, ethnicities, or political ideologies, it is information-theoretically impossible for anyone to detect the abuse. It also makes it impossible to tell whether different users are being tested against different lists, e.g. whether Thai users are being tested against political cartoons that insult the king.
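
A minimal sketch of that alternative in Python, assuming a published, on-device hash list (all names hypothetical). Cheaper and auditable, at the cost of exposing the hash database to clients:

  import hashlib

  PUBLISHED_HASH_LIST = {"deadbeef", "cafebabe"}  # hypothetical, inspectable on-device

  def perceptual_hash(image_bytes: bytes) -> str:
      return hashlib.sha256(image_bytes).hexdigest()[:8]  # toy stand-in for NeuralHash

  def device_scan(photo_library: list[bytes]) -> list[str]:
      # The device does all the matching and reports only the hits.
      return [perceptual_hash(img) for img in photo_library
              if perceptual_hash(img) in PUBLISHED_HASH_LIST]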


The list of hashes is confidential. Good luck getting NCMEC to sign off on an implementation which lets clients infer which photos are matching their database.

The database is embedded into iOS. There are at least three primary sources saying that users will not receive different databases, and that should be easy to confirm.


I am well aware, but that is exactly the point. If Apple can't provide an accountable implementation, they should not implement this at all. That should be table stakes that all users demand, at a minimum.

Otherwise there is no way to detect if the system is abused to target lawful activities.

The fancy crypto in the system isn't there to protect the user, it's to guard the system's implementer(s) against accountability. It protects Apple's privacy, not yours.


What good is end-to-end encryption if the OS ships with a method of breaking that encryption? This is a backdoor by definition, and you're back to trusting Apple's goodwill (and/or willingness to resist governments) to keep your data safe (i.e., not to add new decryption triggers).

Not having backdoors is a hard requirement for end-to-end encryption to offer privacy guarantees.


This is taking the discussion into the realm of the hypothetical. If we end up in a world where there are reliable public cloud providers offering end-to-end encryption with no content scanning whatsoever, I'll be glad to give them my money.


There's a weird meme in the hacker community that law enforcement has no right to use any means to enforce the law and can only bust people who turn themselves in.


Legality aside – how is this not a privacy risk? Privileged users of the infrastructure can gain information about users (whether they possess CSAM that's in the hash-database... for now).


Presumably the reviewers would not know the identity of the user whose photos are under review, as they have no need to.


Unless you're a public figure or celebrity. If I were, I wouldn't use iCloud photos, but that's not exactly how Apple markets their photo service.


... but the link between user and photo obviously exists somewhere in Apple's system.


> they’re not knowingly transferring CSAM, because the system is designed only to notify them when a certain “threshold” of suspect images has been crossed

And when they’re notified, Apple manually checks (a modified but legible version of) the images.



