Apple to review iPhone photos stored in iCloud to fight child abuse

Apple has announced a new system that aims to help combat child abuse by analyzing iPhone photos stored in iCloud. The feature is one of several child-protection efforts the company is introducing, and will use trained algorithms to identify Child Sexual Abuse Material (which the company calls "CSAM"): in practice, content depicting sexually explicit activity involving children.

iPhone 12 Pro (Image: Alwin Kroon/Unsplash)

Apple says system preserves user privacy

The company claims the process is secure and designed to preserve users' privacy. According to Apple, the system learns nothing about images that do not match the CSAM database provided by the US National Center for Missing and Exploited Children (NCMEC).

In addition, the company says that "the risk of the system incorrectly flagging an account is extremely low": fewer than 1 in 1 trillion accounts per year would be flagged incorrectly. Even so, Apple will manually review every alert before it becomes a report to the authorities, in order to guard against errors by the AI.

Users do not have access to the CSAM database, nor are they notified when content is flagged as CSAM by Apple's system.

Apple system intends to analyze iPhone photos saved in iCloud (Image: Playback/Apple)

How does the system work?

According to Apple's explanation, in practical terms, before an image is stored in iCloud it goes through a matching process on the device itself. The process looks for patterns and compares them against an unreadable set of hashes of known CSAM content, all protected by encryption. Images saved only locally are not checked.

This way, Apple can identify whether an image's hash matches a hash in the database without learning anything else about the content.
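The matching idea can be illustrated with a deliberately simplified sketch. Apple's actual system uses its NeuralHash perceptual hash and cryptographic private set intersection, which are far more sophisticated; the toy `hash_image` function and the `known_bad_hashes` set below are purely illustrative stand-ins.

```python
def hash_image(pixels):
    """Toy perceptual hash: one bit per pixel, set if above mean brightness."""
    avg = sum(pixels) / len(pixels)
    return tuple(1 if p > avg else 0 for p in pixels)

# Hypothetical database of known-bad hashes (stand-in for the NCMEC-derived set).
known_bad_hashes = {hash_image([10, 200, 30, 220])}

def matches_database(pixels):
    """On-device check: does this image's hash appear in the database?"""
    return hash_image(pixels) in known_bad_hashes

print(matches_database([10, 200, 30, 220]))  # True: same image, same hash
print(matches_database([90, 91, 92, 93]))    # False: unrelated image
```

Note that the check only reveals membership in the hash set; nothing else about a non-matching image is learned, which is the property Apple claims for the real system.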

If the process finds a match, the device encodes the positive result. These results cannot be interpreted by Apple until the iCloud account exceeds an established threshold of positive matches; the company did not disclose the number.
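The threshold idea can be sketched as follows. In the real system, threshold secret sharing ensures Apple cryptographically cannot read any result below the limit; here that is reduced to a plain counter, and the `THRESHOLD` value is invented, since Apple did not disclose the real one.

```python
THRESHOLD = 30  # illustrative value only; Apple has not disclosed the real number

def account_reviewable(positive_match_count):
    """Match results only become interpretable once the threshold is exceeded."""
    return positive_match_count > THRESHOLD

print(account_reviewable(5))   # False: below threshold, nothing is readable
print(account_reviewable(31))  # True: threshold exceeded, manual review begins
```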

Only when multiple matches are identified is a report issued (Image: Playback/Apple)

Once the limit is reached, Apple manually reviews the report to confirm the result and may block the user's account, also sending a report to the National Center for Missing and Exploited Children.

Problems with the type of system adopted by Apple

A tool that scans all the content you store in iCloud Photos, even with a promise of privacy, raises several questions and criticisms.

First, the mere possibility of the feature coming into practice raises the question: what if it were not Apple, but a government authority, behind the system? What guarantees that this will not become a door to something far more ambitious and complex, which, in the wrong hands, could even lead to the persecution of people?

As cryptography expert Matthew Green points out on Twitter, even if Apple makes "good use" of the tool, the database the company relies on cannot be verified by consumers; there is no way to be sure which database is actually in place. It is also worth remembering that Apple reportedly gave up on end-to-end encryption for iCloud backups under pressure from the FBI, which casts doubt on how far the company's commitment to privacy goes.

In addition, the system can report false positives: precisely to achieve greater efficiency, the parameters used in the analysis do not work as an immutable, exact code for each image. The system reads and interprets different signals to detect potentially harmful content even after it has been altered, which also means there is a chance of perfectly "clean" content being classified as abusive by mistake.

Finally, there is an even more troubling issue: the system can be exploited by malicious actors to harm others without their knowledge. Because harmless files can produce hashes that match entries in Apple's CSAM database, someone could deliberately craft a file that looks completely innocuous when viewed, but carries a problematic hash.
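The collision risk comes from perceptual hashes being coarse by design. A minimal sketch, reusing the same toy `hash_image` function as before (not Apple's NeuralHash), shows two visibly different images whose pixels sit on the same side of their own average and therefore share a hash:

```python
def hash_image(pixels):
    """Toy perceptual hash: one bit per pixel, set if above mean brightness."""
    avg = sum(pixels) / len(pixels)
    return tuple(1 if p > avg else 0 for p in pixels)

dark_image   = [10, 200, 30, 220]    # a high-contrast dark/light pattern
bright_image = [100, 255, 120, 250]  # different pixel values, same layout

# Both reduce to the same bit pattern: a collision between distinct images.
print(hash_image(dark_image) == hash_image(bright_image))  # True
```

Real perceptual hashes have vastly larger output spaces, but the same principle applies: because they tolerate visual changes, deliberately engineered collisions are harder to rule out than with exact cryptographic hashes.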

Sharing such a file could cause a person or a group of people to be flagged by Apple's tool, which, past a threshold we do not know, could lead to account suspension and considerable inconvenience.

For now, the company has not commented on the criticisms. In light of the new features, it remains to be seen how Apple intends to address the concerns that have been raised.

Initially, the system reaches users in the United States, but the company has plans to expand the feature to other countries.

With information: Apple, 9To5Mac.
