Unless you’ve been living under a rock this week, you have probably seen the countless ‘Old Person’ selfies popping up in your social media feeds. Everyone from friends to celebrities has been using FaceApp, the viral AI photo editor, to manipulate photos to make people look older, younger, or altered in a multitude of other ways, and sharing the often hilarious results on social media. This is not the first time the app has gone viral, but this time there has been rising skepticism about its legitimacy and the way in which it operates. So what are these concerns, and should we be worried?
One of the biggest concerns making the rounds on social media is the claim that the app can access and download the full device camera roll in the background, without the user’s knowledge or permission. This would be a major risk to user privacy and data security, given the amount of sensitive information, like passwords, addresses, and banking details, that users screenshot or photograph on a daily basis. Users have also been worried by the fact that selected photos are uploaded to an external server, rather than the effects being applied on the device itself. Concerns were raised, especially after it was discovered that the app was developed in Russia, that photos were being sent to an unknown cloud location without it being made clear to users that processing would not occur locally on the device. Users also noticed that FaceApp allows you to pick photos to upload even when photo access permissions have not been granted to the app.

A clause in the app’s Terms and Conditions is also causing concern, where it states: “You grant FaceApp a perpetual, irrevocable, non-exclusive, royalty-free, worldwide, fully-paid, transferable, sublicensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content…”.
If you use #FaceApp you are giving them a license to use your photos, your name, your username, and your likeness for any purpose including commercial purposes (like on a billboard or internet ad) — see their Terms: https://t.co/e0sTgzowoN pic.twitter.com/XzYxRdXZ9q
— Elizabeth Potts Weinstein (@ElizabethPW) July 17, 2019
In response to these concerns, FaceApp has issued a statement explaining that while it does in fact process all photos in the cloud rather than locally on the device, only photos that have been individually selected by the user are uploaded and edited. The app does not have access to the entire photo library of a device, nor is any user data shared with third parties or transferred to Russia, despite that country being home to the core R&D team. The company also confirmed that photos are stored on its servers for up to 48 hours, though users can request (rather tediously) via the app that their data be removed from those servers. FaceApp further pointed out that allowing users to pick individual photos, even without full permission to access the photo library, is explicitly permitted by an Apple API introduced in iOS 11. According to Apple, the act of tapping on a photo to select it demonstrates ‘user intent’ and is therefore treated as consent to access that particular photo. FaceApp has also now added a disclaimer to the app warning users that any images selected will be uploaded to the cloud for processing.
Following these allegations and the growing media storm around the app, US Senate Minority Leader Chuck Schumer has called for an investigation into FaceApp by the FBI and the Federal Trade Commission (FTC). In a letter posted on Twitter, the Senator called it ‘deeply troubling’ that personal data of US citizens could go to a “hostile foreign power”. Meanwhile, the UK’s Information Commissioner’s Office (ICO), which recently issued two record fines to multinational corporations Marriott and British Airways for data breaches and violations of the GDPR, has said it is considering these allegations of the misuse of personal data and is monitoring customer concerns. A spokesperson for the watchdog advised “people signing up to any app to check what will happen to their personal information and not to provide any personal details until they are clear about how they will be used”.
So should we be worried about FaceApp? Is it being used by Russia to collect personal data, or by AI labs to train facial recognition software? According to the company itself and to security researchers, it probably isn’t anything that extreme. However, that is not to say that we shouldn’t be cautious about how much access we allow to the personal information on our phones. Many apps and mobile services can access personal data such as the camera roll, SMS messages, and social media accounts, often without our knowledge, and while their intent may be innocent, this data can be used for any number of other purposes. It is essential to be aware of exactly what an app can access and where our data is being sent or processed. Mobile endpoint protection solutions, like Corrata, are also becoming increasingly necessary as the threat of excessive data collection becomes a reality.