Meta, the parent company of Facebook and Instagram, is testing a new feature that asks users to grant its AI access to their unpublished photo libraries. The move has sparked concerns about how Meta plans to use this expanded access to personal information, especially given the company’s history of training its generative AI models on public posts dating back to 2007.
The new feature prompts Facebook app users with pop-ups asking for permission to access their camera rolls for “cloud processing.” It uses AI to restyle photos, group images by themes such as birthdays or graduations, and generate personalized creative ideas like travel highlights and collages. While the notification assures users that the data will not be used for ad targeting, it raises questions about how much of users’ personal information Meta will be able to access.
Meta has denied using data from users’ photo libraries to train its AI models, but its AI Terms of Service state that personal information may be used to improve AI and related technology. By agreeing to those terms, users allow the company to analyze media and facial features to generate ideas based on factors such as date, location, and the presence of people or objects.
The company has emphasized that the new feature is opt-in only and can be turned off at any time. While camera roll media may be used to improve content suggestions, it will not be used to train AI models in this test. However, Meta has not said whether the feature will be extended to other platforms, when it will roll out more broadly, or whether data from unpublished images will be used to train AI in the future.
Notably, Meta’s practices differ between the United States and Europe. In the US, users were not informed that their public posts would be used to train AI, and no opt-out is available. European users, by contrast, can opt out of Meta’s data scraping thanks to the region’s stricter privacy laws.
Artists have raised concerns about the broader practice of training AI on publicly available images, since models can learn to replicate distinctive artistic styles. Some fear that this ability to mimic their work could threaten their livelihoods.
In conclusion, Meta’s new feature raises important questions about privacy and data usage, highlighting the ongoing debate surrounding AI training on personal information and its potential impact on individuals and industries.