Secret policy changes
Federal Trade Commission to Chase AI Firms Breaking User Privacy Commitments
The Federal Trade Commission has vowed to pursue artificial intelligence companies that secretly change their privacy policies to harvest user data.
The FTC’s privacy and identity protection unit wrote in a blog post on Tuesday that businesses that feed their AI models large quantities of user data risk breaking their privacy commitments, The Record reported.
The commission specified that it would go after companies adopting “permissive data practices,” referring to those that start using consumer data for AI training or sharing it with third parties while only notifying users through “surreptitious, retroactive” terms of service updates.
The FTC has historically pursued companies that break privacy promises made to consumers. The commission noted that it charged genetic testing company 1Health with failing to obtain consent before expanding the set of third parties with which it shared user data.
As part of the resulting settlement, 1Health agreed to instruct “third-party contract laboratories” to dispose of consumer DNA samples that had been stored for more than 180 days.
Other federal agencies are exploring ways to address AI-related privacy concerns. In December, the National Institute of Standards and Technology released draft guidance on techniques for safeguarding individual identities in datasets.