FTC Investigates OpenAI Over Data Policies, Misinformation

The Federal Trade Commission has opened a civil investigation into OpenAI to determine whether the company’s data policies harm consumers and to assess the potentially harmful effects of misinformation spread through “hallucinations” by its ChatGPT chatbot. The FTC sent OpenAI dozens of questions last week in a 20-page letter instructing the company to contact FTC counsel “as soon as possible to schedule a telephonic meeting within 14 days.” The questions cover everything from how the company trains its models to how it handles personal data.

Led by chair Lina Khan, the FTC oversees consumer protection law. The New York Times notes “the FTC is acting on AI with notable speed, opening an investigation less than a year after OpenAI introduced ChatGPT.”

Despite a swift and aggressive posture toward Big Tech, Khan just saw her agenda suffer a setback when a U.S. district court ruled in favor of Microsoft completing its Activision Blizzard acquisition, a decision Reuters reports the FTC is appealing to the U.S. Court of Appeals for the Ninth Circuit.

The Washington Post, which broke news of the OpenAI investigation Thursday, calls the investigation “expansive” and says it probes whether the maker of ChatGPT has run afoul of consumer protection laws “by putting personal reputations and data at risk.” As an opening gambit, the 20-page FTC demand letter “represents the most potent regulatory threat to date to OpenAI’s business in the United States, as the company goes on a global charm offensive to shape the future of artificial intelligence policy.”

Testifying Thursday at a House Judiciary Committee hearing, Khan said “ChatGPT and some of these other services are being fed a huge trove of data” but that “there are no checks on what type of data is being inserted,” according to NYT, which writes of “sensitive data showing up.”

The FTC investigation could result in OpenAI having to disclose training methods for its GPT models and the ChatGPT bot, including what data sources it uses. While NYT says OpenAI has “been fairly open about such information,” it adds that the company has gotten more secretive of late, “probably because it is wary of competitors copying it and has concerns about lawsuits over the use of certain data sets.”

The FTC letter says OpenAI must “provide detailed descriptions of all complaints it had received of its products making ‘false, misleading, disparaging or harmful’ statements about people” and asks for “records related to a security incident that the company disclosed in March when a bug in its systems allowed some users to see payment-related information, as well as some data from other users’ chat history,” WaPo reports.

OpenAI CEO Sam Altman has been vocal in calling for AI regulation, but he tweeted that the FTC opening a dialogue in this way is “disappointing” and “does not help build trust,” according to CNBC. While Altman “has mostly received a warm welcome in Washington,” CNBC writes, some AI experts warn that “the company has its own incentives in articulating its vision of regulation” and urge policymakers “to engage a diverse set of voices.”
