FTC Chairwoman Lina Khan testifies during the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce hearing on the "FY2024 Federal Trade Commission Budget," in Rayburn Building on Tuesday, April 18, 2023.
Tom Williams | CQ-Roll Call, Inc. | Getty Images
The Federal Trade Commission is on alert for the ways in which rapidly advancing artificial intelligence could be used to violate the antitrust and consumer protection laws it is charged with enforcing, Chair Lina Khan wrote in a New York Times op-ed on Wednesday.
“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Khan wrote, echoing a theme the agency shared in a joint statement with three other enforcers last week.
In the op-ed, Khan detailed several ways AI might be used to harm consumers or the market that she believes federal enforcers should be watching for. She also compared the current inflection point around AI to the earlier mid-2000s era in tech, when companies like Facebook and Google came along and forever changed communications, with substantial implications for data privacy that were not fully realized until years later.
“What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security,” Khan wrote.
But, she said, “The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.”
One potential harm enforcers should look out for, according to Khan, is the impact of only a handful of firms controlling the raw materials needed to deploy AI tools. That kind of control could allow dominant companies to leverage their power to exclude rivals, “picking winners and losers in ways that further entrench their dominance.”
Khan also warned that AI tools used to set prices “can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination.”
“The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition,” she wrote.
Khan also warned that generative AI “risks turbocharging fraud” by creating authentic-sounding messages. When it comes to scams and deceptive business practices, Khan said the FTC would look not only at “fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.”
Finally, Khan said that existing laws governing the improper collection or use of personal data will apply to the large datasets on which AI tools are trained, and that laws prohibiting discrimination could also apply in cases where AI is used to make decisions.