Amidst a campaign tinged by concerns about so-called “deep fakes,” the Federal Communications Commission is proposing a first-of-its-kind rule to mandate disclosure of AI-generated content in political ads, though it may not go into force before the election.

Regulators have been slow to grapple with the new technology, which allows people to use cheap and readily available artificial intelligence tools to impersonate others. FCC Chair Jessica Rosenworcel says disclosure is a critical — and perhaps just as importantly, doable — first step in regulating artificially created content.

“We spent the better part of the last year in Washington handwringing about artificial intelligence,” Rosenworcel said in an interview with NBC News. “Let’s do something more than handwringing and pearl clutch.”

The new rule would require TV and radio ads to disclose if they include AI-generated content, sidestepping, for now, the debate about whether that content should be banned outright. Existing laws prevent outright deception in TV ads.

“We don’t want to be in a position to render judgment; we simply want to disclose it so people can make their own decisions,” Rosenworcel said.

The move was inspired in part by the first-known deepfake in American national politics, a robocall impersonating President Joe Biden that told voters not to turn out in January’s New Hampshire primary. 

“We kicked into high gear because we want to set an example,” Rosenworcel said of the swift official response to the New Hampshire deep fake. 

The political consultant behind the deepfake robocall, who was outed by NBC News, is now facing a $6 million fine from the FCC and 26 criminal counts in New Hampshire courts. The U.S. Department of Justice on Monday threw its weight behind a private lawsuit brought by the League of Women Voters. 

The consultant, Steve Kramer, claimed he only made the ad to highlight the danger of AI and spur action.

Some political ads have already started using artificially generated content in both potentially deceptive and non-deceptive ways, and generic AI content is becoming more common in non-political consumer ads simply because it can be cheaper to produce.

Some social media companies have implemented bans on AI-created political ads. Congress has considered several bills. And about 20 states have adopted their own laws regulating artificial political content, according to the non-profit Public Citizen, which tracks the efforts.

But advocates say national policy is necessary to create a uniform framework. 

The social media platform X has not only not banned videos created with AI, but its billionaire owner, Elon Musk, has been one of their promoters. Over the weekend, he shared with his 192 million followers a doctored video made to look like a campaign ad for Vice President Kamala Harris.

The government does not regulate social media content, but the FCC has a long history of regulating political programming on TV and radio, including maintaining a database of political ad spending, with information that TV and radio stations are mandated to collect from ad buyers. The new rule would simply have broadcasters also ask ad buyers whether their spot was made with AI.

The Federal Election Commission, meanwhile, has been considering its own AI disclosure rules. The Republican chairman of the FEC wrote to FCC Chair Rosenworcel asking her agency to stand down, arguing his is the rightful regulator of campaign ads.

Rosenworcel brushed past the inter-agency squabbling, noting both agencies — along with the IRS and others — have played complementary roles in regulating political groups and spending for decades. The FCC also regulates a wider range of ads than the FEC, including so-called issue ads run by nonprofits that do not expressly call for the election or defeat of a candidate.

And advocates note the FEC has a difficult time doing much of anything because it is, by design, split evenly between Republicans and Democrats, making consensus rare.

“We’re barreling towards elections which may be distorted, or even decided, by political deepfakes. Yet this is an entirely avoidable dystopia, if regulators simply demand disclosures when AI is used,” said Robert Weissman, the co-president of Public Citizen, who said he hopes the FCC rule will be finalized and implemented “as soon as possible.”

Still, while Rosenworcel said the FCC is moving as quickly as possible, federal rulemaking is a deliberate process that requires clearing numerous hurdles and time for public input.

“There will be complicated questions down the road,” she said. “Now is the right time to start this conversation.”
