Nervous about falling behind the GOP, Democrats are wrestling with how to use AI

WASHINGTON — President Joe Biden’s campaign and Democratic candidates are in a fevered race with Republicans over who can best exploit the potential of artificial intelligence, a technology that could transform American elections — and perhaps threaten democracy itself.

Still smarting from being outmaneuvered on social media by Donald Trump and his allies in 2016, Democratic strategists said they are nevertheless treading carefully in embracing tools that trouble disinformation experts. So far, Democrats said they are primarily using AI to find and motivate voters and to better identify and counter deceptive content.

“Candidates and strategists are still trying to figure out how to use AI in their work. People know it can save them time — the most valuable resource a campaign has,” said Betsy Hoover, director of digital organizing for President Barack Obama’s 2012 campaign and co-founder of the progressive venture capital firm Higher Ground Labs. “But they see the risk of misinformation and have been intentional about where and how they use it in their work.”

Campaigns in both parties for years have used AI — powerful computer systems, software or processes that emulate aspects of human work and cognition — to collect and analyze data.

The recent developments in supercharged generative AI, however, have provided candidates and consultants with the ability to generate text and images, clone human voices and create video at unprecedented volume and speed.

That has led disinformation experts to issue increasingly dire warnings about the risks posed by AI’s ability to spread falsehoods that could suppress or mislead voters, or incite violence, whether in the form of robocalls, social media posts or fake images and video.

Those concerns gained urgency after high-profile incidents that included the spread of AI-generated images of former President Donald Trump getting arrested in New York and an AI-created robocall that mimicked Biden’s voice telling New Hampshire voters not to cast a ballot.

The Biden administration has sought to shape AI regulation through executive action, but Democrats overwhelmingly agree Congress needs to pass legislation to install safeguards around the technology.

Top tech companies have taken some steps to quell unease in Washington by announcing a commitment to regulate themselves. Major AI players, for example, entered into a pact to combat the use of AI-generated deepfakes around the world. But some experts said the voluntary effort is largely symbolic and congressional action is needed to prevent AI abuses.

Meanwhile, campaigns and their consultants have generally avoided talking about how they intend to use AI, both to avoid scrutiny and to keep from giving away trade secrets.

The Democratic Party has “gotten much better at just shutting up and doing the work and talking about it later,” said Jim Messina, a veteran Democratic strategist who managed Obama’s winning reelection campaign.

The Trump campaign said in a statement that it “uses a set of proprietary algorithmic tools, like many other campaigns across the country, to help deliver emails more efficiently and prevent sign up lists from being populated by false information.” Spokesman Steven Cheung also said the campaign did not “engage or utilize” any tools supplied by an AI company, and declined to comment further.

The Republican National Committee, which declined to comment, has experimented with generative AI. In the hours after Biden announced his reelection bid last year, the RNC released an ad using artificial intelligence-generated images to depict GOP dystopian fears of a second Biden term: China invading Taiwan, boarded up storefronts, troops lining U.S. city streets and migrants crossing the U.S. border.

A key Republican champion of AI is Brad Parscale, the digital consultant who in 2016 teamed up with scandal-plagued Cambridge Analytica, a British data-mining firm, to hyper-target social media users. Most strategists agree that the Trump campaign and other Republicans made better use of social media than Democrats during that cycle.

Scarred by the memories of 2016, the Biden campaign, Democratic candidates and progressives are wrestling with the power of artificial intelligence and nervous about not keeping up with the GOP in embracing the technology, according to interviews with consultants and strategists.

They want to use it in ways that maximize its capabilities without crossing ethical lines. But some said they fear using it could lead to charges of hypocrisy — they have long excoriated Trump and his allies for engaging in disinformation while the White House has prioritized reining in abuses associated with AI.

The Biden campaign said it is using AI to model and build audiences, draft and analyze email copy and generate content for volunteers to share in the field. The campaign is also testing AI’s ability to help volunteers categorize and analyze a host of data, including notes taken by volunteers after conversations with voters, whether while door-knocking or by phone or text message.

It has experimented with using AI to generate fundraising emails, which sometimes have turned out to be more effective than human-generated ones, according to a campaign official who spoke on the condition of anonymity because he was not authorized to publicly discuss AI.

Biden campaign officials said they plan to explore using generative AI this cycle but will adhere to strict rules in deploying it. Among the tactics that are off limits: AI cannot be used to mislead voters, spread disinformation and so-called deepfakes, or deliberately manipulate images. The campaign also forbids the use of AI-generated content in advertising, social media and other such copy without a staff member’s review.

The campaign’s legal team has created a task force of lawyers and outside experts to respond to misinformation and disinformation, with a focus on AI-generated images and videos. The group is not unlike an internal team formed in the 2020 campaign — known as the “Malarkey Factory,” playing off Biden’s oft-used phrase, “What a bunch of malarkey.”

That group was tasked with monitoring what misinformation was gaining traction online. Rob Flaherty, Biden’s deputy campaign manager, said those efforts would continue and suggested some AI tools could be used to combat deepfakes and other such content before they go viral.

“The tools that we’re going to use to mitigate the myths and the disinformation is the same, it’s just going to have to be at a higher pace,” Flaherty said. “It just means we need to be more vigilant, pay more attention, be monitoring things in different places and try some new tools out, but the fundamentals remain the same.”

The Democratic National Committee said it was an early adopter of Google AI and uses some of its features, including ones that analyze voter registration records to identify patterns of voter removals or additions. It has also experimented with AI to generate fundraising email text and to help interpret voter data it has collected for decades, according to the committee.

Arthur Thompson, the DNC’s chief technology officer, said the organization believes generative AI is an “incredibly important and impactful technology” to help elect Democrats up and down the ballot.

“At the same time, it’s essential that AI is deployed responsibly and to enhance the work of our trained staff, not replace them. We can and must do both, which is why we will continue to keep safeguards in place as we remain at the cutting edge,” he said.

Progressive groups and some Democratic candidates have been more aggressively experimenting with AI.

Higher Ground Labs — the venture capital firm co-founded by Hoover — established an innovation hub known as Progressive AI Lab with Zinc Collective and the Cooperative Impact Lab, two political tech coalitions focused on boosting Democratic candidates.

The goal was to create an ecosystem where progressive groups could streamline innovation, organize AI research and swap information about large language models, Hoover said.

Higher Ground Labs, which also works closely with the Biden campaign and DNC, has since funded 14 innovation grants, hosted forums that allow organizations and vendors to showcase their tools and held dozens of AI training sessions.

More than 300 people attended an AI-focused conference the group held in January, Hoover said.

Jessica Alter, the co-founder and chair of Tech for Campaigns, a political nonprofit that uses data and digital marketing to fight extremism and help down-ballot Democrats, ran an AI-aided experiment across 14 campaigns in Virginia last year.

Emails written by AI, Alter said, brought in between three and four times more fundraising dollars per work hour compared with emails written by staff.

Alter said she is concerned that the party might be falling behind in AI because it is being too cautious.

“I understand the downsides of AI and we should address them,” Alter said. “But the biggest concern I have right now is that fear is dominating the conversation in the political arena and that is not leading to balanced conversations or helpful outcomes.”

Rep. Adam Schiff, the Democratic front-runner in California’s Senate race, is one of the few candidates who have been open about using AI. His campaign manager, Brad Elkins, said the campaign has been using AI to improve its efficiency. It has teamed up with Quiller, a company that received funding from Higher Ground Labs and developed a tool that drafts, analyzes and automates fundraising emails.

The Schiff campaign has also experimented with other generative AI tools. During a fundraising drive last May, Schiff shared online an AI-generated image of himself as a Jedi. The caption read, “The Force is all around us. It’s you. It’s us. It’s this grassroots team. #MayThe4thBeWithYou.”

The campaign faced blowback online but was transparent about the lighthearted deepfake, a disclosure Elkins said is an important guardrail for integrating the technology as it becomes more widely available and less costly.

“I am still searching for a way to ethically use AI-generated audio and video of a candidate that is sincere,” Elkins said, adding that it’s difficult to envision progress until there’s a willingness to regulate and legislate consequences for deceptive artificial intelligence.

The incident highlighted a challenge that all campaigns seem to be facing: even talking about AI can be treacherous.

“It’s really hard to tell the story of how generative AI is a net positive when so many bad actors — whether that’s robocalls, fake images or false video clips — are using the bad set of AI against us,” said a Democratic strategist close to the Biden campaign who was granted anonymity because he was not authorized to speak publicly. “How do you talk about the benefits of an AK-47?”

___

Associated Press writers Alan Suderman and Garance Burke contributed to this report.

___

This story is part of an Associated Press series, “The AI Campaign,” that explores the influence of artificial intelligence in the 2024 election cycle.

___

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
