Deep fakes are among the election-related problems the generative AI company expects to battle in the year ahead.

OpenAI is hoping to alleviate concerns about its technology’s influence on elections, as more than a third of the world’s population prepares to vote this year. Elections are scheduled in the United States, Pakistan, India, and South Africa, among other countries, as well as for the European Parliament.

“We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges,” OpenAI wrote Monday in a blog post. “They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

There’s been growing apprehension about the potential misuse of generative AI (genAI) tools to disrupt democratic processes, especially since Microsoft-backed OpenAI introduced ChatGPT in late 2022. The OpenAI tool is known for its human-like text generation. Another of the company’s tools, DALL-E, can generate highly realistic fabricated images, often referred to as “deep fakes.”

OpenAI gears up for elections

For its part, OpenAI said ChatGPT will redirect users to CanIVote.org for specific election-related queries. The company is also working to make AI-generated images more transparent: photos created with its DALL-E technology will carry a “cr” icon signaling that they are AI-generated.

The company also plans to enhance ChatGPT by integrating it with real-time global news reporting, including proper attribution and links. The news initiative expands an agreement reached last year with the German media conglomerate Axel Springer, under which ChatGPT users gain access to summarized versions of select global news content from Axel Springer’s media channels.
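The “cr” icon described above reflects the C2PA “Content Credentials” provenance standard, which embeds a signed manifest inside the image file itself. As a rough illustration of what a downstream tool might do, here is a minimal, hypothetical sketch that scans raw file bytes for the ASCII box identifier used by C2PA manifests. This is only a heuristic, not a validator: real verification requires a proper C2PA library and a check of the manifest’s cryptographic signatures.

```python
# Naive heuristic sketch (not a real C2PA validator): look for the
# ASCII identifier "c2pa" that appears in embedded Content Credentials
# manifests. Presence suggests, but does not prove, provenance metadata.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the raw bytes contain the C2PA box identifier."""
    return b"c2pa" in data

def check_file(path: str) -> bool:
    """Read an image file and apply the heuristic to its bytes."""
    with open(path, "rb") as f:
        return has_c2pa_marker(f.read())
```

A check like this can at best flag candidates for deeper inspection; common edits such as re-encoding or screenshotting strip the metadata entirely, which is one reason classifier-based detection (discussed below) is also being researched.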
In addition to those measures, the company is developing techniques to identify content created by DALL-E even after the images have been modified.

Growing concerns about mixing AI and politics

There’s no universal rule for how genAI should be used in politics. Last year, Meta declared it would prohibit political campaigns from using genAI tools in their advertising and mandate that politicians disclose any such use in their ads. Similarly, YouTube said all content creators must disclose whether their videos contain “realistic” but altered media, including media created with AI. Meanwhile, the US Federal Election Commission (FEC) is deliberating whether existing laws against “fraudulently misrepresenting other candidates or political parties” apply to AI-generated content. (A formal decision on the issue is pending.)

False and deceptive information has always been a factor in elections, said Lisa Schirch, the Richard G. Starmann Chair in Peace Studies at the University of Notre Dame. But genAI allows many more people to create ever more realistic false propaganda.

Dozens of countries have already set up cyberwarfare centers employing thousands of people to create false accounts, generate fraudulent posts, and spread false and deceptive information over social media, Schirch said. For example, two days before Slovakia’s election, a fake audio recording was released that appeared to capture a politician attempting to rig the election.

Like ‘gasoline…on the burning fire of political polarization’

“The problem isn’t just false information; it is that malignant actors can create emotional portrayals of candidates designed to generate anger and outrage,” Schirch added. “AI bots can scan through vast amounts of material online to make predictions about what type of political ads might be persuasive. In this sense, AI is gasoline thrown on the already burning fire of political polarization.
AI makes it easy to create material designed to maximize persuasion and manipulation of public opinion.”

Deep fakes and fabricated images draw many of the attention-grabbing headlines about genAI, said Peter Loge, director of the Project on Ethics in Political Communication at George Washington University. But the more significant threat comes from large language models (LLMs), which can instantly generate endless messages with similar content, flooding the world with fakes.

“LLMs and generative AI can swamp social media, comments sections, letters to the editor, emails to campaigns, and so on, with nonsense,” he added. “This has at least three effects: the first is an exponential rise in political nonsense, which could lead to even greater cynicism and allow candidates to disavow actual bad behavior by saying the claims were generated by a bot.

“We have entered a new era of, ‘Who are you going to believe, me, your lying eyes, or your computer’s lying LLM?’” Loge said.

Stronger protections needed ASAP

Current protections are not strong enough to prevent genAI from playing a role in this year’s elections, according to Gal Ringel, CEO of the cybersecurity firm Mine. Even if a nation’s infrastructure could deter or eliminate attacks, he said, the prevalence of genAI-created misinformation online could influence how people perceive a race and possibly affect the final results.

“Trust in society is at such a low point in America right now that the adoption of AI by bad actors could have a disproportionately strong effect, and there is really no quick fix for that beyond building a better and safer internet,” Ringel added.

Social media companies need to develop policies that reduce harm from AI-generated content while taking care to preserve legitimate discourse, said Kathleen M. Carley, a CyLab professor at Carnegie Mellon University. They could publicly verify election officials’ accounts using unique icons, for instance.
Companies should also restrict or prohibit ads that deny upcoming or ongoing election results, and they should label AI-generated election ads as such, increasing transparency.

“AI technologies are constantly evolving, and new safeguards are needed,” Carley added. “Also, AI could be used to help by identification of those spreading hate, identification of hate-speech, and by creating content that aids with voter education and critical thinking.”