Many people tried to use OpenAI's DALL-E image generator during the election season, but the company says it prevented the tool from being used to create deepfakes. According to a new report from OpenAI, ChatGPT rejected more than 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance, and Governor Walz. The company explained that this is a direct result of safety measures it had previously implemented to ensure ChatGPT refuses to generate images of real people, including politicians.
OpenAI has been preparing for the US presidential election since the beginning of the year. It developed a strategy to prevent its tools from being used to spread disinformation and ensured that people asking ChatGPT about voting in the US were directed to CanIVote.org. According to OpenAI, ChatGPT generated about 1 million responses directing people to that website in the month before Election Day. The chatbot also produced 2 million responses on Election Day and the day after, advising people who asked about the results to check the Associated Press, Reuters, and other news sources. OpenAI also confirmed that ChatGPT's responses "did not express political preferences or recommend candidates, even when explicitly asked to do so."
Of course, DALL-E is not the only AI-powered image generator, and plenty of election-related fakes have appeared on social media. In one of them, Kamala Harris appeared in a campaign video altered to make her say things she never actually said, such as: "I was elected because I am the best candidate for a position that reflects diversity."