The Rise of AI-Generated Images: Implications for Online Misinformation
Over the past week, AI-generated images have gone viral, ranging from the obviously fake to the compellingly real. They reflect a new online reality: a crop of buzzy AI tools has made it cheaper and easier than ever to create realistic images, along with fake audio and video. Such images are likely to pop up with increasing frequency on social media.
The recent viral AI-generated images have raised concerns that computer-generated media is further polluting the information ecosystem, making it even harder for users, news organizations, and social media platforms to vet what is real after years of grappling with online misinformation featuring far less sophisticated visuals. There are also concerns that AI-generated images could be used for harassment, or to drive already divided internet users further apart.
AI-generated text has recently proliferated thanks to tools like ChatGPT. But images can be especially powerful at provoking emotion, which makes it harder for viewers to slow down and evaluate whether what they’re looking at is real or fake. What’s more, coordinated bad actors could eventually attempt to create fake content in bulk to confuse internet users and provoke certain behaviors.
Computer-generated image technology has improved rapidly in recent years. Earlier fakes were cruder: the photoshopped image of a shark swimming down a flooded highway, repeatedly reshared during natural disasters, or the websites that began churning out mostly unconvincing photos of non-existent people about four years ago. Many of the recent viral AI-generated images were created with Midjourney, a platform less than a year old that lets users generate images from short text prompts.
Midjourney has quickly emerged as one of the most popular of these tools. Its latest version is available only to select paid users, and the free trial of its earlier versions has been paused due to extraordinary demand and trial abuse. The creators of the recent viral images don’t appear to have been acting malevolently: the Trump arrest images were created by the founder of the online investigative journalism outlet Bellingcat, who clearly labeled them as fabrications.
Platforms including Meta’s Facebook and Instagram, Twitter, and YouTube have policies restricting or prohibiting the sharing of manipulated media that could mislead users. AI companies and industry groups are also working to improve transparency around when a piece of content is computer-generated. Technical solutions under development would watermark an AI-generated image or embed a label in the image’s metadata, so that anyone viewing it across the internet would know it was created by a computer.
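To make the metadata approach concrete, here is a minimal sketch in Python using the Pillow imaging library. The tag names ("ai_generated", "generator") and the helper functions are illustrative assumptions, not part of any published standard; real industry efforts define far richer provenance records than this.

```python
# A minimal sketch of labeling an image as AI-generated via PNG metadata.
# The tag names below are hypothetical, chosen only for illustration.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy a PNG, embedding plain-text provenance labels in its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")            # hypothetical key
    metadata.add_text("generator", "example-image-model")  # hypothetical value
    image.save(dst_path, pnginfo=metadata)


def read_labels(path: str) -> dict:
    """Return the PNG's text metadata so a viewer could check for the label."""
    return Image.open(path).text


# Usage:
# label_as_ai_generated("render.png", "render_labeled.png")
# print(read_labels("render_labeled.png"))  # e.g. {'ai_generated': 'true', ...}
```

A plain metadata tag like this is easy to strip when an image is re-encoded or screenshotted, which is why researchers are also pursuing watermarks embedded in the pixels themselves.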
A group of tech leaders this week published an open letter calling on artificial intelligence labs to pause the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.” It’s not clear whether any labs will take that step, but as the technology rapidly improves and becomes accessible beyond a relatively small group of corporations committed to responsible practices, lawmakers may need to get involved. Legislation will probably play a role in reining in some of the more radically open models.
In conclusion, AI-generated images raise new concerns and challenges for tech companies, regulators, and lawmakers. Efforts to improve transparency and to develop safety practices for synthetic media need to accelerate, and the potential harms and risks of this new age of AI need to be addressed proactively. A radically open approach, in which AI tools are open-sourced or used with minimal restrictions, is going to lead to a massive scaling of harm, and legislation will play a crucial role in reining in such models.