Easter Eggs of AI: Meme References, Duplicates, Biases, and Other AI Hallucinations, and Why They Happen

As AI continues to evolve at a dizzying pace, it still produces quirks that reveal its involvement. We’ve already explored how to spot AI-generated images and shared some of the bizarre ones we’ve found. Now we’ll continue exploring this topic and delve deeper into the logic behind the strange hallucinations and artifacts that add a layer of surprise to interactions with AI. Here are some of the quirkiest AI hallucination examples and unexpected results we have come across while working with AI-generated images and text, along with explanations of the possible reasons behind them.

The concept of AI hallucination is becoming increasingly relevant as generative AI algorithms become more sophisticated. This trend is evidenced by the Cambridge Dictionary’s selection of “hallucinate” as its Word of the Year for 2023. This decision highlights the profound impact of GenAI on language. Some existing words, including “hallucinate,” have taken on additional meanings in the context of AI, demonstrating how this technology is reshaping our linguistic landscape. According to our research, the use of the term “hallucinate” has surged, reflecting the rapid advances in AI capabilities.

When we talk about AI hallucinations, we refer to instances where AI systems generate outputs that are factually incorrect or misleading, yet present them with a semblance of accuracy and confidence. These hallucinations can range from minor errors to significant misinformation:

Unexpected Bias

One of the more thought-provoking quirks involves AI’s handling of race and professions. Often, when generating images related to low-paid jobs, AI models disproportionately depict people of Asian or Black descent. This unintended output reflects underlying biases in the training data, a reminder of the importance of diverse and balanced datasets in AI development.

At the opposite extreme, Gemini once generated ancient Romans as Black individuals, flipping historical accuracy on its head. The issue gained widespread attention, prompting numerous discussions and eventually leading the company to pause image generation of people in Gemini.

Proportional Puzzlements

AI algorithms still struggle with proportions. This can happen when models are trained on square images, typically at a resolution of 1024 x 1024 pixels, and then tasked with generating images in non-square formats, particularly wide or tall rectangles. This isn’t merely an odd mistake but a technical limitation that can lead to unusual distortions.

Take, for example, an AI-generated portrait of a senior woman sitting and embroidering. AI stretched the boundaries of normal human proportions, creating an image of a woman with an unusually elongated body.
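To make the mismatch concrete, here is a minimal sketch of how it can be reproduced with the diffusers library, assuming an SDXL checkpoint trained around 1024 x 1024; the model ID is real, but the prompt and resolutions are our illustrative choices, not details from the case above:

```python
# Sketch: generating far from the training aspect ratio often stretches
# subjects. The scenario is illustrative, not a guaranteed reproduction.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # trained around 1024 x 1024
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait of a senior woman sitting and embroidering"

# Near the training resolution, proportions usually come out fine.
square = pipe(prompt, height=1024, width=1024).images[0]

# Far from the training aspect ratio, the model stretches the patterns
# it learned on square crops, which can elongate bodies and limbs.
tall = pipe(prompt, height=2048, width=1024).images[0]

square.save("square.png")
tall.save("tall.png")
```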

Duplicates

In another glitch, an AI tasked with generating a simple image of mugs ended up creating a scene overflowing with them.

Duplicates may appear when generation is done at a higher resolution than was used during training. AI models are typically trained on images of a specific resolution, and when they are asked to generate at a higher one, they often duplicate elements because they struggle to scale up their learned patterns appropriately. Additionally, if the training data is limited, contains errors, or is otherwise of poor quality, the model’s ability to understand and reproduce diverse content remains underdeveloped. This underscores the importance of using high-quality, diverse training data and maintaining consistency in resolution to minimize such issues.
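A common workaround, sketched below under the assumption of an older Stable Diffusion checkpoint trained around 512 x 512, is to compose the image at the training resolution and only then upscale and lightly refine it, so the model never has to lay out a scene at an unfamiliar scale (the model ID and parameters are illustrative):

```python
# Sketch of the "hi-res fix" pattern: compose at the training resolution,
# then upscale and re-denoise gently to add detail without re-composing.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # trained around 512 x 512
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a single ceramic mug on a wooden table"

# Asking for 1024 x 1024 directly often yields many mugs, because the
# model tiles local patterns it learned at the smaller scale. Instead:
low_res = base(prompt, height=512, width=512).images[0]
upscaled = low_res.resize((1024, 1024))

# Reuse the already-loaded weights for the img2img refinement pass.
refiner = StableDiffusionImg2ImgPipeline(**base.components)

# A low strength keeps the original composition and only adds detail.
final = refiner(prompt=prompt, image=upscaled, strength=0.3).images[0]
final.save("mug_1024.png")
```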

Memes’ Impact

Internet memes have also left their mark on AI image generation. Take, for instance, the “241543903” meme, in which individuals photograph their heads inside freezers and share the pictures online tagged with the number “241543903.”

The origin of this meme can be traced back to artist David Horvitz, who in 2009 posted a picture of his head in a freezer and tagged it with the number 241543903, a combination of his refrigerator’s serial number and the barcode number of the frozen food inside. Horvitz encouraged others to do the same: post a picture of their head in a freezer tagged with that specific number. His idea was to exploit SEO so that these images would show up together in search engines.

The interesting thing here is that when users ask ChatGPT to create an image of “241543903,” the AI generates a head in a refrigerator. This behavior suggests that a substantial number of such images exist within the OpenAI dataset, indirectly confirming that OpenAI’s algorithms were likely trained on content sourced from the internet.

Fashion Faux Pas

The AI’s fashion sense came under scrutiny when it unexpectedly placed fabric patterns on image backgrounds.

The trend was eventually traced back to the words “a fashionable Armani suit” in the input prompt. Having learned associations between Armani and these textures from its training data, the AI began incorporating similar patterns into other objects in the image whenever the brand was mentioned in the prompt.

This indicates how specific keywords can significantly influence AI output, leading to unexpected and sometimes creative interpretations based on the training data’s contextual associations.
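One way to probe such keyword effects is to hold the initial noise fixed and vary only the suspect token, so any difference in the output can be attributed to the prompt. A minimal sketch, assuming a Stable Diffusion checkpoint loaded via diffusers (the prompts and seed are illustrative):

```python
# Sketch: fixing the seed and toggling a single brand keyword isolates
# how much of the output that one token is steering.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate(prompt: str, seed: int = 42):
    # Same seed -> same initial noise, so differences come from the prompt.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(prompt, generator=generator).images[0]

with_brand = generate("a man in a fashionable Armani suit, studio photo")
without_brand = generate("a man in a fashionable suit, studio photo")

# Comparing the pair shows whether fabric-like textures leak into the
# background only when the brand name is present.
with_brand.save("with_brand.png")
without_brand.save("without_brand.png")
```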

Resemblances with Copyrighted Content

In AI image generation, the model’s output often reflects the most prominent examples in its training data. This was evident, for example, when generating images of Barbie: the AI consistently produced a face resembling actress Margot Robbie, demonstrating its tendency to draw on the most prevalent or heavily weighted examples it was trained on.

Similarly, images of Agent 007 often bear a striking resemblance to Daniel Craig, revealing the AI’s preference for familiar faces and Daniel Craig’s dominance as the visual representation of Agent 007 in the training data.

Another example of how AI can reproduce copyrighted content was showcased by Reid Southen, a movie concept artist. When he asked the AI image generator Midjourney to create an image of Joaquin Phoenix as the Joker, it quickly produced an image nearly identical to a frame from the 2019 film “Joker.”

These issues are crucial to track when working with AI, as they can inadvertently lead to copyright litigation. By replicating the specific likeness of a well-known individual or a protected work without permission, AI-generated content can infringe on the rights of the person or rights holder depicted, highlighting the importance of using training data that is clear of copyright risks.
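A rough, non-authoritative way to catch such resemblances before publishing is to compare generated images against known reference stills with perceptual hashing. The sketch below uses the imagehash library; the file names and the distance threshold are our assumptions, not an established standard:

```python
# Sketch: perceptual hashes change little under resizing or compression,
# so a small hash distance suggests the output is close to a known frame.
from PIL import Image
import imagehash

def looks_like(generated_path: str, reference_path: str,
               threshold: int = 10) -> bool:
    gen_hash = imagehash.phash(Image.open(generated_path))
    ref_hash = imagehash.phash(Image.open(reference_path))
    # Subtraction gives the Hamming distance between 64-bit hashes.
    return gen_hash - ref_hash <= threshold

# Hypothetical file names for illustration only.
if looks_like("midjourney_output.png", "joker_2019_frame.png"):
    print("Warning: output closely resembles a copyrighted reference frame.")
```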

Bing’s Emoji Clash

Last but not least, there’s the Easter egg from Bing. When users engage with Bing’s AI chatbot, it can respond with a specific emoji when triggered by certain keywords or phrases. This Easter egg reveals a layer of interactivity and personality programmed into the chatbot, enhancing user engagement.

The funny part is the twist when users explicitly ask the chatbot to stop using emojis. Instead of complying, the AI playfully rebels, replying with an even greater flurry of emojis.

These Easter eggs in AI models, whether amusing or bewildering, offer valuable lessons on the complexities of machine learning. As we continue to train and refine these AI models, encountering these quirks can provide a deeper understanding of the intricate connection between data, bias, and creativity. We’d love to see your weirdest findings! Share your favorite AI hallucination examples and mention us on social networks.
