Scarlett Johansson vs OpenAI: What You Need to Know

Scarlett Johansson recently expressed her shock and anger over OpenAI’s use of a voice resembling hers in their ChatGPT software. This controversy has spread widely in the media, raising significant questions about ethics, legality, and the future of AI. We have gathered all the information in one place to provide a comprehensive overview of what happened and how people are reacting to it.

Timeline of Events

September 2023: Initial Communication and Refusal: In September 2023, Scarlett Johansson was reportedly approached by OpenAI with a proposal to collaborate on a project involving the use of her voice for their AI technologies. Johansson declined the offer “after much consideration and for personal reasons”.

May 13, 2024: OpenAI Presentation: During a presentation, OpenAI unveiled a new feature for their ChatGPT software: a virtual assistant named Sky. Attendees were immediately struck by the similarity of Sky’s voice to that of Scarlett Johansson. The presentation, available on OpenAI’s official YouTube channel, showcased Sky’s capabilities, with many viewers assuming it was Johansson’s voice.

On the same day, OpenAI CEO Sam Altman posted a cryptic one-word message: “Her.” It was widely interpreted as a reference to the film “Her,” in which Scarlett Johansson voiced an AI assistant named Samantha. This added another layer of complexity to the situation, suggesting a deliberate nod to Johansson’s role in the movie. The post fueled further speculation on social media.

May 19, 2024: OpenAI’s Blog Post: In response to the backlash, OpenAI published a detailed blog post explaining how the voices for ChatGPT were developed. They also said they had been in conversation with Johansson’s team since May 15, 2024 to discuss her concerns about Sky. They clarified that the voice used for Sky was neither Johansson’s nor an imitation of it: “We obtained all necessary legal permissions for the voice we used and did not intend to mislead the public.” Nevertheless, the company paused the use of Sky in its products.

May 20, 2024: Scarlett Johansson’s Statement: On May 20, 2024, NPR published an exclusive statement from Scarlett Johansson, provided by her publicist, Marcel Pariseau. Johansson expressed her shock and anger, stating, “I was shocked and angered when I heard my voice being used without my consent. This is a violation of my rights and a blatant disregard for my personal boundaries.” She emphasized the importance of consent and respect in using personal attributes, especially in AI.

Public and Expert Reactions

The case has sparked a wave of reactions from the public, experts, and the media. Below are some of the varied perspectives:

Many social media users sided with Johansson, arguing that her voice, as a unique personal attribute, should be protected against unauthorized use. Others argued that using a voice so similar to hers without her explicit consent was unethical:

This action seems to be inconsistent with OpenAI’s commitments to promoting responsible AI and negates much of their efforts to foster ethical AI practices, especially in the context of multiple partnership announcements.

Dmitry Shironosov, CEO of Everypixel

We also obtained a comment on the situation from someone working directly in the industry, an AI filmmaker:

The OpenAI/Scarlett Johansson situation is another puzzling gaffe from OpenAI. Despite being the most sophisticated AI company, OpenAI seems unready for the world’s scrutiny. This debacle comes only a few months after CTO Mira Murati’s awkward stuttering in an interview about Sora’s training on publicly available content from the Internet.

There’s a popular narrative among creatives that AI is stealing artists’ work, and beyond that, there’s anxiety about AI stealing people’s likenesses. This situation is a high-profile example of the public’s greatest fears about AI playing out in a clumsy and oblivious manner. Worse yet, it was clearly supported by top leadership, with CEO Sam Altman personally calling Scarlett Johansson’s agent before the demo.

It gives off a sense of ‘No consequences matter in the grand scheme of AGI.’ Unsurprisingly, people don’t like this. This misstep by the world’s largest AI company will make artists even more defensive against AI.

Mike Gioia, AI filmmaker, author of Intelligent Jello

Some defended OpenAI, pointing to the legal permissions the company says it obtained: if the voice is legally licensed and is not Johansson’s, they argue, there is no case.

It is worth noting that OpenAI’s reaction to Johansson’s statement seems more conciliatory than its stance in the recent legal dispute with the New York Times, where the company presented more detailed arguments and defended its position more rigorously. Here, OpenAI explained its process for developing AI voices, claimed the resemblance to Johansson’s voice was unintentional, and promised to withdraw the voice. This shift in tone highlights the sensitive nature of the issue.

The internet, of course, has had its say: memes about the situation have flooded social media.

The controversy has broader implications for the AI industry, casting a shadow over its credibility and highlighting the need for clearer guidelines and ethical standards in AI development. While OpenAI may be on solid legal ground, the ethical misstep could have lasting repercussions for its reputation and for the industry as a whole. The case serves as a crucial reminder of the importance of consent and the ethical use of personal attributes, and the debate it has sparked will likely influence future policies and practices in the field.
