A quick overview of the essential AI news that was on everyone’s lips in January.
Shutterstock Launched AI Image-Generating Tool
Shutterstock has introduced its AI image generation platform. The tool, based on text-to-image technology, turns prompts into license-ready imagery. According to a press release, the platform is built on a library of diverse assets and delivers ethically created content. “We ensure that the artists whose works contributed to the development of these models are recognized and rewarded,” said Paul Hennessy, CEO of Shutterstock.
The new platform is arguably the result of the collaboration between Shutterstock and OpenAI announced in October 2022. In a statement at the time, Shutterstock revealed its plans to sell content generated with DALL-E, OpenAI’s text-to-image model.
Microsoft and OpenAI Expand Partnership
Microsoft and OpenAI have announced the third phase of their long-term partnership, with a multi-billion dollar investment. The tech giant invested $1 billion in the AI startup back in 2019 and another $2 billion between 2019 and 2023. Semafor reported that OpenAI is valued at $29 billion, including Microsoft’s new investment of as much as $10 billion.
According to a press release on Microsoft’s website, the agreement will accelerate AI breakthroughs and allow both companies to independently commercialize AI technologies. Microsoft will increase its investment in supercomputing systems, remain OpenAI’s exclusive cloud provider, and deploy OpenAI’s models across its consumer and enterprise products.
Microsoft is also rumored to be planning to bring AI chatbot capabilities to Bing search results and to its Office applications, such as Word, PowerPoint, and Outlook.
Content Creators Jumped Into the Fray With AI Companies
Stock image giant Getty Images is suing Stability AI, the creator of the text-to-image model Stable Diffusion, for scraping its content and processing “millions of images protected by copyright”.
Getty Images CEO Craig Peters told The Verge that the company wants to “create a new legal status quo”. Its goal is licensing terms that respect intellectual property rather than financial damages or a halt to the development of AI technologies.
Earlier this month, three artists filed a class-action lawsuit against AI image-generating companies Stability AI and Midjourney, and the art portfolio platform DeviantArt. They alleged that the companies violated the rights of millions of artists by training their AI art generators on five billion images collected from the web without consent.
AI models rely on human-created images for training data, often scraped from the web without the creators’ consent. This has fueled numerous disputes between content creators and AI developers.
OpenAI Ran an Early Experimental Subscription Model for ChatGPT
OpenAI is moving toward a premium version of ChatGPT, its wildly popular AI chatbot. Users can join a waitlist to receive an invitation to pilot ChatGPT Professional.
The subscription benefits outlined in the waitlist form include faster responses from the chatbot, more reliable access (no ‘blackout’ windows even at times of high demand), and at least twice the regular daily message limit. As for pricing, users have reported that Pro access costs $42 per month. Since the subscription program is in its early stages, both pricing and features may change before the paid version goes public.
OpenAI Reportedly Used Low-Paid Kenyan Workers to Label Toxic Content
An investigation by Time reported that OpenAI, the company behind the popular generative AI chatbot ChatGPT, used low-paid workers to sort through sensitive content in order to build an AI filtering system that would make the chatbot safer. For the project, OpenAI partnered with Sama, a U.S. company that hires workers in Kenya, Uganda, and India to perform data labeling tasks for Silicon Valley companies.
According to Sama employees interviewed by Time, the workers earned between $1.32 and $2 per hour and went through 150-250 passages of text per 9-hour shift. The workers reported feeling mentally scarred by their work: the content they sifted through described situations of child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.
OpenAI confirmed that Sama workers in Kenya contributed to building a tool to detect toxic content. “Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content,” an OpenAI spokesperson told Time. Sama has since decided to stop labeling sensitive data and focus on other areas.