How Companies Are (And Aren’t) Embracing AI: Shutterstock, Quora, And More

AI has become a hot topic in recent years, with many organizations looking for ways to leverage its capabilities while others resist. This post examines a few notable companies and tracks how their perspectives on AI adoption have changed (if at all).

According to a McKinsey report, AI adoption in the enterprise has more than doubled since 2017, with industries from finance to healthcare incorporating AI into their organizations and seeing increasing financial returns as a result. Another report, from Stanford University, found that there has been an 18x increase in private investment in AI over the last decade.

However, as with other disruptive technologies and inventions, such as the advent of the Web, the telephone, or cryptocurrencies, there is a lot of concern and fear along with excitement. Not all companies (even those in the tech industry) have been quick to embrace AI, with some initially expressing skepticism or even outright opposition to the technology.

Shutterstock 

By September 2022, advanced text-to-image models such as Midjourney, DALL-E 2, and Stable Diffusion had been publicly available for a few months. Petapixel reported that thousands of AI-generated images were being offered on stock photo websites, and tutorials on selling AI art were circulating on YouTube. This sparked a fierce debate about what was in store for the industry as a whole.

Soon after, Shutterstock, one of the largest providers of stock images, footage, and music, removed such content from its website. Some Shutterstock contributors posted on Reddit that they’d received an email stating that the platform had partnered with OpenAI to bring AI capabilities to the Shutterstock ecosystem, and that contributors would no longer be allowed to upload AI-generated content directly to its marketplace.

To fairly reward contributors who might be affected by the partnership with OpenAI, Shutterstock announced an additional form of compensation for content creators: a revenue-share model for those whose content is used in training datasets for AI algorithms. However, artists were skeptical about how much revenue share they would actually receive:

[Embedded Reddit comment by u/alohadave in r/photography]

At the same time, the company released an official statement about its partnership with OpenAI, and a few months later Shutterstock launched an AI image generation platform integrated into its interface.

In Shutterstock’s case, the ban on AI-generated content wasn’t driven by skepticism about AI innovation itself. Rather than share a piece of the pie with third-party services, the company wants to sell a product it has built itself.

Either way, Shutterstock is pushing the entire industry to innovate and motivating other companies to embrace AI disruption. In its most recent announcement, Shutterstock said that it has partnered with NVIDIA to create 3D assets from text prompts.

Getty Images

Another stock photo agency, Getty Images, has been more reluctant to embrace AI and join the generative AI rush. In September 2022, Getty Images reportedly banned AI-generated images. According to Getty Images CEO Craig Peters, cited by The Verge, the reason was that such images didn’t meet the company’s quality standards and had the potential to violate copyrights.

However, just a few weeks later, around the same time that Shutterstock’s partnership with OpenAI was announced, Getty Images announced a partnership with BRIA, an AI company that specializes in visual content transformation. The collaboration aims to use responsible AI to improve visual content creation and address the ethical and legal issues surrounding AI-generated content.

Additionally, Getty Images has been in the news recently for suing Stability AI, the company behind Stable Diffusion. The reason for the lawsuit can be seen in many of the images generated by Stability AI’s model: they often reproduce the Getty Images watermark.

Adobe

Unlike some of its competitors in the stock imagery industry, Adobe Stock didn’t follow the trend of banning AI-generated content and chose alternative tactics to work with tech disruptors. The platform accepts submissions of AI imagery with some restrictions: contributors must have the rights to license the AI content for sale on the platform, and they must label it as having been created with AI.

Adobe is also building its own generative space for creators while remaining friendly to third-party AI content submitted to its platform. Last month, the company announced Adobe Firefly, a toolkit of AI models that can be used to create various types of visual content — from illustrations, art concepts, and photos to “creative ingredients” such as brushes, color gradients, text effects, or video transformations.

Quora

Generative AI, and chatbots in particular, pose a huge risk to Q&A sites. Quora, for instance, is a Q&A platform with 300 million monthly active users whose mission is to share and grow the world’s knowledge. With AI tools at their fingertips, why would users post a question on Quora instead of asking ChatGPT? The chatbot comes back with an answer immediately, whereas on Quora it can take time to find and read a relevant thread or wait for someone to answer. That’s where Quora and similar platforms risk losing a large portion of their users to chatbots.

How did Quora respond to this issue? In fact, it hasn’t clearly articulated its position yet. Officially, the company has neither banned nor explicitly allowed users to post AI-generated content when answering questions on the platform. Behind the scenes, however, Quora has reportedly been removing such content as discussions about the policy continue.

In January 2023, Quora spokesperson William Gunn announced that an update on the issue would be coming soon. As of April 2023, no official statement or policy update regarding the use of AI chatbots has been released. As was the case a few months ago, posting responses generated by ChatGPT and other AI models is not currently against the platform’s rules. Is Quora purging those responses? Who knows. What’s interesting is that, as Quora CEO Adam D’Angelo wrote in a post in early February, the volume of weekly answers on Quora has reached the highest level in its history. Was ChatGPT part of this growth? Who knows.

“If you can’t beat them, join them” is the principle Quora has adopted while avoiding a clear stance on the use of AI on the platform. Rather than allowing AI content on Quora or, conversely, creating a barrier to it, Quora has built a safe space for experimenting with AI chatbots. The company has released a new app called Poe, which stands for “Platform for Open Exploration.” It’s worth noting that Quora is separating Poe from its core business: it’s a new product with a new brand that exists independently of Quora.

It’s like a messaging app, but for AI chatbots. The app allows users to ask questions, get instant answers, and have back-and-forth conversations, switching between multiple AI bots powered by models from OpenAI and Anthropic. In addition, the company is developing an API that will make it easy for any AI developer to plug their model into Poe. Poe’s goal is to give users access to a large number of chatbots, optimized for different purposes and representing different perspectives and points of view. Once AI-generated content meets Quora’s quality standards, it may be distributed on Quora itself.

Stack Overflow

The other concern with AI disruption for community-based platforms is that some Q&A volunteers can take advantage of a chatbot’s ability to provide instant answers and post those AI responses to the community without checking their quality. This multiplies their activity on the site many times over, helping them build a reputation as experts while saving the time and effort of researching a topic and writing a response manually. The result can be an influx of content that is difficult to moderate, frustrates users with irrelevant or repetitive responses, and creates a negative experience that damages the platform’s reputation.

This is exactly what happened at Stack Overflow, another Q&A site, this one for programmers. Its users began using ChatGPT to answer coding questions, resulting in a spate of AI answers on the site. The platform’s volunteer-based curation infrastructure was overwhelmed by content that, while compelling, had a “high rate of inaccuracy.” The sheer volume made it impossible for moderators to verify accuracy, which led to a temporary ban on using ChatGPT to post content on Stack Overflow. However, the policy on AI text generators is subject to change after consultation with the community.

Stack Overflow is the largest and best-known site in the Stack Exchange network, which includes 180 knowledge-sharing communities in fields as diverse as programming, science, art, and languages. The flagship product was affected more than any other site in the network. On the rest of the network’s sites, moderators have opted not to ban ChatGPT, at least until the volume of AI-generated content becomes cumbersome to manage.

“Because sites are impacted to such different degrees by the usage of ChatGPT, we encourage sites to create these policies as they become an issue. A blanket policy does no good if affected communities are not simultaneously developing the methods they use to combat the material problems they face,” explained a moderator in response to a user who requested a network-wide GPT ban.


In his recent blog post, Bill Gates emphasized the transformative potential of AI, stating that “we are in the midst of a technological revolution that is transforming every aspect of our lives.” However, he also cautioned that we must approach the development of AI in a responsible and ethical manner. This sentiment reflects a growing concern in both the public and private sectors about responsible AI development and deployment. Even technology companies, typically more innovative and flexible than others, share these fears. Many have recognized the potential benefits of AI and adopted it in various forms, while others are still on their way. It’s interesting to watch their perspectives shift, and we’ll continue to write about these changes.
