In this blog post, Dmitry Shironosov, entrepreneur, investor, and CEO and co-founder of Everypixel, shares his thoughts on the legal disputes between tech companies and rights holders.
After a few years of euphoria over the potential of AI and admiration for increasingly complex technologies and solutions, AI is now frequently at the center of legal disputes between tech companies and rights holders. With daily innovation becoming the norm, a critical question arises: who is footing the bill?

The current copyright crisis boils down to a conflict between rights holders and tech companies. Rights holders are unwilling to give away their work for AI training at no cost and view the technology with suspicion; tech companies, meanwhile, argue that training falls under fair use. Although tech companies have started to license content, the move is often seen as an attempt to appease public opinion rather than to genuinely resolve the conflict. OpenAI’s multiple attempts at partnerships with publishers, both before and after The New York Times lawsuit, exemplify this tension.
The root of the problem
The crux of the issue extends beyond a simple clash of opinions. This problem should have been resolved long ago, and while it might seem too late now, we should act to ensure the tech and creative industries can coexist and benefit from each other in the future.
But how did this crisis start?
The U.S. legal framework has long been considered one of the most effective for venture investments, thanks to a well-functioning system of checks and balances that fosters a business-friendly environment. However, GenAI illustrates how these advantages can unintentionally slow innovation or push the industry into a grey area. Machine learning technologies require data access, but the existing legal system, once beneficial to all, now hampers AI companies by creating barriers to training neural networks on publicly available data. Ignoring current copyright norms predictably leads to legal issues, and creators are seizing the opportunity to protect their rights. Consequently, AI companies are attempting to set legal precedents for data access.
As my friend and industry expert Mark Milstein, co-founder and director of business development at vAIsual, accurately observed, “Although there are marketplaces to license biometric datasets, most AI platforms prefer to risk legal issues and pay more in GPU costs rather than do the right thing and pay for data. I hear too many tech firms complain about the high cost of biometric data. The equivalent of I couldn’t afford a car, therefore I stole one.”
Temporary compromises and their limits
Companies’ move to license content for training is merely a temporary compromise. Photo agencies are selling content acquired in the pre-AI era under a different economic model, one in which authors receive a share of sales revenue under copyright law.
Consider what happens if creators are paid only the one-time fees photo agencies currently collect for selling their content to tech companies. Photographer Shannon Fagan told The Wall Street Journal that last year he received a one-time payment for AI training from Adobe Stock, which amounted to 10-12% of his annual income. Internal statistics clearly show that even the most affordable production studios are being forced to shut down and lay off their teams: their work can now be replicated via prompts, and one-time training payments, after which images can be generated endlessly, barely compare to their earnings from selling photos.
Thus, the current situation is far from sustainable. The good news for creators is that AI still performs poorly in niche domains, with complex concepts, and on small details (at least for now). The bad news is that AI has proven highly effective at mimicking artistic style. That is exactly what frustrates the authors whose styles are successfully copied by Midjourney and Stable Diffusion.
Towards a long-term solution
Compromises as temporary solutions are important as they help weather the crisis with minimal losses. However, it seems we are on the brink of rethinking the entire system, possibly reconsidering how copyright law functions. I believe that current norms and regulations need to evolve to support technological advancement.
Nevertheless, as the founder of a tech company that grew out of a content creation business, I also identify with creators and cannot ignore the fact that AI companies risk stalling the content industry. We cannot create exceptional AI tools without human-generated data. Therefore, when revising existing norms, we must consider the opinions of creative professionals. I believe that they are the ones who will be the driving force behind the next leap in technology development.
Eventually, when data obtained in the “pre-GenAI” era becomes outdated, existing models will need new fuel—who will produce it? Training data is undoubtedly the new oil, with the advantage that unlike oil, it is renewable and inexhaustible—provided we do not destroy the source, the content creation industry. Don’t bite the hand that feeds you.
In my view, resolving this crisis lies in public discussions and open dialogue among all stakeholders. For this dialogue to happen, two things must occur:
First, the tech community should recognize that creators remain an integral part of the market: without their participation, tech companies cannot develop new AI tools or maintain the relevance of current ones, and creators’ contribution must be fairly rewarded. Hiding behind fair use is not only unethical but also inconsistent with the concept of fair use itself. Using content to create commercial products that directly compete with the original work, while depriving the copyright owner of income, cannot be considered fair use.
Second, the creators’ community should become more adaptive and embrace AI to enhance their work. By leveraging this powerful tool, creators can produce more commercially appealing content in greater volumes. Their expertise and professionalism enable them to use AI more effectively than the average user of text-to-image tools. Embracing AI allows creators to stay at the forefront, set trends, and create high-quality, in-demand content. In contrast, saying “I hate GenAI” and wishing it would go away is unconstructive—GenAI is here to stay, and we have to accept it.
Uniting in alliances to defend their rights and interests amid changing conditions, and revising a system that has worked flawlessly for decades, can save an entire industry and make it part of the technological world. Here, the music industry sets a good example for everyone: just look at how loudly and proactively record labels respond to any infringement of their rights. Photographers, designers, illustrators, writers, and comedians have much to learn from them.