Four Things to Know Before Using AI Content in Your Marketing Activities and Design Projects

Venturing into AI-driven content requires a clear understanding of what's involved. This article therefore walks through four key considerations to weigh before integrating AI-generated content into your projects:

1. Stay Informed About the Legal Landscape

In the current landscape of AI-driven content creation, the legal framework governing its use is far from settled. While it’s clear that AI cannot be a copyright owner, the challenge lies in defining boundaries to prevent infringement of existing copyrights, since AI tools are trained on content pulled from many sources. Designers whose contracts state that the client owns the work product are therefore responsible for informing clients about any AI involvement, because it affects the work’s copyright status.

“Services like OpenAI explicitly disclaim any guarantees regarding copyrights. Examining their terms of use makes it clear that it is a user of content generation tools who must take ultimate responsibility for the content being published.”

Daria Kuznetsova, Corporate Lawyer at Everypixel

In a recent legal development, three visual artists filed a proposed class-action lawsuit against Stability AI earlier this year. The lawsuit claims that Stability AI “scraped” over 5 billion images from the internet to train its image generator Stable Diffusion without the consent of copyright holders. The controversy escalated further when stock photo company Getty Images filed its own lawsuits against Stability AI, asserting that the company copied over 12 million photos, along with captions and metadata, from its collection without permission or compensation.

Now place these legal cases in a situation where, for example, a small business owner uses an AI-generated image for commercial purposes, such as an Instagram ad. If the image turns out to infringe someone’s copyright, the business could face unforeseen consequences, including legal action from copyright holders whose images were used to train the algorithm without consent.

AI’s tendency to closely replicate characters goes beyond interpretation or reference; it verges on plagiarism. Consider a designer who wants to generate images in the style of the Barbie movie: pink colors, people dressed in costumes reminiscent of Barbie and Ken, plus makeup and other thematic elements to set the right mood. Despite repeated attempts, the algorithm keeps producing images resembling Margot Robbie, which cannot be used due to copyright concerns. The issue may not be obvious to everyone, but it can lead to unintended and undesirable consequences.

Or take, for instance, our attempt to generate a Spider-Man Halloween costume, which resulted in an overly cinematic representation far removed from the simpler, real-life version seen in cosplay:

The complexity deepens when it comes to copyright for content that people create with AI. As of March 15, 2023, the U.S. Copyright Office has stated that works created with AI assistance may be copyrightable if there is sufficient human authorship involved. What counts as “sufficient human authorship” remains a grey area, prompting caution among designers, content creators, and businesses.

To navigate the legal landscape of AI content creation effectively, minimizing the risk of copyright-related challenges and legal consequences, keep in mind these tips:

  • Be aware that you may not be able to own the copyright on AI-generated content unless you can prove a significant human contribution to the creation process.
  • Recognize the potential legal implications if AI-generated content infringes on someone’s rights, or if the algorithm was trained on content without the original authors’ consent. Your use of AI-generated content may unintentionally surface such issues and lead to legal complications.

2. Always Adapt AI Outputs

While some may see the use of AI as a sign of lacking intelligence, creativity, or authenticity, it’s worth dispelling these misconceptions. AI tools are not meant to replace human capabilities but to complement and enhance them. Consider the everyday tools you already use: Adobe features such as Select Subject, Auto-cut, Remix, Content Aware, and Image Trace incorporate AI without being explicitly labeled as such, and so do mobile cameras and predictive text. From this perspective, AI is simply another tool in the toolbox, one that makes the work more efficient.

It’s also important to remember that a high-quality final result still requires extensive human involvement. AI can generate base concepts, assist with brainstorming, or handle minor edits, but the key lies in striking a balance between AI assistance and human creativity to achieve the best outcome.

In the process of adapting AI outputs for practical use, consider the following ways of fine-tuning:

  • When preparing AI-generated images for printing, especially for large formats like billboards, check the resolution first. The default output from most generators falls short of print quality, so upscale it with external tools to meet the required standard (a simple resolution check is sketched after this list).
  • Use dedicated editors to remove defects in AI-generated images. In our Everypixel Exclusive project, for example, our editors examine each AI image and upscale it. We also have a special tool that can help enhance AI-generated images, and in certain cases manual adjustments in software like Photoshop may still be necessary.
  • When dealing with AI-generated text, fact-check it thoroughly, adapt the writing style, remove redundant phrases and irrelevant content, and adjust the tone of voice to match brand guidelines, as AI often struggles with these aspects.
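
To make the first point concrete, here is a minimal Python sketch of a pre-print resolution check. It assumes the Pillow library is installed; the file name, print size, and target DPI are illustrative placeholders, since the DPI you actually need depends on the format and viewing distance.

    # A minimal pre-print resolution check; "ai_render.png" and the
    # print dimensions below are illustrative placeholders.
    from PIL import Image

    def required_pixels(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
        """Pixels needed to print at the given physical size and DPI."""
        return round(width_in * dpi), round(height_in * dpi)

    def check_print_resolution(path: str, width_in: float, height_in: float, dpi: int) -> None:
        need_w, need_h = required_pixels(width_in, height_in, dpi)
        with Image.open(path) as img:
            w, h = img.size
        if w >= need_w and h >= need_h:
            print(f"OK: {w}x{h} px covers the {need_w}x{need_h} px needed at {dpi} DPI.")
        else:
            scale = max(need_w / w, need_h / h)
            print(f"Too small: {w}x{h} px, need {need_w}x{need_h} px "
                  f"-> upscale by roughly {scale:.1f}x before printing.")

    # Example: a 24x36-inch poster at 150 DPI; a distant billboard usually
    # needs far fewer DPI, while a close-view print usually needs more.
    check_print_resolution("ai_render.png", width_in=24, height_in=36, dpi=150)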

3. Don’t Share Confidential Information

Sharing information with AI-based services raises valid security concerns about third-party access. The core issue lies in the security practices of the service provider. Think carefully about exactly what data you enter into a generator, keeping in mind that it may be collected and stored.

Reflecting these growing security and privacy concerns, several companies have moved to restrict their employees’ use of ChatGPT. Major players in the financial sector, including JP Morgan, Citigroup, Bank of America, Deutsche Bank, Goldman Sachs, and Wells Fargo & Co, have already imposed restrictions on ChatGPT usage. Other industry leaders, such as Amazon, Verizon, and Accenture, have warned their employees against entering confidential information into ChatGPT.

A key recommendation here is to be careful when formulating prompts for AI models like ChatGPT:

  • Review prompts and remove any sensitive data before sending them.
  • If a question involves confidential information, anonymize the content to prevent the inadvertent exposure of sensitive details (a minimal redaction sketch follows this list).
  • The most robust approach is to avoid prompts that involve sensitive data altogether, minimizing the risk of a potential breach.
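
As a rough illustration of the anonymization step, here is a minimal Python sketch that strips a few common kinds of sensitive data from a prompt before it is sent anywhere. The patterns are illustrative assumptions rather than a complete safeguard; they will not catch every type of confidential information, so business-critical prompts still deserve a manual review.

    # A minimal prompt-redaction sketch; the patterns are illustrative only
    # and will not catch every kind of sensitive data.
    import re

    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def redact(prompt: str) -> str:
        """Replace matches of each pattern with a placeholder tag."""
        for label, pattern in REDACTION_PATTERNS.items():
            prompt = pattern.sub(f"[{label}]", prompt)
        return prompt

    raw = "Draft a reply to jane.doe@example.com about card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Draft a reply to [EMAIL] about card [CARD].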

4. Disclose AI and Build Trust

The final point on this list remains controversial: opinions differ on whether to disclose AI involvement in your work at all. The discussion comes down to how transparent you should be about using AI.

“Given the controversy surrounding the use of AI in the workplace, a contractor should at least inform a company about the use of AI in the work. At least for today. If the company is okay with it – then go ahead, but if not – accept their terms or don’t contract with them.”

Dmitry Shironosov, CEO of Everypixel

The absence of specific regulations further complicates the question of disclosing the use of AI in the workplace. There’s no one-size-fits-all answer, as it often depends on various factors. To give a broader view, here are the different perspectives and contradictions voiced in one of our Reddit discussions:

Disclose

“Considering that AI cannot be copyrighted, you need your client to be aware that they do not have the ability to protect themselves so that they don’t sue you when their ad campaign is ripped off by someone else or when the AI accidentally creates an image so similar to the image that was used to train the AI, that the original artist would be able to sue.”

pip-whip

Don’t disclose

“There are a lot of tools, especially in Photoshop we’ve been using AI for years. I don’t see the necessity stating an aspect of a project was created in AI.”

JTLuckenbirds

“If we’re gonna go by that then everything that’s been photoshopped and edited should be disclosed as well?”

Alternative_Antler

“No. The client is paying for your Intuition of what is commercially appropriate for their business. Even AI results would need heavily modified and tweaked for sure, although it’s a great concept generator.”

shenanigans05

“How would it be proven that the inspiration or concept was derived from generative AI?”

Spoffle

Depends

“If it’s major then yes (eg. making art of a character for a book cover with midjourney, with no significant photobashing or other manual work). If it’s minor (small bits of generative fill, neural filters etc) then you don’t owe anyone anything… you don’t “disclose” that you used topaz upscaler or remove.bg.”

mikachabot

“I think it’s essential to disclose the involvement of AI in design work, especially if it significantly contributes to the final outcome. This ensures ethical practices and builds trust with clients. However, the boundary for disclosure may vary depending on the extent of AI involvement.”

NikoleBrown

“But if I were to generate and develop the prompt and go through many iterations before getting perfection, then I went on illustrator and recreated it… Wouldn’t I be able to copyright that? I created it, who would know that the entire concept came from AI, I wouldnt tell anyone, no one would know.”

Alternative_Antler

When you’re uncertain about disclosing AI usage, ask yourself one key question: “If I don’t disclose AI involvement, could it mislead or harm someone?” If the answer is yes, disclose it. Inform your clients as well, particularly if you’re using an image that could pose legal risks. If, however, you’ve significantly transformed an idea proposed by AI, disclosure may not be required. Media outlets should be especially cautious with news content, since a lack of disclosure could lead readers to mistake generated photos for real ones and erode trust. Using generated content, even in top magazines, is acceptable when it’s clearly indicated.

From our point of view, adding a disclosure about the use of AI in the content creation process is a small step, but it holds significant potential for building trust. Acknowledging that AI played a role in crafting content is a clear sign of transparency; even if it doesn’t enhance trust in the creator and the technology, it certainly does not undermine it.
