Top AI News, June 2024: Apple Intelligence, Figma AI, Gen-3 Alpha, and more

In this monthly roundup, we highlight the top AI news stories from June:

Apple’s AI Launches

Apple’s stock soared following the introduction of “Apple Intelligence” at WWDC 2024. This AI platform integrates generative models with personal context across iPhone, iPad, and Mac, offering new features like systemwide Writing Tools for text editing and Image Playground for creating personalized images. Apple also confirmed the integration of ChatGPT into iOS 18, iPadOS 18, and macOS Sequoia, allowing users to seamlessly access ChatGPT’s capabilities. Siri can now utilize ChatGPT, offering deeper language understanding and task automation. Powered by GPT-4o, ChatGPT will be accessible for free, with additional features available to subscribers, launching later this year.

Beyond the news: Apple is indeed a master of product presentations. Our friends from Wonderslide analyzed Apple’s WWDC presentation and Google’s latest showcase and found some significant differences: Apple’s WWDC 2024 kicked off with high-energy antics, a stark contrast to previous years’ reserved intros. Meanwhile, Google I/O 2024 featured a live musical performance, showcasing AI’s creative potential. Both companies favor minimalistic, distraction-free staging to keep audiences engaged, highlighting the importance of simplicity and clear storytelling in their presentations.

Figma AI

Figma has launched a suite of AI-powered features aimed at enhancing creativity and efficiency for designers. Now in limited beta, Figma AI includes Visual Search and AI-enhanced Asset Search to help users quickly find and reuse designs. These tools simplify common design tasks like text rewriting, image background removal, and interactive prototyping. With AI-generated realistic content, designers can create persuasive mockups effortlessly.

Between the lines: Just as we published the digest, news broke that Figma disabled its “Make Design” AI feature due to allegations of copying Apple’s Weather app designs. NotBoring Software founder Andy Allen discovered that Figma’s tool often replicated Apple’s designs, raising concerns about data training practices. Figma CEO Dylan Field denied these claims but acknowledged the need for better quality assurance. He announced that the feature will be temporarily disabled until thorough testing is completed to ensure its reliability and originality.

Runway Introduces Gen-3 Alpha

Runway has unveiled Gen-3 Alpha, the latest model in their AI series, featuring significant improvements in fidelity, consistency, and motion. Trained on both videos and images, Gen-3 Alpha powers Runway’s Text to Video, Image to Video, and Text to Image tools, along with advanced control modes like Motion Brush and Director Mode. This new model enhances the creative process with fine-grained control over structure, style, and motion.

Anthropic’s Claude 3.5 Sonnet

Anthropic has introduced Claude 3.5 Sonnet, the latest and most powerful AI model in its Claude series. Known for its nuanced understanding and natural tone, Claude 3.5 Sonnet is available for free on Claude.ai and in the Claude iPhone app, with enhanced access for Pro and Team subscribers. Anthropic also unveiled “Artifacts,” a feature allowing users to generate and edit documents and code in real time, enhancing productivity in tasks like legal drafting and business report writing.

Music Labels Sue AI Song Generators

The world’s leading music companies — Sony, Universal, and Warner — are suing AI song generators Suno and Udio for alleged copyright infringement. The lawsuits, filed in federal courts in Boston and New York, accuse these AI startups of exploiting the recorded works of artists ranging from Chuck Berry to Mariah Carey, seeking $150,000 per infringed work. Suno, partnered with Microsoft, and Udio, popularized by producer Metro Boomin, face allegations of their software “stealing” music to create similar tunes. Suno’s CEO defends the technology as generating new outputs rather than copying existing content, while the Recording Industry Association of America (RIAA) condemns such unlicensed services as exploitative and damaging to the promise of innovative AI.

Between the lines: The swift legal action by music labels highlights their proactive approach to AI-related challenges, which seems logical given current industry dynamics. Just as visual content creators, photographers, and publishers like The New York Times have sued AI companies to protect their works, music labels are defending their catalogs. It is worth noting, however, that legal actions involving photographers and designers are often driven by individuals or small communities, whereas in music and publishing it is large companies, such as Sony, Universal, and Warner, or publishers like The New York Times, that are taking legal steps. This distinction reveals different approaches across creative sectors to the challenges posed by AI.

Shutterstock’s AI Licensing Partnerships Garner $104 Million in 2023

Shutterstock Inc. has successfully ventured into AI licensing, generating $104 million in revenue last year by providing its extensive library of images and videos for AI model training. This business model has attracted major tech firms, including OpenAI and Meta, and most recently Reka AI, a startup specializing in multimodal language models.

Why it matters: By leveraging its vast media collection, Shutterstock has positioned itself as a crucial player in the AI industry. This move also underscores the potential of the dataset market, as there is a growing trend toward using legally sourced content for training AI models. As companies emphasize ethical AI practices, the demand for legal datasets is expected to rise, revealing significant opportunities in the dataset market despite its current small size.

Picsart Partnership with Getty Images

Picsart announced a partnership with Getty Images to launch a new AI model that provides commercially safe image generation for creators, marketers, and small businesses. This custom model, developed by Picsart’s AI lab, uses Getty Images’ licensed content to ensure high-quality, legally compliant visuals. Subscribers can create images with commercial rights and enhance them using Picsart’s editing tools, streamlining the content creation process.

The context behind: Getty Images has already partnered with other AI innovators like Runway and BRIA. These collaborations aim to democratize the creative process and enhance creative tools by combining AI-powered solutions with Getty’s extensive library of licensed content.

Former OpenAI Chief Scientist Launches AI Company

Ilya Sutskever, co-founder and former chief scientist at OpenAI, has launched a new AI company, Safe Superintelligence Inc. (SSI). Sutskever’s departure followed a disagreement with OpenAI leadership over AI safety approaches. SSI aims to balance rapid advancements in AI capabilities with robust safety measures, ensuring development remains secure and ethical.

Between the lines: Unlike OpenAI’s nonprofit origins, SSI is designed as a for-profit entity.

DMLA’s Webinar on The Future of Responsible GenAI

DMLA’s webinar on the Future of Responsible GenAI explored the transformative impact of generative AI on creative industries such as stock photography, filmmaking, advertising, news, and art. Led by moderator Mark Milstein, Co-Founder and Director of Business Development at vAIsual, the panelists examined emerging trends and challenges in the field, emphasizing the ethical and legal considerations tied to AI adoption.

For example, Vered Horesh, Chief of Strategic AI Partnerships at BRIA, detailed the company’s approach to ensuring the ethical use of generative AI by meticulously tracking the origins of generated content. Through advanced attribution technology, BRIA can trace each generated image back to its original visuals in the training dataset, promoting transparency and honoring the contributions of content creators.

The panel also explored how AI is employed to streamline costs and mitigate risks in creative projects, underscoring the necessity for responsible use of AI and training with proprietary data under human oversight.

Agency representatives discussed leveraging AI tools for content creation in the context of client-agency relationships. AI helps their creative teams quickly visualize concepts behind the scenes, while they remain cautious about the external use of AI-generated content for clients’ projects due to copyright complexities. Rohit Vaswani, Head of Delivery & PMO at Ogilvy Indonesia, shared that Ogilvy’s legal team, for example, has created guidelines that allow creative teams to use only approved and legally compliant tools.

The webinar concluded with insights into prospective legal frameworks that may further regulate AI practices, reflecting a growing shift towards ethical compliance in AI-driven content creation. Dmitry Shironosov, CEO of Everypixel, stressed the importance of building responsible AI models by actively involving creators in the development process and ensuring they are fairly compensated for their contributions.

Facebook’s Five Pillars of Responsible AI

Facebook has outlined its commitment to Responsible AI through five key pillars: Privacy & Security, Fairness & Inclusion, Robustness & Safety, Transparency & Control, and Accountability & Governance. The Responsible AI (RAI) team collaborates across disciplines to ensure AI systems are designed and used ethically, addressing issues like privacy, bias, and transparency. Initiatives include privacy-preserving technologies, fairness tools, adversarial testing, and clear AI model documentation.

Between the lines: This approach resonates strongly with the standards we previously described in our article on ethical datasets. We believe that ethical datasets, as well as AI models, should adhere to high standards of transparency, inclusivity, governance, and respect for data creators’ rights.
