Adobe has announced a significant expansion of its Firefly platform, extending its AI-assisted content generation capabilities to mobile devices. The new Firefly app, now available on iOS and Android, allows users to generate and edit images and videos using AI directly from their phones. The mobile app complements the existing web version and integrates with Adobe Creative Cloud, enabling seamless project continuity across devices and applications.
The update introduces Firefly Boards, now in public beta, which adds video support to the platform’s AI-first moodboarding interface. This development enables creative professionals to collaborate and iterate across both image and video formats using generative AI tools.
Mobile access to AI-driven creative tools
With the launch of the Firefly mobile app, creators can now generate images and videos using text prompts, transform still images into video clips, remove or insert objects, and expand images using AI-generated content. Features such as Generative Fill, Generative Expand, and Style Reference are included, offering a wide range of creative controls on the go.
Users can work with Adobe’s Firefly AI models as well as models from external providers including OpenAI, Google (Imagen 3 and 4, Veo 2 and 3), and others. All assets generated in Firefly are automatically synced with Creative Cloud, allowing creators to begin work on a mobile device and continue seamlessly on desktop applications such as Adobe Photoshop and Premiere Pro.
Firefly joins Adobe’s broader suite of mobile creative apps, including Photoshop, Lightroom, and Adobe Express. These applications share core technologies with Adobe’s desktop tools, supporting professional-quality content creation on mobile platforms.
Firefly Boards: a new approach to collaborative ideation
Firefly Boards introduces a new way for creative teams to explore and iterate on ideas collaboratively across multiple media formats. The public beta includes video generation features powered by Adobe’s Firefly Video Model and other third-party models such as Google’s Veo 3, Luma AI’s Ray2, and Pika 2.2.
In addition to image and video generation, Firefly Boards includes conversational AI editing capabilities. Users can refine visuals through text-based prompts using tools like Black Forest Labs’ Flux.1 Kontext and image models from OpenAI. The platform supports rapid iteration and concept exploration at scale, making it suitable for collaborative workflows in both commercial and creative settings.
Broader integration of generative AI models
Adobe continues to expand its generative AI ecosystem by incorporating models from a growing list of partners. New additions include Ideogram, Luma AI, Pika, and Runway, alongside earlier integrations with OpenAI, Google, and Black Forest Labs.
These models are being introduced first through Firefly Boards and will soon be accessible across the broader Firefly app. Among the newly supported tools are Flux.1 Kontext (Black Forest Labs), Ideogram 3.0 (Ideogram), Ray2 (Luma AI), 2.2 Text-to-Video (Pika), Gen-4 Image (Runway), and Google’s Imagen 4 and Veo 3.
This expansion allows creators to experiment with a wide range of aesthetic styles and media formats, offering greater flexibility in visual storytelling and content production.
Content credentials and responsible AI use
To support transparency and protect creative rights, Firefly automatically attaches Content Credentials to AI-generated content. These metadata tags record whether content was generated using Adobe’s own models or those of its partners, supporting accountability and traceability.
Adobe states that this approach aligns with its commitment to supporting creators and promoting responsible AI use. The company continues to position Firefly as a platform that respects creative authorship while offering flexible, collaborative tools for ideation and production.