Amazon Web Services has announced the launch of AWS Elemental Inference, a fully managed artificial intelligence (AI) service designed to transform and optimise live and on-demand video broadcasts in real time for social and mobile platforms.
The service enables broadcasters and streamers to adapt landscape video into vertical formats for platforms including Instagram Reels, YouTube Shorts, and TikTok without manual post-production or AI expertise.
AWS said most broadcasts are still produced in landscape format, while audiences increasingly consume content in vertical formats on mobile devices. Converting content manually can delay distribution and reduce the ability to capture viral moments, the company said.
Customers can enable the service through the AWS Elemental MediaLive console or integrate it using AWS Elemental MediaLive APIs. Pricing is consumption-based: customers pay only for the features used and the video processed, with no upfront commitments.
Deployment Options and Workflow Integration
AWS Elemental Inference can be deployed either through a standalone interface in the AWS Management Console or integrated directly into an AWS Elemental MediaLive channel configuration.
To begin, users navigate to the AWS Management Console and select AWS Elemental Inference, the company shared in a blog post. From the dashboard, they create a feed, which serves as the top-level resource for AI-powered video processing. A feed contains feature configurations and transitions from a 'Creating' state to 'Available' once ready.
After feed creation, users configure outputs for vertical video cropping or clip generation. For cropping, the service automatically manages parameters based on video specifications. For clip generation, users add an output, assign a name, select 'Clipping' as the output type and set the status to 'Enabled'.
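The feed-and-output workflow described above can be sketched in code. This is a minimal illustration only: the service is newly announced, so the request shapes below are assumptions modelled on the console steps AWS describes (feed as top-level resource, 'Creating' to 'Available' states, a 'Clipping' output type set to 'Enabled'), not a documented SDK interface.

```python
# Hypothetical sketch of the AWS Elemental Inference workflow described above.
# The dict shapes are assumptions inferred from the console steps; consult the
# official API reference for the real operations and parameters.

def build_feed_request(name: str) -> dict:
    """A feed is the top-level resource for AI-powered video processing.

    After creation it transitions from a 'Creating' state to 'Available'.
    """
    return {"Name": name}


def build_clipping_output(name: str) -> dict:
    """An output configured for clip generation, per the console steps:
    add an output, assign a name, select 'Clipping' as the output type,
    and set the status to 'Enabled'.
    """
    return {
        "Name": name,
        "OutputType": "Clipping",
        "Status": "Enabled",
    }


if __name__ == "__main__":
    feed = build_feed_request("live-sports-feed")
    output = build_clipping_output("highlight-clips")
    print(feed, output)
```

For vertical cropping, by contrast, no equivalent parameters are needed: AWS says the service manages cropping configuration automatically based on the video's specifications.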
Alternatively, AWS Elemental Inference can be activated within an existing AWS Elemental MediaLive channel. The MediaLive console now includes a dedicated AWS Elemental Inference tab displaying the service Amazon Resource Name, data endpoints and feed output details, including enabled features such as Smart Crop and their operational status. AWS said this approach allows AI capabilities to run in parallel with video encoding without requiring changes to existing architecture.
Real-Time AI Processing and Launch Features
According to AWS, the service uses an agentic AI application that analyses video in real time and applies optimisations automatically. Vertical cropping and clip generation operate independently, executing multistep transformations without human intervention.
AWS Elemental Inference applies AI features in parallel with live video, achieving 6–10 seconds of latency compared with the minutes required by traditional post-processing workflows. The company described this as a "process once, optimise everywhere" approach, enabling multiple AI capabilities to run simultaneously on the same video stream without reprocessing.
The service is powered by fully managed foundation models that are automatically updated and optimised, removing the need for dedicated AI teams. Features include AI-powered vertical video creation in a 9:16 aspect ratio that tracks subjects and maintains key action in frame, and clip generation with metadata analysis that detects and extracts highlight moments from live content, such as game-winning plays in football or basketball.
AWS said additional features, including tighter integration with core AWS Elemental services and monetisation capabilities, will be introduced later this year. The service will be available in four AWS regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Mumbai).