AI Everywhere: Google Cloud on Veo, Gemini, and the Next Wave of Cloud-Native Broadcast

Chris Hampartsoumian, Customer Engineer,
Media & Entertainment, Google Cloud
Chris Fellows, Director of Global Solutions Engineering, Zixi

Overview
Chris Fellows from Zixi sits down with Chris Hampartsoumian, Customer Engineer on the Media and Entertainment team at Google Cloud, to discuss how rapidly evolving AI models and cloud infrastructure are beginning to reshape broadcast and live video workflows.

Chris H. shares his perspective from inside Google’s Media and Entertainment organization, where tools such as Gemini, Imagen, and Veo are evolving at an unprecedented pace. In just the past year, Google has released multiple generations of Gemini models alongside diffusion models capable of generating images and video from prompts, dramatically expanding what media organizations can automate and create.

The conversation explores how these technologies could impact every stage of the broadcast value chain, from archive search and metadata enrichment to creative production, automated ad insertion, and personalized advertising. Chris H. also describes hands-on experimentation at IBC, including a Google Cloud Hackfest where teams produced AI-generated advertisements using Veo and dynamically inserted them into live streams using Ad Manager and Video Stitcher.

They also discuss the continued shift toward cloud-native production models and the role of edge infrastructure such as Google Distributed Cloud in remote production environments. As AI capabilities continue to expand, the discussion highlights why the most effective way for media organizations to understand these tools today is simply to start experimenting with them.

Key Takeaways

  • AI will affect every part of the broadcast workflow. Rather than transforming a single stage of media operations, AI is beginning to influence the entire chain, including content discovery, archive search, metadata generation, creative workflows, production assistance, and monetization.

  • Rapid model innovation is accelerating experimentation. In the past year alone, Google has released multiple Gemini versions along with new diffusion models like Imagen for image generation and Veo for video creation. The pace of development is pushing media companies to move from passive observation to hands-on experimentation.

  • AI-powered search unlocks massive content archives. By extracting video embeddings and storing them in vector databases, organizations can search thousands of hours of archived content visually and contextually, enabling entirely new discovery workflows across large media libraries.
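The embedding-search idea behind this takeaway can be sketched in a few lines. This is a minimal illustration, not a production pipeline: it assumes clip embeddings have already been extracted by some multimodal model (the random vectors below are stand-ins), and in practice a managed vector database would replace the brute-force similarity scan.

```python
import numpy as np

# Hypothetical pre-computed clip embeddings, one row per archive clip.
# Shape: (num_clips, dim). Random values here are purely illustrative.
rng = np.random.default_rng(0)
clip_embeddings = rng.normal(size=(1000, 128))
clip_embeddings /= np.linalg.norm(clip_embeddings, axis=1, keepdims=True)

def top_k_clips(query_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k archive clips most similar to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = clip_embeddings @ q           # cosine similarity (unit vectors)
    return np.argsort(scores)[::-1][:k]    # highest similarity first

# A text or image query would be embedded into the same vector space;
# here we simulate it with another random vector.
query = rng.normal(size=128)
print(top_k_clips(query))
```

The same nearest-neighbour lookup is what a vector database performs at scale, typically with approximate indexes so that thousands of hours of footage can be searched in milliseconds.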

  • Cloud infrastructure enables scalable live experimentation. At IBC, Google demonstrated how distributed cloud environments can ingest multiple live feeds, process them in the cloud, and enable collaborative experimentation. In one example, teams created AI-generated ads with Veo and dynamically stitched them into live streams.

  • Edge and cloud are converging for remote production. Hybrid deployments using edge clusters with centralized cloud processing are enabling new production models where contribution happens on site while switching, processing, and distribution scale in the cloud.

  • AI could reshape the future of advertising. As AI-generated content becomes more accessible, broadcasters may be able to create multiple ad variants dynamically or generate personalized creative tailored to individual viewers.

  • AI-assisted development is accelerating media engineering. Tools like Gemini are already helping engineers generate infrastructure code and automate operational tasks, allowing smaller teams to build and deploy complex media workflows faster.

Contact our Sales team to find your perfect solution.