Generative Video Models

Generative video models: an AI development framing from LEGS

Generative video models are machine learning systems trained to produce video clips from text prompts, image references, or both. In animation production they are used for ideation, animatic frames, and previsualisation.

The current generation of video models (Runway, Sora, Kling, Veo and others) typically produces clips of a few seconds at a time, with consistency that varies between shots. They are most useful at the front of the pipeline, where committing to the wrong idea carries the highest downstream cost, so cheap iteration pays off. In our practice, a small team can work through many variations of a scene in a single sitting, then choose the strongest one to take into a traditional pipeline.
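
As a minimal sketch of that iteration loop, the Python below batch-renders style and seed variations of one scene prompt and collects them for review. Everything in it is a hypothetical stand-in: StubVideoClient, its generate method, and the duration_s parameter are illustrative assumptions, not any vendor's real SDK.

```python
"""Minimal sketch of an upstream ideation loop: render many cheap
variations of one scene prompt, then shortlist by eye. The client
class, method names, and parameters are hypothetical stand-ins,
not any vendor's real SDK."""
from dataclasses import dataclass


@dataclass
class Variation:
    prompt: str
    seed: int
    clip_path: str


class StubVideoClient:
    """Placeholder for a real text-to-video API client."""

    def generate(self, prompt: str, seed: int, duration_s: int) -> str:
        # A real client would block on a render job and return a file
        # path or URL; here we just fabricate a predictable local path.
        return f"renders/{abs(hash((prompt, seed)))}_{duration_s}s.mp4"


def ideation_pass(client, base_prompt: str, styles: list[str],
                  seeds: list[int]) -> list[Variation]:
    """Render every style x seed combination of one scene prompt."""
    variations = []
    for style in styles:
        for seed in seeds:
            prompt = f"{base_prompt}, {style}"
            clip_path = client.generate(prompt=prompt, seed=seed, duration_s=4)
            variations.append(Variation(prompt, seed, clip_path))
    return variations


if __name__ == "__main__":
    clips = ideation_pass(
        StubVideoClient(),
        base_prompt="a stop-motion fox crossing a moonlit rooftop",
        styles=["hand-drawn linework", "claymation texture", "painterly 2D"],
        seeds=[1, 2, 3],
    )
    print(f"{len(clips)} variations rendered for review")
```

The review step stays human: the loop only makes the shortlist cheap to produce.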

At Myth Studio, generative video sits inside hybrid AI animation, where it is one tool among many: 2D, 3D, stop-motion, hand-drawn, and AI-generated frames are combined inside a single production. The work on LEGS uses generative models alongside hand-keyed character animation, with human direction at every stage.

Generative video is not a replacement for traditional production. The output is short, the consistency limited, and the control coarse compared with a hand-keyed shot. The right use is upstream: research animatics, concept tests, and reference frames that inform the production work that follows.

We covered the wider picture of how artists work with these tools in "How artists are using AI without losing the craft".

Sources

Academic papers, recognised industry standards, and canonical industry texts that back up claims in this entry.

  1. Liu et al. (2025). "Generative AI for Character Animation: A Comprehensive Survey of Techniques and Applications." arXiv. Supports: generative video models definition.
  2. Singer, U., Polyak, A., Hayes, T., et al. (2022). "Make-A-Video: Text-to-Video Generation without Text-Video Data." arXiv (Meta AI / ICLR 2023). Supports: text-to-video diffusion foundations.

Frequently asked questions

Are generative video models broadcast-ready?

Not by themselves. Output length, frame consistency, and brand fidelity are all limits today. They are excellent for animatics, concept tests, and reference, and they feed traditional production downstream. We mix generated frames with hand-keyed animation in projects like LEGS, but the broadcast master is still made under human direction.

Which models do you use?

We pick by job: Runway and Veo for fast clips, Kling for character consistency, Sora-class models when access allows, and image models like Flux and Midjourney for stills that feed the video models. The toolkit moves quickly; the pipeline shape (brief, animatic, production, finishing) stays the same.
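
As a sketch of how that per-job choice can be made explicit, the lookup below encodes the preferences named above as data. The job labels and the availability check are assumptions for illustration, not a fixed house list.

```python
# Illustrative sketch: table-driven model selection per job type.
# The orderings mirror the preferences described above; treat them
# as examples, since the toolkit changes quickly. Job labels and
# the availability check are assumptions for this sketch.
MODEL_PREFERENCES: dict[str, list[str]] = {
    "fast_clip": ["runway", "veo"],
    "character_consistency": ["kling"],
    "stills_for_video": ["flux", "midjourney"],  # image models feeding video
}


def pick_model(job: str, available: set[str]) -> str:
    """Return the first preferred model for a job that is accessible."""
    for model in MODEL_PREFERENCES.get(job, []):
        if model in available:
            return model
    raise LookupError(f"no available model for job {job!r}")


# Sora-class access is gated, so availability is checked at call time.
print(pick_model("fast_clip", {"veo", "kling"}))  # -> veo
```

Keeping the preference table as data rather than branching logic makes it easy to update as models change, which they do quickly.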

Who owns the output?

Rights vary by tool and by tier. We track licence terms per model and only ship work where the deliverable is cleanly licensed for commercial use. Brand-safe usage is a Myth Labs specialism. See the Myth Labs animatics service for production-grade workflows.