AI-Native & Innovation

Upscaling and Super-Resolution

Upscaling: a high-resolution frame from LEGS

Upscaling and super-resolution are AI techniques that increase the resolution of an image or video, often hallucinating plausible new detail. In animation production they are used to take generated frames from a model's native output up to broadcast or cinema resolution.
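For contrast, the simplest non-learned upscale just duplicates pixels: it cannot hallucinate detail, wanted or unwanted. A minimal sketch of that baseline (the function name is ours, and the "frame" is a toy 2D list, not a real image format):

```python
def nearest_neighbour_upscale(frame, factor):
    """Duplicate each pixel factor x factor times.

    This invents nothing: every output pixel is a copy of a source pixel.
    AI super-resolution replaces this duplication with learned detail,
    which is where the plausible new detail (and the risk) comes from.
    """
    return [
        [px for px in row for _ in range(factor)]  # widen the row
        for row in frame
        for _ in range(factor)  # repeat each widened row
    ]

# A 2x2 "frame" doubled to 4x4: pure pixel duplication.
print(nearest_neighbour_upscale([[1, 2], [3, 4]], 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```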

Many generative video models currently produce output below broadcast resolution. A modern model might output 720p or 1080p natively; a typical UK broadcast deliverable is 1920×1080 HD or 3840×2160 UHD per the broadcaster's published delivery spec, and cinema deliverables can require 4K or higher. In most cases, super-resolution bridges that gap with quality good enough for finishing.
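The gap is easy to express as a per-axis scale factor. A quick sketch using the resolutions mentioned above (the helper is ours, for illustration only):

```python
def scale_factor(native, delivery):
    """Per-axis upscale factor from a model's native output to a delivery spec."""
    native_w, native_h = native
    delivery_w, delivery_h = delivery
    return (delivery_w / native_w, delivery_h / native_h)

# 1080p native output up to a UHD broadcast deliverable: 2x on each axis.
print(scale_factor((1920, 1080), (3840, 2160)))  # (2.0, 2.0)

# 720p native output up to an HD deliverable: 1.5x on each axis.
print(scale_factor((1280, 720), (1920, 1080)))  # (1.5, 1.5)
```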

Inside our pipeline, upscaling sits late, after the creative work is locked. The shot is generated at the model's native resolution where iteration is fast, then up-rated for delivery as part of the compositing and finishing pass. On hybrid AI work like LEGS, the 3D-rendered shots ship at full resolution and the AI-generated shots get the upscale treatment before they meet in the final cut.

The honest limit is artefact handling. Super-resolution can sharpen unwanted detail (compression artefacts, model glitches) along with the wanted detail, so the input quality matters. Garbage upscaled is sharper garbage. Production work pairs the upscale with a clean-up pass.
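The pairing has an obvious ordering: clean up first, then upscale, so the model has nothing unwanted to sharpen. A toy sketch of that ordering (both passes are hypothetical placeholders standing in for real tools, not any specific product's API):

```python
def clean_up(frame):
    """Hypothetical clean-up pass: drop compression artefacts and glitches."""
    return [px for px in frame if px != "artefact"]

def upscale(frame):
    """Hypothetical super-resolution pass; here it just tags the frame."""
    return frame + ["upscaled"]

def finishing_pass(frames):
    # Clean before upscaling: the reverse order would sharpen the garbage.
    return [upscale(clean_up(f)) for f in frames]

print(finishing_pass([["px", "artefact", "px"]]))
# [['px', 'px', 'upscaled']]
```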

Myth Labs operates production upscaling for brand campaigns at broadcast resolution.


Sources

Academic papers, recognised industry standards, and canonical industry texts that back up claims in this entry.

  1. Learning Real-World Super-Resolution Models for Animation Videos. Li et al., arXiv, 2022. Supports: real-world animation video SR.
  2. Anime Production Inspired Real-World Anime Super-Resolution. Wang et al., arXiv, 2024. Supports: production resolution upscaling gap.

Frequently asked questions

Is super-resolution good enough for cinema delivery?

For background and secondary shots, yes. For hero shots in close-up the bar is higher, and we usually render those natively. The split lets us combine the speed of AI generation with the fidelity cinema delivery requires, without compromising on either.

Does this affect frame rate?

It can. Upscaling and frame interpolation often happen in the same pass, taking a low-res 24 fps generation up to high-res 50 fps for broadcast. They are sister techniques and frequently ship together.
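The 24 fps to 50 fps conversion means most delivered frames are synthesised rather than sourced. A quick sketch of that arithmetic (function names are ours):

```python
def frames_at(fps, duration_s):
    """Frame count for a clip of the given duration at the given rate."""
    return round(fps * duration_s)

def synthesised_frames(src_fps, dst_fps, duration_s):
    """How many delivered frames the interpolator must invent."""
    return frames_at(dst_fps, duration_s) - frames_at(src_fps, duration_s)

# A 10-second 24 fps generation delivered at 50 fps:
# 500 frames out, only 240 from the source, so 260 are interpolated.
print(synthesised_frames(24, 50, 10))  # 260
```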

Can you upscale archive footage too?

Yes. The same models that upscale AI output also work on legacy footage. We use this on heritage brand work where existing assets need to be brought up to current broadcast specifications without a re-shoot.