AI 360° Video Production & Immersive AI Documentaries

London-based immersive content studio combining real-location 360° capture with AI-generated worlds — for documentaries, performance and experimental work that takes the viewer beyond what 360° cameras alone can reach.

Inside the Jungle — an experimental documentary on Amazonian shamanism, click to play in 360°

Inside the Jungle

An experimental immersive documentary set in the world of Amazonian shamanism — AI-generated environments woven with 2D location footage from the Amazon rainforest.

We've been producing immersive 360° content since 2016, and exploring AI-augmented 360° production since 2022. Based in London, with shoots across the UK and internationally, our practice combines real-location 360° capture with AI-generated worlds, panoramas and historical reconstructions — building experiences that go beyond what either technique can reach alone. AI 360° production is particularly well suited to documentary, where the immersive medium already gives the viewer the sense of being inside something real rather than watching it from outside; using AI to extend that anchor opens up storytelling territory that pure 360° capture can't access. As AI-generated content becomes ubiquitous, real-location 360° capture as the foundation matters more, not less — it's what gives the AI-extended scenes their weight.

What we make

Travel & Destination Documentaries

Real-location 360° capture extended through AI-generated environments — aerial maps, historical reconstructions, stylised dreamscapes that complement the on-the-ground footage. We've had a long-standing passion for VR 360° travel work; AI now lets us build the complementary worlds that transport the viewer further than the camera alone can. The Kyushu VR series is the lead reference for this approach. For tourism boards, destination marketing organisations, travel publishers and broadcasters with travel slates.

Cultural & Heritage Experiences

Historical reconstruction, archival extension, immersive companion content for exhibitions and venues, and performance documentation extended through AI-generated environments. AI 360° production is particularly suited to scenes that can't be filmed because they no longer exist or were never recorded — building immersive context around objects, sites and events. For museums, galleries, heritage organisations, opera and theatre houses, arts venues.

Music & Performance

AI-augmented 360° music videos and performances reimagined through generative environments. Capture in a black box studio, on stage with no audience, or in front of a live one — then rebuild the world around the performance to match its narrative, mood and concept. For artists, labels, music supervisors and festivals. See the Music page for full detail.

Brand & Experiential

Bespoke AI-augmented 360° content for brand campaigns, product launches and experiential activations — where visual ambition matters and the content needs to do something competitor work can't. For luxury brands, automotive, fashion houses and brand experience agencies.

Featured work

All four pieces below were originally mastered in full 8K resolution to hold up in VR headset playback. The clips embedded in the player above are shorter, lower-resolution versions for fast web delivery — full-resolution masters are available on request.

Kyushu VR Series — Travel Documentary

A three-episode immersive travel documentary covering Kyushu's nature, culture and history.

A three-episode immersive travel documentary covering Kyushu's nature, culture and history — Volcanoes of Kyushu, Yakushima and Nagasaki Peace Park. We combined on-the-ground 360° footage from across the region with AI-generated aerial maps and historical reconstructions, taking visual cues from Japan's anime tradition. The anime aesthetic served two purposes: a respectful nod to Studio Ghibli, whose Princess Mononoke was inspired by the forests of Yakushima, and a way to depict the destruction of the 1945 Nagasaki atomic bombing more tastefully than photographs of the aftermath would allow. The approach made it possible to build an immersive document of historical events for which there is little or no existing immersive record, in a register that's magical and poetic rather than journalistic.

Piano Dance — Music-Theatre Performance

A minimal performance piece featuring two performers in a black box studio, with sets entirely rebuilt through AI in post.

A minimal performance piece featuring two performers in a black box studio, captured with multiple 360° cameras. In post, we rebuilt the backgrounds and sets entirely through AI to add layers of mood and narrative to the performance. Recreating sets for theatre work like this requires close collaboration with the writers and directors of the piece — done well, it can transform a simple two-performer rendition into something that would have cost a fortune to stage physically. The technique also works as pre-visualisation for larger theatre productions, letting directors see how a staged version might feel in immersive form before committing to the build.

Clockwork Collective — Live Performance with AI Stage Sets

A music-theatre piece by a collective exploring AI in creative work, captured in front of a live audience.

A music-theatre piece by a collective exploring the role of AI in creative work, featuring a MIDI-controlled, custom-sculpted robot performing some of the music. We captured the performance in front of a live theatre audience while projecting AI-generated environments onto the stage's back and side screens to create an immersive set in the room itself. In post we took it further — rebuilding the theatre based on its real geometry, then morphing both stage and audience through different worlds. When the robot wakes up, the visuals shift into evolving AI environments meant to feel like being inside its mind.

Amazonian Shamanism — Experimental Documentary

A 10-minute immersive documentary built mostly from AI-generated environments, with 2D location footage from the Amazon rainforest.

A 10-minute immersive documentary built mostly from AI-generated environments, with 2D location footage from the Amazon rainforest woven through it. The challenge was finding ways to incorporate real video and stills into more fantastical scenarios in a way that felt congruent rather than collaged. AI let us depict the otherworldly qualities of the shamanic tradition — visionary entities and altered-state sensations that have no equivalent in conventional documentary footage — in a way that transports the viewer rather than merely describes the experience. Alongside the linear film, we produced a parallel interactive piece: a virtual-tour-style journey through a series of linked immersive environments that move from a recognisable jungle into increasingly fantastical territory.

Hybrid 360° Worldbuilding — Our Approach to AI 360° Production

Generative AI video has evolved fast, but on its own the resolution still isn't high enough for VR — which needs at least 8K to hold up in a headset without breaking the illusion. We've spent the last few years developing hybrid techniques that combine multiple methods to maintain that quality threshold across a full piece. We call our approach Hybrid 360° Worldbuilding: pure-prompt-to-VR-world isn't yet viable, but carefully directed integration of multiple capture and generation approaches is. This section outlines how that works in practice and where the craft sits.

Combining real 360° capture with AI-generated worlds

Most of our projects benefit from at least some real-world 360° capture, even when the final piece leans heavily on AI-built environments. The captured material anchors the viewer in something that feels real and believable, which makes the AI-extended scenes more convincing rather than less. We use the Insta360 Titan and Insta360 Pro 2 for capture: the Titan delivers higher quality and is the right call for stationary or rigged shots; the Pro 2 is lighter and more portable for run-and-gun documentary work. From there, we extend, augment or replace specific elements through AI: skies, backgrounds, historical reconstructions, surreal continuations, archival recreations. The craft is in directing how the captured and generated layers relate — colour, light, perspective, geometry, narrative beat — so the join is invisible to the viewer.

Where craft sits, and where AI sits

AI is a strong tool for visual extension, scene generation, archival recreation and stylistic transformation. It's not a substitute for narrative direction, location knowledge, real interviews, or the editorial judgment that makes a documentary or performance piece work. The craft is in directing the integration, not in operating the tools. AI also unlocks productions that would previously have been prohibitively expensive, or genuinely impossible — historical reconstructions where no footage exists, scenes that no camera could practically reach. We use these capabilities deliberately rather than because they're available: AI augmentation should reinforce the narrative or deepen understanding, not exist for its own sake.

What works well, and what doesn't

Generative 360° is genuinely strong for static panoramic environments, dreamlike or stylised aesthetics (partly why the anime-influenced approach worked for the Kyushu series), and recreating scenes that can't physically be filmed. It's weaker on temporal consistency, fast motion, and faithful documentary recreation where accuracy matters. Where a project requires faithful documentation, we use 360° cameras wherever possible. Where this isn't possible — historical events, lost spaces, internal or visionary experiences — we make the reconstruction explicit to the viewer rather than presenting it as captured fact. As AI-generated content becomes more photorealistic, the ethical imperative to be transparent about what is captured and what is reconstructed becomes more important, not less.

Why this matters now

The cost and ambition unlock is real. Visually ambitious immersive work that would have cost £100k+ in traditional production becomes possible at a fraction of the budget. Use cases that weren't viable before — heritage reconstruction, anthropological documentary, region-portrait travel, performance reimagined through generative environments — become commissionable propositions. As AI-generated content saturates the wider media landscape, real-location 360° capture as the foundation of an immersive piece becomes more valuable, not less: it's the anchor that makes the AI-extended world feel weighted rather than weightless. We expect that within a few years, text-prompt-driven generation of full interactive 3D worlds at VR-realistic resolution will be viable. We're not there yet — but the Hybrid 360° Worldbuilding techniques we've been refining are how to do this work now, while pointing toward where the medium is heading.

Who we work with

Tourism boards · destination marketing organisations · museums, galleries and heritage organisations · opera, theatre and arts venues · music labels, artists and festivals · brand experience and creative agencies · broadcasters and streaming platforms commissioning immersive content · cultural institutions running immersive programmes.

The body of AI 360° work above has been built largely through self-directed experimentation rather than commercial commission. We're actively looking to collaborate with partners who want to explore AI-augmented immersive content for the first time — if your organisation is curious about what's possible but unsure where to start, the conversation is the easiest first step.

Process

A typical AI 360° project runs in six stages.

  1. Brief & concept

     Discussion of the audience, format, story, and where AI augmentation lands creatively — what's captured live and what's generated, and how the two relate to the narrative.

  2. Pre-production

     References, location reconnaissance, AI workflow planning, asset development, production schedule. AI 360° projects need more pre-production thinking than pure 360° work because the generation pipeline is decided in advance, not improvised in post.

  3. Capture

     Real-location 360° shoot using the appropriate rig — Titan for higher-quality stationary work, Pro 2 for more mobile shoots — alongside any 2D camera coverage that adds to the source material.

  4. AI generation & integration

     Building the panoramic environments, historical reconstructions or scene extensions, integrated with the captured footage in regular feedback rounds with the client. This is the longest single phase and benefits from active client involvement.

  5. Edit, audio & finishing

     Stitching, edit, spatial audio (where required, with partner specialists), colour grade, final composite at 8K to maintain VR-headset quality.

  6. Delivery

     Encoding for VR headsets, mobile, desktop, YouTube 360°, Meta Quest, Apple Immersive Video, MP4 social cuts, and any platform-specific requirements.

FAQ

What is AI 360° video production?

AI 360° video production combines real-location 360° camera capture with AI-generated panoramic environments, scenes and assets to build immersive experiences that wouldn't be possible with either technique alone. The result is a fully spherical experience that can be viewed in VR headsets, on mobile or desktop — with the visual ambition that AI enables, anchored in the credibility that real capture provides.

How does AI 360° production differ from regular 360° production?

Regular 360° production is fundamentally a documentary medium — the camera captures what's physically there. AI 360° production extends that by allowing scenes to be built, augmented or reimagined: historical reconstructions where no footage exists, environments that are too costly or impossible to film, dreamlike or stylised worlds grounded in narrative meaning rather than physical reality. Most of our projects combine both, with real capture as the foundation and AI extending where it adds something the lens can't reach.

Can you produce fully AI-generated 360° content with no live capture?

Yes — we make experimental pieces where the entire visual world is AI-generated. In practice though, even our most AI-heavy projects benefit from at least some real-world anchor: a recorded voice, location footage from a 2D camera, or a single 360° environment that grounds the experience. Pure generation tends to feel weightless without something real to hold it.

Can you augment 360° footage we've already shot with AI?

Yes. We can extend, augment, replace or stylise elements of existing 360° footage — replacing skies and backgrounds, adding scene continuations, building historical reconstructions around captured locations, or transforming the visual style. The integration work is the craft; the existing footage often becomes the anchor that the AI-generated elements relate to.

Who owns the rights to the AI-generated elements?

This is an evolving area of law. Under current UK and US law, content generated purely by AI typically isn't separately copyrightable because it lacks human authorship. However, the assembled final work — combining real capture, creative direction, editing and AI-generated elements into a single piece — is a creative work authored by us, and ownership transfers to the commissioning body under our standard production agreement. For projects where rights are particularly important, we recommend specialist IP legal advice.

What kinds of projects work best with this approach, and what doesn't?

The approach works best where visual ambition or scenes that can't be physically captured are central to the brief — historical reconstruction, anthropological storytelling, performance reimagined as immersive experience, surreal or stylised travel, brand-led experiential work. It works less well where strict documentary accuracy is the core requirement, where temporal consistency is critical (long takes of fast motion), or where the budget and timeline don't allow for the iteration the AI integration needs. We're honest with clients during scoping about which side of that line a project sits on.

About this side of the work

Immersive content for VR and projection-based experiences is still in its early days. Like cinema in its first decades, the people working in this medium are still figuring out what it actually is and what it's best used for. Bringing AI into the creative process opens up new areas of that experimentation. Immersive content is by its nature a multi-medium form — combining captured footage, panoramic environments, audio, interaction — and AI extends what's possible further still: realising experiences and worlds that would otherwise be confined to the imagination, and finding new ways to root them in real places, real performances and real subject matter. The work on this page is a body of practice that's still actively evolving, alongside the production company Promo Video and the immersive studio Catch Reality within which it sits.

Have an AI 360° project in mind?

We're particularly interested in commercial collaborations that push the boundaries of what AI-augmented immersive content can do — documentary, performance, heritage, brand experience. Book a 20-minute scoping call to talk through what's possible.

Discuss a project →