From Kinoautomat to AI Cinema: Interactive Films and the Prospect of Their Real-Time Generation

Abstract

This article explores the evolution of interactive cinema from early experiments to future possibilities enabled by artificial intelligence. Beginning with Kinoautomat (1967), the world's first film that let audiences vote on narrative outcomes, and Netflix's Black Mirror: Bandersnatch (2018), which revived interactive storytelling on a mass digital platform, we trace the development of techniques that blend cinematic and game-like elements. Recent generative video models (e.g., OpenAI Sora, Google Veo) demonstrate the feasibility of short AI-produced clips but remain constrained by limited narrative coherence, inconsistent characters, and high computational cost. We provide technical estimates for real-time generation of a one-hour film, showing that present-day hardware would require clusters of GPUs, whereas hardware projected for the mid-2030s may make the task feasible on a single GPU. Alongside these technical prospects, we identify creative, ethical, social, and psychological challenges, ranging from questions of authorship and the rights of digital actors to the transformation of collective viewing experiences. We conclude that personalized, real-time AI cinema is a plausible new art form that may emerge within the next two decades.
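
The hardware claim in the abstract rests on comparing the compute demanded by real-time generation against per-GPU throughput. The short Python sketch below shows the form such a back-of-envelope estimate can take; every parameter value in it (frames per second, FLOPs per frame, per-GPU throughput, utilization, scaling rate) is a hypothetical placeholder and not a figure taken from the article.

    # Illustrative estimate of how many GPUs real-time video generation might need.
    # All numeric assumptions below are hypothetical placeholders.

    FPS = 24                 # target frames per second (assumed)
    FLOPS_PER_FRAME = 2e15   # assumed compute per generated frame, in FLOPs
    GPU_PEAK_FLOPS = 1e15    # assumed peak throughput of one present-day GPU, FLOP/s
    GPU_UTILIZATION = 0.4    # assumed fraction of peak sustained in practice

    required_flops_per_second = FPS * FLOPS_PER_FRAME
    effective_gpu_flops = GPU_PEAK_FLOPS * GPU_UTILIZATION
    gpus_needed_today = required_flops_per_second / effective_gpu_flops

    print(f"Required throughput: {required_flops_per_second:.2e} FLOP/s")
    print(f"GPUs needed today (assumed numbers): {gpus_needed_today:.0f}")

    # If per-GPU throughput roughly doubles every two years, a GPU ~12 years
    # from now delivers about 2**6 = 64x more compute under the same assumptions.
    gpus_needed_mid_2030s = gpus_needed_today / 2**6
    print(f"GPUs needed in ~12 years (same assumptions): {gpus_needed_mid_2030s:.1f}")

With these placeholder numbers the estimate comes out to roughly a hundred present-day GPUs versus one or two mid-2030s GPUs; the point of the sketch is the structure of the calculation, not the specific values.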
