The visionary behind the acclaimed series “Squid Game,” Hwang Dong-hyuk, has made a significant leap into technology by partnering with TwelveLabs. His studio, Firstman, has invested $3 million in the company, which builds AI technology tailored to the entertainment industry.
TwelveLabs, co-founded by Jae Lee and Soyoung Lee, is based in San Francisco. The company works with studios, streamers, filmmakers, and broadcast networks on a system that “indexes and enriches video metadata at the scene level.” This lets editors, directors, and producers work faster and more accurately without giving up creative control.
Hwang expressed his excitement about the partnership, stating, “Storytelling is becoming increasingly global, visual, and fast-paced. Creators who adapt to these changes will define the future of entertainment. I believe that technologies like TwelveLabs will be crucial in transforming concepts into fully realized stories in the rapid timeline expected by audiences today.”
He further emphasized, “AI tools are unveiling new pathways to create cinematic magic that we could scarcely have imagined a few years back. For Firstman Studio and myself, it is about allowing filmmakers to dedicate more time to the artistry, emotion, and unique magic they can bring to their work.”
TwelveLabs takes a unique stance in the AI landscape by focusing not on generating entirely new content but on enhancing pre-existing footage. “Film and television archives hold billions of dollars in underutilized footage. In many instances, less than 5% of this material gets repurposed due to the slow and fragmented processes involved in searching and preparing it for use,” according to the company. Their advanced video foundation models can efficiently search through hours of footage by leveraging visual, audio, and contextual cues concurrently, thus drastically reducing the time needed to locate a particular scene and prepare usable files.
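To make the idea of scene-level, multimodal search concrete, here is a minimal illustrative sketch: each scene carries tags for visual, audio, and contextual cues, and a query is scored against all three modalities at once. The `Scene` class and `search` function are hypothetical stand-ins for this article, not TwelveLabs’ actual models or API, which operate on learned video embeddings rather than hand-written tags.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """One indexed scene: clip ID, start/end timecodes, per-modality tags."""
    clip_id: str
    start: float
    end: float
    visual: set = field(default_factory=set)   # e.g. objects, settings
    audio: set = field(default_factory=set)    # e.g. dialogue keywords, sounds
    context: set = field(default_factory=set)  # e.g. tone, emotional weight

def search(index, query_terms):
    """Rank scenes by how many query terms match across all modalities."""
    results = []
    for scene in index:
        tags = scene.visual | scene.audio | scene.context
        score = len(tags & set(query_terms))
        if score:
            results.append((score, scene))
    results.sort(key=lambda pair: -pair[0])
    return [scene for _, scene in results]

# A toy two-scene index standing in for hours of footage.
index = [
    Scene("ep1", 12.0, 45.5, visual={"playground", "night"},
          audio={"whisper"}, context={"tense"}),
    Scene("ep1", 120.0, 150.0, visual={"office"},
          audio={"phone"}, context={"calm"}),
]

hits = search(index, ["tense", "night"])
print(hits[0].start)  # the playground scene matches on two cues and ranks first
```

The point of the sketch is the concurrent use of all three cue types: a query like “tense night scene” matches partly on visuals and partly on context, which is what lets a single search replace separate passes over transcripts, shot logs, and notes.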
Soyoung Lee, co-founder and chief go-to-market officer of TwelveLabs, pointed out, “Some of the most valuable footage goes unutilized simply because the retrieval process is too lengthy. We aim to make video content instantly searchable and ready to use on a large scale, allowing media organizations, companies, and creators to extract greater value from their collections.”
Below is an insightful Q&A from Variety featuring Jae Lee, CEO and co-founder of TwelveLabs, discussing the company’s mission and its collaboration with Hwang.
How did Director Hwang become involved with TwelveLabs, and what motivated the investment?
Director Hwang has consistently shown an affinity for innovation, both in storytelling and in the tools employed to manifest his narratives. When we first connected, he expressed interest in how video AI could comprehend the emotional and narrative nuances of scenes, extending beyond basic elements such as dialogue or objects. This ability resonated with him as a filmmaker. After seeing what our technology can accomplish, he recognized its value for creators eager to devote less time to technical challenges and more time to the essence of storytelling. This shared vision naturally led to his investment through Firstman Studio.
What is the overarching mission of TwelveLabs?
Our fundamental goal at TwelveLabs is to index every video in the world and make it as comprehensible as written text. Currently, over 90% of the world’s data consists of video, much of which remains inaccessible, be it in Hollywood film archives, sports storage facilities, or corporate libraries. We aim to change that.
By giving machines the capability not only to interpret video content but also to grasp context, such as tone and emotional weight, we free humans to concentrate on the creatively significant aspects. We envision transforming video from a static archive into a dynamic resource that fuels storytelling, exploration, and entirely new experiences.
In what ways does this technology benefit creators like Hwang?
For filmmakers such as Director Hwang, time is often the most precious commodity. Many hours are expended on tasks like organizing dailies, reworking archival footage, and validating rights before a project can progress. The technology developed by TwelveLabs minimizes this friction, accelerating the process of identifying the right scenes, addressing potential issues early, and clearing pathways for creators to channel more energy into emotional depth, visual framing, and narrative arcs.
This innovation significantly enhances the creative workflow, enabling storytellers to spend more time on the unique aspects of their craft—deciding what resonates emotionally and what encapsulates the essence of a story. This translates into reduced overhead and greater creative freedom for professionals like Director Hwang and his company.
How is TwelveLabs addressing concerns about creators collaborating with AI-driven companies?
Much of the apprehension stems from generative AI technologies that aim to replace human creativity by fabricating new content from scratch. That is not our approach at TwelveLabs.
We focus on augmenting the existing materials that creators have at their disposal. Our models are designed to index, search, and comprehend footage, allowing editors, directors, and producers to streamline their workflows. Decisions regarding what material to utilize, how to frame it, and the story’s arc remain entirely human.