Strada's Vision for AI in Video Production: Insights from Michael Cioni

Published in Exclusive Interviews

This interview was published in partnership with the Telly Awards

In a candid interview, Michael Cioni, co-founder of Strada, explores the impact of AI on video production and how Strada's cloud-based platform leverages AI to optimize workflows and empower creators. He shares the journey from startup to innovation leader, tackling both challenges and goals as Strada positions itself in the evolving AI landscape.

PH: What inspired you and your brother Peter to found Strada? Can you share the journey that led to its creation and how you identified the need for such a platform in the industry?

Michael Cioni: After starting 3 businesses in my career, I’ve learned that being honest about your own strengths and weaknesses is crucial to achieving any form of success.  For me personally, I’ve learned that I do my best work when I’m able to lead small, hungry teams of innovators and creatives.  A notable example is when my brother Peter and I started the post house Light Iron in 2009 (acquired by Panavision in 2015); it was at Light Iron that we achieved success by observing a 3-pronged structure of core leadership:

  • Vision & innovation strategy
  • Finance & business strategy
  • Design & engineering strategy

I serve as the visionary lead for our product roadmap, Pete leads our financial business strategy, and our cousin, Austin Case, serves as our head of engineering.  Looking back on 27 years of business experience in media and entertainment, I’ve found that my mission hasn’t changed all that much: I remain focused on exploring, adopting, and inventing new technology to help serve the storytelling process.  2 years ago, AI technologies began to rapidly encroach on (and in some cases threaten) the creative industry, so Pete, Austin, and I left our previous roles (Netflix and Adobe, respectively) to find a more practical way that AI can be used to serve creatives.  Strada is that vision coming to life.

PH: Strada aims to empower creative professionals to work entirely from the cloud. What do you see as the biggest advantages of cloud-based workflows compared to traditional methods in film production and post-production?

Michael Cioni: I believe traditional desktop applications are entering the final years of their dominance, and that a decade from now, users in every market will begin to require their tools to be web-based for the benefits the cloud brings to speed, flexibility, and collaboration.  Most desktop applications suffer from limitations that will only compound in the future, such as:

  • desktop apps are unable to fully leverage cloud computing (limiting render, storage, and processing speeds to the specs of a local machine)
  • desktop apps are slower to update (e.g., a bug fix or feature update in cloud native tools is instantly deployed for every user)
  • when it comes to the value of AI tools, desktop apps are reliant on edge inference models (cloud inference models are faster and offer more choices)
  • desktop apps have a difficult time sharing the metadata they create with other users who access the same asset (cloud native apps benefit from user metadata that is instantly available for collaborators, regardless of location)

Similar to how a multi-user cloud document works in G Suite, imagine the creative power of a clip that you can play, edit, read, search, color correct, and sound sweeten all in the same tool, with different users contributing at the same time and only 1 source asset.
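To make that single-source-asset idea concrete, here is a minimal Python sketch of one clip accumulating metadata from several collaborators. Every name in it (CloudAsset, MetadataEvent, the URL) is hypothetical, not Strada's actual data model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetadataEvent:
    user: str          # who contributed this metadata
    kind: str          # e.g. "color", "tag", "edit-mark"
    start_frame: int   # frame-accurate range within the clip
    end_frame: int
    payload: str       # the note, tag, or adjustment itself

@dataclass
class CloudAsset:
    source_url: str    # the single shared source clip
    events: List[MetadataEvent] = field(default_factory=list)

    def contribute(self, event: MetadataEvent) -> None:
        # In a cloud-native tool this append is immediately visible to
        # every collaborator; no per-user copy of the clip exists.
        self.events.append(event)

clip = CloudAsset("cloud://project/dailies/A001_C002.mov")  # hypothetical URL
clip.contribute(MetadataEvent("colorist", "color", 0, 480, "warm the mids"))
clip.contribute(MetadataEvent("editor", "edit-mark", 120, 360, "select take"))
print(len(clip.events), "contributions on one source asset")
```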

PH: How does AI play a role in simplifying workflows within Strada? Can you provide specific examples of how these tools enhance the creative process for filmmakers and editors?

Michael Cioni: We believe that all creative teams share a universal problem: creative professionals spend too much time searching for their own media.  Doubling down on that, editors are all familiar with the “logging” column in their NLE, but very few people actually take the time to log their clips.  This issue spans beyond the edit room and impacts producers, directors, cinematographers, writers, supervisors, etc.  Strada analyzes and logs your media and can read the text, transcribe the language, and tag objects, locations, faces, and even emotions.  Some clips can have hundreds of unique tags, all frame-accurate, so any user can perform natural searches across an entire media pool, such as “Jill says ‘cheers’ while sipping her drink in a bar,” which is a much more natural way to find moments than having to remember the date it was shot, the clip name, or even the scene and take number.  Strada’s ultra-rich metadata is then exported to NLEs, where it becomes searchable within each editing system.  This way people inside and outside the cutting room have access to the most hydrated set of metadata available on the market for work-in-progress clips.
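As a rough illustration of what handing frame-accurate tags to an NLE could look like, here is a minimal Python sketch that writes tag ranges out as timecoded markers. The CSV layout, frame rate, and tag data are generic stand-ins, not Strada's actual export format:

```python
import csv

FPS = 24  # assumed frame rate for this example

# Hypothetical output of AI analysis: (label, start_frame, end_frame)
tags = [
    ("Jill", 100, 400),
    ("bar", 0, 900),
    ("cheers (spoken)", 210, 230),
]

def frames_to_tc(frame: int, fps: int = FPS) -> str:
    """Convert a frame count to HH:MM:SS:FF timecode."""
    s, f = divmod(frame, fps)
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Write a generic marker list an editor could import or reference.
with open("A001_C002_markers.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["marker_name", "in", "out"])
    for label, start, end in tags:
        writer.writerow([label, frames_to_tc(start), frames_to_tc(end)])
```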

PH: Transcription and translation of narrative content are key features of Strada. How do these capabilities impact the efficiency of the editing process, particularly for international projects?

Michael Cioni: Language is one of the fastest ways to search for anything.  We often have a memory of something that was spoken in a take or a subject that was discussed.  Using Strada’s AI analytics, a single word can instantly narrow 1,000 clips down to just a dozen, letting users find the moment they are trying to locate in seconds without ever having to scroll.  Strada is also designed to work in bulk, making it automatic to transcribe 100 dailies in just a few minutes (dailies/rushes are assets traditionally not transcribed because the tools available today do not make it easy or useful to generate and search reliable transcripts).  For nonfiction projects that rely heavily on interviews, Strada is able to display multiple cameras and multiple languages at the same time so users have a much wider view of their content - even when it’s in a different language than they speak (Strada supports over 100 languages, all fully searchable).
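A minimal sketch of how one spoken word can narrow a clip pool, assuming transcripts already exist as word/timestamp pairs; the clip names and data below are hypothetical, not Strada's internals:

```python
from typing import Dict, List, Tuple

# Hypothetical transcripts: clip name -> list of (word, start_seconds)
transcripts: Dict[str, List[Tuple[str, float]]] = {
    "interview_001": [("welcome", 1.2), ("harvest", 14.8), ("cheers", 95.0)],
    "interview_002": [("budget", 3.1), ("harvest", 40.2)],
    "broll_017": [],  # no dialogue
}

def find_word(word: str) -> List[Tuple[str, float]]:
    """Return (clip, timestamp) for every spoken occurrence of `word`."""
    word = word.lower()
    return [(clip, t)
            for clip, words in transcripts.items()
            for w, t in words
            if w == word]

# One word narrows the pool to the exact moments it was spoken:
for clip, t in find_word("harvest"):
    print(f"{clip} @ {t:.1f}s")
```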

PH: The ability to tag and analyze images for easy searching is a game changer for editing. How does Strada’s AI technology facilitate this process, and what feedback have you received from users about its effectiveness?

Michael Cioni: Strada Intelligence allows users to search by locations, objects, words, faces, text, and emotions.  This gives users literally dozens of individual ways to look for media, including searching for multiple tags at the same time.  Users have reported to us that they are saving weeks of time compared to traditional logging and searching.  Today Strada already has 6 unique AI models working together to improve search, but we will continue to add dozens more, which will mean users can tailor their actions to even more specific uses across larger batches of data.
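To illustrate multi-tag search, here is a minimal Python sketch that treats each clip's tags as a set and requires every requested tag to be present. The index is hypothetical; Strada's real models and storage will differ:

```python
from typing import Dict, List, Set

# Hypothetical per-clip tag sets pooled from several AI models
clip_tags: Dict[str, Set[str]] = {
    "A001_C002": {"Jill", "bar", "drink", "smiling"},
    "A001_C003": {"bar", "neon sign"},
    "A002_C001": {"Jill", "street", "rain"},
}

def search_all(*required: str) -> List[str]:
    """Return clips carrying every requested tag (an AND across models)."""
    want = set(required)
    return [clip for clip, tags in clip_tags.items() if want <= tags]

print(search_all("Jill", "bar"))  # -> ['A001_C002']
print(search_all("bar"))          # -> ['A001_C002', 'A001_C003']
```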

PH: What specific challenges in the traditional editing workflow did you aim to address with Strada’s features, and how does your solution stand out in the market?

Michael Cioni: Strada is unique in that members of our team are experts in workflow. I have worked on more than 150 major motion pictures and have extensive experience on feature films from every major studio. Since Peter and I owned and operated the post house Light Iron (5 North American offices), we had front-row access to countless projects from production through distribution.  I also served as the product director for the Panavision DXL camera, which increased my knowledge and experience in mechanical engineering, production workflow, and optics.  Our experience with cameras, lenses, dailies, logging, transcoding, color science, visual effects management, online, conform, version tracking, archiving, and distribution is all being used to shape the Strada platform. One of our goals is to eliminate redundant, slow, or laborious steps that traditional workflows waste time on, such as sound syncing, naming scenes and takes, and even transcoding dailies. Strada is a platform product, which means it’s designed to serve a wide range of users and touch creative professionals throughout the entire production workflow, not just a single department.

PH: Given the rise of remote work, how does Strada facilitate collaboration among team members who may be working from different locations? What tools or features are in place to support seamless communication and workflow?

Michael Cioni: It’s estimated that 70% of creative teams now regularly work remotely (and that figure is unlikely to go down).  In this new era of remote collaboration, tools must facilitate multiple users accessing the same assets - a principle that lives at the foundation of Strada.  Many companies capitalize on this by charging per user, but Strada is an unlimited-user product.  Creative professionals don’t like having to add and remove users based on the cost of each active seat, especially when production headcount is constantly changing throughout an active post-production schedule.  Strada’s position on unlimited users fosters more collaboration and cooperation without having to make concessions based on pricing.  Once any user activates a transcription, translation, or visual tag, every user can instantly search by any of those parameters and get results, regardless of their location.  In addition, users can create individual projects, and bins within those projects, to further organize their data in the Strada project library.  This is a modern way of working in the cloud, but it has rarely been applied to creative professionals' workflows until now.

PH: Can you discuss any success stories or case studies from users who have adopted Strada and seen significant improvements in their workflow?

Michael Cioni: To date, more than 3,000 people have signed up for Strada, with nearly 50% of sign-ups coming from outside the United States.  We released the Strada beta in mid-June 2024, and since then more than 200 people have signed up each month to try it.  Users span non-fiction, natural history, documentary, educational, houses of worship, influencers, and narrative film and television.  Strada provides the most value to people working in the non-fiction space who record lots of interviews and b-roll.  Since Strada can transcribe interviews and tag thousands of b-roll clips in large batches, documentary teams have told us Strada saves more than a month of tagging and transcribing time in a way that is fast and easy for all their collaborators.  Since Strada can tag faces, some users analyze a finished project and use the tags to quickly cut promos, trailers, and social posts simply by searching a name, a location, or an action.  We have also learned that people with large media libraries are using Strada to analyze old assets from past projects and using Strada metadata to find and pull shots so they can be reused in new productions.

PH: Looking ahead, what are your plans for the future of Strada? Are there any upcoming features or innovations you’re particularly excited about?

Michael Cioni: We are on track to deliver Strada Version 1.0 in early 2025.  Version 1.0 will dramatically widen our feature set to include capabilities not found together in any single product.  Some of these features include:

  • facial recognition - Strada automatically finds faces and groups them together where users need only name them one time and those people will forever be named within a user’s account (ideal when combined with other tags to make searches extremely specific)
  • transcription summary - Strada can condense an entire transcript into a 250-word summary (ideal for story producers and editors who need to understand the essence of long interviews without having to watch the entire clip)
  • semantic search - using natural language search to pull clips (ideal for situations in which the users aren’t familiar with the media)
  • text-based editing - editing down transcripts automatically trims the video clip so text can drive the video result (ideal for story producers who want to quickly trim interviews for an edit)
  • trim-by-tag - a powerful timeline allowing users to search by tags and automatically trim clips down to the tag ranges (ideal for people who want to isolate moments in their media and review them all together without having to scrub through a series of assets; see the sketch after this list)
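As a rough illustration of the trim-by-tag idea, here is a minimal Python sketch that collapses matching tag ranges into subclips for back-to-back review. Clip names and ranges are hypothetical, not Strada's actual implementation:

```python
from typing import Dict, List, Tuple

# clip -> list of (tag, start_frame, end_frame) from prior analysis
tag_ranges: Dict[str, List[Tuple[str, int, int]]] = {
    "A001_C002": [("Jill", 100, 400), ("cheers", 210, 230)],
    "A002_C001": [("Jill", 0, 180)],
}

def trim_by_tag(tag: str) -> List[Tuple[str, int, int]]:
    """Return (clip, in, out) subclips covering every range tagged `tag`."""
    return [(clip, start, end)
            for clip, ranges in tag_ranges.items()
            for label, start, end in ranges
            if label == tag]

# Build a review sequence of every "Jill" moment, no scrubbing required:
for clip, start, end in trim_by_tag("Jill"):
    print(f"{clip}[{start}:{end}]")
```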

These breakthrough functions bring value to a wide array of creative users including directors, producers, story editors, and assistant editors who can identify and even pre-cut sequences or favorites and deliver frame-accurate results to the editor. We’re really excited about these features because they are a powerful complement to our current features and we expect teams to be able to lean on Strada and make it a tool used daily in their workflow.

PH: As someone who has been influential in identifying industry trends, what do you see as the next big trend in digital cinema and post-production technology, particularly in relation to AI and cloud computing?

Michael Cioni: Storytelling is a constantly evolving process.  Some changes that I expect to see over the next 10 years include:

  • cameras will eventually all shoot proxy files directly into the cloud (they will have Wi-Fi and cell modems in them to manage automatic uploads - eventually leading to the elimination of removable camera cards)
  • software tools that are desktop-only will eventually be replaced by tools that are cloud-based (a recent example is how the cloud native app Figma made the desktop app Adobe InDesign largely obsolete)
  • single AI tools will eventually be integrated into platforms and broken down into individual tasks (we call this “taskification,” where users will eventually take any clip and have an AI tool perform any necessary function or task)
  • new NLEs will emerge that are built exclusively in the cloud and slowly usher in a shift from desktop editing to cloud editing
  • smartphones (likely led by Apple) will become mainstream cameras for common use in many classes of production (I expect Apple may eventually release a stand-alone camera, separate from the phone, that will upset the DSLR market and eliminate the current leaders in that space)

