We spoke exclusively with Will Driscoll, CEO of Wild Capture, about how Wild Capture came about, the release of Digital Human Platform, and products the platform works with.
PH: Tell us about yourself, Will!
Will Driscoll: I am the chief executive officer and co-founder of Wild Capture. I bring extensive experience as a digital human technologist for art-directable live-action characters for visual effects, virtual production, spatial interactive media, and games. I co-founded Wild Capture to build the bridge between volumetric video and digital humans for media and tech production needs.
With a background as a content creator, I’ve always been interested in new technologies to enhance the creative process. I was an early adopter of stereoscopy, facial motion capture, LiDAR, photogrammetry, virtual reality, and now volumetric video and have enjoyed making technical contributions to each innovation.
PH: How did Wild Capture come about? Did you have previous production experience in volumetric video capture?
Will Driscoll: Along with Wild Capture’s other founding partners, I have worked in the immersive entertainment technology space and on projects in volumetric video for the past twenty years. I was on the team at Intel Studios/Sports group working on early volumetric video demos.
We formed the company in 2020 to bring scanned humans to all facets of production, from theatrical films to mobile interactive. Wild Capture aims to ease the complexities of the highly technical spatial media world and to provide artists and creators with the most modern and efficient volumetric capture solutions available, improving their productivity.
PH: Wild Capture recently released the Digital Human Platform. Tell us about the solution and how digital humans fit into the virtual production pipeline.
Will Driscoll: We introduced the Digital Human Platform to offer a production pipeline that translates the human essence into volumetric video with high-quality performance captures. The Digital Human Platform is a combination of middleware technology and services that bridge the gap for artists, creators, or anyone wanting to use digital humans in spatial media. The technology delivers rigged characters/avatars with volumetric video's live-action realism.
PH: What products does the platform work with?
Will Driscoll: Through our development partnership with the XR Foundation, the Digital Human Platform features the Universal Volumetric (UVol) web streaming player, a cross-platform, open-source framework for web-based volumetric video.
UVol is an agnostic solution that brings heavy data sequences to web streaming. The platform relies on SideFX's PDG procedural architecture and the Houdini engine to open a path familiar to 3D editors as we continue to develop plugins that feed our volumetric capture data natively.
PH: We also see the term “Smart Assets” on your website. Can you explain how artists and content creators can leverage these assets in media production, the metaverse, and software and web-based deliveries? Is the workflow in real-time? Are the assets editable?
Will Driscoll: The expanding Digital Human Platform production pipeline now includes the Cohort™ crowd toolkit. Cohort is a new tool that turns volumetric assets into crowd assets for XR, virtual production, and web activations. Drawing on varied pre-recorded Wild Capture volumetric performances, Cohort offers libraries of smart assets to choose from to produce lifelike crowd behaviors. It is deployable across virtual production and gaming engines such as Unreal and Unity, software applications, and VR and web-based deliveries.
Cohort includes a digital fashion application that allows creators to apply CG fabric in 3D virtual spaces and solves the necessary volumetric character interaction to create lifelike realism. All digital fabric is created from traditional CG cloth design tools, then applied and adjusted for collisions and real-world physics.
These tools use nondestructive Universal Scene Description (USD) layers within and around each digital human, opening new artistic possibilities in this space. These USD pipelines give users the opportunity to create new virtual-world and e-commerce experiences while saving hundreds of hours for VFX and other new media directors and artists.
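As a rough illustration of that layering idea, a nondestructive USD edit might look like the sketch below. The file and prim names here are hypothetical, not Wild Capture's actual asset structure: the base capture stays untouched in its own file, and a new layer composes wardrobe overrides on top of it.

```usda
#usda 1.0
(
    "Hypothetical wardrobe layer: sublayers the untouched capture file and
     overrides only the wardrobe prim, leaving the source data intact."
    subLayers = [
        @./digital_human_capture.usda@
    ]
)

over "DigitalHuman"
{
    over "Wardrobe"
    {
        # Nondestructive override: swap in a CG garment variant
        # without modifying the captured performance layer.
        custom string wardrobe:variant = "denim_jacket_v2"
    }
}
```

Because the override lives in its own layer, removing that layer restores the original capture unchanged, which is what makes this style of workflow nondestructive.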
PH: Wild Capture talks about democratizing open-source technology for virtual world-building. Please elaborate.
Will Driscoll: As a partner in the XR Foundation's "Open Metaverse" initiative and co-developer of the UVol streaming media player, our guiding principle is to advance and democratize open-source technology in virtual world-building so that anyone interested can create digital humans with as few technical hurdles as possible.
PH: How can creators access the Digital Human Platform?
Will Driscoll: We are currently developing Cohort as a standalone product as more studios adopt this technology. For the time being, we are offering the platform as a white-glove “product-as-a-service” solution.
PH: What are some of the recent projects that have relied on Wild Capture’s Digital Human Platform?
Will Driscoll: Recently, we were fortunate to work on a music video and virtual concert series that taps into the heart and soul of Atlanta hip hop. Working with music icon Dallas Austin and rising stars Kaelyn Kastle and Jazzy Tha Rapper, the Wild Capture team provided performance captures and implemented our Cohort volumetric crowd and CG fashion capabilities, including virtual wardrobe changes, backup dancers, and various environments used to position and play back an entire live virtual concert performance by the artists.
For Sprite and the Atlanta Hawks, Wild Capture partnered with the Creative Media Industries Institute (CMII) and the You Are Here (YAH) agency to facilitate an immersive experience starring Atlanta rap artist Latto. Shot in volumetric video, the imagery appeared on stadium monitors during halftime shows at State Farm Arena in Atlanta.
PH: Where do you see the future of digital humans in the volumetric virtual production pipeline headed?
Will Driscoll: With volumetric video, we are advancing human cinematography, maintaining the nuances and details of what makes a digital human real. We’ll transition from controlling digital people with motion capture and rigged characters to manipulating live-action performances. Achieving realism will be easier and more accessible. This sort of technology will be democratized through innovation for creator communities and perfected by studios and researchers. Academia has influenced the digital human world and will hopefully continue to provide resources that enable companies like ours.
As for Wild Capture, we will continue our efforts to collaborate with strategic partners and invest in our development pipeline for volumetric fashion, crowd systems, and UVol. This ideally positions us to standardize and prepare for next-generation web-based spatial media to execute high-end activations.