Future Media Hubs, New Technology Hub, Game Hub

Virtual Humans in a Media Context: How do you create a virtual human?

Virtual world

What’s in a name? At first glance, virtual and human are two polar opposites. Nevertheless, together they mark the beginning of a new digital world, and therefore an exciting opportunity in the ever-changing media landscape. VRT, YLE Finland and BCE Luxembourg collaborated on a project that aimed to make virtual humans meet in the metaverse. In this article we go over what we at Future Media Hubs learned, where we stumbled, and what we hope other broadcasters can take away. By documenting our process, we try to make virtual humans more familiar and easier to approach for European media companies by showing how to create and use them.

 

What is a virtual human?

Virtual humans are computer-generated figures: hyper-realistic or stylized representations of living personalities, built with traditional 3D graphics and driven by an actor or an AI, either in real time or pre-rendered. Simply put: a human-looking CGI avatar.

 


 


Think of Blade Runner 2049, where K, played by Ryan Gosling, falls in love with Joi, a virtual human. Beyond popular culture, they surround us more often than we tend to realize. There are, for example, virtual human influencers. The most famous is probably Lil Miquela, who is valued at a whopping $125M.

“As a public broadcaster, we find it important to invest in this new technology.”

Why did we do it?

The media landscape is in full transition: the virtual and the real world are merging, and what happens online has a direct impact on the personal life and social environment of the consumer. Broadcasters want to stay relevant, and can’t help but evolve from television creators into multimedia hubs.


The benefit of virtual humans lies in the versatility of the concept: virtual hosts are like game characters. They have extra physical powers (they can fly, explode, grow, ...) and are customizable (appearance, gender, ...). This opens up storytelling possibilities that are completely new compared to the paved roads of contemporary media formats.

 

Stars4Media

The team received a grant from the Stars4Media project, a four-month innovation exchange programme in which media organizations from different EU countries cooperate on innovative practices at editorial, technological or organizational levels.

Although our initial aim was to research and test the different types of entities and technologies that exist to bring virtual humans to life in the context of media storytelling, we saw it as a first step in a bigger concept. Our goal for the future is to collaborate more often on an international level.
 


How did we do it?


Unreal
We decided to use Unreal Engine as the platform to create our virtual human. The software offers a lot of tools that make it easier to build a virtual set-up. Moreover, Unreal also features “Blueprints”. This node-based way of programming makes it user-friendly for artists who don’t have a strong background in coding.
 

 
An example of node-based programming in Unreal Engine.
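Everything a Blueprint graph does can also be written against Unreal’s C++ API. As a rough, hypothetical illustration (not code from this project), the actor below does in C++ what one of the simplest possible Blueprint graphs would do: hook into Event Tick and rotate itself a little every frame.

```cpp
// SpinningPropActor.h -- hypothetical example, not from the project.
// C++ equivalent of a tiny Blueprint graph: Event Tick -> Add Actor Local Rotation.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "SpinningPropActor.generated.h"

UCLASS()
class ASpinningPropActor : public AActor
{
    GENERATED_BODY()

public:
    ASpinningPropActor()
    {
        // Enable per-frame updates, like hooking into the Event Tick node.
        PrimaryActorTick.bCanEverTick = true;
    }

    // Exposed to the editor, just as a Blueprint variable would be.
    UPROPERTY(EditAnywhere, Category = "Spin")
    float DegreesPerSecond = 45.0f;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        // Rotate around the up axis, independent of frame rate.
        AddActorLocalRotation(FRotator(0.0f, DegreesPerSecond * DeltaSeconds, 0.0f));
    }
};
```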


MetaHuman
But the biggest reason we opted for Unreal is that we wanted to create our virtual humans with MetaHuman (Epic Games), which has the same publisher as Unreal Engine. Besides being very high quality, MetaHumans are designed to work seamlessly with Unreal. Throw in Quixel Megascans, a library of hyper-realistic assets from Epic Games, and you get a great level of detail. Because MetaHuman and Quixel Megascans belong to the same company as Unreal Engine, these tools are free to use with it. That’s what made working with Unreal Engine a no-brainer in this case.

 


 

RADiCAL & Live Link Face
For the facial motion capture we used Live Link Face and an iPhone 12. Besides the fact that the iPhone 12 interface is very user-friendly, YLE already had some experience with the Live Link Face software. After that, the team started working on the rest of the body. This is where RADiCAL, an AI-powered 3D motion capture tool, came in. VRT 3D artist Steven Roelant created the environment in Unreal. The collaboration with RADiCAL enabled the team to capture the movement of real people and map it onto a MetaHuman. The ambition was to combine as many motion capture movements as possible.
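Live Link Face streams the ARKit blendshapes (jawOpen, eyeBlinkLeft, and so on) into Unreal as named curves on a Live Link subject. In practice that subject is usually wired straight into the MetaHuman’s animation Blueprint, but as a minimal sketch of what arrives on the Unreal side, the snippet below polls the Live Link client for the latest frame of a face subject and logs its curve values. The subject name “iPhoneFace” is a placeholder; this is an illustrative snippet, not code from our project.

```cpp
// Hypothetical sketch: reading the latest Live Link frame for a face subject.
// Assumes the LiveLinkInterface module is listed in the project's Build.cs dependencies.
#include "Features/IModularFeatures.h"
#include "ILiveLinkClient.h"
#include "LiveLinkTypes.h"
#include "Roles/LiveLinkBasicRole.h"

void LogFaceCurves()
{
    IModularFeatures& Features = IModularFeatures::Get();
    if (!Features.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return; // Live Link is not loaded.
    }

    ILiveLinkClient& Client =
        Features.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // "iPhoneFace" is a placeholder; use whatever subject name the app shows in Unreal.
    const FLiveLinkSubjectName Subject(FName(TEXT("iPhoneFace")));

    FLiveLinkSubjectFrameData Frame;
    if (Client.EvaluateFrame_AnyThread(Subject, ULiveLinkBasicRole::StaticClass(), Frame))
    {
        const FLiveLinkBaseStaticData* Static = Frame.StaticData.Cast<FLiveLinkBaseStaticData>();
        const FLiveLinkBaseFrameData* Data = Frame.FrameData.Cast<FLiveLinkBaseFrameData>();
        if (Static && Data)
        {
            // Each ARKit blendshape arrives as a named curve value.
            for (int32 i = 0; i < Static->PropertyNames.Num() && i < Data->PropertyValues.Num(); ++i)
            {
                UE_LOG(LogTemp, Log, TEXT("%s = %f"),
                       *Static->PropertyNames[i].ToString(), Data->PropertyValues[i]);
            }
        }
    }
}
```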
 

“One of the revelations of this project is that getting familiar with Live Link Face opens up a world of possibilities.”

Our first challenge was getting our virtual human’s head to work. Although there is a small delay, the head tracking works rather well. Making the characters talk is much harder, since the tracking of the lip movement isn’t very accurate.

 


 

The logistics turned out to be more of a roadblock than originally anticipated. How do we make our humans meet in the same environment? The most straightforward way was to pull two RADiCAL data streams into one Unreal project. By doing that, we could make both avatars exist in the same space.
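As a hedged sketch of that idea, assuming each incoming body stream shows up in Unreal as its own named Live Link subject, the snippet below evaluates two subjects and applies each resulting transform to a separate placeholder actor. The subject names and surrounding setup are made up for illustration; driving a full MetaHuman skeleton involves retargeting the animation data rather than applying a single transform.

```cpp
// Hypothetical sketch: driving two placeholder actors from two Live Link subjects.
// Subject names ("PerformerVRT", "PerformerYLE") are invented for the example.
#include "Features/IModularFeatures.h"
#include "ILiveLinkClient.h"
#include "LiveLinkTypes.h"
#include "Roles/LiveLinkTransformRole.h"
#include "Roles/LiveLinkTransformTypes.h"
#include "GameFramework/Actor.h"

static void ApplySubjectToActor(ILiveLinkClient& Client, const FLiveLinkSubjectName& Subject, AActor* Target)
{
    FLiveLinkSubjectFrameData Frame;
    if (Target && Client.EvaluateFrame_AnyThread(Subject, ULiveLinkTransformRole::StaticClass(), Frame))
    {
        if (const FLiveLinkTransformFrameData* Data = Frame.FrameData.Cast<FLiveLinkTransformFrameData>())
        {
            // Place the actor where the tracked performer currently is.
            Target->SetActorTransform(Data->Transform);
        }
    }
}

void UpdateBothAvatars(AActor* AvatarA, AActor* AvatarB)
{
    IModularFeatures& Features = IModularFeatures::Get();
    if (!Features.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return;
    }
    ILiveLinkClient& Client =
        Features.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // One subject per incoming body stream, so both avatars share the same virtual space.
    ApplySubjectToActor(Client, FName(TEXT("PerformerVRT")), AvatarA);
    ApplySubjectToActor(Client, FName(TEXT("PerformerYLE")), AvatarB);
}
```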

There were a few big pros to using RADiCAL: a web app can be used to stream the data to Unreal, and all you need to record is plenty of light and a wide-open space with a neutral background, so you are not restricted by the location. But the biggest selling point is that RADiCAL is an economical option compared to an expensive mocap suit, since there is no need for trackers. With this low-budget setup there is no finger tracking, though, and the end result is less accurate.

We also had some difficulty connecting the face and body data, because our VRT network blocked the connection with Live Link Face. Another issue was that the face data was streamed on the same local network, which meant we couldn’t pull in the face data from YLE. Using Unreal Engine in multi-user mode is difficult: it requires setting up a VPN LAN, which adds extra problems (for example, how Live Link Face data gets transferred to Unreal Engine if it is on the VPN LAN). One of our aims for future projects is to look for ways to integrate hand tracking and make that process easier.

 


 

Conclusion


Our goal was to make virtual humans more familiar and easier to approach for European media companies by showing how to create and use them. We achieved more than we first anticipated. Initially we only wanted each media company to make its own avatar. Since the cooperation was running smoothly, we wanted to take it a step further and see if we could meet LIVE in the metaverse. Once we had decided on our final output, that’s when things got fun. We searched for an interesting place for the MetaHumans to meet up and decided to create our own environment in Unreal Engine. We tried adding facial expressions to the MetaHumans and objects to the environment, and even played the first meta football match between the countries!

Working with live motion capture in Unreal Engine brings virtual production closer to the art of improvisation. Once you have mastered Unreal, there is unlimited freedom to try things just because you can. You can also get new assets from the asset store. It is instant and enjoyable, as is creating a MetaHuman.

The virtual world mimics reality, and once you know how to work with Unreal you can break the boundaries. When there is room for improvisation, that’s where the magic happens.

We decided on a video as the final output. It is the easiest way to give a demo and show the possibilities of the virtual humans we created.

“The most important lesson was to sit down and try it.”

Watch the result

Link to the video here.
Link to the video of the actors in the environment.


What’s next?


With this project we tried to find out what quality a low-cost setup can deliver. The next step will be to get slightly more accurate MoCap data. How can we make sure the quality of this data is good enough to use in a media setting, while sticking with an affordable setup?

Thanks to the Stars4Media project we’re already contemplating new collaborations. The experience on this project was so positive that we’re looking into ways to keep working together.

This project was made possible by Sarah Geeroms, Gregg Young, Steven Roelant, Rani D’Hulster, Rymenans Robin, Arthur Leplae, Magali Van Zele, Jeff Rommes, Wesa Aapro, Petri Karlsson and Jouni Frilander.

Article written by Anne Vanoppen

 

This article is supported by the Stars4Media programme.