3D rendering is a notoriously heavy workload; that’s why we have dedicated processors for such grunt work. But of all the things you can render on a screen, one element always eats up an outsized share of power and resources: hair. That problem may soon be a thing of the past, because researchers from the University of Southern California, Pinscreen, and Microsoft have developed a hair rendering technique built on a neural network that can take a 2D image and reconstruct the hair it depicts as a fully physics-based 3D model.
The neural network builds up its result in layers, loosely analogous to how a brain pieces together details into a final picture. To teach the network to make the leap from 2D to 3D, the researchers initially fed it 40,000 unique hairstyles along with 160,000 2D images rendered from different viewpoints around those styles. With that training it can render hair in a wide variety of styles, colors, and lengths in milliseconds, and it can even mimic the movement of individual strands from video clips.
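To make the 2D-to-3D mapping concrete, here is a deliberately toy sketch, not the authors' actual architecture: a single dense layer that maps a flattened 2D image to a set of 3D strand points. The image size, strand count, and points-per-strand values are illustrative assumptions, and the random weights merely stand in for parameters a real network would learn from the training data described above.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_SIZE = 32 * 32        # flattened grayscale input image (assumed size)
N_STRANDS = 100           # number of hair strands to predict (illustrative)
PTS_PER_STRAND = 10       # 3D sample points along each strand (illustrative)

# Random weights stand in for what training on ~40,000 hairstyles and
# 160,000 rendered views would actually learn.
W = rng.standard_normal((IMG_SIZE, N_STRANDS * PTS_PER_STRAND * 3)) * 0.01

def predict_strands(image):
    """Map a 2D image (32x32 array) to (strands, points, xyz) coordinates."""
    features = image.reshape(-1)      # flatten the image to a feature vector
    out = np.tanh(features @ W)       # one dense layer with tanh activation
    return out.reshape(N_STRANDS, PTS_PER_STRAND, 3)

strands = predict_strands(rng.random((32, 32)))
print(strands.shape)  # (100, 10, 3): 100 strands, 10 points each, in 3D
```

The real system replaces this single layer with a deep convolutional encoder-decoder, but the input/output contract is the same idea: pixels in, per-strand 3D geometry out.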
As the researchers put it: “Realistic hair modeling is one of the most difficult tasks when digitizing virtual humans. In contrast to objects that are easily parameterizable, like the human face, hair spans a wide range of shape variations and can be highly complex due to its volumetric structure and level of deformability in each strand.”
It’s far from a perfect system: some hairstyles don’t survive the transition from image to 3D render. But with more training and development, the program could become hugely useful for hair rendering in the future. It should be said that this network ran on a system of multiple NVIDIA Titan Xp GPUs, hardly inviting hardware for the average gamer. But hey, maybe we’ll see this technology make its home in the Tensor Cores of a GTX 1180, if that card ever becomes a reality.