Here's a look behind the scenes of how I used a 3D model, generated from an image, as a "driver" for AI animation. In the near future we'll see more 3D tools that support these kinds of workflows, with much more emphasis on enhancing user input instead of just letting the AI do everything.
I used the 3D model to generate keyframes that I then animated in Luma Dream. By rendering the 3D model with a diffusion "layer" done in Krea, I got quite high-quality frames with relatively high consistency as well, since the AI didn't have to "hallucinate" that much. I upscaled and detailed the frames using Magnific.
Using semi-detailed 3D models as drivers for gen AI is very powerful. It lets us work with models sculpted in a more free-flowing way that doesn't rely so much on high surface detailing or a time-consuming finishing pass, but instead on gestural 3D sculpting.