Video recording best practices

Learn how to maximize output quality when using our User-Generated Emote feature.

This page guides you through guidelines, best practices, and tools for recording and/or uploading videos, so that the resulting emote is of the highest possible quality.

We highly recommend that you share these guidelines with your players, to help them get the most out of the User-Generated Emote feature.

Also, make sure you have read and acknowledged the User-Generated Emote specifications before learning about our recommended best practices.

Remember: Kinetix’s AI only captures human movements!

How to maximize the quality of the emote output

Please find below Kinetix's recommendations for optimizing the video input.

  • Stable camera: ensure the camera is positioned on a flat, stable surface. This stability is crucial for capturing clear, consistent footage, which is essential for creating high-quality emotes without distortion or interruption.

  • Whole body within the frame: make sure all movements are fully captured within the camera's frame. If recording yourself, position yourself centrally and keep a consistent distance from the camera, ensuring all your movements—from feet to head and arms—are clearly visible and accurately captured.

  • Single character: we recommend featuring only one person in the video frame, even though our system supports multi-character inputs. This best practice ensures that the AI focuses on the movements of a single individual and does not track the wrong actor. Having just one performer in the frame minimizes potential distractions. When using a video taken from the Internet that includes multiple actors, we recommend cropping the video to focus on the desired performer (see the short cropping sketch after this list).

  • Area of movement: ensure the movements are contained within a well-defined space, ideally maintaining all activity within a 1-meter radius from the center point.

  • Grounded start: begin your video with a grounded stance. Ensure that your starting position is firmly on the ground, as starting mid-air or in an elevated position can cause difficulties for the AI in accurately interpreting your movements.

  • Clothing: if possible, opt for well-fitting attire, avoiding loose or baggy clothes. Snug clothing ensures the movements are captured accurately and distinctly, facilitating a more precise translation of the actions into emotes.

  • Colors: avoid wearing clothes and shoes that match the color of the background. Every part of the body should be clearly distinguishable from the others and from the background.
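As mentioned in the single-character guideline above, a multi-actor clip can be cropped down to the desired performer before uploading. Below is a minimal sketch of one way to do this with OpenCV in Python; the file names and crop coordinates are placeholders you would adapt to your own footage, and any video editor or cropping tool works just as well.

```python
import cv2

# Hypothetical input/output paths -- replace with your own files.
src = cv2.VideoCapture("multi_actor_clip.mp4")
fps = src.get(cv2.CAP_PROP_FPS)

# Example crop region (x, y, width, height) framing the desired performer.
x, y, w, h = 300, 0, 720, 1080

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
dst = cv2.VideoWriter("single_actor_clip.mp4", fourcc, fps, (w, h))

while True:
    ok, frame = src.read()
    if not ok:
        break
    # Keep only the region containing the performer you want the AI to follow.
    dst.write(frame[y:y + h, x:x + w])

src.release()
dst.release()
```

Make sure the crop keeps the performer's full body in view, in line with the framing guideline above.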

Check out this video for a simple recap

Any questions? Feel free to reach out to Ben - we'll always be happy to help!
