human poses in uncontrolled environments

#3
by tomathan12 - opened

As a researcher specializing in computer vision and human-computer interaction, I am particularly interested in understanding how the system handles the addition of novel poses and avatars within the existing framework of TalkBody4D, especially considering the dynamic nature of the 3DGS model and the need to maintain performance on mobile and AR devices. What specific processes are involved in extending the pose library or avatar templates to accommodate new variations while ensuring consistency with the pre-trained network? How does the system ensure that new poses integrate smoothly without compromising real-time performance and high-quality rendering?

I'm beginning to think this project may be fake?

@tropicalstream I observed your inquiry regarding the Apple Vision Pro implementation without TestFlight and found myself contemplating the same question. I submitted a request for access to their project resources two days ago, primarily to comprehend their technical approach more thoroughly (besides the research work I am doing), as their demonstration exhibits considerable authenticity. The methodology articulated in their research paper is particularly compelling, and their demonstrated results are remarkably impressive (if true). Although AvatarRex would adequately fulfill my current requirements, I developed a genuine interest in examining their solution firsthand. I look forward to observing how this development progresses and remain hopeful it isn't fake. I hope we will be granted access to the code!

@tomathan12 keep us updated. i thought they would be happy to share knowledge.

@tropicalstream I thought so too, isn't this the whole point of huggingface? :)

The dataset will only be available to educational institutions that have signed our request form. Sorry, the code is not currently planned to be open source.


This technology has been used on Alibaba Taobao products. Not opening the source code doesn't mean it's fake.

@tropicalstream @PixelAI-Team
I have tried the dataset, and it's great! It provides impressive multi-view image sequences and SMPL-X fittings. The quality of the data is excellent, and I appreciate the detailed documentation provided.
While I have successfully set up the dataset and run the visualization utilities included with TalkBody4D, I am particularly interested in the complete TaoAvatar framework described in your recent paper.
Are there any additional details that could offer insights into creating an interactive talking avatar as showcased in your paper? Are there any plans for future releases in this direction?
Again, thanks for the opportunity you have provided, and much respect from Switzerland.
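For anyone else getting started with the SMPL-X fittings, here is a minimal sketch of how per-frame SMPL-X parameters are commonly laid out. The key names follow the usual smplx-library convention and are assumptions on my part (this dataset may use different keys or a PCA hand parameterization; check its documentation) -- only the parameter dimensions come from the standard SMPL-X model definition:

```python
import numpy as np

# Typical per-frame SMPL-X parameter blocks and their shapes.
# Key names follow a common convention, not necessarily the
# ones this dataset uses -- check its documentation.
SMPLX_PARAM_SHAPES = {
    "betas": (10,),           # body shape coefficients
    "global_orient": (3,),    # root rotation, axis-angle
    "body_pose": (21, 3),     # 21 body joints, axis-angle each
    "jaw_pose": (3,),
    "leye_pose": (3,),
    "reye_pose": (3,),
    "left_hand_pose": (15, 3),   # 15 joints per hand (full, non-PCA)
    "right_hand_pose": (15, 3),
    "transl": (3,),           # root translation
    "expression": (10,),      # facial expression coefficients
}

def validate_smplx_params(params: dict) -> None:
    """Raise ValueError if any expected block is missing or mis-shaped."""
    for key, shape in SMPLX_PARAM_SHAPES.items():
        arr = np.asarray(params[key])
        if arr.shape != shape:
            raise ValueError(f"{key}: expected {shape}, got {arr.shape}")

# Example: a neutral (all-zero) frame passes validation.
neutral = {k: np.zeros(s) for k, s in SMPLX_PARAM_SHAPES.items()}
validate_smplx_params(neutral)
```

A quick sanity check like this caught a shape mismatch for me when I first unpacked the files, so it might save others some debugging time too.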

tomathan12 changed discussion status to closed
