Pose Normalization code #21

Open
carmelofascella opened this issue Jun 4, 2021 · 7 comments

Comments

@carmelofascella

Hi, has anyone tried running the script graph_posenorm.py and obtained good results like the ones shown in the paper?
I have tried it, but it doesn't work at all. Could someone suggest how to make it work?
I'm trying to modify it, but I still don't get good results.

Thank you in advance

@alamayreh

Hi,

I'm facing the same problems.

@carmelofascella
Author

These are some hints and modifications I made to the script to get good results in the normalization step.

  • For each frame in your source and target folders, get the .yml files (skeleton keypoints) using the BODY_25 pose output format, which is the OpenPose default. Each frame therefore has 25 skeleton keypoints.

  • I have noticed that in the EverybodyDanceNow dataset the label images have only 23 points (maybe they rendered them from .yml files in a 23-point format, which has fewer points in a different order).

  • In graph_posenorm.py, after line 83 I added posepts = map_25_to_23(posepts) to render the labels the same way as in their dataset (see the sketch after this list).

  • Set poselen = 69 (instead of poselen = 75) in graph_posenorm.py. This is the total number of values: each of the 23 skeleton points contributes 3 values (x, y, confidence).

  • I have noticed that the labels in the dataset are somewhat blurred compared to the ones generated by the script. To blur the images I use this “trick”: after line 348 in graph_posenorm.py, I do canvas = canvas.resize((1080, 1920), Image.ANTIALIAS) and then canvas = canvas.resize((2 * SIZE, SIZE), Image.ANTIALIAS).

  • I have noticed that the normalization function get_keypoints_stats() works correctly only if you choose frames of the target subject where he/she is more or less at the same vertical (y) position.
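Here is a rough sketch of the map and the blur trick above (Python). The BODY25_TO_23 index table is only a placeholder to show the shape of the function (verify the real point order against the dataset's label images), and blur_like_dataset_labels just wraps the two resize calls; SIZE is the same constant used in graph_posenorm.py:

```python
from PIL import Image

# Image.ANTIALIAS was renamed to Image.LANCZOS in newer Pillow releases.
RESAMPLE = getattr(Image, "ANTIALIAS", Image.LANCZOS)

# Hypothetical BODY_25 -> 23-point index map: as a placeholder it simply drops
# MidHip (8) and RHeel (24). Check the real order against the rendered labels.
BODY25_TO_23 = [0, 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14,
                15, 16, 17, 18, 19, 20, 21, 22, 23]

def map_25_to_23(posepts):
    """Reorder a flat BODY_25 list (75 values) into a 23-point list (69 values)."""
    assert len(posepts) == 75, "expected 25 keypoints x (x, y, confidence)"
    out = []
    for idx in BODY25_TO_23:
        out.extend(posepts[3 * idx: 3 * idx + 3])  # copy (x, y, confidence)
    return out  # use together with poselen = 69

def blur_like_dataset_labels(canvas, SIZE):
    """Soften a rendered label image with a down/up antialiased resize."""
    canvas = canvas.resize((1080, 1920), RESAMPLE)
    return canvas.resize((2 * SIZE, SIZE), RESAMPLE)
```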

@eastchun

eastchun commented Nov 1, 2021

How can we access the following datasets?

  • /data/scratch/caroline/keypoints/wholedance_keys
  • /data/scratch/caroline/keypoints/dubstep_keypointsFOOT

@carmelofascella
Author

I think they are not available.

@eastchun

eastchun commented Nov 3, 2021

I looked at the pose normalization code and found that it doesn't work when the image (either target or source) doesn't show the left ear (when the head is turned to the left). It looks like a program bug.
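OpenPose reports undetected keypoints as (0, 0, 0), so a possible workaround (just a guess, not the repository's actual fix) is to check the confidence before using the ear position, e.g.:

```python
LEAR_IDX = 18  # left ear in the BODY_25 layout

def ear_visible(posepts, idx=LEAR_IDX, conf_thresh=0.1):
    """True only if the keypoint was detected; OpenPose emits (0, 0, 0) otherwise."""
    x, y, c = posepts[3 * idx: 3 * idx + 3]
    return c > conf_thresh and not (x == 0 and y == 0)
```

The statistics in get_keypoints_stats() would then skip frames where ear_visible() is False; the index and threshold here are assumptions.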

@Delicious-Bitter-Melon

@carmelofascella Did you eventually manage to reproduce the reported paper results using their official implementation?

@carmelofascella
Author

@carmelofascella Did you eventually manage to reproduce the reported paper results using their official implementation?

Yes, I did (it was a long time ago now).
