Demos¶
Animal Pose Estimation¶
We provide a demo script to test a single image or video with top-down pose estimators and animal detectors. We assume that you have already installed mmdet (version >= 3.0).
2D Animal Pose Image Demo¶
python demo/topdown_demo_with_mmdet.py \
${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_PATH} --det-cat-id ${DET_CAT_ID} \
[--show] [--output-root ${OUTPUT_DIR}] [--save-predictions] \
[--draw-heatmap ${DRAW_HEATMAP}] [--radius ${KPT_RADIUS}] \
[--kpt-thr ${KPT_SCORE_THR}] [--bbox-thr ${BBOX_SCORE_THR}] \
[--device ${GPU_ID or CPU}]
The pre-trained animal pose estimation models can be found in the model zoo. Take the animalpose model as an example:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_8xb32-300e_coco.py \
https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_m_8xb32-300e_coco/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth \
configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py \
https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth \
--input tests/data/animalpose/ca110.jpeg \
--show --draw-heatmap --det-cat-id=15
Visualization result:
If you use a heatmap-based model and set the argument --draw-heatmap, the predicted heatmap will be visualized together with the keypoints.
The argument --det-cat-id=15 selects detected bounding boxes with the label ‘cat’. 15 is the index of the category ‘cat’ in the COCO dataset, on which the detection model was trained.
COCO-animals
The COCO dataset contains 80 object categories, including 10 common animal categories (14: ‘bird’, 15: ‘cat’, 16: ‘dog’, 17: ‘horse’, 18: ‘sheep’, 19: ‘cow’, 20: ‘elephant’, 21: ‘bear’, 22: ‘zebra’, 23: ‘giraffe’).
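When scripting over several animal classes, this mapping can be kept as a small Python dict (the indices are the COCO category ids listed above):
# COCO category indices of the ten animal classes, for choosing --det-cat-id
COCO_ANIMAL_CAT_IDS = {
    'bird': 14, 'cat': 15, 'dog': 16, 'horse': 17, 'sheep': 18,
    'cow': 19, 'elephant': 20, 'bear': 21, 'zebra': 22, 'giraffe': 23,
}
# e.g. COCO_ANIMAL_CAT_IDS['dog'] == 16, matching --det-cat-id=16 in the video demo below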
For other animals, we also provide some pre-trained animal detection models; the supported models can be found in the detection model zoo.
To save visualized results on disk:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_8xb32-300e_coco.py \
https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_m_8xb32-300e_coco/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth \
configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py \
https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth \
--input tests/data/animalpose/ca110.jpeg \
--output-root vis_results --draw-heatmap --det-cat-id=15
To save predicted results on disk:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_8xb32-300e_coco.py \
https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_m_8xb32-300e_coco/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth \
configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py \
https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth \
--input tests/data/animalpose/ca110.jpeg \
--output-root vis_results --save-predictions --draw-heatmap --det-cat-id=15
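The predictions are written to a JSON file under the output directory. Below is a minimal sketch for reading them back; the file name results_ca110.json (derived from the input image name) and the key names are assumptions, so inspect the saved file to confirm:
import json

with open('vis_results/results_ca110.json') as f:
    preds = json.load(f)

# 'instance_info' is assumed to hold one entry per detected animal,
# each with 'keypoints' ([x, y] pairs) and 'keypoint_scores'
for instance in preds['instance_info']:
    print(instance['keypoints'], instance['keypoint_scores'])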
To run demos on CPU:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_tiny_8xb32-300e_coco.py \
https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_tiny_8xb32-300e_coco/rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth \
configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py \
https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth \
--input tests/data/animalpose/ca110.jpeg \
--show --draw-heatmap --det-cat-id=15 --device cpu
2D Animal Pose Video Demo¶
Videos share the same interface as images. The only difference is that ${INPUT_PATH} can be either a local path or a URL to a video file.
For example,
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_8xb32-300e_coco.py \
https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_m_8xb32-300e_coco/rtmdet_m_8xb32-300e_coco_20220719_112220-229f527c.pth \
configs/animal_2d_keypoint/topdown_heatmap/animalpose/td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py \
https://download.openmmlab.com/mmpose/animal/hrnet/hrnet_w32_animalpose_256x256-1aa7f075_20210426.pth \
--input demo/resources/<demo_dog.mp4> \
--output-root vis_results --draw-heatmap --det-cat-id=16
The original video can be downloaded from Google Drive.
2D Animal Pose Demo with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py tests/data/ap10k \
--pose2d animal --vis-out-dir vis_results/ap10k
This command infers all images located in tests/data/ap10k and saves the visualization results in the vis_results/ap10k directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
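For example, assuming the script's --pred-out-dir option (see the inferencer document for the exact flags), the predicted poses can be saved next to the visualizations:
python demo/inferencer_demo.py tests/data/ap10k \
    --pose2d animal --vis-out-dir vis_results/ap10k \
    --pred-out-dir vis_results/ap10k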
Speed Up Inference¶
Some tips to speed up MMPose inference:
1. set model.test_cfg.flip_test=False in animalpose_hrnet-w32.
2. use a faster bounding box detector, see MMDetection.
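As a sketch of the first tip, the override can also be applied with mmengine (an MMPose dependency) instead of editing the config file by hand; the patched config is then passed to the demo script in place of the original (the output file name below is hypothetical):
from mmengine.config import Config

# load the pose config used in the demos above
cfg = Config.fromfile(
    'configs/animal_2d_keypoint/topdown_heatmap/animalpose/'
    'td-hm_hrnet-w32_8xb64-210e_animalpose-256x256.py')
cfg.model.test_cfg.flip_test = False  # skip the horizontal-flip second forward pass
cfg.dump('td-hm_hrnet-w32_animalpose_noflip.py')  # hypothetical output file name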
Face Keypoint Estimation¶
We provide a demo script to test a single image or video with face detectors and top-down pose estimators. We assume that you have already installed mmdet (version >= 3.0).
Face Bounding Box Model Preparation: The pre-trained face bounding box detection model can be found in the mmdet model zoo.
2D Face Image Demo¶
python demo/topdown_demo_with_mmdet.py \
${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_PATH} [--output-root ${OUTPUT_DIR}] \
[--show] [--device ${GPU_ID or CPU}] [--save-predictions] \
[--draw-heatmap ${DRAW_HEATMAP}] [--radius ${KPT_RADIUS}] \
[--kpt-thr ${KPT_SCORE_THR}] [--bbox-thr ${BBOX_SCORE_THR}]
The pre-trained face keypoint estimation models can be found in the model zoo. Take the face6 model as an example:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/yolox-s_8xb8-300e_coco-face.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/yolo-x_8xb8-300e_coco-face_13274d7c.pth \
configs/face_2d_keypoint/rtmpose/face6/rtmpose-m_8xb256-120e_face6-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-face6_pt-in1k_120e-256x256-72a37400_20230529.pth \
--input tests/data/cofw/001766.jpg \
--show --draw-heatmap
Visualization result:
If you use a heatmap-based model and set the argument --draw-heatmap, the predicted heatmap will be visualized together with the keypoints.
To save visualized results on disk:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/yolox-s_8xb8-300e_coco-face.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/yolo-x_8xb8-300e_coco-face_13274d7c.pth \
configs/face_2d_keypoint/rtmpose/face6/rtmpose-m_8xb256-120e_face6-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-face6_pt-in1k_120e-256x256-72a37400_20230529.pth \
--input tests/data/cofw/001766.jpg \
--draw-heatmap --output-root vis_results
To save the predicted results on disk, please specify --save-predictions.
To run demos on CPU:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/yolox-s_8xb8-300e_coco-face.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/yolo-x_8xb8-300e_coco-face_13274d7c.pth \
configs/face_2d_keypoint/rtmpose/face6/rtmpose-m_8xb256-120e_face6-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-face6_pt-in1k_120e-256x256-72a37400_20230529.pth \
--input tests/data/cofw/001766.jpg \
--show --draw-heatmap --device=cpu
2D Face Video Demo¶
Videos share the same interface as images. The only difference is that ${INPUT_PATH} can be either a local path or a URL to a video file.
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/yolox-s_8xb8-300e_coco-face.py \
https://download.openmmlab.com/mmpose/mmdet_pretrained/yolo-x_8xb8-300e_coco-face_13274d7c.pth \
configs/face_2d_keypoint/rtmpose/face6/rtmpose-m_8xb256-120e_face6-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-face6_pt-in1k_120e-256x256-72a37400_20230529.pth \
--input demo/resources/<demo_face.mp4> \
--show --output-root vis_results --radius 1
The original video can be downloaded from Google Drive.
2D Face Pose Demo with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py tests/data/wflw \
--pose2d face --vis-out-dir vis_results/wflw --radius 1
This command infers all images located in tests/data/wflw and saves the visualization results in the vis_results/wflw directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
Speed Up Inference¶
For 2D face keypoint estimation models, try editing the config file: for example, set model.test_cfg.flip_test=False in line 90 of aflw_hrnetv2.
Hand Keypoint Estimation¶
We provide a demo script to test a single image or video with hand detectors and top-down pose estimators. We assume that you have already installed mmdet (version >= 3.0).
Hand Box Model Preparation: The pre-trained hand bounding box detection model can be found in the mmdet model zoo.
2D Hand Image Demo¶
python demo/topdown_demo_with_mmdet.py \
${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_PATH} [--output-root ${OUTPUT_DIR}] \
[--show] [--device ${GPU_ID or CPU}] [--save-predictions] \
[--draw-heatmap ${DRAW_HEATMAP}] [--radius ${KPT_RADIUS}] \
[--kpt-thr ${KPT_SCORE_THR}] [--bbox-thr ${BBOX_SCORE_THR}]
The pre-trained hand pose estimation models can be downloaded from the model zoo. Take the hand5 model as an example:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_nano_320-8xb32_hand.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmdet_nano_8xb32-300e_hand-267f9c8f.pth \
configs/hand_2d_keypoint/rtmpose/hand5/rtmpose-m_8xb256-210e_hand5-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-hand5_pt-aic-coco_210e-256x256-74fb594_20230320.pth \
--input tests/data/onehand10k/9.jpg \
--show --draw-heatmap
Visualization result:
If you use a heatmap-based model and set the argument --draw-heatmap, the predicted heatmap will be visualized together with the keypoints.
To save visualized results on disk:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_nano_320-8xb32_hand.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmdet_nano_8xb32-300e_hand-267f9c8f.pth \
configs/hand_2d_keypoint/rtmpose/hand5/rtmpose-m_8xb256-210e_hand5-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-hand5_pt-aic-coco_210e-256x256-74fb594_20230320.pth \
--input tests/data/onehand10k/9.jpg \
--output-root vis_results --show --draw-heatmap
To save the predicted results on disk, please specify --save-predictions.
To run demos on CPU:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_nano_320-8xb32_hand.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmdet_nano_8xb32-300e_hand-267f9c8f.pth \
configs/hand_2d_keypoint/rtmpose/hand5/rtmpose-m_8xb256-210e_hand5-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-hand5_pt-aic-coco_210e-256x256-74fb594_20230320.pth \
--input tests/data/onehand10k/9.jpg \
--show --draw-heatmap --device cpu
2D Hand Keypoints Video Demo¶
Videos share the same interface as images. The only difference is that ${INPUT_PATH} can be either a local path or a URL to a video file.
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_nano_320-8xb32_hand.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmdet_nano_8xb32-300e_hand-267f9c8f.pth \
configs/hand_2d_keypoint/rtmpose/hand5/rtmpose-m_8xb256-210e_hand5-256x256.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-hand5_pt-aic-coco_210e-256x256-74fb594_20230320.pth \
--input data/tests_data_nvgesture_sk_color.avi \
--output-root vis_results --kpt-thr 0.1
The original video can be downloaded from GitHub.
2D Hand Keypoints Demo with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py tests/data/onehand10k \
--pose2d hand --vis-out-dir vis_results/onehand10k \
--bbox-thr 0.5 --kpt-thr 0.05
This command infers all images located in tests/data/onehand10k and saves the visualization results in the vis_results/onehand10k directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
Speed Up Inference¶
For 2D hand keypoint estimation models, try editing the config file: for example, set model.test_cfg.flip_test=False in onehand10k_hrnetv2.
Human Pose Estimation¶
We provide demo scripts to perform human pose estimation on images or videos.
2D Human Pose Top-Down Image Demo¶
Use full image as input¶
We provide a demo script to test a single image, using the full image as the input bounding box.
python demo/image_demo.py \
${IMG_FILE} ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--out-file ${OUTPUT_FILE} \
[--device ${GPU_ID or CPU}] \
[--draw-heatmap]
If you use a heatmap-based model and set the argument --draw-heatmap, the predicted heatmap will be visualized together with the keypoints.
The pre-trained human pose estimation models can be downloaded from the model zoo. Take the coco model as an example:
python demo/image_demo.py \
tests/data/coco/000000000785.jpg \
configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
--out-file vis_results.jpg \
--draw-heatmap
To run this demo on CPU:
python demo/image_demo.py \
tests/data/coco/000000000785.jpg \
configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
--out-file vis_results.jpg \
--draw-heatmap \
--device=cpu
Visualization result:
Use mmdet for human bounding box detection¶
We provide a demo script to run mmdet for human detection, and mmpose for pose estimation.
We assume that you have already installed mmdet (version >= 3.0).
python demo/topdown_demo_with_mmdet.py \
${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_PATH} \
[--output-root ${OUTPUT_DIR}] [--save-predictions] \
[--show] [--draw-heatmap] [--device ${GPU_ID or CPU}] \
[--bbox-thr ${BBOX_SCORE_THR}] [--kpt-thr ${KPT_SCORE_THR}]
Example:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth \
configs/body_2d_keypoint/rtmpose/body8/rtmpose-m_8xb256-420e_body8-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.pth \
--input tests/data/coco/000000197388.jpg --show --draw-heatmap \
--output-root vis_results/
Visualization result:
To save the predicted results on disk, please specify --save-predictions.
2D Human Pose Top-Down Video Demo¶
The above demo script can also take video as input, running mmdet for human detection and mmpose for pose estimation. The difference is that ${INPUT_PATH} can be either a local path or a URL to a video file.
We assume that you have already installed mmdet (version >= 3.0).
Example:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth \
configs/body_2d_keypoint/rtmpose/body8/rtmpose-m_8xb256-420e_body8-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.pth \
--input tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4 \
--output-root=vis_results/demo --show --draw-heatmap
2D Human Pose Bottom-up Image/Video Demo¶
We also provide a demo script that uses bottom-up models to estimate human poses in an image or video without relying on human detectors.
python demo/bottomup_demo.py \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_PATH} \
[--output-root ${OUTPUT_DIR}] [--save-predictions] \
[--show] [--device ${GPU_ID or CPU}] \
[--kpt-thr ${KPT_SCORE_THR}]
Example:
python demo/bottomup_demo.py \
configs/body_2d_keypoint/dekr/coco/dekr_hrnet-w32_8xb10-140e_coco-512x512.py \
https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/dekr/coco/dekr_hrnet-w32_8xb10-140e_coco-512x512_ac7c17bf-20221228.pth \
--input tests/data/coco/000000197388.jpg --output-root=vis_results \
--show --save-predictions
Visualization result:
2D Human Pose Estimation with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py \
tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4 \
--pose2d human --vis-out-dir vis_results/posetrack18
This command infers the video and saves the visualization results in the vis_results/posetrack18 directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
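The same inference can also be driven from Python via the MMPoseInferencer API; below is a minimal sketch mirroring the command above (the 'human' alias and the vis_out_dir argument are the same ones used on the command line):
from mmpose.apis import MMPoseInferencer

inferencer = MMPoseInferencer('human')  # build from the 2D human model alias
# calling the inferencer returns a generator that yields per-frame results
result_generator = inferencer(
    'tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4',
    vis_out_dir='vis_results/posetrack18')
results = [result for result in result_generator]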
Speed Up Inference¶
Some tips to speed up MMPose inference:
For top-down models, try editing the config file. For example:
1. set model.test_cfg.flip_test=False in topdown-res50.
2. use a faster human bounding box detector, see MMDetection.
Human Whole-Body Pose Estimation¶
2D Human Whole-Body Pose Top-Down Image Demo¶
Use full image as input¶
We provide a demo script to test a single image, using the full image as the input bounding box.
python demo/image_demo.py \
${IMG_FILE} ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--out-file ${OUTPUT_FILE} \
[--device ${GPU_ID or CPU}] \
[--draw-heatmap]
The pre-trained whole-body pose estimation models can be downloaded from the model zoo. Take the coco-wholebody_vipnas_res50_dark model as an example:
python demo/image_demo.py \
tests/data/coco/000000000785.jpg \
configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_vipnas-res50_dark-8xb64-210e_coco-wholebody-256x192.py \
https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth \
--out-file vis_results.jpg
To run demos on CPU:
python demo/image_demo.py \
tests/data/coco/000000000785.jpg \
configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_vipnas-res50_dark-8xb64-210e_coco-wholebody-256x192.py \
https://download.openmmlab.com/mmpose/top_down/vipnas/vipnas_res50_wholebody_256x192_dark-67c0ce35_20211112.pth \
--out-file vis_results.jpg \
--device=cpu
Use mmdet for human bounding box detection¶
We provide a demo script to run mmdet for human detection, and mmpose for pose estimation.
We assume that you have already installed mmdet (version >= 3.0).
python demo/topdown_demo_with_mmdet.py \
${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_PATH} \
[--output-root ${OUTPUT_DIR}] [--save-predictions] \
[--show] [--draw-heatmap] [--device ${GPU_ID or CPU}] \
[--bbox-thr ${BBOX_SCORE_THR}] [--kpt-thr ${KPT_SCORE_THR}]
Examples:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth \
configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_hrnet-w48_dark-8xb32-210e_coco-wholebody-384x288.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_wholebody_384x288_dark-f5726563_20200918.pth \
--input tests/data/coco/000000196141.jpg \
--output-root vis_results/ --show
To save the predicted results on disk, please specify --save-predictions.
2D Human Whole-Body Pose Top-Down Video Demo¶
The above demo script can also take video as input, running mmdet for human detection and mmpose for pose estimation.
We assume that you have already installed mmdet.
Examples:
python demo/topdown_demo_with_mmdet.py \
demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth \
configs/wholebody_2d_keypoint/topdown_heatmap/coco-wholebody/td-hm_hrnet-w48_dark-8xb32-210e_coco-wholebody-384x288.py \
https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_wholebody_384x288_dark-f5726563_20200918.pth \
--input https://user-images.githubusercontent.com/87690686/137440639-fb08603d-9a35-474e-b65f-46b5c06b68d6.mp4 \
--output-root vis_results/ --show
Visualization result:
2D Human Whole-Body Pose Estimation with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py tests/data/crowdpose \
--pose2d wholebody --vis-out-dir vis_results/crowdpose
This command infers all images located in tests/data/crowdpose and saves the visualization results in the vis_results/crowdpose directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
Speed Up Inference¶
Some tips to speed up MMPose inference:
For top-down models, try editing the config file. For example:
1. set model.test_cfg.flip_test=False in pose_hrnet_w48_dark+.
2. use a faster human bounding box detector, see MMDetection.
3D Hand Demo¶
3D Hand Estimation Image Demo¶
Using ground-truth hand bounding boxes as input¶
We provide a demo script to test a single image, given a ground-truth JSON file.
python demo/hand3d_internet_demo.py \
${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
--input ${INPUT_FILE} \
--output-root ${OUTPUT_ROOT} \
[--save-predictions] \
[--gt-joints-file ${GT_JOINTS_FILE}] \
[--disable-rebase-keypoint] \
[--show] \
[--device ${GPU_ID or CPU}] \
[--kpt-thr ${KPT_THR}] \
[--show-kpt-idx] \
[--show-interval] \
[--radius ${RADIUS}] \
[--thickness ${THICKNESS}]
The pre-trained hand pose estimation model can be downloaded from the model zoo. Take the InterNet model as an example:
python demo/hand3d_internet_demo.py \
configs/hand_3d_keypoint/internet/interhand3d/internet_res50_4xb16-20e_interhand3d-256x256.py \
https://download.openmmlab.com/mmpose/hand3d/internet/res50_intehand3dv1.0_all_256x256-42b7f2ac_20210702.pth \
--input tests/data/interhand2.6m/image69148.jpg \
--save-predictions \
--output-root vis_results
3D Hand Pose Estimation with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py tests/data/interhand2.6m/image29590.jpg --pose3d hand3d --vis-out-dir vis_results/hand3d
This command infers the image and saves the visualization results in the vis_results/hand3d directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
3D Human Pose Demo¶
3D Human Pose Two-stage Estimation Demo¶
Using mmdet for human bounding box detection and a top-down model for the 1st stage (2D pose detection), then inferring the 2nd stage (2D-to-3D lifting)¶
We assume that you have already installed mmdet.
python demo/body3d_pose_lifter_demo.py \
${MMDET_CONFIG_FILE} \
${MMDET_CHECKPOINT_FILE} \
${MMPOSE_CONFIG_FILE_2D} \
${MMPOSE_CHECKPOINT_FILE_2D} \
${MMPOSE_CONFIG_FILE_3D} \
${MMPOSE_CHECKPOINT_FILE_3D} \
--input ${VIDEO_PATH or IMAGE_PATH or 'webcam'} \
[--show] \
[--disable-rebase-keypoint] \
[--disable-norm-pose-2d] \
[--num-instances ${NUM_INSTANCES}] \
[--output-root ${OUT_VIDEO_ROOT}] \
[--save-predictions] \
[--device ${GPU_ID or CPU}] \
[--det-cat-id ${DET_CAT_ID}] \
[--bbox-thr ${BBOX_THR}] \
[--kpt-thr ${KPT_THR}] \
[--use-oks-tracking] \
[--tracking-thr ${TRACKING_THR}] \
[--show-interval ${INTERVAL}] \
[--thickness ${THICKNESS}] \
[--radius ${RADIUS}] \
[--online]
Note that:
${VIDEO_PATH} can be a local path or a URL to a video file.
If the --online option is set, future frame information cannot be used when using multiple frames for inference in the 2D pose detection stage.
Examples:
For single-frame inference in the 2D pose detection stage (i.e., not relying on extra frames to obtain the results of the current frame), and to save the prediction results, try this:
python demo/body3d_pose_lifter_demo.py \
demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth \
configs/body_2d_keypoint/rtmpose/body8/rtmpose-m_8xb256-420e_body8-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.pth \
configs/body_3d_keypoint/video_pose_lift/h36m/video-pose-lift_tcn-243frm-supv-cpn-ft_8xb128-200e_h36m.py \
https://download.openmmlab.com/mmpose/body3d/videopose/videopose_h36m_243frames_fullconv_supervised_cpn_ft-88f5abbb_20210527.pth \
--input https://user-images.githubusercontent.com/87690686/164970135-b14e424c-765a-4180-9bc8-fa8d6abc5510.mp4 \
--output-root vis_results \
--save-predictions
For multi-frame inference in the 2D pose detection stage (i.e., relying on extra frames to obtain the results of the current frame), try this:
python demo/body3d_pose_lifter_demo.py \
demo/mmdetection_cfg/rtmdet_m_640-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_m_8xb32-100e_coco-obj365-person-235e8209.pth \
configs/body_2d_keypoint/rtmpose/body8/rtmpose-m_8xb256-420e_body8-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.pth \
configs/body_3d_keypoint/video_pose_lift/h36m/video-pose-lift_tcn-243frm-supv-cpn-ft_8xb128-200e_h36m.py \
https://download.openmmlab.com/mmpose/body3d/videopose/videopose_h36m_243frames_fullconv_supervised_cpn_ft-88f5abbb_20210527.pth \
--input https://user-images.githubusercontent.com/87690686/164970135-b14e424c-765a-4180-9bc8-fa8d6abc5510.mp4 \
--output-root vis_results \
--online
3D Human Pose Demo with Inferencer¶
The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:
python demo/inferencer_demo.py tests/data/coco/000000000785.jpg \
--pose3d human3d --vis-out-dir vis_results/human3d
This command infers the image and saves the visualization results in the vis_results/human3d directory.
In addition, the Inferencer supports saving predicted poses. For more information, please refer to the inferencer document.
Webcam Demo¶
The original Webcam API has been deprecated since v1.1.0. Users can now use either the Inferencer or a demo script to perform pose estimation with webcam input.
Webcam Demo with Inferencer¶
Users can use the MMPose Inferencer to estimate human poses from webcam input by executing the following command:
python demo/inferencer_demo.py webcam --pose2d 'human'
For additional information about the arguments of Inferencer, please refer to the Inferencer Documentation.
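A webcam session can likewise be driven from Python; below is a minimal sketch, assuming the Inferencer accepts 'webcam' as input just like the command line does:
from mmpose.apis import MMPoseInferencer

inferencer = MMPoseInferencer('human')
# iterate over the generator to process webcam frames until the stream ends
for result in inferencer('webcam', show=True):
    pass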
Webcam Demo with Demo Script¶
All of the demo scripts, except for demo/image_demo.py, support webcam input.
Take demo/topdown_demo_with_mmdet.py as an example: users can run this script with webcam input by specifying --input webcam in the command:
# inference with webcam
python demo/topdown_demo_with_mmdet.py \
projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-m_8xb256-420e_coco-256x192.py \
https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/rtmpose-m_simcc-aic-coco_pt-aic-coco_420e-256x192-63eb25f7_20230126.pth \
--input webcam \
--show