Epipolar Transformers


Yihui He
yihui-he.github.io
Twitter @he_yi_hui
YouTube
GitHub (2k followers)
Zhihu @何宜晖 (8k followers)


Yihui He, Rui Yan, Katerina Fragkiadaki, Shoou-I Yu (Carnegie Mellon University, Facebook Reality Labs)

CVPR 2020, CVPR workshop Best Paper Award

Oral presentation and human pose demo videos (playlist):
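At a high level, the epipolar transformer augments the intermediate features of a 2D pose network in one view with features sampled along the corresponding epipolar line in a second view, combined via attention. The sketch below shows that fusion step for a single query location under simplifying assumptions (the epipolar-line sampling and learned feature transforms are omitted); it is illustrative only, not the repository's implementation.

```python
import torch
import torch.nn.functional as F

def epipolar_fusion(src_feat, ref_samples):
    """Fuse a source-view feature with reference-view features along the epipolar line.

    src_feat:    (B, C) feature at the query location in the source view.
    ref_samples: (B, K, C) features sampled at K points along the query's
                 epipolar line in the reference view (sampling not shown here).
    Returns the fused (B, C) feature.
    """
    # Dot-product similarity between the query feature and each epipolar sample.
    scores = torch.einsum('bc,bkc->bk', src_feat, ref_samples)
    weights = F.softmax(scores, dim=1)                            # attention over K samples
    ref_agg = torch.einsum('bk,bkc->bc', weights, ref_samples)    # weighted aggregate
    return src_feat + ref_agg                                     # residual fusion
```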

Models

| config | MPJPE (mm) | model & log |
| --- | --- | --- |
| ResNet-50 256×256, triangulate | 45.3 | download |
| ResNet-50 256×256, triangulate, epipolar transformer | 33.1 | download |
| ResNet-50 256×256, triangulate, epipolar transformer (augmentation) | 30.4 | download |
| ResNet-152 384×384, triangulate, epipolar transformer (extra data) | 19.0 | |
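The triangulate configurations recover each 3D joint from its per-view 2D detections and known camera projection matrices. For intuition, here is a textbook linear (DLT) triangulation sketch, not necessarily the exact routine used in this codebase:

```python
import numpy as np

def triangulate_dlt(points_2d, proj_mats):
    """points_2d: (V, 2) pixel coordinates of one joint in V views.
    proj_mats:  (V, 3, 4) camera projection matrices.
    Returns the (3,) point that best satisfies the linear reprojection constraints.
    """
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        # Each view contributes two equations: x*P[2] - P[0] = 0 and y*P[2] - P[1] = 0.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)                 # (2V, 4)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                         # homogeneous solution = last right singular vector
    return X[:3] / X[3]
```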

We also provide 2D-to-3D lifting network implementations for these two papers:
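As a rough illustration only (not the architecture of either paper), such lifting networks are commonly residual MLPs over flattened 2D joint coordinates:

```python
import torch.nn as nn

class LiftingBlock(nn.Module):
    """Residual fully connected block typical of 2D-to-3D lifting baselines."""
    def __init__(self, dim=1024, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(), nn.Dropout(p),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(), nn.Dropout(p),
        )

    def forward(self, x):
        return x + self.net(x)

class Lifter(nn.Module):
    """Maps flattened 2D joints (J*2) to 3D joints (J*3)."""
    def __init__(self, num_joints=17, dim=1024, num_blocks=2):
        super().__init__()
        self.inp = nn.Linear(num_joints * 2, dim)
        self.blocks = nn.Sequential(*[LiftingBlock(dim) for _ in range(num_blocks)])
        self.out = nn.Linear(dim, num_joints * 3)

    def forward(self, x):              # x: (B, J*2)
        return self.out(self.blocks(self.inp(x)))
```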

Setup

Requirements

Python 3, PyTorch >= 1.2 and < 1.4

pip install -r requirements.txt
conda install pytorch cudatoolkit=10.0 -c pytorch
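Optionally, a quick sanity check of the installed versions (this snippet is not part of the repository):

```python
import torch

# The setup above targets PyTorch >= 1.2 and < 1.4 with CUDA 10.0.
major, minor = (int(v) for v in torch.__version__.split('.')[:2])
assert (major, minor) in [(1, 2), (1, 3)], 'expected PyTorch >= 1.2 and < 1.4'
print('PyTorch', torch.__version__, '| CUDA', torch.version.cuda,
      '| GPU available:', torch.cuda.is_available())
```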

Pretrained weights download

mkdir outs
cd datasets/
bash get_pretrained_models.sh

Please follow the instructions in datasets/README.md to prepare the dataset.

Training

python main.py --cfg path/to/config
tensorboard --logdir outs/

Testing

Testing with the latest checkpoint

python main.py --cfg configs/xxx.yaml DOTRAIN False

Testing with weights

python main.py --cfg configs/xxx.yaml DOTRAIN False WEIGHTS xxx.pth
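The trailing KEY VALUE pairs (e.g. DOTRAIN False, WEIGHTS xxx.pth) are command-line overrides merged on top of the YAML config. The sketch below illustrates this pattern with yacs; the option names come from the commands above, but treating main.py as yacs-based is an assumption, not a statement about its actual code:

```python
import sys
from yacs.config import CfgNode as CN

# Illustrative defaults; the real project defines many more options.
_C = CN()
_C.DOTRAIN = True
_C.DOTEST = True
_C.WEIGHTS = ''

def load_cfg(cfg_file, opts):
    """Merge a YAML file, then KEY VALUE overrides such as ['DOTRAIN', 'False']."""
    cfg = _C.clone()
    cfg.merge_from_file(cfg_file)   # values from configs/xxx.yaml
    cfg.merge_from_list(opts)       # command-line overrides
    cfg.freeze()
    return cfg

if __name__ == '__main__':
    # e.g. python config_demo.py configs/xxx.yaml DOTRAIN False WEIGHTS xxx.pth
    print(load_cfg(sys.argv[1], sys.argv[2:]))
```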

Visualization

Epipolar Transformers Visualization

Human 3.6M input visualization

python main.py --cfg configs/epipolar/keypoint_h36m.yaml DOTRAIN False DOTEST False EPIPOLAR.VIS True VIS.H36M True SOLVER.IMS_PER_BATCH 1
python main.py --cfg configs/epipolar/keypoint_h36m.yaml DOTRAIN False DOTEST False VIS.MULTIVIEWH36M True EPIPOLAR.VIS True SOLVER.IMS_PER_BATCH 1

Human 3.6M prediction visualization

# generate images with the epipolar transformer config
python main.py --cfg configs/epipolar/keypoint_h36m_zresidual_fixed.yaml DOTRAIN False DOTEST True VIS.VIDEO True DATASETS.H36M.TEST_SAMPLE 2
# generate images with the benchmark (baseline) config
python main.py --cfg configs/benchmark/keypoint_h36m.yaml DOTRAIN False DOTEST True VIS.VIDEO True DATASETS.H36M.TEST_SAMPLE 2
# use https://github.com/yihui-he/multiview-human-pose-estimation-pytorch to generate images for ICCV 19
python run/pose2d/valid.py --cfg experiments-local/mixed/resnet50/256_fusion.yaml # set test batch size to 1 and PRINT_FREQ to 2
# generate video
python scripts/video.py --src outs/epipolar/keypoint_h36m_fixed/video/multiview_h36m_val/
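If you want to assemble the generated frames into a video without scripts/video.py, a generic OpenCV sketch like the following works (the output name, fps, and frame extension are assumptions; adjust them to match the generated files):

```python
import glob
import cv2

frame_dir = 'outs/epipolar/keypoint_h36m_fixed/video/multiview_h36m_val/'
frames = sorted(glob.glob(frame_dir + '*.png'))   # adjust the extension if needed

first = cv2.imread(frames[0])
h, w = first.shape[:2]
writer = cv2.VideoWriter('demo.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 25, (w, h))
for f in frames:
    writer.write(cv2.imread(f))                   # frames must all share the same size
writer.release()
```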

Citing Epipolar Transformers

If you find Epipolar Transformers helpful in your research, please cite the paper:

@inproceedings{epipolartransformers,
  title={Epipolar Transformers},
  author={He, Yihui and Yan, Rui and Fragkiadaki, Katerina and Yu, Shoou-I},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={7779--7788},
  year={2020}
}

FAQ

Please open a new issue on GitHub.