face-key-point-pytorch
1. Data structure
The directory structure of `landmarks_jpg` is as follows:
```
|--landmarks_jpg
|----AFW
|------AFW_134212_1_0.jpg
|------AFW_134212_1_1.jpg
|----HELEN
|------HELEN_232194_1_0.jpg
|------HELEN_232194_1_1.jpg
|----IBUG
|------IBUG_image_003_1_0.jpg
|------IBUG_image_003_1_1.jpg
|----LFPW
|------LFPW_image_test_0001_0.jpg
|------LFPW_image_test_0001_1.jpg
```
The directory structure of `landmarks_label` is as follows:
```
|--landmarks_label
|----AFW
|------AFW_134212_1_0_pts
|------AFW_134212_1_1_pts
|----HELEN
|------HELEN_232194_1_0_pts
|------HELEN_232194_1_1_pts
|----IBUG
|------IBUG_image_003_1_0_pts
|------IBUG_image_003_1_1_pts
|----LFPW
|------LFPW_image_test_0001_0_pts
|------LFPW_image_test_0001_1_pts
```
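The two trees mirror each other: every `*.jpg` under `landmarks_jpg` has a matching `*_pts` file in the same dataset sub-folder of `landmarks_label`. As a minimal sketch of that convention (the helper `label_path_for` is hypothetical, not part of this repo), the label path can be derived from the image path:

```python
from pathlib import PurePosixPath

def label_path_for(image_path: str) -> str:
    """Map an image under landmarks_jpg to its label under landmarks_label.

    Assumes the mirrored layout shown above: same sub-folder, same stem,
    with the '.jpg' suffix replaced by '_pts'.
    """
    p = PurePosixPath(image_path)
    parts = list(p.parts)
    parts[0] = "landmarks_label"   # swap the top-level folder
    parts[-1] = p.stem + "_pts"    # AFW_134212_1_0.jpg -> AFW_134212_1_0_pts
    return str(PurePosixPath(*parts))

print(label_path_for("landmarks_jpg/AFW/AFW_134212_1_0.jpg"))
# -> landmarks_label/AFW/AFW_134212_1_0_pts
```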
You can prepare the dataset yourself, or download it from the cloud drive:
| name | link |
|---|---|
| landmarks_jpg.zip | https://pan.baidu.com/s/1AJKpa0ac-6ZPWBASiMv87Q (code: nujr) |
| landmarks_label.zip | https://pan.baidu.com/s/1wBAZMFkNQS6R6KLkRl6ktw (code: zgl0) |
2. How to train
First, install the third-party packages:
```shell
pip install -r requirements.txt
```
Then simply run:
```shell
python3 train.py
```
If you want to use a pretrained model, revise the code below as needed:
```python
load_pretrain_model = False
model_dir = r".\pretrain_models\face-keypoint-vgg16-0.pth"
if load_pretrain_model:
    checkpoint = torch.load(model_dir)
    net.load_state_dict(checkpoint)
```
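The checkpoint round-trip can be sketched as below. This is a stand-in example, not the repo's code: the model here is a small `nn.Linear` rather than the VGG16-based keypoint network, and the filename is made up. Passing `map_location="cpu"` to `torch.load` lets a checkpoint saved on a GPU machine be restored on a CPU-only one:

```python
import torch
import torch.nn as nn

# Stand-in model; the repo's actual network is VGG16-based.
net = nn.Linear(4, 2)

# Save a checkpoint as a plain state_dict.
torch.save(net.state_dict(), "face-keypoint-demo.pth")

# Load it back; map_location="cpu" makes a GPU-saved checkpoint
# loadable on a machine without CUDA.
checkpoint = torch.load("face-keypoint-demo.pth", map_location="cpu")
net.load_state_dict(checkpoint)
print(sorted(checkpoint.keys()))
```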
3. How to test
Revise the test file name in predict.py, then run:
```shell
python3 predict.py
```