
FaceBoxes: A CPU Real-time Face Detector with High Accuracy


https://arxiv.org/abs/1708.05234

FaceBoxes is a high-performance face detection model. This repository provides the code for performing face detection using the FaceBoxes model.


Project Description

FaceBoxes is a high-performance, real-time face detection model specifically designed for efficient and accurate face detection on CPUs. The model architecture is optimized for speed, making it suitable for applications that require quick and reliable face detection without the need for powerful GPUs.

The following updates have been made so far:

  • Rewrote the training code and model architecture.
  • Pre-trained weights and a checkpoint file are available under the `weights` folder.
  • Made several auxiliary updates to the code.

To Do

  • [ ] Convert the trained PyTorch model to ONNX (see the sketch below)
  • [ ] Run inference with the exported ONNX model
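
As a starting point for the first item, the export might look roughly like the sketch below. It assumes the model class is importable as `models.FaceBoxes`, takes a `num_classes` argument, and that the checkpoint under `weights/` is a plain state dict; adjust the names to the actual repository layout.

import torch

from models import FaceBoxes  # assumed import path

# Build the model and load the trained weights.
model = FaceBoxes(num_classes=2)  # num_classes=2 (face/background) is an assumption
model.load_state_dict(torch.load("./weights/faceboxes.pth", map_location="cpu"))
model.eval()

# Export with a fixed dummy input; height/width are marked dynamic since the
# detector is fully convolutional. Output names are placeholders.
dummy = torch.randn(1, 3, 1024, 1024)
torch.onnx.export(
    model,
    dummy,
    "faceboxes.onnx",
    input_names=["input"],
    output_names=["loc", "conf"],
    dynamic_axes={"input": {0: "batch", 2: "height", 3: "width"}},
    opset_version=11,
)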

Installation

  1. Clone the repository:

git clone https://github.com/yakhyo/faceboxes-pytorch.git
cd faceboxes-pytorch

  2. Install dependencies:

    Create a virtual environment and install the required packages:

conda create -n faceboxes
conda activate faceboxes
pip install -r requirements.txt

The `requirements.txt` should include the necessary libraries such as `torch`, `opencv-python`, `numpy`, etc.
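
For reference, a minimal `requirements.txt` consistent with that list could look like the following (the exact package set and versions are assumptions; use the file shipped with the repository):

torch
torchvision
opencv-python
numpy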

Training

  1. Download the WIDER_FACE dataset and place the images under this directory:

./data/WIDER_FACE/images

  2. Convert the WIDER FACE annotations to VOC format (or download them from here) and place them under this directory:

./data/WIDER_FACE/annotations

  3. Train the model on the WIDER_FACE train set:

python train.py --train-data ./data/WIDER_FACE

train.py file arguments:

usage: train.py [-h] [--train-data TRAIN_DATA] [--num-workers NUM_WORKERS] [--num-classes NUM_CLASSES] [--batch-size BATCH_SIZE] [--epochs EPOCHS] [--print-freq PRINT_FREQ] [--learning-rate LEARNING_RATE]
                [--lr-warmup-epochs LR_WARMUP_EPOCHS] [--power POWER] [--momentum MOMENTUM] [--weight-decay WEIGHT_DECAY] [--gamma GAMMA] [--save-dir SAVE_DIR] [--resume]

Training Arguments for FaceBoxes Model

options:
  -h, --help            show this help message and exit
  --train-data TRAIN_DATA
                        Path to the training dataset directory.
  --num-workers NUM_WORKERS
                        Number of workers to use for data loading.
  --num-classes NUM_CLASSES
                        Number of classes in the dataset.
  --batch-size BATCH_SIZE
                        Number of samples in each batch during training.
  --epochs EPOCHS       max epoch for retraining.
  --print-freq PRINT_FREQ
                        Print frequency during training.
  --learning-rate LEARNING_RATE
                        Initial learning rate.
  --lr-warmup-epochs LR_WARMUP_EPOCHS
                        Number of warmup epochs.
  --power POWER         Power for learning rate policy.
  --momentum MOMENTUM   Momentum factor in SGD optimizer.
  --weight-decay WEIGHT_DECAY
                        Weight decay (L2 penalty) for the optimizer.
  --gamma GAMMA         Gamma update for SGD.
  --save-dir SAVE_DIR   Directory where trained model checkpoints will be saved.
  --resume              Resume training from checkpoint
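
Example run command (the flags are taken from the usage text above; the values shown are illustrative, not recommended settings):

python train.py --train-data ./data/WIDER_FACE --batch-size 32 --epochs 300 --learning-rate 1e-3 --save-dir ./weights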

Dataset Folder Structure

data/
├── AFW/
│   ├── images/
│   └── img_list.txt
├── FDDB/
│   ├── images/
│   └── img_list.txt
├── PASCAL/
│   ├── images/
│   └── img_list.txt
└── WIDER_FACE/                 <= Used for training
    ├── annotations/
    ├── images/
    └── img_list.txt
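
A quick, hypothetical helper (not part of the repository) to confirm the training layout above is in place before starting a run:

import os

# Expected WIDER_FACE training layout (see the tree above).
root = "./data/WIDER_FACE"
for name in ("images", "annotations", "img_list.txt"):
    path = os.path.join(root, name)
    print(f"{path}: {'ok' if os.path.exists(path) else 'MISSING'}")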

Testing

test.py file arguments:

usage: test.py [-h] [--weights WEIGHTS] [--save-dir SAVE_DIR] [--dataset {AFW,PASCAL,FDDB}] [--conf-threshold CONF_THRESHOLD] [--pre-nms-top-k PRE_NMS_TOP_K] [--nms-threshold NMS_THRESHOLD]
               [--post-nms-top-k POST_NMS_TOP_K] [--show-image] [--vis-threshold VIS_THRESHOLD]

Testing Arguments for FaceBoxes Model

options:
  -h, --help            show this help message and exit
  --weights WEIGHTS     Path to the trained model state dict file.
  --save-dir SAVE_DIR   Directory to save the detection results.
  --dataset {AFW,PASCAL,FDDB}
                        Select the dataset to evaluate on.
  --conf-threshold CONF_THRESHOLD
                        Minimum confidence threshold for considering detections.
  --pre-nms-top-k PRE_NMS_TOP_K
                        Number of top bounding boxes to consider for NMS.
  --nms-threshold NMS_THRESHOLD
                        Non-maximum suppression threshold.
  --post-nms-top-k POST_NMS_TOP_K
                        Number of top bounding boxes to keep after NMS.
  --show-image          Display detection results on images.
  --vis-threshold VIS_THRESHOLD
                        Visualization threshold for bounding boxes

Example run command:

python test.py --weights ./weights/faceboxes.pth --dataset PASCAL

It creates a folder named `eval` and stores the detection results there in a text file.
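
The test-time flags above describe a standard detection post-processing chain: keep detections above `--conf-threshold`, take the top `--pre-nms-top-k` by score, apply NMS at `--nms-threshold`, then keep at most `--post-nms-top-k` boxes. A minimal sketch of that chain using `torchvision.ops.nms` is shown below; the threshold values are illustrative and the repository's own implementation may differ.

import torch
from torchvision.ops import nms

def postprocess(boxes, scores, conf_threshold=0.05, pre_nms_top_k=5000,
                nms_threshold=0.3, post_nms_top_k=750):
    """boxes: (N, 4) tensor in xyxy format; scores: (N,) face confidences."""
    # 1. Drop low-confidence detections.
    keep = scores > conf_threshold
    boxes, scores = boxes[keep], scores[keep]

    # 2. Keep only the highest-scoring boxes before NMS.
    order = scores.argsort(descending=True)[:pre_nms_top_k]
    boxes, scores = boxes[order], scores[order]

    # 3. Non-maximum suppression, then cap the number of final boxes.
    keep = nms(boxes, scores, nms_threshold)[:post_nms_top_k]
    return boxes[keep], scores[keep]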

Usage

To run inference on a single image:

python detect.py --weights ./weights/faceboxes.pth --image-path sample.jpg

The resulting file will be saved under the `./results` folder.
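
If you need to call the model from your own code instead of `detect.py`, a rough sketch of the forward pass is shown below. The import path, the `(loc, conf)` output format, and the mean-subtraction values are assumptions; `detect.py` remains the reference for the full pipeline (prior-box decoding and NMS included).

import cv2
import numpy as np
import torch

from models import FaceBoxes  # assumed import path

model = FaceBoxes(num_classes=2)  # num_classes=2 is an assumption
model.load_state_dict(torch.load("./weights/faceboxes.pth", map_location="cpu"))
model.eval()

# Load the image as float32 BGR and subtract the (assumed) training mean.
image = cv2.imread("sample.jpg").astype(np.float32)
image -= (104.0, 117.0, 123.0)
tensor = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)

with torch.no_grad():
    loc, conf = model(tensor)  # raw box regressions and class scores (assumed format)

# loc/conf still need prior-box decoding and NMS; see detect.py for the full pipeline.
print(loc.shape, conf.shape)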

Contributing

Contributions to improve the FaceBoxes Model are welcome. Feel free to fork the repository and submit pull requests, or open issues to suggest features or report bugs.

License

The project is licensed under the MIT license.

Reference

The project is built on top of FaceBoxes.PyTorch. Model architecture and training strategy have been re-written for better performance.
