
EDTER

EDTER: Edge Detection with Transformer
Mengyang Pu, Yaping Huang, Yuming Liu, Qingji Guan and Haibin Ling
CVPR 2022

🔥Update
More detailed usage
The comparison of the reported results and the reproduced results
All training logs, including the experimental environment

Contents

0 Issues and Answers
1 Usage
    1.1 Linux
    1.2 Datasets
    1.3 Initial weights
2 Training
    2.1 Step 1: The training of EDTER-Stage I on BSDS500
    2.2 Step 2: The training of EDTER-Stage II on BSDS500
    2.3 How to train the EDTER model on BSDS-VOC (BSDS500 and PASCAL VOC Context):
        Step 1: The training of EDTER-VOC-Stage I on PASCAL VOC Context
    2.4 Step 2: The training of EDTER-VOC-Stage I on BSDS500
    2.5 Step 3: The training of EDTER-VOC-Stage II on BSDS500
3 Testing
    3.1 EDTER-Stage I with single-scale testing
    3.2 EDTER-Stage I with multi-scale testing
    3.3 EDTER-Stage II with single-scale testing
    3.4 EDTER-Stage II with multi-scale testing
4 🔥🔥The comparison of the reported results and the reproduced results🔥🔥
    4.1 The results of EDTER-Stage I on BSDS500
    4.2 The results of EDTER-Stage II on BSDS500
    4.3 The EDTER model pre-trained on the PASCAL VOC Context dataset
    4.4 The results of EDTER-VOC-Stage I on BSDS500
    4.5 The results of EDTER-VOC-Stage II on BSDS500
5 Eval
6 Results
7 Download the Pre-trained model
Important notes
Acknowledgments
Reference

Issues and Answers

🔥Q: How to change the batch size?
🔥A: The training batch size = samples_per_gpu * GPU_NUM. To set samples_per_gpu, edit the data setting in the config file:

data = dict(samples_per_gpu=4)
For example, data = dict(samples_per_gpu=4) means that each GPU processes 4 images per iteration. For a training batch size of 8, set samples_per_gpu=4 and GPU_NUM=2.
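
As a rough illustration (not EDTER's exact config contents), the per-GPU batch size lives in the data dict of an MMSegmentation-style config; workers_per_gpu is shown only as a typical companion field and its value is an assumption:

# Sketch of the per-GPU batch size setting in a config such as
# configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py (values are illustrative).
data = dict(
    samples_per_gpu=4,  # images processed by each GPU per iteration
    workers_per_gpu=2,  # data-loading workers per GPU (assumed value)
)

# Effective training batch size = samples_per_gpu * GPU_NUM,
# e.g. 4 * 2 = 8 when training with 2 GPUs.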

🔥Q: KeyError: 'BSDSDataset is not in the dataset registry'.
🔥A:

cd EDTER
pip install -e .  # or "python setup.py develop"
pip install -r requirements/optional.txt
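
For background, this error means the custom dataset class has not been added to the dataset registry, which happens when the repository's package is not importable. The sketch below shows roughly how an MMSegmentation-style dataset registers itself; the class body and import paths are illustrative of the mechanism, not a copy of EDTER's code:

# Illustrative only: how an MMSegmentation-style dataset class is registered.
# Installing the repo with "pip install -e ." makes modules like this
# importable, which is what puts 'BSDSDataset' into the dataset registry.
from mmseg.datasets import DATASETS, CustomDataset

@DATASETS.register_module()
class BSDSDataset(CustomDataset):
    """Registered under the name 'BSDSDataset', so a config can select it
    with dataset_type = 'BSDSDataset'."""
    def __init__(self, **kwargs):
        super(BSDSDataset, self).__init__(**kwargs)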

🔥Q: Dataset download.
🔥A: Please refer to 1.2 Datasets

🔥🔥Q: Reproduced results.
🔥🔥A: Please refer to 4 The comparison of the reported results and the reproduced results; all reproduced results are uploaded to BaiDuNetdisk.
❗Note: The capacity of our Google Drive is limited, and the training files for each model (including .log, .mat, .png, and .pth files) total approximately 20GB, so we upload them to BaiDuNetdisk. If you cannot download them, please contact me (email: [email protected]).

Reproduced results Download
EDTER-Stage I BaiDuNetdisk or Google Drive
EDTER-Stage II BaiDuNetdisk or Google Drive
EDTER-VOC-Stage I pre-train BaiDuNetdisk or Google Drive
EDTER-VOC-Stage I BaiDuNetdisk or Google Drive
EDTER-VOC-Stage II BaiDuNetdisk or Google Drive

1 Usage

Our project is developed based on MMsegmentation. Please follow the official MMsegmentation INSTALL.md and getting_started.md for installation and dataset preparation.

1.1 Linux

The full script for setting up EDTER with conda follows SETR.

conda create -n edter python=3.7 -y
conda activate edter
conda install pytorch=1.6.0 torchvision cudatoolkit=10.1 -c pytorch -y
pip install mmcv-full==1.2.2 -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html
cd EDTER
pip install -e .  # or "python setup.py develop"
pip install -r requirements/optional.txt
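
Optionally, a quick check like the sketch below confirms that the pinned versions are in place and that PyTorch can see the GPUs:

# Optional sanity check of the environment set up above.
import torch
import mmcv

print('torch:', torch.__version__)          # expected: 1.6.0
print('mmcv:', mmcv.__version__)            # expected: 1.2.2
print('cuda available:', torch.cuda.is_available())
print('gpu count:', torch.cuda.device_count())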

1.2 Datasets

BSDS500

Download the augmented BSDS500 data (1.2GB) from HED-BSDS.
The original BSDS500 dataset can be downloaded from Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500).

|-- data
    |-- BSDS
        |-- ImageSets
        |   |-- train_pair.txt
        |   |-- test.txt
        |   |-- pascal_train_pair.txt
        |-- train
        |   |-- aug_data
        |   |-- aug_data_scale_0.5
        |   |-- aug_data_scale_1.5
        |   |-- aug_gt
        |   |-- aug_gt_scale_0.5
        |   |-- aug_gt_scale_1.5
        |-- test
        |   |-- 2018.jpg
        ......
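
Optionally, a small script like the following (paths taken from the tree above; adjust data_root if your data lives elsewhere) can verify that the layout is in place:

# Check the expected BSDS500 layout shown above.
import os

data_root = 'data/BSDS'
expected = [
    'ImageSets/train_pair.txt',
    'ImageSets/test.txt',
    'ImageSets/pascal_train_pair.txt',
    'train/aug_data',
    'train/aug_gt',
    'test',
]
for rel in expected:
    path = os.path.join(data_root, rel)
    print(path, 'OK' if os.path.exists(path) else 'MISSING')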

PASCAL VOC

Download the augmented PASCAL VOC data from Google Drive or BaiDuNetdisk.

|-- data
    |-- PASCAL
        |-- ImageSets
        |   |-- pascal_train_pair.txt
        |   |-- test.txt
        |-- aug_data
            |-- 0.0_0
            |   |-- 2008_000002.jpg
            ......
            |-- 0.0_1
            |   |-- 2008_000002.jpg
            ......
        |-- aug_gt
            |-- 0.0_0
            |   |-- 2008_000002.png
            ......
            |-- 0.0_1
            |   |-- 2008_000002.png
            ......

NYUD

Download the augmented NYUD data (~11GB) from Google Drive or BaiDuNetdisk.

|-- data
    |-- NYUD
        |-- ImageSets
        |   |-- hha-test.txt
        |   |-- hha-train.txt
        |   |-- image-test.txt
        |   |-- image-train.txt
        |-- train
            |-- GT
            |-- GT_05
            |-- GT_15
            |-- HHA
            |-- HHA_05
            |-- HHA_15
            |-- Images
            |-- Images_05
            |-- Images_15
        |-- test
            |-- HHA
            |   |-- img_5001.png
            ......
            |-- Images
            |   |-- img_5001.png
            ......

1.3 Initial weights

If you are unable to download the initial weights automatically due to network issues, you can download them from here (ViT-Base-p16) and here (ViT-Large-p16).
The two .pth files of initial weights should be placed in the folder ./pretrain.

|-- EDTER
    |-- pretrain
        |-- jx_vit_base_p16_384-83fb41ba.pth
        |-- jx_vit_large_p16_384-b3be5167.pth
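
Optionally, the sketch below checks that the two weight files are present and loadable (file names as in the tree above; run it from the EDTER root):

# Optional check that the initial ViT weights are present and loadable.
import os
import torch

for name in ('jx_vit_base_p16_384-83fb41ba.pth',
             'jx_vit_large_p16_384-b3be5167.pth'):
    path = os.path.join('pretrain', name)
    if os.path.isfile(path):
        state = torch.load(path, map_location='cpu')
        print(path, '-', len(state), 'entries')
    else:
        print(path, '- MISSING')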

2 Training

❗❗❗ Note: Our project only supports training on a single machine, either distributed across multiple GPUs or on a single GPU.

2.1 Step 1: The training of EDTER-Stage I on BSDS500

To set the per-GPU batch size, edit the data setting in the config file:

data = dict(samples_per_gpu=4)
For example, data = dict(samples_per_gpu=4) means that each GPU processes 4 images per iteration, so the training batch size = samples_per_gpu * GPU_NUM. In our experiments, we set the training batch size to 8, with samples_per_gpu=4 and GPU_NUM=2.

The command to train the first-stage model is as follows:

cd EDTER
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} 
# For example, train Stage I on the BSDS500 dataset with 2 GPUs
cd EDTER
bash ./tools/dist_train.sh configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py 2

2.2 Step 2: The training of EDTER-Stage II on BSDS500

Set '--global-model-path' in tools/train_local.py to the path of the best Stage I checkpoint:

parser.add_argument('--global-model-path', type=str, default='/....Your Path..../XXXXXXXX.pth',
                    help='the dir of the best global model')

cd EDTER
bash ./tools/dist_train_local.sh ${GLOBALCONFIG_FILE} ${CONFIG_FILE} ${GPU_NUM} 
# For example, train Stage II on the BSDS500 dataset with 2 GPUs
cd EDTER
bash ./tools/dist_train_local.sh configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py configs/bsds/EDTER_BIMLA_320x320_80k_bsds_local8x8_bs_8.py 2

2.3 How to train the EDTER model on BSDS-VOC (BSDS500 and PASCAL VOC Context):

Step 1: The training of EDTER-VOC-Stage I on PASCAL VOC Context

We first pre-train Stage I on PASCAL VOC Context Dataset (Google Drive, BaiDuNetdisk).
The command to train the first-stage model on PASCAL VOC Context is as follows:

cd EDTER
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} 
# For example, train Stage I on the PASCAL VOC Context dataset with 2 GPUs
cd EDTER
bash ./tools/dist_train.sh configs/bsds/EDTER_BIMLA_320x320_80k_pascal_bs_8.py 2

❗Note: The model trained on the PASCAL VOC Context dataset is used as the initialization model in Step 2.

2.4 Step 2: The training of EDTER-VOC-Stage I on BSDS500

First, we set the path of the pre-trained model in EDTER/tools/train.py (lines 28 to 30 in commit 3b1751a):

parser.add_argument(
    '--load-from', #type=str, default='',
    help='the checkpoint file to load weights from')

For example:

parser.add_argument(
    '--load-from', type=str,
    default='../work_dirs/EDTER_BIMLA_320x320_80k_pascal_bs_8/iter_X0000.pth',
    help='the checkpoint file to load weights from')

Then, we execute the following command to train the first-stage model on BSDS500:

cd EDTER
bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} 
# For example, train Stage I on the BSDS500 dataset with 2 GPUs
cd EDTER
bash ./tools/dist_train.sh configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8.py 2

2.5 Step 3: The training of EDTER-VOC-Stage II on BSDS500

Set '--global-model-path' in tools/train_local.py:

parser.add_argument('--global-model-path', type=str, default='/....Your Path..../XXXXXXXX.pth',
                    help='the dir of the best global model')
❗Note: According to the results in Stage I, we select the best model as the global model. Thus, we set: parser.add_argument('--global-model-path', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8/iter_X0000.pth', help='the dir of the best global model').

Then, the command to train the second-stage model is as follows:

cd EDTER
./tools/dist_train_local.sh ${GLOBALCONFIG_FILE} ${CONFIG_FILE} ${GPU_NUM} 
# For example, train Stage II on the BSDS500 dataset with 2 GPUs
cd EDTER
./tools/dist_train_local.sh configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8.py configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_local8x8_bs_8.py 2

3 Testing

3.1 EDTER-Stage I with single-scale testing

First, please set '--config', '--checkpoint', and '--tmpdir' in tools/test.py.
'--config':

parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py', help='train config file path')
'--checkpoint':
parser.add_argument('--checkpoint', type=str, default='/....Your Path..../XXXXX.pth')
'--tmpdir' (EDTER/tools/test.py, lines 47 to 50 in commit f060fd3):

parser.add_argument(
    '--tmpdir', type=str, default='/...Save Path.../results',
    help='tmp directory used for collecting results from multiple '
         'workers, available when gpu_collect is not specified')
For example:

#If you want to test EDTER-Stage I, please set:
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py', help='train config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_bs_8/iter_XXXXX.pth')
#If you want to test EDTER-VOC-Stage I, please set:
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8.py', help='train config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8/iter_XXXXX.pth')

Then, please execute the command:

cd EDTER
python ./tools/test.py

3.2 EDTER-Stage I with multi-scale testing

First, please set '--config', '--checkpoint', and '--tmpdir' in tools/test.py.
'--config':

parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py', help='train config file path')
'--checkpoint':
parser.add_argument('--checkpoint', type=str, default='/....Your Path..../XXXXX.pth')
'--tmpdir' (EDTER/tools/test.py, lines 47 to 50 in commit f060fd3):

parser.add_argument(
    '--tmpdir', type=str, default='/...Save Path.../results',
    help='tmp directory used for collecting results from multiple '
         'workers, available when gpu_collect is not specified')
For example:

#If you want to test EDTER-Stage I, please set:
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8_ms.py', help='train config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_bs_8/iter_XXXXX.pth')
#If you want to test EDTER-VOC-Stage I, please set:
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8_ms.py', help='train config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8/iter_XXXXX.pth')

❗Note: For multi-scale testing, use the config file ending in _ms.py in the configs directory.

Then, please execute the command:

cd EDTER
python ./tools/test.py

3.3 EDTER-Stage II with single-scale testing

First, please set '--globalconfig', '--config', '--global-checkpoint', '--checkpoint', and '--tmpdir' in tools/test_local.py.
'--globalconfig':

parser.add_argument('--globalconfig', type=str, default='configs/bsds/VIT_BIMLA_320x320_80k_bsds_bs_8.py',
                    help='train global config file path')
'--config':
parser.add_argument('--config', type=str, default='configs/bsds/VIT_BIMLA_320x320_80k_bsds_local8x8_bs_8.py',
                    help='train local config file path')
'--checkpoint':
parser.add_argument('--checkpoint', type=str, default='/your path........../xxxxx.pth',
                    help='the dir of local model')
'--global-checkpoint':
parser.add_argument('--global-checkpoint', type=str,
                    default='/your path........../xxxxxx.pth',
                    help='the dir of global model')
'--tmpdir':
parser.add_argument(
    '--tmpdir', type=str, default='/save path........../local_results',
    help='tmp directory used for collecting results from multiple '
         'workers, available when gpu_collect is not specified')

For example:

#If you want to test EDTER-Stage II, please set:
parser.add_argument('--globalconfig', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8.py', help='train global config file path')
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_local8x8_bs_8.py', help='train local config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_local8x8_bs_8/iter_XXXXX.pth', help='the dir of local model')
parser.add_argument('--global-checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_bs_8/iter_XXXXX.pth', help='the dir of global model')
#If you want to test EDTER-VOC-Stage II, please set:
parser.add_argument('--globalconfig', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8.py', help='train global config file path')
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_local8x8_bs_8.py', help='train local config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_local8x8_bs_8/iter_XXXXX.pth', help='the dir of local model')
parser.add_argument('--global-checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8/iter_XXXXX.pth', help='the dir of global model')

Please execute the command:

cd EDTER
python ./tools/test_local.py

3.4 EDTER-Stage II with multi-scale testing

First, please set '--globalconfig', '--config', '--global-checkpoint', '--checkpoint', and '--tmpdir' in tools/test_local.py.
'--globalconfig':

parser.add_argument('--globalconfig', type=str, default='configs/bsds/VIT_BIMLA_320x320_80k_bsds_bs_8.py',
                    help='train global config file path')
'--config':
parser.add_argument('--config', type=str, default='configs/bsds/VIT_BIMLA_320x320_80k_bsds_local8x8_bs_8.py',
                    help='train local config file path')
'--checkpoint':
parser.add_argument('--checkpoint', type=str, default='/your path........../xxxxx.pth',
                    help='the dir of local model')
'--global-checkpoint':
parser.add_argument('--global-checkpoint', type=str,
                    default='/your path........../xxxxxx.pth',
                    help='the dir of global model')
'--tmpdir':
parser.add_argument(
    '--tmpdir', type=str, default='/save path........../local_results',
    help='tmp directory used for collecting results from multiple '
         'workers, available when gpu_collect is not specified')

For example:

#If you want to test EDTER-Stage II, please set:
parser.add_argument('--globalconfig', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_bs_8_ms.py', help='train global config file path')
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_local8x8_bs_8_ms.py', help='train local config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_local8x8_bs_8/iter_XXXXX.pth', help='the dir of local model')
parser.add_argument('--global-checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_bs_8/iter_XXXXX.pth', help='the dir of global model')
#If you want to test EDTER-VOC-Stage II, please set:
parser.add_argument('--globalconfig', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8_ms.py', help='train global config file path')
parser.add_argument('--config', type=str, default='configs/bsds/EDTER_BIMLA_320x320_80k_bsds_aug_local8x8_bs_8_ms.py', help='train local config file path')
parser.add_argument('--checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_local8x8_bs_8/iter_XXXXX.pth', help='the dir of local model')
parser.add_argument('--global-checkpoint', type=str, default='../work_dirs/EDTER_BIMLA_320x320_80k_bsds_aug_bs_8/iter_XXXXX.pth', help='the dir of global model')

❗Note: For multi-scale testing, use the config file ending in _ms.py in the configs directory.

Please execute the command:

cd EDTER
python ./tools/test_local.py

🔥🔥4 The comparison of the reported results and the reproduced results🔥🔥

4.1 The results of EDTER-Stage I on BSDS500

The original results reported in the paper (row 1 of Table 2) are as follows:

Model ODS OIS AP
🔥EDTER-StageI(SS) 0.817 0.835 0.867

The reproduced results of EDTER-Stage I on BSDS500 are shown in the table:

iter ODS(SS) OIS(SS) AP(SS) ODS(MS) OIS(MS) AP(MS)
10k 0.813 0.830 0.861 0.837 0.854 0.890
20k 0.816 0.832 0.865 0.837 0.853 0.889
🔥30k(best) 0.817 0.833 0.866 0.837 0.853 0.888
40k 0.815 0.832 0.866 0.836 0.853 0.888
50k 0.815 0.832 0.866 0.834 0.852 0.887
60k 0.813 0.828 0.862 0.832 0.849 0.885
70k 0.813 0.829 0.864 0.832 0.849 0.884
80k 0.813 0.829 0.863 0.831 0.849 0.884

SS: Single-Scale testing, MS: Multi-Scale testing

🔥All files generated during training, including the models, the test results (.png and .mat files) for every 10k iterations, and the training logs, can be downloaded from Google Drive or BaiDuNetdisk.

4.2 The results of EDTER-Stage II on BSDS500

The original results reported in the paper (Table 3, EDTER) are as follows:

Model ODS(SS) OIS(SS) AP(SS) ODS(MS) OIS(MS) AP(MS)
🔥EDTER-StageII 0.824 0.841 0.880 0.840 0.858 0.896

The reproduced results of EDTER-Stage II on BSDS500 are shown in the table:

iter ODS(SS) OIS(SS) AP(SS) ODS(MS) OIS(MS) AP(MS)
10k 0.821 0.838 0.874 0.839 0.856 0.893
20k 0.822 0.839 0.876 0.838 0.856 0.893
30k 0.824 0.841 0.878 0.837 0.855 0.893
🔥40k(best) 0.825 0.841 0.880 0.838 0.855 0.894
50k 0.823 0.840 0.877 0.835 0.852 0.892
60k 0.822 0.839 0.876 0.834 0.852 0.889
70k 0.820 0.837 0.875 0.833 0.851 0.890
80k 0.817 0.836 0.873 0.829 0.848 0.888

🔥All files generated during training, including the models, the test results (.png and .mat files) for every 10k iterations, and the training logs, can be downloaded from Google Drive or BaiDuNetdisk.

4.3 The EDTER model pre-trained on the PASCAL VOC Context dataset

On the testing set of BSDS500, we report the results of the EDTER model pre-trained on the PASCAL VOC Context dataset, as shown in the table:

iter ODS(SS) OIS(SS) AP(SS)
🔥10k(best) 0.775 0.795 0.835
20k 0.767 0.788 0.827
30k 0.760 0.777 0.816
40k 0.762 0.779 0.815
50k 0.755 0.769 0.809
60k 0.757 0.771 0.810
70k 0.757 0.771 0.810
80k 0.757 0.771 0.810

🔥All files generated during training, including the models, the test results (.png and .mat files) for every 10k iterations, and the training logs, can be downloaded from Google Drive or BaiDuNetdisk.

4.4 The results of EDTER-VOC-Stage I on BSDS500

The paper does not report results for this setting.

The reproduced results of EDTER-VOC-Stage I on BSDS500 are shown in the table:

iter ODS(SS) OIS(SS) AP(SS) ODS(MS) OIS(MS) AP(MS)
10k 0.823 0.837 0.871 0.845 0.861 0.897
🔥20k(best) 0.824 0.839 0.872 0.844 0.860 0.896
30k 0.822 0.838 0.873 0.842 0.858 0.895
40k 0.821 0.837 0.871 0.842 0.857 0.893
50k 0.821 0.836 0.870 0.839 0.855 0.891
60k 0.820 0.834 0.869 0.840 0.855 0.891
70k 0.819 0.835 0.869 0.838 0.854 0.890
80k 0.819 0.834 0.868 0.838 0.854 0.890

🔥All files generated during training, including the models, the test results (.png and .mat files) for every 10k iterations, and the training logs, can be downloaded from Google Drive or BaiDuNetdisk.

4.5 The results of EDTER-VOC-Stage II on BSDS500

The original results reported in the paper (Table 3, EDTER-VOC) are as follows:

Model ODS(SS) OIS(SS) AP(SS) ODS(MS) OIS(MS) AP(MS)
🔥EDTER-VOC-Stage II 0.832 0.847 0.886 0.848 0.865 0.903

The reproduced results of EDTER-VOC-Stage II on BSDS500 are shown in the table:

iter ODS(SS) OIS(SS) AP(SS) ODS(MS) OIS(MS) AP(MS)
10k 0.827 0.844 0.880 0.846 0.861 0.900
🔥20k(best) 0.829 0.845 0.883 0.846 0.862 0.901
30k 0.829 0.845 0.883 0.843 0.860 0.899
40k 0.826 0.842 0.882 0.841 0.858 0.897
50k 0.823 0.838 0.878 0.837 0.854 0.893
60k 0.821 0.837 0.878 0.834 0.852 0.892
70k 0.816 0.833 0.872 0.831 0.848 0.888
80k 0.815 0.832 0.871 0.830 0.848 0.887

🔥All files generated during training, including the models, the test results (.png and .mat files) for every 10k iterations, and the training logs, can be downloaded from Google Drive or BaiDuNetdisk.

5 Eval

BSDS500

cd eval
run eval_bsds.m

NYUD

Download the matfile (NYUD) from Google Drive or BaiDuNetdisk.

cd eval
run eval_nyud.m

6 Results

If you want to compare your method with EDTER, you can download the pre-computed results:
BSDS500: Google Drive.
NYUD: Google Drive or BaiDuNetdisk.

7 Download the Pre-trained model

model Pre-trained Model
EDTER-BSDS-VOC-StageI BaiDuNetdisk or Google Drive
EDTER-BSDS-VOC-StageII BaiDuNetdisk or Google Drive
EDTER-NYUD-RGB-StageI BaiDuNetdisk or Google Drive
EDTER-NYUD-RGB-StageII BaiDuNetdisk or Google Drive
EDTER-NYUD-HHA-StageI BaiDuNetdisk or Google Drive
EDTER-NYUD-HHA-StageII BaiDuNetdisk or Google Drive

❗❗❗Important notes

  • ❗❗❗All the models are trained and tested on a single machine with multiple NVIDIA V100 32G GPUs.
  • ❗❗❗Distributed training across multiple machines is not supported.

Acknowledgments

  • We thank the anonymous reviewers for their valuable and inspiring comments and suggestions.
  • Thanks to the previous open-source repos:
    SETR
    MMsegmentation

Reference

@InProceedings{Pu_2022_CVPR,
    author    = {Pu, Mengyang and Huang, Yaping and Liu, Yuming and Guan, Qingji and Ling, Haibin},
    title     = {EDTER: Edge Detection With Transformer},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {1402-1412}
}