
dataset to test the model, and evaluate the model. #11

Open
KeenNest opened this issue Feb 8, 2024 · 33 comments

Comments

@KeenNest

KeenNest commented Feb 8, 2024

To run this command:
python evaluate.py --lr_dir= --key_dir= --target_dir= --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt=<file format, e.g. "%08d.png">

I need the right paths for the LR, key, and ground-truth directories. Can you please share a link to the dataset so I can download it?

@KeenNest KeenNest changed the title dataset to test the model dataset to test the model, and evaluate the model. Feb 20, 2024
@KeenNest

Why am I getting this error while running evaluate.py?
...,
[0.6902, 0.7098, 0.7176, ..., 0.2078, 0.2078, 0.2157],
[0.6941, 0.7098, 0.7176, ..., 0.1804, 0.1451, 0.1608],
[0.6784, 0.7294, 0.7412, ..., 0.1882, 0.1961, 0.1843]]]]]), 'key_frame_int': tensor([15])} ('office-video',)
Illegal instruction (core dumped)

@Justarrrrr

Hello, I have the same question. Can you please share the link to the dataset so I can download it?

@KeenNest

KeenNest commented Apr 8, 2024

@Justarrrrr I created my own dataset.
If you're getting "Illegal instruction (core dumped)", reduce the frame size to 160 × 120.
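For reference, here is a minimal resize sketch with Pillow (using Pillow, a synthetic in-memory frame, and the 160 × 120 target are my own illustration, not code from the repo):

```python
from PIL import Image

def downscale_frame(frame, size=(160, 120)):
    """Downscale one frame to the target size (Pillow's default resampling is bicubic)."""
    return frame.resize(size)

# Illustrative example with a synthetic frame; no files needed.
frame = Image.new("RGB", (1920, 1080))
small = downscale_frame(frame)
print(small.size)  # (160, 120)
```

Applied in a loop over the extracted video frames, this produces the reduced-size inputs before splitting them into the three sets.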

Thanks

@KeenNest

KeenNest commented Apr 8, 2024

@Justarrrrr what kind of problem are you facing?

@KeenNest

KeenNest commented Apr 8, 2024

The model is already trained; you just have to download it from the given link.

@KeenNest

KeenNest commented Apr 8, 2024

python3 evaluate.py --lr_dir=lr-set --key_dir=key-set --target_dir=hr-set --output_dir=proj --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt="frame%d.jpg"

@Justarrrrr

Hello @KeenNest! Thank you for the help you provided earlier, but now I have a new question. Can you help me answer it? I would like to know why there are three folders in the output of evaluate.py: 'key', 'target', and 'lr'. What is the purpose of these three outputs?

@KeenNest

Basically, the evaluate arguments are the model's inputs: lr (low resolution) is the input taken directly from a live camera, in the form of frames; key (high-resolution key frames) captures a live frame at intervals; and target is used to check that the reconstructed frames match the ground truth.

@Justarrrrr

I understand those two inputs, but why should we provide the hr_set? In the end, the reconstructed frames are the same resolution as the hr_set.

@KeenNest

Hi @Justarrrrr,
basically, we need the hr_set to check the performance of the model.
Are you able to produce output from it?

@Justarrrrr

Yeah, I can produce the output. But as I understand it, we just need the low-resolution images and some key frames; the hr_set is only needed to compute metrics like the loss. Is that right?
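For example, a quality metric like PSNR against the hr_set could be computed roughly like this (a minimal NumPy sketch; the frame shapes and the 8-bit peak value are illustrative, not the repo's actual metric code):

```python
import numpy as np

def psnr(target, output, peak=255.0):
    """Peak signal-to-noise ratio between a ground-truth and a reconstructed frame."""
    mse = np.mean((np.asarray(target, dtype=np.float64)
                   - np.asarray(output, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 20 * np.log10(peak) - 10 * np.log10(mse)

# Illustrative example with synthetic 8-bit frames.
gt = np.zeros((120, 160, 3), dtype=np.uint8)
out = np.full((120, 160, 3), 10, dtype=np.uint8)
print(round(psnr(gt, out), 2))  # MSE = 100 -> ~28.13 dB
```

A higher PSNR means the reconstructed frames are closer to the ground truth in hr_set.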

@KeenNest

Yes. But I have some doubts about producing the output files, can you help me with that?
Also, what system are you using, and what are its requirements?

@Justarrrrr

What doubt do you have? I use the Vid4 dataset:
--lr_dir=./Vid4/BDx4 --key_dir=./Vid4/GT --target_dir=./Vid4/GT --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt=%08d.png --output_dir=./output
Place the Vid4 dataset in the project folder; in the end, the results can be found in the generated 'output' folder.

@KeenNest

I am using
python3 evaluate.py --lr_dir=/home/ashish/proj/dataset/lr-set/ --key_dir=/home/ashish/proj/dataset/key-set/ --target_dir=/home/ashish/proj/dataset/hr-set/ --model_dir=experiments/bix4_keyvsrc_attn/ --restore_file=pretrained --file_fmt="frame%d.png"
to run the code, and my process gets killed after some time. I am using a Jetson Nano with 4 GB of RAM.

@Justarrrrr

what's the Traceback?

@KeenNest

ashish@ashish-desktop:~/proj/NeuriCam$ python3 evaluate.py --lr_dir=/home/ashish/proj/dataset/lr-set/ --key_dir=/home/ashish/proj/dataset/key-set/ --target_dir=/home/ashish/proj/dataset/hr-set/ --model_dir=experiments/bix4_keyvsrc_attn/ --restore_file=pretrained --file_fmt="frame%d.png" --output=./output
/usr/local/lib/python3.6/dist-packages/mmcv/__init__.py:21: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
'On January 1, 2023, MMCV will release v2.0.0, in which it will remove '
Creating the dataset...
- done.
load checkpoint from local path: /home/ashish/proj/NeuriCam/model/keyvsrc/spynet_20210409-c6c1bd09.pth
Evaluating keyvsrc
Starting evaluation
Writing results to ./output...
0%| | 0/1 [00:00<?, ?it/s]Killed

@Justarrrrr

What dataset do you use?

@KeenNest

I created my own dataset.

@Justarrrrr

May I ask what kind of preprocessing you applied to your dataset? I tried running the model on the standard Vid4 dataset with some modifications but encountered issues.

@KeenNest

KeenNest commented Apr 18, 2024

First I reduce the frame size to 160 × 120, then divide the dataset into three parts:
lr-set, hr-set, and key-set.
Is there something else I have to do?
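For the key-set part, picking every Nth frame can be sketched like this (the interval of 15 is my assumption, matching the 'key_frame_int': tensor([15]) value in the log earlier in this thread; the file names are illustrative and depend on --file_fmt):

```python
def select_key_frames(frame_names, interval=15):
    """Every `interval`-th frame becomes a high-resolution key frame."""
    return frame_names[::interval]

# Illustrative file names; real names depend on --file_fmt.
frames = [f"frame{i}.png" for i in range(30)]
print(select_key_frames(frames))  # ['frame0.png', 'frame15.png']
```

The selected names would go into key-set (at full resolution), while every frame gets a downscaled copy in lr-set and a ground-truth copy in hr-set.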

@Justarrrrr

Can you send me your dataset so I can have a try?

@KeenNest

KeenNest commented Apr 18, 2024

@Justarrrrr

That's what I wanted to ask you, haha. I use remote machines with a 2080 Ti and a 4090, but right now I don't have an idle GPU.

@KeenNest

I am using a Jetson Nano:

- GPU: 128-core NVIDIA Maxwell™ architecture
- GPU max frequency: 921 MHz
- CPU: quad-core ARM® Cortex®-A57 MPCore
- CPU max frequency: 1.43 GHz
- Memory: 4 GB 64-bit LPDDR4, 25.6 GB/s

@Justarrrrr

First, you need to create a new folder, e.g. one named "work", under lr_set and the other folders. Secondly, this dataset cannot be processed successfully: I tried modifying the Vid4 dataset to 160x120 but encountered the same error.

@KeenNest

But I already created the "live" folder under lr-set. And what resolution do I have to use, according to you?

@KeenNest

Hi @Justarrrrr, any idea why it's not running?

@Justarrrrr

Sorry for the late reply; I haven't found a solution either. I wanted to ask: do you have the REDS4 dataset?

@KeenNest

KeenNest commented May 8, 2024

No, I don't have it.

@KeenNest

KeenNest commented May 9, 2024

Hi @Justarrrrr,
can you send me the link to the dataset you used to test this repo,
and also the command you're using?

@Justarrrrr

Sure, have you successfully run the model?

@KeenNest

Not yet. I downsampled the dataset, but now I'm getting this error:
RuntimeError: [enforce fail at CPUAllocator.cpp:68] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 5245378560 bytes. Error code 12 (Cannot allocate memory)
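That allocation is about 5.2 GB, which is more than the Jetson Nano's 4 GB of RAM. A rough back-of-the-envelope sketch for estimating what a float32 frame stack needs (the frame counts, resolutions, and (N, C, H, W) layout here are my own illustrative assumptions, not values from the repo):

```python
def frame_stack_bytes(num_frames, height, width, channels=3, bytes_per_value=4):
    """Approximate memory for a float32 tensor of shape (N, C, H, W)."""
    return num_frames * channels * height * width * bytes_per_value

# Illustrative: 100 full-HD frames vs. 100 downscaled 160x120 frames.
print(frame_stack_bytes(100, 1080, 1920) / 2**30)  # ~2.32 GiB
print(frame_stack_bytes(100, 120, 160) / 2**20)    # ~21.97 MiB
```

This is why reducing the resolution or the number of frames loaded at once makes such a large difference on a 4 GB device.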

@KeenNest

Hi @Justarrrrr,
can you send me the list of library versions you installed to run this repo?
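In the meantime, here is a small sketch for dumping one's own installed versions for comparison (it needs Python 3.8+ for importlib.metadata; the package names below are my guesses based on the traceback above, e.g. mmcv, so adjust them to the repo's actual requirements):

```python
from importlib import metadata

def package_versions(names):
    """Return {package: installed version, or None if not installed}."""
    versions = {}
    for name in names:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# Illustrative package list; adjust to the repo's actual dependencies.
for pkg, ver in package_versions(["torch", "mmcv", "numpy"]).items():
    print(pkg, ver or "not installed")
```

Running this on both machines would make it easy to spot a version mismatch.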
