
BraTS-preprocess stuck at "status reveived: { 'code': 201, 'message': 'nifti examination queued!'}." #26

Open
Lucas-rbnt opened this issue Feb 6, 2023 · 32 comments
Labels: question (Further information is requested)

@Lucas-rbnt

Hi everyone,
I work on a computing server, and when trying to use single-exam preprocessing I get stuck at:

status reveived: {'code': 201, 'message': 'input inspection queued!'}
status reveived: {'code': 201, 'message': 'nifti examination queued!'}

I assume the BraTS server is supposed to run locally?
If so, I guess the problem is due to the lack of a web server on the computing server. Is it possible to disable it and use the Python API only?

Sorry for the inconvenience,
Lucas Robinet.

@neuronflow
Owner

Thanks for your interest in BraTS Toolkit.

"I assume the BraTS server is running locally?"
it should 🙈

What happens if you start the nvidia docker hello-world?
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

@Lucas-rbnt
Author

Hi,

Sorry for the delayed answer.
I use Docker daily, so I stuck with my usual workflow.

I assumed it would work since NVIDIA Docker is a wrapper around Docker, but maybe I am wrong here?

Thanks again for your answer,
Lucas.

@neuronflow
Owner

What happens if you start the nvidia docker hello-world?
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

@Lucas-rbnt
Author

Hello from Docker!
This message shows that your installation appears to be working correctly.

....

@neuronflow
Owner

Can you please show the full output, including the GPU and CUDA version?

"If yes then I guess it's due to the lack of a web server on the computing server, is it possible to disable it and use the Python API only?"

Can you elaborate on what you mean here? :)

@Lucas-rbnt
Author

Lucas-rbnt commented Feb 13, 2023

I'm working on a computing server with no web server; I thought that maybe the problem comes from there?

You mean the BraTS output?
Even when trying to work in CPU-only mode, I'm still stuck at this part of the process.

Otherwise: the compute server has 4 GeForce 2080 Ti GPUs and CUDA 11.6.

@neuronflow
Owner

"I'm working on a computing server with no web server,"
Cannot follow you, sorry, please elaborate.

What happens internally: the backend is started in a Docker container, and it opens a local Flask server that communicates with the Python frontend via WebSockets.
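Since the frontend waits on this local server, one quick sanity check is whether anything is actually listening on the expected port. A minimal sketch using only Python's standard library (port 5000, the Flask default, is an assumption here; the toolkit may use a different one):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed default: the backend's Flask server on localhost:5000.
print("backend reachable:", port_open("127.0.0.1", 5000))
```

If nothing is listening there while the preprocessing hangs, the frontend has no backend to receive status updates from, which would match the behavior described in this issue.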

We do have another preprocessing pipeline, not requiring Docker, that will be published soon.

@Lucas-rbnt
Author

Thank you for your answer.

So I guess my problem might come from the lack of a graphics server on my compute server.

@neuronflow
Owner

No, BraTS Toolkit can run headless without trouble.

@Lucas-rbnt
Author

Oh thanks again.

Then I have no idea why it's blocked at this stage.

@neuronflow
Owner

Are other Docker containers running on the system? Which ports are already taken?
Can you show the full output from the hello-world?

https://github.com/neuronflow/BraTS-Toolkit/blob/master/0_preprocessing_single.py
Did you confirm processing of the exam? Otherwise, try setting the confirm parameter to False.

@Lucas-rbnt
Author

Lucas-rbnt commented Feb 14, 2023

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Some ports are already taken, of course, but not port 5000, which is dedicated to Flask. No containers are currently running on the system.
I tried both CPU and GPU mode, and both confirm=True and confirm=False.

Is BraTS Toolkit using a Docker image that requires a login to be pulled?

@neuronflow
Owner

This appears to be the wrong hello world.

What happens if you start the nvidia docker hello-world? https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

@Lucas-rbnt
Author

Lucas-rbnt commented Feb 14, 2023

Can you elaborate on "wrong hello world"?

I tried my regular Docker installation, and I also changed my config to do it the nvidia-ctk way.
Everything works as expected in their documentation, and my outputs (including the hello-world one) match the documented ones.

@neuronflow
Owner

neuronflow commented Feb 14, 2023

Please read the link:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

Perhaps the hello world is confusing you. Please run with and without sudo:

sudo docker run --rm --runtime=nvidia --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

and post the output.

@Lucas-rbnt
Author

Ah yes, I thought you wanted the output of the hello-world container ahah, I didn't quite understand why.

It seems to work fine; I can see my GPUs:

Unable to find image 'nvidia/cuda:11.6.2-base-ubuntu20.04' locally
11.6.2-base-ubuntu20.04: Pulling from nvidia/cuda
[PULLING PROCESS]
Status: Downloaded newer image for nvidia/cuda:11.6.2-base-ubuntu20.04
Tue Feb 14 11:56:02 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.161.03   Driver Version: 470.161.03   CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
| 25%   41C    P2    57W / 250W |   1774MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:43:00.0 Off |                  N/A |
| 29%   30C    P8     1W / 250W |      8MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  Off  | 00000000:81:00.0 Off |                  N/A |
| 29%   26C    P8     2W / 250W |      8MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  Off  | 00000000:C1:00.0 Off |                  N/A |
| 29%   26C    P8    15W / 250W |      8MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

@neuronflow
Owner

neuronflow commented Feb 14, 2023

Okay, your Docker installation seems to be fine. Which data are you trying to process?
What do you see if you type docker ps?

@Lucas-rbnt
Author

I'm trying to process private data. Every sample is a *.nii file.
docker ps shows my greedy_elephant container running.

@neuronflow
Owner

What happens if you process the example data?

@Lucas-rbnt
Author

Lucas-rbnt commented Feb 14, 2023

Well, I'm sorry, I think I finally found the origin. Rereading your BraTS Toolkit paper, the registration is done on the T1 (and not on the T1ce as in similar tools).
In my case, I don't have all 4 modalities; I use only two (FLAIR and T1ce), and I passed the T1ce file as T1. I guess the registration fails because of that, since the BraTS Toolkit worked when I tried it on the 4-modality BraTS data.

Is that the problem?

@neuronflow
Owner

Yes, very likely.

I have an alternative T1c-centric preprocessing pipeline that can deal with fewer modalities, which we hope to publish soon.

@Lucas-rbnt
Author

Yes, sorry to have wasted your time on this issue. Do you have a date for the T1c-centric alternative?
We try to harmonise our preprocessing as much as possible, and the Python API of your tool offers a considerable advantage, which makes it a big plus in our processing phases.

@neuronflow
Owner

No worries.

Would you be interested in investing time and serving as a beta tester? If so, we can set up a call and discuss :)

@Lucas-rbnt
Author

Yes, of course, that could be very interesting.
I would also really like to integrate the tool into my Python routine for my research.

@neuronflow
Owner

@Lucas-rbnt still interested? It would be ready for the first tests now.

@neuronflow neuronflow self-assigned this Oct 18, 2023
@Lucas-rbnt
Author

Yes, I am!

@neuronflow
Owner

I wrote you on LinkedIn; let's coordinate there :)

@neuronflow
Owner

neuronflow commented Oct 30, 2023

@Lucas-rbnt please see the post above. Also:

Trying to process private data. Every sample is a *.nii file. docker ps returns my greedy_elephant container running

Can you try with .nii.gz files?

see: #18

@neuronflow neuronflow added the question Further information is requested label Oct 30, 2023
@abdullahbas

Hey, sorry to bother you, but I have the same problem, although I have all the modalities. It hangs at

status received: {'code': 201, 'message': 'input inspection queued!'}
status received: {'code': 201, 'message': 'nifti examination queued!'}

I don't know what to do. My files are .nii, not .nii.gz.

@neuronflow
Owner

Until issue #18 is closed, you need .nii.gz files. Just renaming is enough; you don't actually need to compress them. You can also try our new preprocessing toolkit, which is much more capable and under active development:
https://github.com/BrainLesion/preprocessing
You can use it like this:
https://github.com/BrainLesion/preprocessing/blob/main/example_modality_centric_preprocessor.py
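The renaming workaround described above can be scripted; here is a small sketch with pathlib (the folder layout is illustrative, and this only appends the extension as suggested, without doing any real gzip compression):

```python
from pathlib import Path

def rename_nii_to_niigz(folder: str) -> list[Path]:
    """Rename every *.nii file in `folder` to *.nii.gz
    (extension change only, no compression). Returns the new paths."""
    renamed = []
    for nii in sorted(Path(folder).glob("*.nii")):
        target = nii.with_name(nii.name + ".gz")  # t1.nii -> t1.nii.gz
        nii.rename(target)
        renamed.append(target)
    return renamed
```

Running this once on each exam folder before starting the preprocessing should be enough to satisfy the .nii.gz requirement mentioned above.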

@abdullahbas

This is way faster than I expected, thanks for informing me. I will try the new one. Nevertheless, adding .gz didn't solve the issue. Issue #18 was about the output; here, it says "no such file or directory" after changing .nii to .nii.gz without any compression. I have tried it both from the CLI and from Python. I am trying to do the preprocessing step only.

@Lucas-rbnt
Copy link
Author

Hello, I've since been able to identify and fix the problem. Can you share the full output so we can check whether we indeed have the same problem?
