
SPARE challenge data: Half-fan 4D Cone Beam CT with LEAP #98

Open · Gstevenson3 opened this issue Aug 29, 2024 · 5 comments

@Gstevenson3

Hello,

I'm trying to use LEAP to reconstruct data from the SPARE Challenge and specifically the ClinicalVarianDataset.

I'm having some trouble setting up LEAP's parameters.
The Clinical Varian dataset is very large and broken into 21 7-Zip parts (all of which have to be downloaded before anything can be decompressed), so I've decided to start with the first scan of the first patient (CV_P1_T_01). I've zipped the relevant parts of that data up to this Google Drive link to hopefully make it easier to help me out!


Here are some of the relevant parameters provided by SPARE:

  • Projection Dimension:
    For P1 and P2 - [1024,768]
    For P3, P4, and P5 - [1008,752]

  • PixelSpacing:
    [0.388,0.388] mm

  • Geometry:
    Detector lateral offset: 148 mm (half-fan)
    Source-to-isocenter distance: 1000 mm
    Source-to-detector distance: 1500 mm

  • Reconstruction dimension:
    Dimension: [450,220,450] (LR,SI,AP)
    PixelSpacing: [1,1,1] mm (LR,SI,AP)

Note that neither the dataset nor the documentation declares which projection dimension is rows and which is columns. (I have assumed [rows, columns].)
Here is an example gif (from the FDKRecon directory) of what I'm trying to recreate from the projection data (in the Proj folder) they provide.
[animated GIF: FDK4D reconstruction]
However, my results are far from getting even a noisier version of this reconstruction.
I've set the LEAP parameters to the following:

```
img:
  dimx: 450
  dimy: 220
  dimz: 450
  pwidth: 1
  pheight: 1
  offsetx: 0
  offsety: 0
  offsetz: 0
proj:
  nangles: 680
  nrows: 1024
  ncols: 768
  pheight: 0.388
  pwidth: 0.388
  crow: 511.5
  ccol: 381.44
  sod: 1000
  sdd: 1500
```

I adjusted ccol according to the instructions in LEAP's offsetScan example. Note that I'm guessing at the direction of the ccol shift and am not sure it's correct.
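For reference, here's the arithmetic for both possible directions of the shift (a quick sketch assuming a 148 mm lateral offset and 0.388 mm pixels, with 1024 as the column dimension; only one sign can be right):

```python
pixelSize = 0.388   # detector pixel pitch in mm (from the SPARE docs)
numCols = 1024      # assuming 1024 is the column dimension
offset_mm = 148.0   # lateral detector offset for the half-fan geometry

mid = 0.5 * (numCols - 1)                 # geometric center column = 511.5
ccol_minus = mid - offset_mm / pixelSize  # shift toward lower column indices, ~130.06
ccol_plus = mid + offset_mm / pixelSize   # shift toward higher column indices, ~892.94
```

So 381.44 is the size of the shift in pixels, not the shifted center column itself.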
I've set .set_offsetScan(True) on my projector as well, and I'm including all projections together even though (eventually) I would like to break them up into the Respiratory Bins provided with them.

All that to say, my results (in 2D form, where I am slicing out the middle of each axis) look like this:

[images: central slices along the x, y, and z axes]

Clearly, I've set something up wrong about LEAP's parameters and this Varian dataset. Your advice would be much appreciated and I'm excited other people seem to be interested in the same / similar 4D-CBCT datasets.

Thanks!

@kylechampley
Collaborator

Hi Garrett,

Thanks for posting this issue!

The script below reconstructs the data using FBP. I did not use the information tagged as "Matrix" in the XML file; I don't quite understand how to interpret it. It may encode a detector rotation that could improve the results. Let me know if you need help with this or whether you are fine with just ignoring it.

```python
import os
import glob
import numpy as np
import xml.etree.ElementTree as ET
from leapctype import *
leapct = tomographicModels()

dataPath = r'D:\tomography\CV_P1_T_01\Proj'

# Read the gantry angle (in degrees) of every projection from the geometry file
tree = ET.parse(os.path.join(dataPath, 'Geometry.xml'))
root = tree.getroot()
allEntries = root.findall('.//GantryAngle')
numAngles = len(allEntries)
angles = np.array([float(entry.text) for entry in allEntries], dtype=np.float32)
angles = np.unwrap(angles*np.pi/180.0)*180.0/np.pi  # remove 360-degree wraparound jumps

sdd = 1500.0      # source-to-detector distance (mm)
sod = 1000.0      # source-to-isocenter distance (mm)
pixelSize = 0.388
numCols = 1024
numRows = 768
# shift the center column by the 148 mm lateral detector offset
centerCol = 0.5*(numCols-1) - 148.0/pixelSize

leapct.set_conebeam(numAngles, numRows, numCols, pixelSize, pixelSize, 0.5*(numRows-1), centerCol, angles, sod, sdd)
leapct.set_offsetScan(True)
leapct.set_truncatedScan(True)
leapct.set_volume(450, 450, 220, 1.0, 1.0)

g = leapct.allocate_projections()
f = leapct.allocate_volume()

# sort so the projection order matches the angle order
files = sorted(glob.glob(os.path.join(dataPath, 'Proj_*[0-9].bin')))
for n in range(len(files)):
    anImage = np.reshape(np.fromfile(files[n], dtype=np.float32), (numRows, numCols))
    g[n,:,:] = anImage[:,:]

leapct.FBP(g, f)
leapct.display(f)
```

@Gstevenson3
Author

Thank you Kyle!

I was making several mistakes on the data reading and LEAP side, but have a working single-frame FBP result now.

I'm currently working on integrating your advice from this issue.

This particular dataset has 10 distinct respiratory bins, and in turn 10 ground-truth images to compare against, and some bins have fewer than 360 degrees of data. Hence the need to integrate your "copy the first projection to the end to make 360" advice.
This "10 images from 680 projections" idea is foreign to me, because (at least in the parallel-beam world) I'm used to making numImages = numProjections.
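For concreteness, here's how I'm planning to split the projections into bins and close a short bin out to 360 degrees (a sketch; the per-projection `phase` labels are a hypothetical stand-in for whatever the SPARE metadata actually provides):

```python
import numpy as np

def split_by_bin(g, angles, phase, numBins=10):
    """Group projections and their gantry angles by respiratory bin label."""
    return [(g[phase == b], angles[phase == b]) for b in range(numBins)]

def pad_to_360(g_bin, angles_bin):
    """Copy the first projection to the end, 360 degrees later, so the
    bin's angular range closes to a full rotation."""
    g_closed = np.concatenate([g_bin, g_bin[:1]], axis=0)
    angles_closed = np.append(angles_bin, angles_bin[0] + 360.0)
    return g_closed, angles_closed
```

Each bin would then get its own set_conebeam call (with that bin's angle array) and its own FBP reconstruction.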

So I'd like to leave this issue open for now, in case I can formulate any more advanced follow-up questions!

@kylechampley
Collaborator

Garrett, I'm afraid I don't understand your comments. There is no relation between the number of projections and the number of slices in the reconstruction. For parallel-beam, the number of slices in the reconstruction does match the number of detector rows. Is that what you meant?

@Gstevenson3
Author

Hi Kyle,

Apologies. I'm conflating a few things together into word vomit.

At a high level, I was trying to convey that the requirement with set_offsetScan(True) that every reconstructed frame include 360 degrees of angular range is new to me. In other non-offset, parallel-beam datasets I've not had this constraint, and I'd gotten used to generating a reconstructed frame for every projection (which is what I meant by numImages = numProjections).

I've gotten that part of my problem working.

I don't have any ongoing issues at the moment, but would like to leave this thread open for a little longer in case any come up.

Thanks again for all your support!

@kylechampley
Collaborator

Garrett,

Tomography is very geometrical, so many things can be explained by visualizing the geometry.

OK, so assume just a 2D CT scan. Then one must collect projections over all directions in order to uniquely reconstruct a point in the object. Rays that travel in opposite directions (separated by 180 degrees) measure the same thing, so one must collect projections over 180 degrees in order to reconstruct. But in the half-fan case, the detector only covers half the object. Now imagine this detector rotating around: a fixed point in space will only be in the projection about half the time because, again, the detector only covers half the object. Thus one needs a full 360 degrees of projections to guarantee that each point in the object is covered by 180 degrees of projections.
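To make this concrete with the SPARE numbers (a back-of-the-envelope check that treats the detector as flat and ignores the cone angle):

```python
# Half-fan field of view at isocenter, a rough flat-detector estimate:
# the offset detector reaches (lateral offset + half the detector width)
# from the central ray, demagnified from the detector plane to isocenter.
sod, sdd = 1000.0, 1500.0        # mm, from the SPARE geometry
pixelSize, numCols = 0.388, 1024
offset_mm = 148.0

half_width = 0.5 * numCols * pixelSize            # ~198.7 mm at the detector
fov_radius = (offset_mm + half_width) * sod / sdd # ~231.1 mm at isocenter
fov_diameter = 2.0 * fov_radius                   # ~462 mm
```

So the offset scan's ~462 mm field-of-view diameter comfortably covers the 450 mm reconstruction extent, which a centered half-fan detector could not.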

Hopefully this makes sense. It may help to draw a picture of what is happening.
