
On XY precision and image stitching

Nemo Andrea edited this page Apr 11, 2023 · 3 revisions

Generating a high-resolution (sub-micron) pattern is not very complicated - or expensive. A good objective with decent NA and you should be good to go. If you want to stitch images, however, you quickly run into the problem that you need to move either your sample or your illumination system (the former is generally smarter) with a precision better than your pixel size. At a 1 um pixel size, that may mean a maximum position deviation of, say, 250 nm to avoid noticeable stitching errors - maybe 100 nm if you really want a fully seamless result.
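As a quick sanity check on those numbers, here is a minimal sketch of the precision budget; the 1/4 and 1/10 fractions are the illustrative tolerances from the paragraph above, not hard standards:

```python
# Rough stitching-error budget: what stage precision does a given
# projected pixel size demand? The fractions are illustrative only.
def required_precision_nm(pixel_size_um, fraction):
    """Maximum allowed position deviation, in nm."""
    return pixel_size_um * 1000 * fraction

pixel = 1.0  # um, as in the example above
print(required_precision_nm(pixel, 0.25))  # 'don't notice errors' budget
print(required_precision_nm(pixel, 0.10))  # 'fully seamless' budget
```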

Meeting such high precision requirements on a mechanical platform is generally the domain of linear induction motors on air bearings. Beautiful pieces of kit, but with a price tag to match. Luckily for us, we can use some tricks to get reasonable results without having to shell out the equivalent of a nice car for a stage.

Encoders 📏

On any precision motion system you will want to measure the location of the stage directly (in contrast to e.g. counting the number of rotations of the lead screw). This is done via encoders. Encoders come in magnetic, optical and capacitive varieties, with the first two being the most common. Magnetic encoders will go down to 100 nm resolution (interpolated), while optical encoders will go down to tens of nanometres. They consist of a readhead and some kind of 'tape' with optical or magnetic markers at a very precise interval, which the readhead reads out and reports.

The three resolutions 📌

Now that we have an idea of what goes into the XY axes, we can try to think of strategies for our stitching problem. We have three kinds of resolution in our system:

- Re: the resolution of the encoder (the smallest position step it can report)
- Ro: the optical resolution, i.e. the projected pixel size
- Rs: the resolution (positioning precision) of the stage mechanics

In an ideal case, we would have Re > Rs > Ro (reading '>' as 'finer than'). That way we can simply stitch images directly by moving the stage with subpixel precision. But I am not a millionaire, so this option is reserved for only the highest end of systems and would involve an air-bearing linear induction stage.

The other, more interesting, case is where Re > Ro > Rs. This is where we have a high-resolution encoder system (e.g. a magnetic system with a 244 nm smallest step), a pixel size of e.g. 3 um, and a stage with a precision of e.g. 20 um. This situation applies to the openMLA Medjed, and is probably the most common, as encoders are comparatively affordable.

In this case we can use the encoder information to limit the offset between exposure fields to, in the worst case, half of Ro (i.e. ±0.5 px)! This assumes we can dynamically sample any 'exposure field' from the total pattern. It also means that exposure of the total pattern will take a bit longer, as we will have some overlap between the fields. Still, if we imagine we expose a field of 600x600 pixels at 5 um, we cover an area of 3x3 mm. If our Rs is only 50 um (i.e. 10 pixels), the overlap overhead is only a few percent, so write time should not be affected too much.
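The overhead claim above can be sketched with the same numbers (600 px fields at 5 um, 50 um stage precision); overlapping each field by the stage uncertainty shrinks the useful field pitch, so more fields are needed for the same area:

```python
# Write-time overhead from overlapping exposure fields by the stage
# uncertainty Rs. Numbers taken from the example in the text.
field_px = 600
pixel_um = 5.0
stage_precision_um = 50.0  # Rs

field_um = field_px * pixel_um              # 3000 um = 3 mm per side
overlap_px = stage_precision_um / pixel_um  # 10 px of overlap
pitch_um = field_um - stage_precision_um    # useful advance per field

# Fields needed per unit area scales with 1/pitch^2 instead of 1/field^2:
overhead = (field_um / pitch_um) ** 2 - 1
print(f"overlap: {overlap_px:.0f} px, write-time overhead: {overhead:.1%}")
```

With these numbers the overhead works out to roughly 3-4 percent, consistent with the 'few percent' estimate above.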

Still, half a pixel of alignment error when we get unlucky is a bit jarring and can probably ruin a design's functionality. So our strategy above is nice, but for applications where good stitching is imperative it probably won't cut it. For some applications - like cosmetic parts or areas with bulky features - it might be acceptable.

How much offset between stitched images is tolerable will depend on your application, but I would say that 1/4 pixel is acceptable for most. We can apply one more trick to cover this 1/4-pixel worst case. Let's say we want to target a 5 um projected pixel size. If we set our optics such that each DMD pixel is projected at half that, so 2.5 um, and divide the array up into groups of 2x2 pixels, our effective maximum pixel error becomes 1/4 of the projected (5 um) pixel. Of course this means we only get half the DMD array size per axis (e.g. instead of 600x600 pixels we now only get 300x300).
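The superpixel trick boils down to one line of arithmetic: the pattern can be re-sampled in whole DMD-mirror steps, so the residual error is half a mirror, divided by however many mirrors make up one design pixel. A minimal sketch:

```python
# The 'superpixel' trick: project each DMD mirror at (1/binning) of the
# design pixel and shift the pattern in whole-mirror steps. Worst-case
# residual error is then half a mirror, i.e. 0.5/binning design pixels.
def worst_case_error_fraction(binning):
    """Residual stitching error as a fraction of the design pixel."""
    return 0.5 / binning

print(worst_case_error_fraction(1))  # no binning: 0.5 px worst case
print(worst_case_error_fraction(2))  # 2x2 binning: 0.25 px worst case
```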

[Figure: superpixel diagram]

Deliberate exposure field overlap

In the diagrams above I assumed we do not design in any overlap between different field exposures. Adding a few overlapping pixels between exposure fields (theoretically just one should be enough, assuming the strategies above are employed to ensure a max offset of 0.25/0.5 px) and splitting the exposure time between the fields can further reduce exposure differences at the edge.

[Figure: edge pixel overlap diagram]
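One simple way to split the exposure, sketched below under the assumption of a linear dose ramp (the ramp shape and the 3-pixel overlap width are illustrative, not prescribed by the text): each field tapers its dose across the overlap so that the two fields together still deliver the full dose everywhere.

```python
# Split exposure dose across an overlap region with a linear ramp.
# One field ramps up while its neighbour ramps down; per-pixel doses
# from the two fields sum to 1 (full dose) across the whole overlap.
def edge_ramp(width_px):
    """Per-pixel dose fraction for one field's overlap edge."""
    return [(i + 1) / (width_px + 1) for i in range(width_px)]

overlap = 3  # illustrative overlap width in pixels
left_field = edge_ramp(overlap)           # rising ramp at field edge
right_field = list(reversed(left_field))  # matching falling ramp
total = [a + b for a, b in zip(left_field, right_field)]
print(total)  # every overlapped pixel still receives the full dose
```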
