Generating a 3D model from 2D slices of a nucleus
6 months ago

Hello,

I've been thinking about generating a pipeline to piece together a 3D model from a large group of 2D images.

These images are 1-pixel-thick slices of a nucleus and the structures within it. The nucleus itself can be rotated in any direction when an image is taken, so it's unclear where a given slice sits in 3D space. The structures within the nucleus have been mapped in 2D before, but the z-position of that particular slice is unknown.

In a nutshell, I have a group of images, each of which could lie along the x, y, or z axis in 3D space, and I only have one slice as a general reference for where the other slices may be placed.

I've been considering a scoring method to generate the best possible prediction of a 3D model. This would involve placing the reference slice somewhere along the z axis as an anchor, then trying an unmapped slice in every possible position. For example, with slices that are 80 pixels in width and length, there would be 240 possible placements for that slice (80 positions along each of the three axes). I would calculate a score for each placement based on the degree of structure overlap between my reference slice and the unmapped slice. The program would then lock in the best placement and repeat the workflow, this time with the newly placed slice as an added reference.
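To make the idea concrete, here is a minimal sketch of the placement loop in Python/numpy. It assumes the structures are binary masks (1 = structure, 0 = background) held in a cubic volume; the function names and the simple overlap count are my own illustrative choices, not a fixed design:

```python
import numpy as np

def score_placement(volume, slice_2d, axis, index):
    """Score one candidate placement: the number of voxels where the
    already-placed reference structures overlap the candidate slice's
    structure pixels. Both are assumed to be binary masks."""
    plane = np.take(volume, index, axis=axis)      # reference plane at this position
    return int(np.sum((plane > 0) & (slice_2d > 0)))

def best_placement(volume, slice_2d):
    """Try the slice at every position along each of the three axes and
    return (axis, index, score) for the highest-scoring placement."""
    best = (None, None, -1)
    n = volume.shape[0]                            # assume an n x n x n volume
    for axis in range(3):                          # slice oriented along x, y, or z
        for index in range(n):                     # n positions per axis -> 3n placements
            s = score_placement(volume, slice_2d, axis, index)
            if s > best[2]:
                best = (axis, index, s)
    return best
```

With an 80-pixel volume this evaluates exactly 3 × 80 = 240 placements per candidate slice, matching the count above.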

My main concerns are computation time and accuracy. Returning to the 80-pixel example: each unmapped slice has 4 rotated variants, and if I also consider versions shifted by up to 20 pixels in the x and y axes, I end up with roughly 32,000 variations, each scored at 240 placements. Over 7 million evaluations would be needed to place a single slice.
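For concreteness, here is one way the candidate variants could be enumerated (a sketch only; `candidate_variations` and the use of `np.rot90`/`np.roll` are my own illustrative choices, and the exact total depends on how the shift range is discretized):

```python
import numpy as np
from itertools import product

def candidate_variations(slice_2d, max_shift=20):
    """Yield every rotated/shifted variant of a slice:
    4 right-angle rotations x all (dx, dy) shifts within +/- max_shift."""
    for k in range(4):                             # 0, 90, 180, 270 degrees
        rotated = np.rot90(slice_2d, k)
        for dx, dy in product(range(-max_shift, max_shift + 1), repeat=2):
            # np.roll wraps pixels around; zero-padding may be preferable
            # for real data, but keeps this sketch short.
            yield np.roll(np.roll(rotated, dx, axis=0), dy, axis=1)
```

With `max_shift=20` this scheme gives 4 × 41 × 41 = 6,724 variants per slice; other ways of counting the shifts (e.g. including z shifts) give larger totals like the 32,000 above, so the per-slice cost is sensitive to how the search space is defined.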

As for accuracy, I'm worried that a single reference slice is too little information to place each slice reliably. I would need some metric for my confidence in the alignment (e.g., how many slices have fits scoring above some threshold). I'm also thinking that after placing every slice, I could rerun the code using the filled-in 3D model as the reference and replace poorly placed slices.
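The confidence metric mentioned above could be as simple as the fraction of slices whose best score clears a threshold. A minimal sketch (the function name and threshold semantics are hypothetical):

```python
def placement_confidence(best_scores, threshold):
    """Fraction of slices whose best placement score meets the threshold;
    a crude proxy for overall confidence in the reconstruction."""
    confident = sum(1 for s in best_scores if s >= threshold)
    return confident / len(best_scores)
```

Slices falling below the threshold would then be the natural candidates for re-placement in the second pass against the filled-in model.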

If anyone has some improvements to what I'm planning, or other considerations for solving this problem, it would be greatly appreciated!

Matt

Tags: imaging • modeling
6 months ago
LChart 3.9k

This question doesn't fit in neatly with the site's other questions (which tend to focus more on omics). However, this sounds an awful lot like the "thin-slice reconstruction" used in CT scans, and Google Scholar turns up some applications to cell microscopy: https://scholar.google.com/scholar?q=thin+slice+reconstruction+cell+imaging&hl=en&as_sdt=0&as_vis=1&oi=scholart That might be a good starting point.


Hi LChart,

This problem isn't quite the same as the CT-scan case, because there the z-position of each slice is known (the tissue is cut into ordered sections). In my situation I have no idea how the cell is oriented in each slice, and due to costs and compatibility with my visualization method I can't image the way they did. Thanks for looking into this.


