MICROSCOPY & IMAGING
The Allen Institute for Cell Science team
A demo cell imaged at the Allen Institute for Cell Science
“Without our technology, lab technicians performing ova and parasite detection tests can easily take up to 10 minutes to perform an evaluation. By taking care of searching for and identifying objects, and presenting technicians with just the objects of interest, we can reduce that time dramatically, to around 10 seconds,” he says.
CELL ANALYSIS
Meanwhile, a team of scientists at the Allen Institute for Cell Science has developed a technique that uses a combination of microscopy and machine learning to train computers to see parts of the cell the human eye cannot easily distinguish. As Greg Johnson, a scientist on the Allen Institute for Cell Science team, explains, cell biologists want to know how all parts of the cell are organised and work together. “To be able to see the precise location of cellular machinery in live cells – the location of a specific type of protein, for example – we need to use a fluorescence microscope and we need to attach a fluorescent molecule to the particular type of cellular machinery, so we can see where that part is, and only that part,” he says.
However, although confocal fluorescence microscopes such as this can take three-dimensional images, they need to hit the cell with powerful lasers to get the fluorescent molecules to light up, in the process essentially ‘cooking’ the cell like ants under a magnifying glass. It is partly because of this, Johnson says, that scientists can see only two or three different types of fluorescent molecule at a time.
In an effort to remedy this limitation, Johnson and his team took brightfield and fluorescence images of the same cell, and built a machine-learning model capable of predicting what the fluorescence image looks like based on the brightfield image alone. Johnson believes this is ‘super useful’ because it means he and his team can ‘take brightfield-fluorescence image pairs from a bunch of different experiments and train a model for each different type of fluorescence probe’. “So now I have a bunch of models, each one pretty good at seeing where a different part of the cell is. We can take one brightfield image from a new sample, give it to each model, and get a representation of where many of the cell’s parts are. This allows us to stitch together many different experiments to see tons of stuff that we weren’t able to before,” he says.
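In code, the training step Johnson describes might look something like the minimal PyTorch sketch below. This is an illustrative assumption rather than the institute's released software: the network architecture, probe names and random placeholder tensors are all hypothetical stand-ins for real registered brightfield-fluorescence image pairs.

```python
# Minimal sketch (not the Allen Institute's actual code) of training one
# model per fluorescence probe to predict a fluorescence image from a
# brightfield image. The tensors below are random placeholders standing in
# for real registered brightfield/fluorescence pairs.
import torch
import torch.nn as nn

class BrightfieldToFluorescence(nn.Module):
    """A small convolutional regressor: brightfield in, fluorescence out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predicted fluorescence
        )

    def forward(self, brightfield):
        return self.net(brightfield)

def train_probe_model(pairs, epochs=10):
    """Fit one model on (brightfield, fluorescence) pairs for a single probe."""
    model = BrightfieldToFluorescence()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # pixel-wise regression against the real probe image
    for _ in range(epochs):
        for brightfield, fluorescence in pairs:
            optimizer.zero_grad()
            loss = loss_fn(model(brightfield), fluorescence)
            loss.backward()
            optimizer.step()
    return model

# Placeholder data: 8 image pairs per probe, 1-channel 64x64 crops.
fake_pairs = [(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
              for _ in range(8)]
# One model per fluorescent probe, each trained on its own experiment's pairs.
models = {probe: train_probe_model(fake_pairs)
          for probe in ["dna", "mitochondria", "membrane"]}
```

The key design point is that the fluorescence image itself serves as the pixel-wise regression target, so no manual annotation is needed beyond the paired acquisition.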
To implement the model, Johnson reveals that the Allen Institute team uses a basic fluorescence confocal microscope and a desktop computer with a graphics card; some cells are also labelled with a fluorescent probe. “Brightfield images were previously seen as having little utility, but now we can leverage this cheap data to do powerful things. This is something that a normal lab can do. We are also giving away the software and data for free. Anyone can use and change it for whatever project they want,” he says. “For the next steps, we want to be able to see what other types of things we can predict from these cheap images – gene expression, for example. We want smarter, more creative people to take these ideas and apply them to their data,” he adds.
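The ‘stitching together’ Johnson describes can then be sketched as running one new brightfield image through every per-probe model. Again, this is a hypothetical illustration: the untrained stand-in models below take the place of the trained networks from the previous sketch.

```python
# Sketch of the "stitching" step: one new brightfield image is passed through
# every per-probe model, yielding a predicted location map for each cell part.
import torch
import torch.nn as nn

# Stand-ins for the per-probe models trained in the previous sketch; in
# practice these would be the trained networks, one per fluorescence probe.
models = {probe: nn.Conv2d(1, 1, kernel_size=3, padding=1)
          for probe in ["dna", "mitochondria", "membrane"]}

new_brightfield = torch.randn(1, 1, 64, 64)  # placeholder for a real image
with torch.no_grad():
    predicted = {probe: model(new_brightfield)
                 for probe, model in models.items()}
# `predicted` now holds one predicted fluorescence channel per probe, all
# derived from a single unlabelled brightfield acquisition.
```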