NetNotes


rows, i.e. 25×2048 pixels, for sCMOS (sure, some CCDs and EMCCDs have the acquisition area in the corner near the readout, so?).


b. point scanning confocal: just scan the area of interest (and maybe a few more pixels to give the GPU deconvolver a little more work). For example, 25×25 pixels. Tweak the zoom as desired.


4. if we “change the game” a little … reflectance (i.e. nanogold, nanodiamond in reflectance), point scanning confocal makes it both trivial to get just the in-focus light and provides an effectively infinite number of photons, so shrink the pinhole, and use a shorter wavelength, as much as desired; for widefield, good luck finding anyone’s research epi-illumination microscope that is clean enough and glare-free enough for this to work well (maybe some absolutely pristine light path darkfield condenser and back of specimen might work … good luck with that). George McNamara geomcnamara@earthlink.net


True, if it’s not acquired it’s not data, but that’s semantics, because those same photons would be acquired as data in widefield. The point is that a small fraction of the light that is acquired with widefield illumination is acquired in a typical confocal configuration using the same objective lens, so I guess I should have said that more data of the same sample is acquired with widefield … which is simply stating the obvious when you look at a convolved, widefield, blurry image. To clarify the “rejection of 90–98%” estimate, this means relative to widefield collection. Even widefield fails to collect the large majority, because it is only collecting a cone out of a 3D sphere of emitted fluorescence, influenced somewhat by the polarity of the fluorophores, which generally are randomly distributed. Partly because of the geometry, most of the out-of-focus light from any particular object acquired in widefield is within a couple microns of the focal plane, not at a large distance. Much of the light from a large distance from the focal plane is dispersed outside the collection angles. Good deconvolution algorithms take this into account.

Regarding the bead example [*one or two 40 nm beads with some gap (or DNA origami), at the coverglass*], this doesn’t strike me as very representative of most real-world biological specimens, which tend towards many structures in or on a cell (and not in an ideally perpendicular plane), surrounded by many cells, with lots of fluorescence from different focal planes. However, even in this case, deconvolution could well provide the same resolution. Even with no additional out-of-focus fluorescent objects to muddle the situation, the widefield collection will collect far more fluorescence. You simply image a cube (image stack) even though the beads are on one plane, acquiring z planes above and below, just as you do when acquiring a PSF with a bead. Now, voilà: lots of out-of-focus fluorescence (real data) … all of which can be used to fuel the deconvolution to describe the size, shape, and separation of the objects with ever-increasing accuracy.

Regarding your earlier email, George, I am always impressed by your encyclopedic knowledge and deep understanding of imaging, and I can’t compete, nor do I wish to :) Jeff Carmichael jcarmichael@chroma.com


Deconvolution needs Nyquist sampling, and this often means lots of z-slices, causing bleaching and potential phototoxicity in live samples. The latest implementation of lattice Structured Illumination Microscopy from Zeiss in the Elyra7 (no commercial interest) has a “leap mode” which basically skips some z-slices. If I remember correctly, they claim that the missing information is recovered from the out-of-focus part of the signal. I think this would work only for samples where the signal is sufficiently sparse so that the in-focus signal is not swamped by the out-of-focus signal. When it works, it speeds up image acquisition and reduces bleaching. Otherwise the ApoTomes come to mind, using grid pattern illumination without super-resolution. The Elyra7 also has an ApoTome mode. In my experience, samples which are very inhomogeneous, such as cells in hydrogels, cells on silicon or other unusual substrates, plants, and dense tissue slices, are better imaged with a confocal. Adaptive optics with a guide-star approach to set the parameters, as in Eric Betzig’s lattice light sheet, might help to image some of these samples. Andreas Bruckbauer bruckbaua@aol.com
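To get a rough sense of why Nyquist sampling drives up the number of z-slices, the axial Nyquist spacing can be estimated directly. This is a sketch using one common band-limit approximation; the exact formula varies between sources, and the function name and example values here are illustrative:

```python
import math

def nyquist_z_step(wavelength_nm: float, na: float, n_medium: float) -> float:
    """Approximate Nyquist z-spacing (nm) for a fluorescence z-stack.

    Uses the common approximation dz = lambda / (4 * n * (1 - cos(alpha))),
    where alpha = asin(NA / n) is the half-aperture angle.
    """
    alpha = math.asin(na / n_medium)
    return wavelength_nm / (4.0 * n_medium * (1.0 - math.cos(alpha)))

# Example: ~520 nm emission, 1.4 NA oil objective (n ~ 1.515).
step = nyquist_z_step(520, 1.4, 1.515)          # on the order of 100-150 nm
slices = math.ceil(10_000 / step)               # z-slices for a 10 um stack
```

With a z-step on the order of 150 nm, even a 10 µm-thick sample needs dozens of slices, which is exactly the bleaching burden "leap mode" tries to reduce.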


Microscopy Listserver
8 bit vs 16 bit Images in All Microscopy (Thread Started May 1, 2019)

I am curious as to how people package digital data for themselves and other users. A few months ago on the confocal listserver we had a discussion regarding the sanctity of light microscopy data. One of the issues discussed was how to represent 12 to 16 bit data in an eight bit space.

Question: how do people routinely compress data into RGB for display and archiving, and when is it permissible to not preserve the raw 12 to 16 bit data?

I have a similar question for TEM data. The new cameras on TEMs result in 16 bit images. However, for stained material, there cannot be real intensity information needing more than a few bits (or am I wrong about this?). When reducing bit depth, what is the best algorithm? Most reductions to 8 bits that I’ve seen involve putting the bottom x% at 0; for instance, the bottom 0.3% of pixel values are assigned to black. Does this risk losing the ability to see fine structure, or is this ok? Would it be preferable to not clip in the darks? Is it ok to save only the 8 bit data and not bother with the 16 bit data? For instance, most people may not be prepared to deal with 16 bit data or may consider it inconvenient.

I would very much like to know what is common practice and considered acceptable for both optical and electron microscopy images, in both the biological and materials sciences. Michael Cammer Michael.Cammer@med.nyu.edu
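The “bottom x% to black” reduction described in the question can be sketched in a few lines of NumPy. This is an illustrative sketch, not any vendor’s implementation; the function name and percentile defaults are assumptions:

```python
import numpy as np

def to_8bit(img16: np.ndarray, low_pct: float = 0.3, high_pct: float = 99.7) -> np.ndarray:
    """Rescale a 16-bit image to 8 bits, clipping the extreme percentiles.

    Pixels at or below the low percentile map to 0 (black), those at or
    above the high percentile map to 255; everything between scales linearly.
    """
    lo, hi = np.percentile(img16, [low_pct, high_pct])
    if hi <= lo:                       # flat image: avoid divide-by-zero
        return np.zeros(img16.shape, dtype=np.uint8)
    scaled = (img16.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).round().astype(np.uint8)

# Usage: make an 8-bit copy for display/archiving; keep the raw 16-bit file.
rng = np.random.default_rng(0)
frame = rng.integers(0, 65536, size=(64, 64), dtype=np.uint16)
display = to_8bit(frame)
```

The clipping is exactly where fine structure can be lost: any pixel in the bottom 0.3% becomes indistinguishable from true black, which is why the clipped 8-bit copy should be treated as a derived product rather than the record of the measurement.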


This is an excellent discussion topic, and I can offer a practical perspective based on some time spent working with a variety of users in a shared instrument facility.

1. The ease of dealing with 16 bit images depends strongly on what camera and software packages you are using. Gatan hardware and Digital Micrograph work fairly seamlessly with 16 bit images (Ultrascan and above lines), and it’s easy to down-convert to 8 bit TIFFs. So if your users spend most of their time within the Digital Micrograph environment, then there’s no real reason to use anything but the highest bit depth offered, which is the default saving option within DM.


2. For other camera manufacturers, I’ve noticed that saving data in 16 bit TIFF format causes some issues with Windows (and Mac?) not being able to interpret the bit depth correctly, resulting in images being displayed as nothing but a flat gray frame in the file explorer view. This can obviously cause some frustration and confusion for users, especially corporate customers. For these cameras/software packages, I usually direct users to save their data in 8 bit format, which Windows (and Mac?) can display correctly.


3. It also depends on what the users are planning to do with the data. If a user is doing quantitative analysis of their image, for example correlating intensity at an atom site with the number of atoms in that column, or measuring sample roughness by comparing raw intensity values at neighboring atomic columns, then they obviously need the maximum bit depth offered by the camera. Likewise, diffraction experiments benefit from as much dynamic range, and bit depth, as possible, so 16 bit is the way to go. On the other hand, if a user is simply using the microscope to image morphology or measure particle sizes, for example, then the bit depth of the images doesn’t matter much. Yes, information is forever lost when saving in 8 bit, as it must be when going from 65 k gray scale values to 256, but for most applications


www.microscopy-today.com • 2019 July

