NetNotes
with one software program over another. For example, Amira now has advanced quantification capabilities, but I do not know them well because if I am doing that kind of work I simply open Imaris instead. Therefore, my opinion is biased. I completely agree with Mike Nelson, who stressed the importance of the hardware, and, as he suggests, I purchased the video card that Imaris/Bitplane recommended for using their software. These companies have tech specialists who can help guide you in making these decisions. Brian Armstrong
BArmstrong@coh.org
One other software package to consider is Avizo. When we bought it, it was one of the cheapest options, as well as one of the most powerful, especially for large data. Bonus points for the fact that it lets you load and display multiple datasets simultaneously, and even tune the shading to make 3D images especially clear for publications. We also had Imaris, but we found ourselves turning more often to Avizo, except for object tracking, which Imaris is very good at. Benjamin Smith
benjamin.smith@berkeley.edu
Commercial response: Many users, many analysis questions, and many preferences. With this in mind, we would like to add another option to this discussion. Many of you know and may use Huygens for its deconvolution, but it is also possible to add 3D-4D image analysis functions to the same user interface, avoiding file transfer and re-scaling of data. Analysis options are available for a very affordable price and for many situations (beginner/expert users, and floating or node-locked licenses). The best advice that we can give is to ask for a free test version of each package and see whether it fulfils your needs and budget. Vincent Schoonderwoert
vincent@svi.nl
Commercial response: Dear Mike, I'm glad to hear that you like the 360 video export option. On behalf of the team at Arivis, I'd like to correct an apparent misunderstanding. The visualization in Vision4d does not have an intrinsic limitation of 512^3. By default, Arivis Vision4d will render the largest down-sampled version of the image which fits in the graphics card memory. The down-sampling is dynamic, to fit as much of the data set into the current graphics card memory as possible. This is intentionally done to maximize rendering speed on any size graphics card. In principle, you will experience interactive rendering of large images even on a laptop with limited graphics memory. Furthermore, Arivis Vision4d offers a dynamic level-of-detail rendering mode which will display the raw data in the current viewing area, but due to its lower performance it is not enabled by default. But there is no need to take my word for it; please take a look at some recent papers which cite Arivis Vision4d as being used to render and interactively work with significantly larger images. In particular, please see T Chakraborty et al., Nat Methods 16, 1109–1113 (2019) and R Cai et al., Nat Neurosci 22, 317–327 (2019). Finally, I would like to echo what Gary L said: demo, demo, demo. It is very important to do a thorough test of analysis software, to ensure that it meets the specific research requirements. With that, please don't hesitate to email me with any further Arivis Vision4d rendering questions. Arvonn Tully
arvonn.tully@arivis.com
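[Ed. note: the dynamic down-sampling described above — picking the largest reduced version of a volume that still fits in graphics memory — can be sketched in a few lines. This is purely an illustration of the idea; the function name and logic are hypothetical and do not reflect the actual Arivis implementation.]

```python
def downsample_factor(shape, bytes_per_voxel, gpu_budget_bytes):
    """Smallest power-of-two reduction per axis so the volume fits in the GPU budget."""
    f = 1
    while True:
        voxels = 1
        for dim in shape:
            voxels *= max(1, dim // f)
        if voxels * bytes_per_voxel <= gpu_budget_bytes:
            return f
        f *= 2

# Example: a 4096^3 16-bit volume is ~137 GB raw, so on a card with
# 8 GB free a reduction of 4 per axis is needed to fit.
print(downsample_factor((4096, 4096, 4096), 2, 8 * 1024**3))  # -> 4
```

A small volume returns a factor of 1, i.e. the raw data is rendered directly, which matches the behavior described for images that already fit in graphics memory.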
I just wanted to comment that it is possible to open more than
one image in a single Imaris instance. This was introduced in Imaris 9.2 ("Add image" in the File menu; see the video called "Loading Multiple Images and Alignment" at https://imaris.oxinst.com/versions/9-2). The applications mentioned in the video are to embed higher-resolution or higher-dimensional data in a spatially larger dataset taken at lower resolution or with fewer dimensions, or to put things side by side. We have used it a bit for the latter: qualitative comparisons of a few different light-sheet datasets set side by side. I don't know how its performance compares with this functionality in other software, but it can be done. Pablo Ariel
pablo_ariel@med.unc.edu

2020 March • www.microscopy-today.com • 61
IT Infrastructure
Confocal Microscopy Listserver

We are producing massive amounts of data that legally need to be stored for up to ten years. This data also needs to be analyzed fairly quickly, hence storage on local servers is required. Furthermore, some software companies (Huygens, Imaris…) are now offering server-based applications for image analysis where all the hard work is done on the server and your computer is just a terminal. Currently, we tell users to transfer their data to NAS immediately after imaging, so the microscope computers are not clogged. Image files found on these computers are fair game after a month. We expect users to then back up their files from the NAS to the cloud. All our image analysis is done on fairly beefy workstations and is not server-based. As I'd like to future-proof our IT, it would be great to know what servers/storage space others have as part of their facilities/institutes. What solutions are in place for long-term storage vs short- to medium-term storage? Are server-based applications in place? Matthieu Vermeren
matthieu.vermeren@ed.ac.uk
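[Ed. note: the retention policy described above — files on acquisition computers are fair game after a month — is easy to automate. A minimal sketch, assuming a 30-day grace period and standard library tools only; adapt the root path and age to your own facility's policy before deleting anything.]

```python
import time
from pathlib import Path

def stale_files(root, max_age_days=30):
    """List files under root that have not been modified within the grace period."""
    cutoff = time.time() - max_age_days * 86400
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]

# Review the candidates before purging, e.g.:
# for f in stale_files("D:/AcquisitionData"):
#     print(f)
```

Running this as a scheduled report (rather than an automatic delete) gives users a warning list before the microscope computers are cleaned.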
In a previous thread I posted about what we have: some 10 GbE computers, a 10 GbE server (40 TB HDD RAID array), a 10 GbE Ethernet switch, and some PCs with an ASUS Hyper M.2 PCIe card holding four Silicon Power 2 TB NVMe SSDs, at ∼$1,050 for 8 TB of fast local storage (PCIe x16 slot, with the BIOS configuring the slot as x4/x4/x4/x4). The goal is fast local acquisition saving. Johns Hopkins University now provides 5 TB of Microsoft OneDrive
cloud storage per employee/student, and in principle a principal investigator could call IT and get more. So far, this is not easily "aggregated" by lab. I hope this gets changed so that a PI of a 20-person lab gets 100 TB to start with, organized by user, with 1 TB of private space for each user (visible to the PI), and mostly organized by projects. Cost-wise, 20 TB WD My Book Duo USB 3.1 drives are $700 and a WD 20 TB My Cloud EX2 Ultra Network Attached Storage is $999. You may want to have one or more options in your facility for backup (move data older than a month) AND encourage PIs to have storage in their labs (most PIs think about their research, not about IT and the need for data backup). George McNamara
geomcnamara@earthlink.net
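[Ed. note: the price points above work out to quite different costs per terabyte, which is worth making explicit when comparing plain USB drives against NAS boxes. A trivial sketch:]

```python
def cost_per_tb(price_usd, capacity_tb):
    """Unit cost of a storage device in USD per TB."""
    return price_usd / capacity_tb

print(cost_per_tb(700, 20))   # 35.0 USD/TB  (WD My Book Duo, USB 3.1)
print(cost_per_tb(999, 20))   # 49.95 USD/TB (WD My Cloud EX2 Ultra NAS)
```

The NAS premium of roughly $15/TB buys network access and RAID options; the NVMe setup mentioned earlier (~$1,050 for 8 TB) is about $131/TB, the price of acquisition speed.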
We just purchased 5 PB and migrated to an Isilon. We do run some applications on the server, but time there is at a premium. We also have a few fairly beefy PCs to crunch data locally. Right now, this is our short-, medium-, and long-term solution. Not sure if we will eventually migrate to something like Glacier for long-term storage. We are collecting cryo and light-sheet data, so the numbers are getting up there. We are also updating applicable systems to 10 Gb fiber for transfer. I am interested to hear other solutions as well. Big Data is always a favorite topic at our NAMS meetings, and it's also the name of our light-sheet workshop next summer (plug intended!). Gary Laevsky
glaevsky.lists@gmail.com
We are lucky to benefit from a new infrastructure that the university recently set up. This is a central server in the basement of our building (mirrored on the second campus) which, I was told, is expandable to zettabytes and can be grown in increments of 10 TB plug-and-play modules added when needed. Our microscopes are directly connected to the server via a 1 Gb/sec fiber. That is Gbit/sec, not GB, which is GByte and is 8 times more; so 1 Gb/sec transfers around 120 MBytes/sec. This will likely be upgraded to a 10 Gb/sec switch (which will allow us to transfer over 1.2 GB/sec). The data are acquired locally