Virtualisation & cloud computing

Going the extra inch

Could the answer be found in virtualising the desktop? Is there more to this technology than simply being an enabler of cloud computing? With upfront hardware costs reduced by 75 per cent, maintenance by 75 per cent and power consumption by 90 per cent, plus zero client-side data-integrity risk, it is starting to look like it could be worth going the extra inch.

We've all become accustomed to the PC model, which gives every user their own CPU, hard disk, and memory to run their applications. But personal computers have become so powerful that most users simply don't need, and in fact can't possibly use, all of that processing power. Desktop virtualisation is a modern take on the time-honoured concept of multiple users sharing the processing power of a single computer. This approach has several advantages over the traditional PC model, including lower overall costs, better energy efficiency, and simplified administration.

Over the past 30 years, PCs have changed the way we work, play, learn, and think about computer technology. From the first single-chip microprocessor in 1971 to the multi-core CPUs powering today's PCs, users have come to rely on owning and controlling their own processing power. In large part, the PC succeeded because it took processing power out of the data centre and placed it directly on our desktops. But with that desktop power and control came responsibility: the responsibility to maintain, troubleshoot, and upgrade the PC when needed. After all, the PC is a machine, and all machines need regular care. As PC buyers and users, we may have welcomed the capabilities and productivity gains the PC brought, but no one warned us that, even with help from a dedicated IT department, we would spend over 17 hours a year maintaining each PC.
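The headline figures lend themselves to a quick back-of-the-envelope check. The short Python sketch below applies the reductions quoted above (75 per cent on hardware, 75 per cent on maintenance, 90 per cent on power) and the 17 hours of annual maintenance to a hypothetical 100-seat fleet. Every baseline number in it – the fleet size, hardware price, support rate and energy cost – is an illustrative assumption, not a figure from this article.

# Back-of-the-envelope comparison of a traditional PC fleet with a
# virtualised-desktop fleet, using the percentage reductions quoted above.
# All baseline figures (fleet size, hardware cost, support rate, energy
# cost) are illustrative assumptions, not data from the article.

FLEET_SIZE = 100                 # assumed number of desktop seats
HARDWARE_PER_SEAT = 500.0        # assumed up-front hardware cost per seat, in pounds
MAINTENANCE_HOURS_PER_SEAT = 17  # annual maintenance hours per PC (from the article)
SUPPORT_RATE = 40.0              # assumed hourly cost of IT support, in pounds
ENERGY_PER_SEAT = 60.0           # assumed annual energy cost per PC, in pounds

# Reductions claimed for desktop virtualisation in the article.
HARDWARE_REDUCTION = 0.75
MAINTENANCE_REDUCTION = 0.75
ENERGY_REDUCTION = 0.90

def annual_cost(hardware, maintenance_hours, energy):
    # Total first-year cost for the whole fleet.
    per_seat = hardware + maintenance_hours * SUPPORT_RATE + energy
    return per_seat * FLEET_SIZE

traditional = annual_cost(HARDWARE_PER_SEAT, MAINTENANCE_HOURS_PER_SEAT, ENERGY_PER_SEAT)
virtualised = annual_cost(
    HARDWARE_PER_SEAT * (1 - HARDWARE_REDUCTION),
    MAINTENANCE_HOURS_PER_SEAT * (1 - MAINTENANCE_REDUCTION),
    ENERGY_PER_SEAT * (1 - ENERGY_REDUCTION),
)

print(f"Traditional PC fleet:  £{traditional:,.0f}")
print(f"Virtualised desktops:  £{virtualised:,.0f}")
print(f"First-year saving:     £{traditional - virtualised:,.0f} "
      f"({(1 - virtualised / traditional):.0%})")

On these assumed inputs the virtualised fleet comes in at roughly a quarter of the first-year cost of the traditional one; the precise figures matter far less than how quickly the quoted reductions compound across a whole fleet.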
Alternatives
IT service costs are trending upward, driven by ever-increasing software and support expenses. Security, data privacy, manageability, uptime, space, power, and cooling challenges are pushing many organisations to look for alternatives to the traditional distributed PC model. Thin clients faltered because they are still 'too fat': they carry PC-like local operating systems (Windows XP Embedded, Linux, etc.), full-power processors, PC memory, local flash drives, virus vulnerabilities, and the management overhead that comes with all of these components.
While the traditional PC market is not growing very fast, its enormous size continues to drive significant innovations such as multi-core processors. The result is that today's PCs can outperform the high-end servers of just a few years ago. This opens the door to a new age of virtual computing, where the power of an everyday PC is used as efficiently as possible by multiple users at once.

Since we've been putting PCs on people's desks all these years, many have forgotten how computers worked before the PC came along. In pre-PC days, computing was done on mainframes – large boxes that sat in specially cooled rooms on raised floors – with connections to dumb terminals spread out over the premises. This single centralised computer performed the processing for all of the users. Users also didn't have to administer the box – that was the responsibility of the technicians of the day. If a user had a problem, all they had to do was call the computer room and ask for help, since support had to be centralised along with the computer.
Of course, the IBM System/360's big disadvantages were cost and environmental considerations (space, power and cooling). It also required dedicated staff to support and maintain the system: people spent years training to learn the tasks needed to keep these machines running, so the pool of people qualified to maintain a System/360 was very small. This relegated the System/360 to large corporations, governments and educational institutions. The next step was the minicomputer, which also used centralised resources, but at a much lower cost than a mainframe. With the arrival of the PC (and its close cousin, the PC-based server), mainframes fell out of fashion. Servers replaced mainframes in the data centre, and many were called upon to perform the same duty. This gave rise to the concept of server-based computing (SBC), which is like mainframe computing with a few minor differences: the dumb terminal is replaced by a PC that communicates with a server and receives a full-screen interface transferred across the network. The most popular application of server-based computing has been to host a small subset of applications on a server that are accessed by a PC client in this way. In this case the PC still runs local applications in addition to the server-based applications hosted with Citrix or Microsoft Terminal Services.
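The division of labour behind server-based computing is straightforward to sketch in code. The toy Python below is purely conceptual: a server process runs the 'application' and pushes rendered screen updates to a client that does nothing but display them. The address, port, and text-frame format are invented for the illustration, and it bears no relation to the actual Citrix or Terminal Services protocols.

# Toy illustration of server-based computing: the application runs and
# 'renders' on the server; the client only displays what it receives.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050           # assumed local address and port for the demo

def server():
    # The 'application' runs centrally and pushes screen updates to the client.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for i in range(3):
                frame = f"[screen update {i}] clock: {time.strftime('%H:%M:%S')}\n"
                conn.sendall(frame.encode())   # the 'full-screen interface' on the wire
                time.sleep(1)

def client():
    # A thin client: no local application logic, it only displays what arrives.
    with socket.create_connection((HOST, PORT)) as sock:
        while data := sock.recv(1024):
            print(data.decode(), end="")

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                          # give the server a moment to start listening
client()

Swap the text frames for compressed bitmap updates, and add keyboard and mouse events travelling the other way, and you have the essential shape of the remote-display arrangement this model relies on.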
Desktop virtualisation enables a single PC to simultaneously support two or more users.