Frequently asked questions
In most cases, the real question is not whether desktop virtualisation is worth doing, but how much to virtualise. Once a company reaches a certain number of employees, it is already operating its own small data centre: at least one server plus various network components such as switches and routers, all kept in a (hopefully) air-conditioned room to which every workstation PC is connected. Your first step should therefore be to find out what this equipment actually costs each year: power, cooling, maintenance, acquisition costs, service provider fees for fixing small malfunctions, and the potential cost of a total system failure, including the downtime until recovery. Experience shows that many of these companies suffer at least one major system failure in any given five-year period – and everything comes to a standstill while it lasts. Nothing works any more. That can easily take two to three days, if not longer: a minor disaster that could be heavily mitigated, or avoided altogether, with planned IT outsourcing and desktop virtualisation.
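As a rough aid, the cost items listed above can be tallied in a few lines. This is a minimal sketch; every euro figure below is a hypothetical placeholder, not a benchmark, and should be replaced with your own numbers:

```python
# Rough annual cost tally for an in-house server room.
# All euro figures are hypothetical placeholders; substitute your own.
annual_costs = {
    "power_and_cooling": 4_000,
    "maintenance_contracts": 3_500,
    "hardware_acquisition_per_year": 5_000,  # purchase price spread over useful life
    "service_provider_callouts": 2_000,      # fixing small malfunctions
}

# Expected cost of a major failure: experience suggests at least one
# per five-year period, so spread that risk evenly across the years.
major_failure_cost = 15_000  # hypothetical: recovery plus 2-3 days of standstill
expected_failure_cost_per_year = major_failure_cost / 5

total = sum(annual_costs.values()) + expected_failure_cost_per_year
print(f"Estimated annual cost of the in-house 'data centre': {total:,.0f} EUR")
```

The resulting figure is what a monthly desktop-virtualisation rental has to compete against.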
This can be an attractive proposition for budget managers, since it eliminates almost all capital expenditure on server and storage environments: the virtualised desktops are rented monthly and therefore move to the opex budget. This often produces additional positive effects.
Exactly. Companies that have conducted a risk analysis can go into a conversation with a potential service provider well prepared. The two parties can then clarify which requirements need to be met. For example, a company might want to keep a particular server in-house for specific reasons – that is by no means incompatible with desktop virtualisation. The service provider will advise on the available integration options and explain the costs to the client. It is generally cheaper for companies to combine desktop virtualisation with IT outsourcing. Once customer and service provider have agreed on a suitable plan, the next step is implementation.
That's something that reputable companies always do. The desktop PCs are virtualised on the data centre servers, including the operating system, the required applications and all licences. Users can then test the services to see whether they meet their needs and expectations, and the service provider makes any required improvements. Once everything has been squared away, it is time to roll the solution out across the enterprise. Users can then access their cloud workstations from anywhere.
It depends on each company's particular requirements. If a company opts for full desktop virtualisation, it could in theory use a zero client or thin client, because everything happens on the server. However, many companies still have desktop PCs that they would like to keep using to get the most out of their investment, and that is not a problem: put plainly, just about any PC that does not boot from a floppy disk will make a decent client. The choice of hardware depends on whether certain applications also need to work offline and must therefore be installed locally. The more of these applications you have, and the more computing power they require, the more capable the client has to be. Companies should generally seek personalised advice on whether and how special requirements can be accommodated in an ideal client infrastructure. Service providers will usually supply tried-and-tested client hardware if needed.
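The rule of thumb in this answer can be stated as a toy decision helper. The categories and thresholds here are purely illustrative assumptions, not vendor guidance:

```python
def recommend_client(offline_apps: int, apps_are_demanding: bool) -> str:
    """Toy heuristic: the more local (offline) applications a user needs,
    the more capable the client hardware has to be. Illustrative only."""
    if offline_apps == 0:
        return "zero or thin client - everything runs on the server"
    if apps_are_demanding:
        return "full desktop PC - local apps need real computing power"
    return "existing desktop PC - reuse it to protect the investment"

print(recommend_client(offline_apps=0, apps_are_demanding=False))
print(recommend_client(offline_apps=3, apps_are_demanding=True))
```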
On the client side, there is basically nothing that cannot be virtualised. In IT outsourcing, however, some old server systems may be impossible to virtualise; this is particularly true of the IBM AS/400 machines that some logistics providers still use. If customers wish, even these systems can be integrated into the virtual infrastructure. Reputable providers will always address such requirements and find workable compromises where necessary. Many older systems that their manufacturers no longer support can, by contrast, still be run in virtualised form with a reasonable level of security. This includes Windows Server 2008, which Microsoft no longer supports but which some companies have to keep using because it is the only platform that runs their key applications. That is perfectly feasible, albeit with limited liability on the part of the hosting provider.
Not usually. The data packets sent between the virtual desktop in the data centre and the workstation are in fact quite small, because only the screen content is transmitted – and only the parts that have changed. In our experience, ten employees need a bandwidth of 1–2 Mbps; we estimate that each employee continuously requires approx. 128 kbps. That is achievable in most regions, including those that still lack a fibre-optic network or other forms of fast internet. It can even work with less: one longtime customer of ours connects 60 employees over a 4 Mbps line and, after all the necessary fine-tuning on our part, works without any problems.
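The arithmetic behind these figures is simple. A minimal check, using the approx. 128 kbps-per-employee estimate from above:

```python
KBPS_PER_EMPLOYEE = 128  # continuous per-seat estimate quoted above

def required_mbps(employees: int) -> float:
    """Bandwidth needed if every employee draws the full estimate at once."""
    return employees * KBPS_PER_EMPLOYEE / 1000

print(required_mbps(10))  # 1.28 -> matches the 1-2 Mbps rule of thumb
print(required_mbps(60))  # 7.68 -> above a 4 Mbps line, which is why
                          # that customer setup needed fine-tuning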
Given the importance of hosting, customers should ask their service provider for details about its data centre in advance. Data centres are grouped into four tiers (with subtiers). They range from basic hosting without redundancy – i.e. without a safety net – and annual downtime of around 29 hours, to fully redundant structures with annual downtime of no more than 45 minutes. The more capable the data centre, the more expensive it is. Be forewarned, though: this is the wrong place to scrimp and save, as that will only hurt your security and availability. Solid performance requires, at the very least, redundant servers and the ability to perform maintenance while the systems keep running – features that only become available at Tier 3. In any case, the chosen data centre should be ISO 27001 certified.
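To relate those downtime figures to the availability percentages quoted in data centre brochures, a quick conversion using the numbers above:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def availability_percent(downtime_hours: float) -> float:
    """Availability implied by a given annual downtime."""
    return 100 * (1 - downtime_hours / HOURS_PER_YEAR)

# Figures quoted above for the lowest and highest tiers:
print(f"{availability_percent(29):.3f} %")       # ~29 h/year  -> ~99.669 %
print(f"{availability_percent(45 / 60):.3f} %")  # 45 min/year -> ~99.991 %
```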