Friday, May 13, 2011

Server Virtualization Project Risks


It is well known that no project is risk-free. Things can go wrong, and unfortunately they often do. Identification and analysis of project risks is a topic with an ample literature. There are risks that are common to all projects (generic risks), and others that are due to the specific features of the project (specific risks). For instance, since every project has an end date, every project carries the generic risk of not being completed in time. In this report we shall focus on the risks that are specific to server virtualization projects, and on the specific form that generic risks take in server virtualization projects.


Performance risks in server virtualization projects

In a new application implementation project it is very difficult to size the systems because no workload data is available. On the contrary, in a server virtualization project organizations have ample workload data. Unfortunately, there is not always the will to collect and analyze it.

There are basically three strategies to mitigate the risk of undersizing systems and therefore of having an excessive response latency:

Oversizing; Extensive experimentation; and Data collection and analysis.

Oversizing is a very common strategy. The basic rationale is that Hw is so cheap that it makes little sense to spend time identifying the exact requirements. However, it is important to remember that unless you run experiments or an in-depth assessment, you do not know whether you are actually oversizing or undersizing the systems. You do not even know whether you are virtualizing the right applications. You can adopt an aggressive approach, and as a consequence get complaints from users about system performance; or you can adopt a cautious approach, and end up with a virtual server farm much smaller in scope than it could have been.

Extensive experimentation is a good but expensive alternative. Typically systems are sized according to rules of thumb and generic policies (e.g., Dbmss should not be virtualized), and only those that are expected to have significant overheads are actually tested. Unfortunately rules of thumb are often unreliable, and generic policies gloss over the specific features of virtual servers.

Data collection and analysis is the ideal approach. There are however some important challenges:

Simultaneous data collection from tens or hundreds of servers. Cleaning and analysis of workload data containing tens of thousands of data points. Identification of the optimal virtual server farm out of the collected data. Estimation of the virtualization layer impact on the workloads.

Each of these challenges can be efficiently handled with appropriate data collection tools. The Wasfo Data Collector and Wasfo Analysis and Optimization tools (see references below) have been designed and developed exactly for this purpose.
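
To give a feel for the kind of analysis involved (this is not the Wasfo tools themselves), here is a minimal Python sketch that reads per-server Cpu utilization samples from Csv files, sizes each workload on its 95th percentile, and estimates how many hosts of a given capacity would be needed. The file layout, the percentile rule and the 20% headroom are assumptions made only for the example.

    # Toy sizing sketch: read per-server Cpu samples (Csv: timestamp,cpu_pct),
    # size each workload on its 95th percentile, then estimate how many hosts
    # of a given capacity are needed. File format and thresholds are assumptions.
    import csv
    import glob
    import math

    def percentile(values, pct):
        """Return the pct-th percentile of a list of numbers (nearest-rank)."""
        ordered = sorted(values)
        rank = max(0, math.ceil(pct / 100.0 * len(ordered)) - 1)
        return ordered[rank]

    def size_farm(sample_dir, host_capacity_pct=400, headroom=0.8):
        """Estimate hosts needed; host_capacity_pct = 100 per physical core."""
        demands = []
        for path in glob.glob(sample_dir + "/*.csv"):
            with open(path, newline="") as f:
                samples = [float(row["cpu_pct"]) for row in csv.DictReader(f)]
            if samples:
                demands.append(percentile(samples, 95))
        usable = host_capacity_pct * headroom   # keep 20% headroom per host
        hosts = math.ceil(sum(demands) / usable) if demands else 0
        return hosts, demands

    if __name__ == "__main__":
        hosts, demands = size_farm("./workload_samples")
        print("Servers analyzed:", len(demands), "-> estimated hosts:", hosts)

A real tool has to do much more (data cleaning, correlation of peaks, memory and I/O, virtualization overhead), but even this toy version shows why collecting the data beats guessing.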

High Availability risks in server virtualization projects

In a non-virtual server farm there are very few applications that are classified as mission critical and protected with High Availability (Ha) clusters. Ha clusters can significantly increase service availability insofar as Hw and application failures are concerned. Unfortunately they are costly and complicated to maintain. Ha clusters protect against server, Os and application failure, but they require:

Shared storage (not all Ha technologies require shared storage, but the most widely deployed ones do); Ha software; Scripts or Dlls that identify failed applications and shut them down in an orderly way; and Overall certification of the solution to get support from all the vendors involved.

Hypervisors (also known as Virtual Machine Monitors or the virtualization layer), thanks to the fact that Virtual Machine images are actually files hosted on shared storage, make it possible to create server farms in which all application instances are protected from server failure. If a server fails, a monitoring service will detect the failure and restart the Vms on another server. Unfortunately these technologies monitor and act at the hypervisor level, so they do not deliver any protection in case of an application failure or freeze. If such protection is required, Ha cluster Sw can be used on top of the virtualization layer.
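
To make the mechanism concrete, the Python sketch below shows a deliberately simplified version of the restart logic such a monitoring service applies. The inventory layout and the check_host and start_vm helpers are hypothetical placeholders, not any hypervisor vendor's actual Api.

    # Toy restart loop illustrating hypervisor-level Ha: if a host stops
    # responding, its Vms (files on shared storage) are powered on elsewhere.
    # The inventory dict, check_host and start_vm are hypothetical placeholders.
    import time

    inventory = {            # host -> list of Vm image files on shared storage
        "host-a": ["/shared/vm1.img", "/shared/vm2.img"],
        "host-b": ["/shared/vm3.img"],
    }

    def check_host(host):
        """Placeholder heartbeat check; a real monitor would ping or use an Api."""
        return True

    def start_vm(image, host):
        print("restarting", image, "on", host)

    def monitor(poll_seconds=10):
        while True:
            for host, vms in list(inventory.items()):
                if not check_host(host):
                    # pick any surviving host; the Vm images are reachable
                    # from everywhere because they live on shared storage
                    spare = next(h for h in inventory if h != host and check_host(h))
                    for image in vms:
                        start_vm(image, spare)
                    inventory[spare].extend(vms)
                    inventory[host] = []
            time.sleep(poll_seconds)

Note that the sketch only reacts to host failures: an application hang inside a healthy Vm would go unnoticed, which is exactly the gap that Ha cluster Sw on top of the hypervisor is meant to fill.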

Another important point is that hypervisors, thanks to the fact that Virtual Machines can be moved at runtime with no service interruption (live migration), minimize the impact of planned server outages. If, for instance, a server needs to be rebooted to replace a failed component, its Virtual Machines can first be moved to another server so that user activity is not interrupted.

Security risks in server virtualization projects

If you search Google for "virtualization risk" you will find tens of articles on security risks. That shows that security is the most important concern people have as far as virtualization project risks are concerned. People are usually concerned about what they do not know well, because one of the basic determinants of human behaviour is the need to have some form of control over the surrounding environment. Virtualization is no exception. In these projects a whole new set of products is introduced, and those that are already up and running need to be configured in new ways. So a cautious approach is not only recommended but mandatory.

Since there are so many articles on virtualization and security around the web, we shall not spend time here going through all the security concerns. We shall limit ourselves to pointing out that, unless strong control processes are in place, in a virtual server farm it is far easier to create new Os instances. So it is not surprising that after a while people discover plenty of Virtual Machines that were created for development or testing purposes and that are not actually managed in a professional way. They could, for instance, lack the latest security patches or not be configured according to the company security standards. Strong control processes may look like the correct solution, but strong control significantly diminishes the advantage of increased flexibility we get through virtualization. A better alternative is likely to use a lighter control process and then periodically run a server farm inventory to spot possible security holes.
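
Much of such a periodic inventory can be automated. The sketch below, which assumes a simple list of Vm records with an owner and a last-patch date, flags the machines most likely to be unmanaged; the record layout and the 60-day threshold are illustrative assumptions, not a prescribed standard.

    # Toy farm inventory: flag Vms whose last patch date is older than a
    # threshold or that have no registered owner. Record layout is assumed.
    from datetime import date, timedelta

    vms = [
        {"name": "build-test-01", "owner": None, "last_patched": date(2011, 1, 10)},
        {"name": "crm-prod", "owner": "it-ops", "last_patched": date(2011, 5, 2)},
    ]

    def audit(vm_records, max_age_days=60, today=None):
        today = today or date.today()
        findings = []
        for vm in vm_records:
            if vm["owner"] is None:
                findings.append((vm["name"], "no registered owner"))
            if today - vm["last_patched"] > timedelta(days=max_age_days):
                findings.append((vm["name"], "patches older than %d days" % max_age_days))
        return findings

    for name, issue in audit(vms):
        print(name, "->", issue)

In practice the Vm list would come from the hypervisor management Api and the patch data from the configuration management system, but the principle is the same: a cheap, repeatable check instead of a heavyweight approval process.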

Costs

Project costs all too often exceed expectations, and there are thousands of pages written on how to control a project so that costs do not exceed the budget. Virtualization projects have specific issues related to Sw licensing. Depending on the Sw licensing rules, the virtualization project can yield significant savings or cost increases. If the application is licensed according to the number of physical cores even when it runs on top of a Virtual Machine Monitor, the cost will likely increase, since virtual servers typically have many more processor cores than those required by any of the hosted Sw applications. If, on the contrary, the application license takes into account the number of logical cores or the system utilization, you may realize significant savings.
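
A back-of-the-envelope comparison makes the effect clear. The per-core price and core counts below are made-up figures chosen only to show the structure of the calculation.

    # Toy license cost comparison: physical-core licensing vs. logical-core
    # (virtual Cpu) licensing for one application on a large virtualization host.
    # All prices and core counts are made-up illustrative figures.

    price_per_core = 1000          # hypothetical license price per core
    host_physical_cores = 32       # large virtualization host
    vm_logical_cores = 4           # what the application actually needs

    physical_model = host_physical_cores * price_per_core   # licensed on the whole host
    logical_model = vm_logical_cores * price_per_core       # licensed on the Vm's vCpus

    print("Physical-core licensing:", physical_model)   # 32000
    print("Logical-core licensing: ", logical_model)    #  4000

With physical-core rules the same application costs eight times more on the big virtualization host than it needs to, which is why licensing terms should be reviewed before deciding which applications to virtualize.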

Conclusions

There are many risks in server virtualization projects that could offset or even exceed the project benefits. Careful planning and analysis are required to mitigate the performance, availability and security risks, as well as to ensure that the expected financial benefits are accrued.
