Server power usage is one of the major data center expenses, and it has pushed many organizations to look for practical ways to mitigate energy costs. While improving server hardware was the preferred approach for many years, virtualization is now making the biggest impact in this regard by allowing one physical server to perform the work of many. A business can therefore run only a few power-hungry physical systems, cutting operating expenses for both servers and cooling.
How does virtualization help in energy saving?
Virtualization alone has no direct effect on server power utilization; the energy savings are a controllable, indirect by-product. The real benefit is improved utilization of computing resources. A conventional server typically uses only a small fraction of its underlying hardware, which means the remaining CPU cycles, memory, I/O, and other resources sit essentially idle. Virtualization decouples workloads from the physical hardware, minimizing this waste by allowing numerous workloads to run on the same server and share its available resources.
Server consolidation and virtualization
Virtualization allows an organization to run many computing workloads on fewer physical servers, a practice referred to as server consolidation. Using fewer physical servers means a considerable reduction in energy demand, which translates into substantial overall energy savings in the data center.
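The arithmetic behind consolidation can be sketched with a simple bin-packing exercise. The sketch below is purely illustrative, not a real capacity planner: the workload demands, the 80% fill target, and the first-fit-decreasing strategy are all assumptions made for the example.

```python
# Hypothetical illustration of server consolidation: pack workload CPU
# demands (in percent of one host) onto as few hosts as possible using
# first-fit decreasing. All figures here are made-up example numbers.

def consolidate(workloads, host_capacity):
    """Return a list of hosts, each a list of workload demands."""
    hosts = []
    for demand in sorted(workloads, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)  # fits on an already-running host
                break
        else:
            hosts.append([demand])   # otherwise power up a new host
    return hosts

# Ten workloads, each using 10-30% of a host's CPU.
workloads = [10, 25, 15, 30, 20, 10, 25, 15, 20, 10]

# One workload per physical server (the unvirtualized baseline)...
baseline_hosts = len(workloads)
# ...versus consolidation onto hosts filled to at most 80% capacity.
consolidated_hosts = len(consolidate(workloads, host_capacity=80))

print(baseline_hosts, consolidated_hosts)  # 10 servers shrink to 3
```

Under these assumed numbers, ten dedicated servers collapse to three virtualization hosts, and it is the seven machines that no longer need to run that deliver the power and cooling savings.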
Is predictable energy saving possible with virtualization?
In practice, virtualization does not deliver a predictable energy saving for data centers or IT departments, because administrators exercise a great deal of control over how far workloads are consolidated. One server in an organization might run only a few workloads while another supports 15 or more. Simply put, the exact consolidation level varies from one organization to another based on the server's total computing capacity, the desired consolidation level, and workload demands.
While older servers with limited computing resources can support only a few workloads, newer servers can host tens of them. Furthermore, some administrators deliberately leave a portion of each server's resources unused so that workloads can be migrated to other servers if need be.
Why are data center administrators constantly redistributing workloads to different servers?
While virtualization tends to push energy costs downward, the amount saved varies from one organization to another depending on factors such as business motivations, IT staffing, and the data center's equipment. Virtualization does, however, let administrators make changes that dramatically affect consolidation levels. It is therefore common to see administrators rebalancing workloads and resource allocations across servers to optimize performance, for example by consolidating lightly used or noncritical workloads onto fewer systems and thus freeing server resources for other uses.
In an ideal virtualized environment, lightly used or idle workloads are migrated onto well-consolidated servers. This frees up some servers entirely, which administrators can then power down to save energy. As computing demand increases, those servers are powered back up.
Only a few tools can carry out this form of dynamic consolidation. A good example is the Distributed Power Management (DPM) feature of VMware's Distributed Resource Scheduler (DRS). While DRS allocates resources and migrates workloads within a DRS cluster, DPM optimizes power consumption by consolidating workloads onto fewer servers in the cluster and powering down the unused hosts until rising demand requires them again.
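The idea behind this kind of dynamic consolidation can be sketched in a few lines. To be clear, this is not VMware's actual DPM algorithm: the low-load watermark, host capacity, and host names below are invented for illustration, and real tools also weigh memory, migration cost, and headroom policies.

```python
# Simplified sketch of DPM-style dynamic consolidation (not VMware's
# actual algorithm): drain lightly loaded hosts by migrating their
# workloads elsewhere, then report which hosts can be powered down.
# Capacities, thresholds, and host names are illustrative assumptions.

CAPACITY = 100       # CPU units per host (assumed)
LOW_WATERMARK = 20   # hosts at or below this load are drain candidates

def rebalance(hosts):
    """hosts: dict of host name -> list of workload demands.
    Mutates the dict in place; returns hosts safe to power down."""
    powered_down = []
    for name in sorted(hosts, key=lambda n: sum(hosts[n])):
        if 0 < sum(hosts[name]) <= LOW_WATERMARK:
            # Try to migrate each workload to another host with room.
            for vm in list(hosts[name]):
                for other in hosts:
                    if other != name and sum(hosts[other]) + vm <= CAPACITY:
                        hosts[other].append(vm)
                        hosts[name].remove(vm)
                        break
        if not hosts[name]:
            powered_down.append(name)  # empty host: candidate for standby
    return powered_down

cluster = {"esx1": [10, 5], "esx2": [60, 20], "esx3": [40, 30]}
idle = rebalance(cluster)
print(idle)  # ['esx1'] — its two small workloads moved onto esx2
```

In this toy cluster, the lightly loaded host is emptied and can be put into standby; the reverse step, powering hosts back on when load approaches capacity, would apply the same logic with a high watermark.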
Potential problems in a virtualized environment
A few problems might be encountered when using power management tools and automated workload migration in a virtualized environment, and most of them can be resolved by administrators as they arise. Remember that there is no substitute for real-world testing when it comes to virtualization and server consolidation, and that the underlying hypervisor platform has no direct control over server power utilization.
When all is said and done, the server consolidation made possible by virtualization is changing server energy usage as we know it and is being adopted by organizations of almost every type and size. The intelligence of today's virtualization infrastructure keeps growing, letting organizations adjust consolidation levels to suit their computing demands and maximize energy savings through the available automated tools.
We are a reputable Bay Area IT support company that can help you understand more about saving energy through virtualization. If you require other types of IT services, we can assist with those too, so call us today and find out more!