Virtualization in Manufacturing
Article by Jayabalan Subramanian
While the benefits of server Virtualization at the corporate datacenter are receiving a great deal of attention, its ability to address lifecycle management issues may make Virtualization even more compelling at the manufacturing plant. Along with the advantages, however, come additional challenges and risks. In this post we discuss best practices for benefiting from server Virtualization today, while avoiding mistakes that could affect the availability and performance of mission-critical manufacturing IT.
Virtualization – a panacea for manufacturing woes?
Manufacturers are adopting new IT solutions on an unprecedented scale to meet efficiency, quality, and regulatory compliance goals. Most manufacturing companies are eager to embrace server Virtualization to gain much-needed agility in business operations. Among the many other advantages Virtualization offers are the cost and operational efficiencies of running several applications on the same physical server.
For a manufacturing organization, unplanned downtime is unacceptable and results in lost production time. In regulated industries like pharmaceuticals, loss of data and/or control can compromise the integrity of a batch record and require in-process product to be destroyed. Even minor system interruptions can call into question the value of the IT solution.
Implementing Virtualization in manufacturing environments moves the technology into the mission-critical realm. Manufacturing Execution Systems (MES) and related automation applications can gain significant advantages from Virtualization that go beyond even those seen in typical enterprise software applications.
So What is Virtualization?
Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources. It can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.
A Virtual Machine (VM) behaves exactly like a physical computer – it contains its own “virtual” CPU, RAM, hard disk and NIC (network interface card), and runs as an isolated guest OS installation within a host OS. The terms “host” and “guest” are used to help distinguish the software that runs on the actual machine (host) from the software that runs on the VM (guest).
Virtualization works by inserting a layer of software called a “Hypervisor” directly on the computer hardware or on a host OS. A Hypervisor allows multiple OSs, or “guests,” to run concurrently on a host computer (the actual machine on which the Virtualization takes place).
Conceptually, a Hypervisor is one level higher than a supervisory program. It presents to the guest OS a virtual operating platform and manages the execution of the guest OSs.
Virtualization software allows VMs to access the physical hardware resources of the computer on which they reside. Benefits include reduced power consumption, saved rack space and cost efficiencies. Running multiple VMs on one physical computer makes optimum use of assets and of processing power that would otherwise sit under-utilized.
Organizations typically run one application per server to avoid the risk of one application’s vulnerabilities affecting the availability of another application on the same server. As a result, typical x86 server deployments achieve an average utilization of only 10-15% of total CPU capacity. Virtualization allows applications to share a server’s computing resources, which enables manufacturers to reassess how many servers are needed.
Virtualization in Action
The benefits of Virtualization, however, go far beyond server consolidation. Many manufacturers use Virtualization to extend their software’s longevity, and virtualized assets also help increase productivity. Benefits of Virtualization for the manufacturing sector include:
Many applications in the enterprise require as little as 20-30% of a server’s CPU capacity. Companies can consolidate a number of such applications on a single physical server, while planning capacity for growth in the total workload.
Initial savings from consolidation result primarily from reducing the number of servers that must be acquired, deployed, and managed, which in turn reduces the cost of hardware, software, and personnel. In the longer term, savings can be achieved through more efficient resource utilization, improved availability, and reduced operating costs.
MES and associated applications are distinguished by the need for long lifecycles; lifecycles of seven years and up are common. Once the IT solution is in production, companies want to ensure stability and reduce risk by avoiding changes to the software application, the operating system, and the server hardware.
Achieving this objective becomes challenging because vendors often do not support the original operating system version throughout your desired lifecycle. This means you have to seek out extended support and pay a premium. Moreover, most server hardware is obsolete after three or four years.
Virtualization allows you to abstract the application and OS away from the server hardware. You can effectively extend the lifecycle of your application as a result. The ability of a Hypervisor to support older guest operating systems allows you to upgrade the hardware platform without affecting applications or their operating systems. A related benefit is that the ability to upgrade server hardware eliminates the need to stock hard-to-obtain components required to maintain older computer servers.
Capabilities that Virtualization can enable over the extended lifecycles of MES and related applications include:
- Provisioning VMs on-demand. Server Virtualization allows you to create a standard Virtual Machine – consisting of software files that include the application and an operating system – that can be copied onto a server in a matter of minutes when additional capacity is required, or when you need to distribute an identical application configuration to different plants. The Virtual Machine can be qualified and tested in advance to ensure it will work as expected. Besides the obvious implications for system stability in regulated industries such as life sciences, using a pre-validated Virtual Machine may eliminate platform qualification testing that would be required to install and validate new server hardware.
- Hardware and capacity upgrades. When more processing power or storage capacity is needed, Virtualization can help you move the Virtual Machine to newer hardware with no change to the application or operating system.
- Failover and rollbacks. A Virtual Machine’s image – including configuration state, disk state, and so on – residing on one physical server can also be periodically replicated to another physical server for backup or fast restart. Some Virtualization software also allows for point-in-time rollbacks. Useful when data corruption has occurred, rollback lets an administrator revert the Virtual Machine to an earlier known good state.
- Application development. Taking as little as a few minutes to deploy, Virtual Machines effectively isolate each application developer in his or her own logical partition. Other developers are unaffected if one developer crashes the Virtual Machine on which an application was hosted.
- Upgrades without downtime. A capability known as live migration allows for planned hardware and operating system upgrades (in cases where the operating system is not visible to the application) with virtually no interruption to the application and little perceived impact by users. Note that the operating system that can be upgraded is at the host OS/Hypervisor layer; guest operating systems cannot be upgraded online. Live migration works by replicating the system state iteratively while the application continues to run. Shortly before a final copy of the Virtual Machine is ready for migration, only a brief application blackout (perhaps milliseconds) is necessary to synchronize the second Virtual Machine with the original.
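The iterative pre-copy behind live migration can be sketched as a small deterministic model. The page count, the 10% dirty ratio, and the `live_migrate` helper are illustrative assumptions, not any particular Hypervisor's algorithm:

```python
def live_migrate(total_pages=100_000, dirty_ratio=0.10, stop_threshold=50):
    """Model iterative pre-copy: send all memory pages once, then keep
    re-sending only the pages the running guest dirtied while the
    previous round was in flight, until the remaining set is small
    enough to send during a brief blackout."""
    pages_sent = 0
    to_send = total_pages   # round 1 copies everything
    rounds = 0
    while to_send > stop_threshold:
        pages_sent += to_send
        rounds += 1
        # Assume the guest dirties pages in proportion to how long the
        # round took, i.e. to how many pages were just sent.
        to_send = int(to_send * dirty_ratio)
    # Blackout: pause the guest, send the final small dirty set, switch over.
    pages_sent += to_send
    return rounds, pages_sent

print(live_migrate())  # (4, 111110): four pre-copy rounds, ~11% extra traffic
```

The model also shows the caveat hinted at in the text: if the dirty ratio approached 1 (an application rewriting memory faster than it can be copied), the rounds would never shrink and the migration would not converge without a longer blackout.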
Best Practices for Mission-Critical Manufacturing Applications
The risk and cost of service interruptions become higher as manufacturing applications become more powerful and integrated. Employing best practices can help you achieve the advantages you seek from Virtualization today, without compromising the availability and performance of your mission-critical manufacturing application.
Know Your Application
Begin by characterizing your software application and its workload correctly. How much capacity does it consume, and when? How much headroom do you need for peak times and temporary surges in demand? If performance degrades, the application may respond slowly or become unavailable to users and processes.
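A workload characterization pass can start as simply as sizing to observed peaks rather than averages. The hourly CPU samples and the 25% surge margin below are hypothetical, as is the `required_capacity` helper:

```python
# Hypothetical hourly CPU utilization samples (%) for one application
# over a day; a real characterization would use much finer-grained data.
samples = [18, 22, 20, 17, 25, 31, 45, 62, 70, 68, 55, 40,
           38, 42, 66, 74, 71, 58, 44, 35, 28, 24, 21, 19]

def required_capacity(samples, surge_margin=0.25):
    """Size to the observed peak plus a surge margin, not the average:
    averages hide exactly the demand spikes that cause outages."""
    peak = max(samples)
    avg = sum(samples) / len(samples)
    return avg, peak, peak * (1 + surge_margin)

avg, peak, needed = required_capacity(samples)
print(round(avg, 1), peak, needed)  # 41.4 74 92.5
```

Here the average (about 41%) would suggest the application is an easy consolidation candidate, while the peak plus surge margin (about 93%) shows it actually needs nearly a whole server to itself at busy hours.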
Also make sure to conduct an appropriate risk assessment. Even if you are starting with non-critical applications, the server on which you are consolidating them often becomes essential when it drives numerous applications. In addition, not every application is a good candidate for Virtualization. Typical examples are I/O-heavy applications and performance-sensitive environments that are not easily characterized.
Seek Enterprise-Strength Technology
Remember that the Virtualization layer has the potential to be a single point of failure for all of the Virtual Machines it supports. Follow the simple rule: Software reliability increases as the amount of code and its complexity decrease.
Look for Virtualization software that is small, compact, and controlled – as appliance-like in nature as possible. Virtualization and availability solutions that are simple to configure and maintain provide crucial advantages not only by reducing operating cost, but also by significantly reducing your exposure to outages.
Plan for Business Continuity
High Availability and end-user performance are important criteria when considering Virtualization. To mitigate the risk of interruptions to plant operations, institute backup and Disaster Recovery measures for the physical servers that run your Virtual Machines.
Simplify with Robust Hardware
High Availability can be achieved by clustering multiple servers in a virtual environment, but implementing Virtualization on a server cluster adds another layer of deployment and administration complexity. Consider a fault-tolerant server that automatically preserves reliability and availability without requiring changes to your mission-critical application. This approach uses redundant components while appearing as a single server to Virtualization and application software. Ideally, the emphasis should be on preventing downtime and data loss instead of simply on quick recovery.
Don’t Let I/O Sink the Ship
Incompatibilities related to I/O interfaces are a known cause of system instability and performance problems. Establish that I/O devices and drivers are compatible with the Virtualization technology you plan to use. Be ready, willing, and able to resolve incompatibilities up front if you need to use legacy or proprietary I/O cards to access specialized plant equipment networks, as is common with supervisory control and data acquisition (SCADA).
As server Virtualization technology matures, it will become more suitable for the exacting demands of mission-critical manufacturing applications. Server Virtualization can be a boon for managing the lifecycles of the many applications that make up an integrated MES environment and other mission-critical manufacturing applications. Manufacturing organizations can gain new capabilities and reduce costs through server Virtualization, as long as they choose appropriate technology and plan properly.