In Part One of Virtualization Requirements for DevOps, we covered the basics of how the software development cycle works in practice, the types of virtualization, and the reasons to consider using virtualization. Here is the second part of Virtualization Requirements for DevOps. Continuing from the first article, this one gives a brief overview of the commonly agreed benefits of virtualization, such as efficiency and cost reduction, better utilization of distributed computer systems, faster provisioning, high availability, and live migration.
Virtualization Requirements for DevOps: Benefits of Virtualization
Increasing efficiency and reducing costs
Power consumption is an important reason for virtualizing servers. To keep it to a minimum, multiple virtual machines can efficiently share one host's computing power, as described in the following section. Despite the shared processing power, each virtual server has enough resources and is unaffected by the other systems on the physical hardware, the host. More efficient utilization of the hosts saves not only physical servers but also electricity, acquisition, and maintenance costs. The initial investment therefore often pays off, particularly for large servers that are then virtually divided into many small ones; covering new demand with just cheap, small physical servers is much harder.
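As a rough back-of-the-envelope illustration, the savings from consolidation can be quantified like this; all figures below are assumed for the sake of the example:

```python
# Back-of-the-envelope consolidation savings; all figures are assumed.
physical_servers = 20
watts_per_server = 300           # assumed average power draw per machine
vms_per_host = 10                # assumed consolidation ratio
price_per_kwh = 0.25             # assumed electricity price in USD

hosts_needed = -(-physical_servers // vms_per_host)  # ceiling division -> 2
watts_saved = (physical_servers - hosts_needed) * watts_per_server
yearly_savings = watts_saved / 1000 * 24 * 365 * price_per_kwh

print(f"{hosts_needed} hosts instead of {physical_servers} servers")
print(f"~${yearly_savings:,.0f} saved per year on electricity alone")
```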
Better utilization of distributed computer systems
Often several physical systems are used together to achieve good utilization, and virtual machines can be distributed across them quickly and easily. If a virtual machine experiences performance problems because its host can no longer provide the required resources, it can quickly be moved to another, more powerful host. This process can also take place fully automatically and without interrupting the service. With a good infrastructure, transferring a machine takes only a few seconds.
In this way, we can also reduce the number of physical hosts. In most cases, server CPUs are only weakly utilized: requests for web pages, for example, arrive in bursts and are processed in under a second. We can therefore run many web servers on one host without the risk that the CPU stays busy for long and the service degrades. Likewise, a web server can comfortably share a host whose CPU is often busy but which still has enough free RAM.
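As a simple illustration of such load-based placement, here is a minimal Python sketch that picks the least-loaded host for a new virtual machine; the host list and load figures are hypothetical:

```python
# Minimal sketch of load-based VM placement (hypothetical data).
# Each host reports its current CPU utilization (0.0 - 1.0) and free RAM in GiB.

hosts = [
    {"name": "host-a", "cpu_load": 0.85, "free_ram_gib": 48},
    {"name": "host-b", "cpu_load": 0.30, "free_ram_gib": 16},
    {"name": "host-c", "cpu_load": 0.55, "free_ram_gib": 64},
]

def pick_host(hosts, ram_needed_gib):
    """Return the least CPU-loaded host that still has enough free RAM."""
    candidates = [h for h in hosts if h["free_ram_gib"] >= ram_needed_gib]
    if not candidates:
        raise RuntimeError("no host can accommodate the VM")
    return min(candidates, key=lambda h: h["cpu_load"])

print(pick_host(hosts, ram_needed_gib=8)["name"])  # -> host-b
```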
Fast Server Provisioning
It takes a relatively long time before a physical server is finally ready for use: if necessary, it must first be assembled before it goes into the rack, is brought online, and is configured. The best a traditional setup can do is to keep a handful of servers with estimated performance parameters (CPU, RAM, hard disk, etc.) in stock as a precaution and deploy them as needed.
This is easy in a virtual environment, because you only need to assemble the virtual machine and, if necessary, transfer it to the desired target system. The process can also be fully automated, for example at the end of the deployment chain, so that the QA department can subsequently approve the changes for going live.
Container virtualization, especially Docker, is designed precisely for this form of provisioning. Docker expects a target system on which to launch one or more containers from the created image. However, Docker stays out of the choice of the destination host; that decision must be made by the user or by an orchestration program.
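To make this concrete, here is a minimal provisioning sketch using the Docker SDK for Python; it assumes a reachable Docker daemon, and the public nginx image and the container name stand in for your own build artifact:

```python
# Minimal provisioning sketch with the Docker SDK for Python (pip install docker).
# Assumes a Docker daemon is reachable via the usual environment settings.
import docker

client = docker.from_env()

# Launch a container from an existing image; 'nginx' stands in for your own image.
container = client.containers.run(
    "nginx:latest",
    name="web-1",            # hypothetical container name
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    detach=True,
)
print(container.status, container.short_id)
```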
High availability of particularly important systems
Across virtualized IT, server virtualization makes it possible to deliver unified high availability (HA) solutions, and the setup can be scaled according to the level of reliability required. The risk of a company-wide outage can be largely ruled out by creating redundant servers. To accomplish this, the corresponding virtual machine is cloned and kept on standby on another host, possibly even in a different location. The standby system activates automatically in the event of a physical hardware failure or a malfunctioning virtual server, so users do not notice that a server failure has taken place.
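The activation logic can be as simple as a health-check loop. The following Python sketch is purely illustrative; check_health() and activate_standby() are hypothetical hooks into your monitoring and virtualization stack:

```python
# Illustrative failover loop for a standby VM; check_health() and
# activate_standby() are hypothetical hooks into real infrastructure.
import time

def check_health(host: str) -> bool:
    # Hypothetical: e.g. an HTTP or ICMP probe against the service.
    return True  # placeholder so the sketch runs without real infrastructure

def activate_standby(standby_host: str) -> None:
    # Hypothetical: start the cloned VM and take over the service IP.
    print(f"activating standby on {standby_host}")

def watch(primary: str, standby: str, interval_s: int = 5) -> None:
    """Poll the primary; switch to the standby as soon as it stops responding."""
    while True:
        if not check_health(primary):
            activate_standby(standby)
            break
        time.sleep(interval_s)

# watch("vm-primary.example", "vm-standby.example")
```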
Live migration of systems
To become even more flexible, for example when the performance of a host is no longer sufficient but you cannot or do not want to shut down the VM, many virtualization products offer live migration.
The VM is frozen, transferred to another host, and resumed there. The virtualization software also takes care of redirecting the IP, so that existing connections are not reset or otherwise interrupted.
However, this depends on the choice of platform. For example, Hyper-V, VMware, and VirtualBox support live migration; Docker does not, and it is unlikely to gain this feature any time soon. One must distinguish what has to be transferred and at which level it lives. For a VM, this is relatively simple: the entire virtual hardware, i.e. the configuration of the VM, the hard disks, and the current RAM contents, must be transferred. A VM is a self-contained ecosystem that is completely managed by the virtualization software; the host system has nothing to do with the VM except to run that software. If the hard disks already reside on external storage, this step can even be skipped, as the migrated VM can simply continue to access them.
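As an illustration on the VM side, here is a minimal sketch using the libvirt Python bindings; it assumes two KVM hosts reachable over SSH and a running domain named web-vm (all names are hypothetical):

```python
# Minimal live-migration sketch with the libvirt Python bindings
# (pip install libvirt-python). Host names and the domain name are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://source.example/system")
dst = libvirt.open("qemu+ssh://target.example/system")

dom = src.lookupByName("web-vm")

# VIR_MIGRATE_LIVE keeps the guest running while memory pages are copied over.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```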
Container virtualization looks different. Here the kernel of the host is actively involved, and the virtualization software only sends a few instructions to the kernel to make things work as intended. This means that all objects created by the kernel, such as the namespaces for the process IDs, would have to be transferred as well. Because the technology is still young, there is no reliable way to move such kernel state between systems. In addition, the source and target systems must be practically identical, since no hardware emulation takes place.
With container virtualization it is therefore advisable, instead of live migration, to run the software on multiple instances so that high availability can be achieved by means of load balancing.
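A minimal sketch of this pattern, with hypothetical instance addresses: several identical containers sit behind a trivial round-robin dispatcher, so losing one instance degrades capacity rather than availability.

```python
# Trivial round-robin dispatch across identical container instances
# (addresses are hypothetical; real setups use a proxy such as HAProxy or nginx).
import itertools

instances = [
    "10.0.0.11:8080",
    "10.0.0.12:8080",
    "10.0.0.13:8080",
]

next_instance = itertools.cycle(instances).__next__

for _ in range(5):
    print("forwarding request to", next_instance())
```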
Faster and easier backup and restore
Continuous protection of corporate data can be offered at low cost through dedicated virtualization backup methods. These ensure a continuous backup of data to an external medium such as another server or network storage, while the secured systems remain available around the clock without restrictions. For most organizations, a loss of server data integrity means real damage, so a sound backup strategy is essential. With a continuous backup strategy, all data can be backed up every minute, so that even in the event of a failure it can be restored to a near-perfect state with minimal or no loss. Using the so-called snapshot method, a complete image of a virtualized computer system can be created and reliably restored (on almost any hardware). When restoring a defective physical server, by contrast, hardware that is as identical as possible has to be provided, and conventional backup procedures such as file dumping require manual rebuilding of the operating system and configuration.
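For illustration, here is a minimal snapshot sketch using the libvirt Python bindings; the domain name web-vm and the snapshot name are hypothetical:

```python
# Minimal VM snapshot sketch with the libvirt Python bindings
# (pip install libvirt-python); the domain and snapshot names are hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web-vm")

snapshot_xml = """
<domainsnapshot>
  <name>pre-upgrade</name>
  <description>Snapshot before applying updates</description>
</domainsnapshot>
"""

# Create the snapshot; restoring later is done with dom.revertToSnapshot().
snap = dom.snapshotCreateXML(snapshot_xml, 0)
print("created snapshot:", snap.getName())

conn.close()
```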
Facilitate hardware procurement and renewal
The complete separation between virtual and physical hardware makes it easier to discard and replace old hosts. The only requirement for a virtual machine to run on a host is the virtualization software and/or the host operating system, depending on the type of virtualization. The one real restriction is the CPU, which must have the same architecture: you cannot start a VM built for an x86 CPU on a host with an ARM CPU, or a 64-bit system on a 32-bit host CPU. Binaries on Linux are likewise architecture-dependent, so you cannot simply compile a program on a 64-bit system and then run it on a 32-bit system.
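A quick way to verify this constraint before moving a VM is to compare the architectures of both hosts, for example with Python's standard library:

```python
# Report the host's CPU architecture and word size using only the
# standard library; compare the output of both hosts before migrating a VM.
import platform
import struct

print("architecture:", platform.machine())            # e.g. x86_64 or aarch64
print("word size:", struct.calcsize("P") * 8, "bit")  # 64 or 32
```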
More flexible adaptations of the landscape to new requirements
For most companies, virtual servers are independent of the hardware used. Response time and flexibility are two key benefits of virtual machines in the enterprise: new test systems and rollouts of new software can be made available within a few minutes, and simplified software tests can be set up quickly with virtual machines. For some programs, ready-made virtual machines are even offered for download, out of the box.
For example, Microsoft offers downloadable virtual machines with Internet Explorer for testing web applications, so you can test your site even if you develop locally on an OS X or Linux system.
Virtual networks for segmenting computers
In corporate IT, the network plays an important role. VLANs can be used to create virtual local networks, for example to virtually separate departments from each other; the VLAN segments improve both control and clarity for the individual departments. Network virtualization also increases security: in the worst case, an attacker can manipulate only the compromised segment, not the other areas. In addition, the existing network infrastructure and its address ranges can be utilized better. It is often also a good idea to create virtual networks for specific servers so that they work in a protected area; each server then receives internal networks matching its tasks, which developers can reach only via a VPN or the like.
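The same segmentation idea applies to container setups. As a small sketch with the Docker SDK for Python, the following creates an internal bridge network that has no route to the outside world; the network, container, and password values are hypothetical placeholders:

```python
# Sketch: create an isolated internal network with the Docker SDK for Python
# (pip install docker); network and container names are hypothetical.
import docker

client = docker.from_env()

# internal=True means containers on this network cannot reach external hosts.
backend = client.networks.create("backend-net", driver="bridge", internal=True)

db = client.containers.run(
    "postgres:16",
    name="db-1",
    network="backend-net",
    environment={"POSTGRES_PASSWORD": "example"},  # placeholder secret
    detach=True,
)
print("attached", db.short_id, "to", backend.name)
```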
Cost of Virtualization
Every company and every project that aims at cost reduction needs investments in new technologies, and such an investment usually pays off only after a few years; running costs cannot be reduced overnight. If you want to virtualize your company, you need an exact plan of what you want to virtualize, how you want to virtualize it, and what virtualization entails over time. An example is the virtualization of a data center, which requires several components; two of them are the hardware and the software, and both are very expensive in the first place. The hardware comprises clients, servers, storage, and printers, and it is used by virtually every employee in the organization.
Conclusion of Part 2 of Virtualization Requirements for DevOps
In this article, we learned about lesser-known practical matters behind real-life software development inside a company: power consumption, hardware costs, and the difficulty of procuring hardware parts are not commonly discussed. In the next and final part of this series, we will finish with the requirements and a brief discussion of the whole topic – click here to continue to part 3 of Virtualization Requirements for DevOps.