This article mainly covers the technical side of the DevOps idea. However, anyone who wants to implement it in their own company must also train the employees accordingly. DevOps primarily targets larger projects where a lot of coordination is needed. In order to react quickly and flexibly to new business requirements, development and operations must work together better and coordinate their processes more closely. In principle, DevOps itself is a strategic approach to accelerating the processes around software development, and there is broad agreement that it enables more efficient and smoother collaboration between development and operations. Comprehensive automation in the design, configuration and operation of systems is almost imperative for the success of DevOps. Virtualization also plays a crucial role here. Many software vendors, including classic infrastructure virtualization vendors such as VMware, continue to expand their capabilities, so automation and orchestration options are now present in almost all tools. The overlap between formerly dedicated tools keeps growing, which causes uncertainty for many companies. In addition, there is a relatively wide range of open source tools that have matured and are already being used on a large scale; companies like Facebook, Google and eBay demonstrate this.
Virtualization Requirements for DevOps : Objective and Approach
The goal of this series of articles is to provide an overview of the virtualization requirements for DevOps, an idea of virtualization, project management and administration, as well as a rough understanding of the business processes involved. First, we will analyze the virtualization of services, software and hardware virtualization, orchestration tools, and the deployment chain. Later parts of the series will explain the requirements for virtualization in more detail. Finally, two exemplary providers will be analyzed.
Fundamentals on Virtualization Requirements for DevOps
Classical role allocation
Classically, developers have their local machines and work with them. All required software is installed there. Ideally, there is a ticket system or task management tool and code management (a repository), so that all developers, project managers and the customer are on the same page. The company's administrators have set up the server(s) and install the software that is currently in use. All servers may receive updates at regular intervals.
Either the developers send their code to the admins, or the admins give the developers (S)FTP access so that the developers can upload their changes themselves. Ideally, there are also several servers: at least one for live operation and a test server for QA and/or the customer. The servers are not dynamically virtualized, so they always stay in the state they are in. However, this does not rule out virtualization, for example to prevent different customers from getting in each other's way.
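As a minimal sketch of this classic upload step, the following Python snippet uses the paramiko library to push a changed file to a test server via SFTP. The host name, account, password and paths are hypothetical placeholders, not values from any real setup.

```python
# Minimal sketch of the classic deployment step: a developer uploads a
# changed file to the test server via SFTP. Host, account, password and
# paths are hypothetical placeholders.
import paramiko

HOST = "test.example.com"   # hypothetical test server
USER = "developer"          # hypothetical SFTP account

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username=USER, password="secret")

sftp = ssh.open_sftp()
# Push the locally changed file into the web root of the test server.
sftp.put("src/index.php", "/var/www/html/index.php")
sftp.close()
ssh.close()
```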
Classic Development Cycle
Developers write their code on a system different from the one it will ultimately run on. This is particularly dangerous if other software or other software versions are used. For example, a PHP web application might be developed on a Windows machine with the XAMPP stack, while the live system runs on Linux and uses Nginx with a different database. Especially with scripting languages, which do not throw errors at compile time and behave differently under different configurations, this can lead to massive problems. After software updates, changes in the code's behavior are difficult to trace. Requested changes to the software stack (e.g. libraries) have to be reported to the admins, who then update each server individually. Of course, testing remains important throughout.
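One way to catch such environment drift early is to compare the interpreter and library versions on the developer's machine against those used by the live system. The following Python sketch is a hypothetical parity check under assumed pinned versions; it is not part of any specific tool.

```python
# Hypothetical parity check: compare the local interpreter and library
# versions against versions assumed to be pinned for the live system.
import sys
from importlib.metadata import version, PackageNotFoundError

PINNED = {"python": "3.11", "requests": "2.31.0"}  # assumed live versions

local_python = f"{sys.version_info.major}.{sys.version_info.minor}"
if local_python != PINNED["python"]:
    print(f"Python mismatch: local {local_python}, live {PINNED['python']}")

for pkg, wanted in PINNED.items():
    if pkg == "python":
        continue
    try:
        have = version(pkg)
    except PackageNotFoundError:
        have = "missing"
    if have != wanted:
        print(f"{pkg} mismatch: local {have}, live {wanted}")
```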
DevOps idea : Alternative role allocation
Here, the advantages of various supporting systems are exploited. A ticket system/task management tool and code management are obligatory. Administrators and developers work with the customer to plan the infrastructure and the servers, which the administrators set up and maintain. The developers work on local machines that are as close to the live systems as possible. If necessary, the developer's computer can run the same operating system as the servers, or the developer can even work within a virtual machine that runs the same software. Although this does not completely eliminate the problem of differing environments, it does minimize it.
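As an illustration, the sketch below uses the Docker SDK for Python to start a development container from the same image the live system is assumed to run, with the local working copy mounted into it. The image name, paths and port mapping are hypothetical.

```python
# Sketch: run a development environment from the same image as the live
# system, with the local working copy mounted in. Image name, paths and
# port mapping are hypothetical.
import docker

client = docker.from_env()
container = client.containers.run(
    "registry.example.com/myapp:live",  # hypothetical production image
    detach=True,
    volumes={"/home/dev/myapp": {"bind": "/var/www/html", "mode": "rw"}},
    ports={"80/tcp": 8080},             # app reachable on localhost:8080
)
print("dev container started:", container.short_id)
```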
Alternative development cycle
The developers put together a package at the end of their work (and after first local tests), which can then be tested by QA. This package must be executable/bootable: it can be a VM, a Docker image, or something else, but not an archive file such as ZIP or RAR. If all tests have been passed, the changes can be passed on to the customer, who can then give the final acceptance before live operation. Because of the portability of these technologies, the customer can test the system on their local machines, provided they have the virtualization software. In the end, the package replaces the live system, possibly slightly adapted (more memory, a different database config, etc.). The whole process is called a deployment chain. As the work repeats itself, the deployment chain can be largely automated. The creation of the package itself can also be automated with CI and orchestration tools.
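As a hedged sketch of such an automated packaging step, the following snippet uses the Docker SDK for Python to build an image from the project directory and push it to a registry, so that QA, the customer and the live system all pull exactly the same artifact. Registry, repository name and tag are hypothetical.

```python
# Sketch of an automated packaging step in the deployment chain: build
# the deployable image and push it to a registry. Registry, repository
# name and tag are hypothetical.
import docker

client = docker.from_env()

# Build the package from the Dockerfile in the project root.
image, build_logs = client.images.build(
    path=".", tag="registry.example.com/myapp:1.4.2"
)

# Push it so QA, the customer and the live system pull the same artifact.
for line in client.images.push(
    "registry.example.com/myapp", tag="1.4.2", stream=True, decode=True
):
    print(line)
```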
Reasons Behind Running Virtualization for DevOps
Virtualization means creating a virtual version of a component; it can be applied to computers, storage devices, applications, operating systems, or networks. In principle, it is software that decouples an operating system or individual applications from the underlying hardware. Current chipsets and processors are designed for virtualization and support the technology with additional functions. With virtualization, it is possible to install several completely separate servers on the same hardware and let them work independently of each other. The available resources are split between the different systems. Companies need fewer physical servers, so the freed-up capacity in the data center can be used for other tasks. Today's x86 servers, which are typically set up to run one operating system and one application, present major challenges to IT organizations. Even small data centers must provide many servers, which is highly inefficient, as the servers are only between 5 and 15 percent busy.
Software simulates the presence of hardware and creates a virtual computer system. Multiple operating systems and applications can run this way on a single server, which allows higher efficiency and economies of scale. The benefits of virtualization include agility, flexibility and scalability. Cost savings are made possible and workloads are deployed faster. Performance and availability are optimized and operating procedures are automated. IT components also become easier to manage and cheaper to operate.
Types of Virtualization
There is a distinction between several types of virtualization. Each type has certain advantages and disadvantages that must be weighed against each other first, because switching the virtualization technology later is often not possible in larger projects. There is a distinction between:
Hardware Virtualization
Hardware virtualization emulates a complete computer. This includes providing dedicated virtual hardware components that belong to the VM. The only exception is the CPU, which cannot be fully replaced: the instructions that the guest operating system (the one started inside the VM) sends to the CPU are only intercepted and adapted. This means that an x86/64 guest can only be deployed on a host with an x86/64 CPU, not on an ARM-based one. The guest operating systems need no modification in order to operate. Extensions may still be installed to allow features that are otherwise not standard on the operating system. VirtualBox, for instance, includes in its guest extension package a function that makes the guest screen always adapt to the window on the host.
Hardware virtualization also does not preclude connecting physical components to a VM. For example, disks in VirtualBox are normally large .vdi files that reside in the host's file system, but hard drives physically attached to the host can also be assigned to the VM.
Examples of hardware virtualization include VMware vSphere, VMware Workstation, Microsoft Hyper-V, VirtualBox, and Xen.
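Hardware virtualization is easy to automate as well. As a small sketch, the snippet below drives VirtualBox from Python through its VBoxManage command line tool; the VM name is hypothetical and assumed to be already registered on the host.

```python
# Sketch: control VirtualBox (hardware virtualization) via its
# VBoxManage CLI. The VM name "qa-test-vm" is hypothetical.
import subprocess

# List all VMs registered on this host.
result = subprocess.run(["VBoxManage", "list", "vms"],
                        capture_output=True, text=True)
print(result.stdout)

# Start a VM without opening a GUI window.
subprocess.run(["VBoxManage", "startvm", "qa-test-vm", "--type", "headless"],
               check=True)
```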
Paravirtualization
Paravirtualization does not emulate hardware. Instead, the host offers a special interface for hardware access. The hardware presented is similar, but not necessarily identical, to the physical hardware. The kernel's API towards applications, however, is not modified or replaced. As with hardware virtualization, only the same CPU architecture as in the host can be used, but there is no software interception between guest code and the CPU. In return, the guest operating systems must be modified to be compatible with this form of virtualization. This form of virtualization achieves higher performance than hardware virtualization.
In this way, the virtual machine knows that it is virtualized because, unlike with hardware virtualization, its operating system kernel has been adjusted accordingly. Because of these adjustments, however, one must have the source code of the corresponding kernel, which makes it impossible to paravirtualize Windows, for example, without the help of Microsoft. In return, the performance loss in this form of virtualization is only between 0.1 and 5%.
Examples of paravirtualization are, again, Xen and VMware vSphere.
Container Virtualization
Container virtualization also involves no hardware emulation. The biggest difference is that even the kernel is shared, along with all resources except the file system. This means that all guest systems necessarily use the same kernel as the host system. Different Linux distributions are still possible, however, because they all build on the same kernel and only differ in the software that resides in the file system. Since the software layer is very thin, almost native performance can be achieved. Operating system calls are not intercepted, modified or replaced. This form of virtualization does not rule out changes to the system's behavior: Docker containers, for example, have their own virtual network interfaces, which then have to be set up accordingly in the container configuration; NAT is possible this way. Guest systems can be operational after a few seconds, depending on the software and settings. Compared to paravirtualization and hardware virtualization, this form is the fastest.
However, if changes are made to the kernel, such as one container loading kernel modules or changing the system time, all other systems are affected, whether they run in another container or outside.
Examples of container virtualization include Linux Containers (LXC), OpenVZ, Solaris Containers, FreeBSD Jails, and Docker.
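The two properties described above, the shared kernel and the NATed virtual network interface, can be observed directly. The sketch below uses the Docker SDK for Python with standard public images; the host port is an arbitrary example.

```python
# Sketch demonstrating two claims from this section: containers share
# the host kernel, and container ports can be NATed to host ports.
import platform
import docker

client = docker.from_env()

# 1. Shared kernel: the kernel version reported inside a container is
#    the host's kernel version.
in_container = client.containers.run("alpine", "uname -r", remove=True)
print("container kernel:", in_container.decode().strip())
print("host kernel:     ", platform.release())

# 2. NAT: map container port 80 to host port 8080.
web = client.containers.run("nginx", detach=True, ports={"80/tcp": 8080})
print("nginx reachable on localhost:8080, container", web.short_id)
```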
Conclusion : What We Learned
In this article, we learned :
The basics of how the software development cycle works in practice
The basic types of virtualization
The basic reasons to consider using virtualization in practice.
The next article mainly discusses the benefits of virtualization; it is Part 2 of this series on Virtualization Requirements for DevOps.