You can easily find a 6 GB RAM OpenVZ server for $7/month, or a 1 GB RAM server for $2/month. Previously we compared OpenVZ with KVM from the technical aspects: OpenVZ virtualization costs less but performs better. The limitations of OpenVZ virtualization on guest cloud server/VPS instances must be known before moving to make a production server.
Limitations of OpenVZ Virtualization on the Guest Cloud Server/VPS Are Inherited From the Hypervisor
Basically, OpenVZ takes a different route to offer virtualization: OpenVZ shares resources through container virtualization, while KVM offers full virtualization. OpenVZ uses a shared kernel and shared drivers, and does not use paravirtualization. For most common needs, like running a PHP-FPM and Nginx based server to host WordPress, OpenVZ is great because OpenVZ has no hypervisor overhead: an instance with 6 GB RAM will behave more like a dedicated server due to a somewhat "direct connection" with the hardware. On the other hand, web hosts commonly do not allow you to run with a load average of more than 4 for an hour, they may ban you if too many targeted attacks run against your domain, and you cannot run Tor, because OpenVZ is container-based virtualization; virtualization at the OS level means that many basic components are shared by all the guests. Each container performs and executes almost exactly like a dedicated server, and most configuration files can be kept separate from the host. However, there are configurations, mostly related to security, networking, and devices, which cannot be changed by the OpenVZ guest. The limitations of OpenVZ virtualization on the guest cloud server instances or VPS are inherited from the hypervisor; the sketch below shows how that sharing looks from inside a guest.
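A minimal sketch, assuming a typical Linux guest; the exact error text varies by host configuration:

```bash
# inside an OpenVZ guest: the reported kernel is the host node's
# kernel, because all containers share one kernel
uname -r

# many kernel settings are read-only for the guest; writing to them
# usually fails with "permission denied" or "read-only file system"
sudo sysctl -w vm.swappiness=10
```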

Limitations of OpenVZ Virtualization: Where You Should Avoid OpenVZ
KVM is theoretically like a dedicated server. But at that price, the performance penalty regarding I/O and CPU points towards using a low-end dedicated server; it is meaningless to spend $200/month on KVM instances. OVH's dedicated servers at $100 will be faster than Linode's or DigitalOcean's instances, whether they use KVM or Xen. OpenVZ sits in between Docker and a dedicated server. KVM and Xen also have some differences, and even the installation method will create a difference. For those reasons we used to publish guides on OpenStack based cloud servers such as Rackspace and HP Cloud. KVM will be more suitable when you need access to kernel modules and/or need to modify the kernel; inside OpenVZ, loading a module typically fails, as in the sketch below. OpenVZ is clearly not for running a game server or a database server with an intense load, and it is doubtful for running Java.
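A hedged illustration of the kernel-module limitation (the exact error message depends on the distribution and on how the host node is configured):

```bash
# inside an OpenVZ guest, inserting kernel modules is not allowed,
# because the kernel belongs to the host node
sudo modprobe dummy
# usually fails with something like:
# modprobe: ERROR: could not insert 'dummy': Operation not permitted
```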
---
The Ethernet and MAC matters are handled by the host. Certain tools will not work, because the MAC address is like a "subset" of the host's MAC address, exposed through an interface with a nickname like venet0. The hardware addresses (HWaddr) will show a lot of 0s instead of a real MAC. OpenVZ supports the veth interface, but not all hosts allow it. As an example, it is very difficult to add IPv6 tunneling on an OpenVZ server. Also, starting a TUN/TAP device may not be possible for a client. venet will not give you your own MAC address, but it is faster. OpenVZ does not let you bind your own IPs freely unless you have been given a veth interface; venet0 does not allow this at all. IPsec on OpenVZ can be a bit of a pain, but it is possible. The sketch below shows how to inspect these limits from inside a guest.
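A minimal sketch for checking the networking situation from inside the container (interface names are the common OpenVZ defaults and may differ on your host):

```bash
# venet0 carries no real MAC address; the link layer shows zeros
ip link show venet0

# check whether the host has enabled the TUN/TAP device for you
ls -l /dev/net/tun
# a common trick: if TUN/TAP is usable, this prints
# "File descriptor in bad state" instead of "No such device"
cat /dev/net/tun
```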
OpenVZ traditionally uses the simfs filesystem, a pass-through layer over the host node's filesystem rather than a real block device. You can check it as shown below.
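To see which filesystem your container actually sits on (newer OpenVZ hosts may report ploop instead of simfs):

```bash
# print the filesystem type of the root mount
df -T /
# a classic OpenVZ container typically reports /dev/simfs here
```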
OpenVZ Virtualization Calculations For Detecting Cheating By Web Hosts
There are "red pill, blue pill" style tricks to tell whether an application is running within a virtualized OS. For OpenVZ, a simple check is sketched below.
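A minimal sketch, relying on the commonly documented fact that /proc/user_beancounters is visible inside every OpenVZ container while /proc/bc exists only on the host node:

```bash
# detect an OpenVZ guest from inside
if [ -e /proc/user_beancounters ] && [ ! -d /proc/bc ]; then
    echo "This looks like an OpenVZ container"
fi

# on systemd-based distributions this usually works too
systemd-detect-virt   # prints "openvz" inside an OpenVZ guest
```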
The traditional resource allocation system for an OpenVZ VPS is called User Beancounters (UBC). It consists of a set of counters that determine what resources a given VPS is allowed to allocate, breaking memory allocation down into numerous specific areas such as kernel pages, network buffers, process counts, and others. The biggest drawback of the UBC system is that the performance of the VPS is not consistent with respect to how swap space is handled. In very plain words: whatever RAM an OpenVZ host offers, take 50% of it as the amount of possible physical RAM; the rest is swap. physpages is the number of RAM pages used by the processes of an OpenVZ VPS. The oomguarpages parameter accounts for the total amount of memory and swap space used by the processes of a particular VPS, aka container. So, from the difference between physpages and oomguarpages, the user will get the number of swapped pages. Although having swapped pages does not always mean that the machine is oversold: if /proc/sys/vm/swappiness is set to 60 and the machine's RAM usage is 70%, the kernel may decide to move some unused pages to swap to gain more space for the I/O cache. The number of swapped pages is only meaningful if it is quite high; that is an indicator of heavy overselling. In other words, this:
```
free -m
```
will actually not tell you anything about the real physical RAM usage on the host. There is swap on the host, which becomes "fake physical RAM" inside the OpenVZ container. Output like this is not correct:
```
root@lalalaola:/root# free -m
              total        used        free      shared  buff/cache   available
Mem:           6144         131        3087        1691        2924        4168
Swap:             0           0           0
```
An easy manual command to check it is:
```
sudo cat /proc/user_beancounters | grep -E '(uid|physpages|oomguarpages)'
```
or better:
```
sudo grep -E '(uid|physpages|oomguarpages)' /proc/user_beancounters
```
That gave:
```
root@lalalaola:/root# sudo grep -E '(uid|physpages|oomguarpages)' /proc/user_beancounters
       uid  resource           held    maxheld              barrier                limit  failcnt
            physpages        783865     813224                    0              1572864        0
            oomguarpages     467792     486040  9223372036854775807  9223372036854775807        0
```
As for the UBC readouts, the calculation (with 4096-byte pages) is:
```
(783865 - 467792) * 4096 / 1024 / 1024 = 1234
```
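To automate this arithmetic inside your own container, a minimal sketch, assuming the same row layout as in the output above (held value in the second column):

```bash
# read the held values of physpages and oomguarpages (UBC pages are 4 KB)
phys=$(sudo awk '$1 == "physpages" {print $2}' /proc/user_beancounters)
oom=$(sudo awk '$1 == "oomguarpages" {print $2}' /proc/user_beancounters)
# a zero or negative result means nothing is swapped out by this metric
echo "$(( (phys - oom) * 4096 / 1024 / 1024 )) MB swapped out"
```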
In this example, 1,234 MB has been swapped out. That is normal. On a real dedicated server, the instance would actually have eaten up about another 1 GB of physical RAM. We also need to check the fail counters on an OpenVZ system to understand cheating by the web host, as sketched below.
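A hedged one-liner for that check (failcnt is the last column of /proc/user_beancounters, and the first two lines of the file are headers):

```bash
# print every resource whose failure counter is non-zero;
# persistently growing failcnt values mean you are hitting limits
sudo awk 'NR > 2 && $NF + 0 > 0 { print }' /proc/user_beancounters
```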
top is commonly more useful than free inside OpenVZ. In the end, why you will use the OpenVZ server, and how the host has configured it, is what matters.