Why not set up Proxmox to get free High Availability (HA) for a business, preventing data loss and keeping services available constantly?

For my Cloud Computing project I tried to set up HA on a single Proxmox server by myself to see whether live migration and failover are possible. HA is mainly used to protect data from being lost when a server goes down and to keep the services of a server available at all times. Another advantage of HA, especially in Proxmox, is that you can easily live-migrate guests to another host and then do maintenance or inspection work. Proxmox is also open-source software, which means it is free for everyone to get.
In the following I will give a short description of my steps to set up a stable HA configuration on Proxmox.
I used a single PC with these specs:
CPU: 8 x Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (1 Socket)
RAM: 16GB
HDD: 1TB
Network Device (eno1)

Proxmox version: 5.2.1
Proxmox VE is a complete open-source server virtualization management solution. It lets you manage virtual machines and containers using the KVM and Linux container (LXC) technologies. After installation, Proxmox offers a web interface on your server which makes management easy, typically needing only a few clicks.

Ceph version: luminous
Ceph is open source software designed to provide highly scalable object-, block- and file-based storage under a unified system.
Ceph storage clusters are designed to run on commodity hardware, using an algorithm called CRUSH (Controlled Replication Under Scalable Hashing) to ensure data is evenly distributed across the cluster and that all cluster nodes can retrieve data quickly without any centralized bottlenecks.

For HA with Proxmox you need at least 3 nodes which have to be integrated in one cluster.
Below I show my steps, briefly explained, with links that helped me along the way:

1. First install the main Proxmox HOST on the PC with a static IP (https://pve.proxmox.com/wiki/Installation)

2. Enable SSH and change repository (see https://pve.proxmox.com/wiki/Package_Repositories)

3. Set up a virtual bridge on the HOST PC to create a subnetwork for the nodes. This makes it possible to share the single public IP of the HOST for the internet access of each node. The nodes need to be reachable from outside so that we can use their web GUIs to configure HA; since every node serves its Proxmox web GUI on the default port 8006, we use port forwarding to reach each one individually (https://raymii.org/s/tutorials/Proxmox_VE_One_Public_IP.html). I also added a second virtual bridge as a subnetwork for Ceph, which is necessary for the distributed storage.
My setup in /etc/network/interfaces of HOST PC:
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address  x.x.x.x (your static IP for internet access which is the official one)
        netmask  x.x.x.x (netmask)
        gateway  x.x.x.x

auto vmbr0 #node subnetwork
iface vmbr0 inet static
        address  192.168.100.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.100.0/24' -o eno1 -j MASQUERADE
        post-up iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8011 -j DNAT --to 192.168.100.2:8006
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 8011 -j DNAT --to 192.168.100.2:8006
        post-up iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8012 -j DNAT --to 192.168.100.3:8006
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 8012 -j DNAT --to 192.168.100.3:8006
        post-up iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 8013 -j DNAT --to 192.168.100.4:8006
        post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 8013 -j DNAT --to 192.168.100.4:8006

auto vmbr1 #Ceph subnetwork
iface vmbr1 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
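After editing /etc/network/interfaces on the HOST, the bridges and NAT rules can be verified with a few commands. This is a sketch; the addresses and ports match the config above:

```shell
# Bring the new bridges up (alternatively, reboot the host)
ifup vmbr0
ifup vmbr1

# Verify that IPv4 forwarding is enabled (should print 1)
cat /proc/sys/net/ipv4/ip_forward

# List the NAT rules to confirm masquerading and the port forwards
iptables -t nat -L POSTROUTING -n -v
iptables -t nat -L PREROUTING -n -v

# From an outside machine, the node GUIs are then reachable at
# https://<public-IP>:8011 ... :8013 (self-signed certificate)
```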

4. Created 3 virtual machines for the 3 nodes using the Web GUI of Proxmox HOST (https://pve.proxmox.com/wiki/Graphical_User_Interface)
Specs:
HDD 1: 32GB for Proxmox Distribution
HDD 2: 64GB for Ceph Storage
CPU: 2 sockets, 4 cores each
Network Device (net0): Bridge: vmbr0, Model: VirtIO
Network Device (net1): Bridge: vmbr1, Model: VirtIO

5. Install Proxmox 5.2.1 on each node, enable SSH and change the repository as in the link above

6. Setup static subnetwork IP 192.168.100.X/24 for each node
My setup in /etc/network/interfaces of each node:
auto lo
iface lo inet loopback

iface ens18 inet manual

auto ens19 #Ceph network
iface ens19 inet static
        address  10.10.10.X (a unique Ceph IP per node, e.g. 10.10.10.2/3/4)
        netmask  255.255.255.0

auto vmbr0 #node subnetwork
iface vmbr0 inet static
        address  192.168.100.X
        netmask  255.255.255.0
        gateway  192.168.100.1
        bridge-ports ens18
        bridge-stp off
        bridge-fd 0
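A quick way to verify the node networking (assuming the addresses above) is to ping both gateways and the internet from each node:

```shell
# Node subnetwork: the HOST is the gateway 192.168.100.1
ping -c 3 192.168.100.1

# Ceph network on the second interface
ping -c 3 10.10.10.1

# Internet access through the NAT on the HOST
ping -c 3 8.8.8.8
```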

7. After installation, restart every node and run apt-get update && apt-get upgrade
Make sure all nodes have the same settings and system version

8. Set up the cluster of the nodes over the shell of each node.
Commands:
First node (node1):   pvecm create pvecluster
Second node (node2):  pvecm add node1
Third node (node3):   pvecm add node1
Here we create a cluster named pvecluster on node1 and then join node2 and node3 to it. Note that pvecm add expects the IP address or a resolvable hostname of a node that is already in the cluster, so "node1" must resolve (e.g. via /etc/hosts) to its subnetwork IP 192.168.100.2.
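The same step as a shell sketch (node1's subnetwork IP is assumed to be 192.168.100.2, matching the port-forwarding rules above):

```shell
# On node1: create the cluster
pvecm create pvecluster

# On node2 and node3: join the cluster via node1's IP
pvecm add 192.168.100.2

# On any node: check that all three nodes are listed and the
# cluster is quorate
pvecm status
pvecm nodes
```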

9. Set up Ceph for distributed storage by running the command pveceph install --version luminous in the shell of each node

10. Initialize Ceph on the dedicated Ceph network 10.10.10.0/24 using the command pveceph init --network 10.10.10.0/24 in the shell of the nodes

11. Create a Ceph monitor on each node with the command: pveceph createmon
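Steps 9-11 on the shell of each node look like this (note the double dashes, which text editors often mangle into long dashes). I also sketch creating an OSD on each node's second 64GB disk, which Ceph needs before a pool can actually store data; the device name /dev/sdb is an assumption for how HDD 2 from step 4 shows up:

```shell
# 9. Install the Ceph packages (luminous) on every node
pveceph install --version luminous

# 10. Initialize Ceph with the dedicated storage network
pveceph init --network 10.10.10.0/24

# 11. Create a monitor on every node
pveceph createmon

# Create an OSD on the second disk of every node
# (device name is an assumption; check with lsblk first)
pveceph createosd /dev/sdb

# Check the cluster health
ceph -s
```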

12. Create a Ceph pool on node1 using the web GUI via "Add" in the Ceph panel; leave all settings at their defaults, and additionally check "Add Storage" to add the pool as storage to the cluster
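If you prefer the shell over the GUI, the pool can also be created with pveceph (a sketch assuming the createpool subcommand and its --add_storages flag of Proxmox 5.x; the pool name is my choice):

```shell
# Create a replicated pool with default size/min_size and
# register it as a cluster storage in one step
pveceph createpool testpool --add_storages

# Verify that the pool exists
ceph osd pool ls
```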

13. Create a TestVM on any node (it doesn't matter which one) from an Ubuntu image with the following specs:
HDD: 32GB
CPU: 1 socket, 4 cores
RAM: 2048 MB
Everything else: default
-> AS STORAGE YOU HAVE TO CHOOSE THE POOL YOU CREATED BEFORE

14. Set up HA in the web GUI. Create an HA group on the Datacenter of the nodes under the "HA" panel -> Groups -> Create, and add all nodes to the new group by checking them.
Back in the "HA" panel, add the TestVM under "Resources" and select the new group in the Group field to make failover possible. Set the requested state to: started. The following link shows the same settings done in the shell: https://pve.proxmox.com/wiki/High_Availability
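The HA wiki page linked above describes the shell equivalent; a sketch with ha-manager (the VMID 100 and the group name pvegroup are assumptions):

```shell
# Add the TestVM (VMID 100) as an HA resource in our group,
# with requested state "started"
ha-manager add vm:100 --group pvegroup --state started

# Show the current HA state of all resources
ha-manager status
```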

15. Once the VM has started, it is possible to live-migrate it with the Migrate button in the web GUI of the TestVM.
You can also shut down one node to see the failover of the TestVM, meaning the TestVM will automatically be restarted on another node
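Live migration and the failover test can also be done from the shell (VMID 100 and the node names are assumptions):

```shell
# Live-migrate the running, HA-managed TestVM to node2
ha-manager migrate vm:100 node2

# Watch the migration / failover state
ha-manager status

# To test failover, power off the node the VM currently runs on
# (e.g. run "poweroff" on that node); the HA manager will then
# restart the VM on one of the remaining nodes.
```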

In conclusion, it was very interesting to expand my knowledge of open-source virtualization. As you can see, it is easy to set up a system like this, and I think it is a good option for startups to make their business more efficient and protect their servers from data loss in a cheap way. There are more things you can do with this system which I did not show.
You can find a fully detailed description of this project at the following link:
https://www.theseus.fi/bitstream/handle/10024/134004/KaloshinaE.Thesis.pdf?sequence=1

Big thanks to Mr. Hung for helping me make this project possible.








