### Installation
> havirt is designed for a cluster of nodes (hypervisors) with shared block
> devices (iSCSI) for the domains (virtual servers). It does **not** replace
> virsh; instead, it adds functionality to aid in a specific situation: a
> cluster of nodes using shared block devices.
> havirt was specifically written to be stored on shared NFS storage, which
> is mounted at the same location on each node it manages.
### Tested Environment
We have tested this on four Devuan Linux (Chimaera) nodes with libvirt
installed. A NAS provides iSCSI targets and an NFS mount, which is
duplicated on all four nodes. **File locking must be enabled on the NFS
server.**
In our environment, the NFS share is mounted at /media/shared on each node,
which contains a subdirectory named havirt. The main script
(/media/shared/havirt/havirt) is then symbolically linked into /usr/local/bin
on each node.
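For reference, the mount on each node might look like the following /etc/fstab entry; the NAS hostname `nas` and export path `/export/shared` are placeholders for your own values, and the key point is that the `nolock` option must *not* be used, since havirt relies on NFS file locking:

```
# NFS share holding havirt; file locking must work, so do not use 'nolock'
nas:/export/shared  /media/shared  nfs  rw,hard  0  0
```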
### Installation Process
There is no fancy installer. Just grab the stable version and put it in the
correct directory. In our setup:
svn co http://svn.dailydata.net/svn/havirt/stable /media/shared/havirt
Then, on each node, run the command
ln -s /media/shared/havirt/havirt /usr/local/bin/havirt
Finally, on one node, run the following command to generate the default
config file and verify that all required Perl modules are installed:
havirt
If havirt complains about a missing Perl module, the following command will
install the needed modules on most Debian-based systems:
apt install -y libdata-dump-perl libyaml-tiny-perl
### Setup
Ensure all nodes can reach each other (including themselves) over SSH using
public-key authentication (no password prompt). To do this, on each node,
run the following command as root:
ssh-keygen -t rsa -b 4096
When asked for a passphrase, just press the Enter key for no passphrase.
Then append the public key to a file which we will share on the NFS share:
cat /root/.ssh/id_rsa.pub >> /media/shared/havirt/sshkeys
This will build the file /media/shared/havirt/sshkeys with all of the keys.
Once you have all the keys collected in the file, go back to each
machine and install it as the *authorized_keys* file for that machine:
cp /media/shared/havirt/sshkeys /root/.ssh/authorized_keys
chown root:root /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
Finally, on each node, make sure you can ssh to all other nodes (this can be
combined with the above step).
ssh node1
ssh node2
ssh node3
Continue for each node. The first time you connect to each node, ssh will
warn that the host's authenticity cannot be established and ask for
permission to add it to its *known_hosts* file. Type 'yes' to accept.
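Once every host key has been accepted, the per-node checks above can be scripted. This is a sketch only, assuming hypothetical node names node1 through node4; BatchMode makes ssh fail instead of prompting, so it will flag any node where key authentication is not yet working:

```shell
# Hypothetical node names; replace with your own hostnames or ssh aliases.
for n in node1 node2 node3 node4; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$n" true; then
        echo "$n: ok"
    else
        echo "$n: FAILED"
    fi
done
```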
I generally create a /root/.ssh/config which contains aliases for simplicity.
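For example, a minimal /root/.ssh/config might look like this (the hostnames and addresses are made up; adjust them to your network):

```
# Aliases so 'ssh node1', 'ssh node2', etc. work on every node
Host node1
    HostName 192.168.1.11
    User root
Host node2
    HostName 192.168.1.12
    User root
```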
Once you are sure all of your nodes can talk to each other, run the following
command for each node. This can be done on any node.
havirt node add NAMEOFNODE
where NAMEOFNODE is accessible either via DNS or an ssh alias.
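Registering every node can be wrapped in a loop. This sketch only prints the commands (the node names are hypothetical) so you can review them; remove the echo to actually run each add:

```shell
# Hypothetical node names; drop 'echo' to actually register each node.
for n in node1 node2 node3 node4; do
    echo havirt node add "$n"
done
```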
When all nodes are added, you can list them with
havirt node list
which will dump a list of nodes, with their capabilities, to the screen
(hint: add --format=tsv for tab-separated output).
**Optional**: To verify your iSCSI targets are available on all of your
nodes, you can run the following commands:
havirt cluster iscsi add name-or-ip-of-target # adds target to config
havirt cluster iscsi # lists target(s)
havirt cluster iscsi update node1 # update one node's iSCSI targets
# after you are satisfied, run the following to update all nodes
havirt cluster iscsi update # update iSCSI targets on all nodes
Now it is time to record all of the domains currently running. Note: this
will only add domains running on the nodes at the time you execute the
command, but it can be run again at any later time to pick up anything new.
havirt node scan # scan all nodes for running domains
havirt domain update # get conf for all domains and store in conf/
havirt domain list # show all domains
If everything looks good, copy havirt.sample.cron to /etc/cron.d/havirt.
This will run the scan (in quiet mode) every 5 minutes.