### Installation

> havirt is designed for a cluster of nodes (hypervisors) with shared block
> devices (iSCSI) for the domains (virtual servers). It does **not** replace
> virsh; instead, it adds functionality to aid in a specific situation: a
> cluster of nodes using shared block devices.

> havirt was specifically written to be stored on shared NFS storage, which
> is mounted at the same location on each node it manages.

### Tested Environment

We have tested this on four Devuan Linux (Chimaera) nodes with libvirt
installed. A NAS provides the iSCSI targets and an NFS mount, which is
duplicated on all four nodes. **File locking must be enabled on the NFS
server.**

In our environment, the NFS share is mounted at /media/shared on each node,
which contains a subdirectory named havirt. The main script
(/media/shared/havirt/havirt) is then symbolically linked into /usr/local/bin
on each node.
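
For reference, here is a minimal sketch of what that mount might look like in
/etc/fstab on each node; the NAS hostname and export path are hypothetical,
so substitute your own:

    # hypothetical NAS name and export path; adjust to your environment
    # do NOT mount with the "nolock" option; havirt depends on NFS file locking
    nas.example.com:/export/shared  /media/shared  nfs  rw,hard  0  0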

### Installation Process

There is no fancy installer. Just grab the stable version and put it in the
correct directory. In our setup:

    svn co http://svn.dailydata.net/svn/havirt/stable /media/shared/havirt

Then, on each node, run the command

    ln -s /media/shared/havirt/havirt /usr/local/bin/havirt

Finally, on one node, run the following command to generate the default
config file and verify that all the required Perl modules are installed:

    havirt

If you get a complaint about a missing Perl module, the following command
will work on most Debian-based systems:

    apt install -y libdata-dump-perl libyaml-tiny-perl
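
If you would rather check the modules directly, a one-liner like this works,
assuming the two packages above (which provide Data::Dump and YAML::Tiny) are
the only modules havirt needs:

    # prints a confirmation if both modules load; dies with an error otherwise
    perl -MData::Dump -MYAML::Tiny -e 'print "all modules found\n"'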

### Setup

Ensure all nodes can talk to each other (including themselves) via public
key authentication (no password). To do this, on each node, run the
following command as root:

    ssh-keygen -t rsa -b 4096

When asked for a passphrase, just press the Enter key for no passphrase.
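
If you are scripting this step, the prompts can be skipped entirely; this
sketch assumes the stock OpenSSH client:

    # -N "" sets an empty passphrase; -f names the default key file explicitly
    ssh-keygen -t rsa -b 4096 -N "" -f /root/.ssh/id_rsa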

Also, add the key to a file which we will share on the NFS share:

    cat /root/.ssh/id_rsa.pub >> /media/shared/havirt/sshkeys

This will build the file /media/shared/havirt/sshkeys with all of the keys.
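
Since each OpenSSH public key occupies a single line, a quick sanity check is
to count the lines and confirm the result matches your node count:

    # expect one line per node (four, in our test environment)
    wc -l /media/shared/havirt/sshkeys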

Once you have all the keys in the file, go back to each machine and make it
the *authorized_keys* file for that machine:

    cp /media/shared/havirt/sshkeys /root/.ssh/authorized_keys
    chown root:root /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys

Finally, on each node, make sure you can ssh to all other nodes (this can be
combined with the above step):

    ssh node1
    ssh node2
    ssh node3

Continue for each node. For each of the nodes, you will receive a message
saying the target is unverified, and it will ask for permission to add it to
its *known_hosts* file. Type 'yes' to accept.
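
A short loop can run the same round of checks; the node names here are
hypothetical, and the accept-new option (OpenSSH 7.6 or later) accepts
unknown host keys while still rejecting changed ones:

    # run this on every node; each ssh should print the target's hostname
    for n in node1 node2 node3 node4; do
        ssh -o StrictHostKeyChecking=accept-new "$n" hostname
    done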

I generally create a /root/.ssh/config which contains aliases for simplicity.
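
A minimal sketch of such a config, with made-up hostnames:

    # /root/.ssh/config -- lets "ssh node1" resolve without the full domain name
    Host node1
        HostName node1.example.com
        User root
    Host node2
        HostName node2.example.com
        User root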

Once you are sure all of your nodes can talk to each other, run the
following command for each node. This can be done on any node.

    havirt node add NAMEOFNODE

where NAMEOFNODE is accessible either via DNS or an ssh alias.
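
For example, with the hypothetical aliases above, registering every node is
one short loop:

    for n in node1 node2 node3 node4; do
        havirt node add "$n"
    done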

When all nodes are added, you can list them with

    havirt node list

which will dump a list of nodes, with capabilities, to the screen (hint: add
--format=tsv for tab-separated output).

**Optional**: To verify your iSCSI targets are available on all of your
nodes, you can run the following commands:

    havirt cluster iscsi add name-or-ip-of-target  # adds target to config
    havirt cluster iscsi                           # lists target(s)
    havirt cluster iscsi update node1              # update one node's iSCSI targets
    # after you are satisfied, run the following to update all nodes
    havirt cluster iscsi update                    # update iSCSI targets on all nodes

Now it is time to populate all of the domains currently running. Note: this
will only add domains that are running on the nodes at the time you execute
the command, but it can be run at any later time to pick up anything new.

    havirt node scan     # scan all nodes for running domains
    havirt domain update # get conf for all domains and store in conf/
    havirt domain list   # show all domains

If everything looks good, copy havirt.sample.cron to /etc/cron.d/havirt.
This will run the scan (in quiet mode) every 5 minutes.
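
Assuming the sample file sits in the checkout, the copy is a single command;
the commented line after it is only a sketch of the cron.d format, so defer
to havirt.sample.cron for the real command and flags:

    cp /media/shared/havirt/havirt.sample.cron /etc/cron.d/havirt

    # illustrative cron.d shape only: minute hour day month weekday user command
    # */5 * * * * root /usr/local/bin/havirt node scan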
|