### Installation

> havirt is designed for a cluster of nodes (hypervisors) with shared block
> devices (iSCSI) for the domains (virtual servers). It does **not** replace
> virsh, but instead adds functionality to aid in a specific situation: a
> cluster of nodes using shared block devices.
>
> havirt was specifically written to be stored on shared NFS storage, which
> is mounted at the same location on each node it manages.

### Tested Environment

We have tested this on four Devuan Linux (Chimaera) nodes with libvirt
installed. A NAS provides iSCSI targets and an NFS mount, which is
duplicated on all four nodes. **File locking must be enabled on the NFS
server.**

In our environment, the NFS share is mounted at /media/shared on each node,
which contains a subdirectory named havirt. The main script
(/media/shared/havirt/havirt) is then symbolically linked to /usr/local/bin
on each node.

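A minimal sketch of that shared mount, as an /etc/fstab entry on each node
(the NAS hostname and export path are placeholders for your environment);
note that NFS file locking is handled by the lockd/NLM protocol, so do not
mount with the nolock option:

    # /etc/fstab -- server name and export path are examples only
    nas.example.com:/export/shared  /media/shared  nfs  rw,hard  0  0
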
### Installation Process

There is no fancy installer. Just grab the stable version and put it in the
correct directory. In our setup:

    svn co http://svn.dailydata.net/svn/havirt/stable /media/shared/havirt

Then, on each node, run the command

    ln -s /media/shared/havirt/havirt /usr/local/bin/havirt

Finally, on one node, run the following command to generate the default
config file and verify all the required Perl modules are installed:

    havirt

If you get a complaint about a missing Perl module, the following command
will work on most Debian based systems:

    apt install -y libdata-dump-perl libyaml-tiny-perl

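To double-check, you can ask perl to load the modules directly. The list
below comes from the modules havirt uses (Exporter, Getopt::Long, version,
YAML::Tiny, and Data::Dumper); a missing module produces a "Can't locate"
error:

    perl -MExporter -MGetopt::Long -Mversion -MYAML::Tiny -MData::Dumper -e 'print "all modules found\n"'
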
### Setup

Ensure all nodes can talk to each other (including themselves) via public
key encryption (no password). To do this, on each node, run the following
command as root:

    ssh-keygen -t rsa -b 4096

When asked for a passphrase, just press the Enter key for no passphrase.

Also, add the key to a file which we will share on the NFS mount:

    cat /root/.ssh/id_rsa.pub >> /media/shared/havirt/sshkeys

This will build the file /media/shared/havirt/sshkeys with all of the keys.

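Since each public key occupies a single line, a quick sanity check is to
count the lines; the total should match the number of nodes:

    wc -l /media/shared/havirt/sshkeys
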
Once all of the keys are in the file, go back to each machine and install it
as that machine's *authorized_keys* file:

    cp /media/shared/havirt/sshkeys /root/.ssh/authorized_keys
    chown root:root /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys

Finally, on each node, make sure you can ssh to all other nodes (this can be
combined with the above step).

    ssh node1
    ssh node2
    ssh node3

Continue for each node. The first time you connect to each node, ssh will
warn that the target host is unverified and ask for permission to add it to
its *known_hosts* file. Type 'yes' to accept.

I generally create a /root/.ssh/config which contains aliases for simplicity.

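A sketch of such a config (host names and addresses are examples only):

    # /root/.ssh/config -- names and addresses are examples only
    Host node1
        HostName 192.168.1.11
        User root
    Host node2
        HostName 192.168.1.12
        User root
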
Once you are sure all of your nodes can talk to each other, run the following
command for each node. This can be done on any node.

    havirt node add NAMEOFNODE

where NAMEOFNODE is accessible either via DNS or an ssh alias.

When all nodes are added, you can list them with

    havirt node list

which will dump a list of nodes, with capabilities, to the screen (hint: add
--format=tsv for tab separated output).

**Optional**: To verify your iSCSI targets are available on all of your
nodes, you can run the following commands:

    havirt cluster iscsi add name-or-ip-of-target # adds target to config
    havirt cluster iscsi # lists target(s)
    havirt cluster iscsi update node1 # update one node's iSCSI targets
    # after you are satisfied, run the following to update all nodes
    havirt cluster iscsi update # update iSCSI targets on all nodes

Now it is time to populate all of the domains currently running. Note: this
will only add domains that are running on the nodes at the time you execute
the command, but it can be run again at any later time to pick up anything
new.

    havirt node scan # scan all nodes for running domains
    havirt domain update # get conf for all domains and store in conf/
    havirt domain list # show all domains

If everything looks good, copy havirt.sample.cron to /etc/cron.d/havirt.
This will run the scan (in quiet mode) every 5 minutes.
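
Assuming the install location used above (verify the sample file's exact
name and location in your checkout):

    cp /media/shared/havirt/havirt.sample.cron /etc/cron.d/havirt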