To get a taste of ICN, this guide will walk through creating a simple
two-machine cluster using virtual machines.

A total of 3 virtual machines will be used, each with 8 CPUs, 24 GB of
RAM, and 30 GB of disk. So grab a host machine, [install Vagrant with the
libvirt provider](https://github.com/vagrant-libvirt/vagrant-libvirt#installation), and let's get started.

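Before cloning anything, it can be worth confirming that the libvirt
provider is actually available to Vagrant. A quick, optional check
(assuming Vagrant itself is already installed) is:

```
$ vagrant plugin list
```

The `vagrant-libvirt` plugin should appear in the output.
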
```
$ git clone https://gerrit.akraino.org/r/icn
$ cd icn
$ vagrant up --no-parallel
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
root@jump:/icn# make jump_server
root@jump:/icn# make vm_cluster
```

> NOTE: `vagrant destroy` may fail due to
> https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1371. The
> workaround is to destroy the machines manually:
>
>     $ virsh -c qemu:///system destroy vm-machine-1
>     $ virsh -c qemu:///system undefine --nvram --remove-all-storage vm-machine-1
>     $ virsh -c qemu:///system destroy vm-machine-2
>     $ virsh -c qemu:///system undefine --nvram --remove-all-storage vm-machine-2

## Create the virtual environment

```
$ vagrant up --no-parallel
```

Now let's take a closer look at what was created.

```
$ virsh -c qemu:///system list
 Id     Name           State
----------------------------------------------------
 1208   vm-machine-1   running
 1209   vm-machine-2   running
```

```
$ virsh -c qemu:///system net-list
 Name              State    Autostart   Persistent
----------------------------------------------------------
 vm-baremetal      active   yes         yes
 vm-provisioning   active   no          yes
```

```
$ vbmc list
+--------------+---------+---------+------+
| Domain name  | Status  | Address | Port |
+--------------+---------+---------+------+
| vm-machine-1 | running | ::      | 6230 |
| vm-machine-2 | running | ::      | 6231 |
+--------------+---------+---------+------+
```

We've created a jump server and the two machines that will form the
cluster. The jump server will be responsible for creating the cluster.

We also created two networks, baremetal and provisioning, and a third
network overlaid upon the baremetal network using [VirtualBMC](https://opendev.org/openstack/virtualbmc) for
issuing IPMI commands to the virtual machines.

It's worth looking at these networks in more detail as they will be
important during configuration of the jump server and cluster.

```
$ virsh -c qemu:///system net-dumpxml vm-baremetal
<network connections='3' ipv6='yes'>
  <name>vm-baremetal</name>
  <uuid>216db810-de49-4122-a284-13fd2e44da4b</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr3' stp='on' delay='0'/>
  <mac address='52:54:00:a3:e7:09'/>
  <ip address='192.168.151.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.151.1' end='192.168.151.254'/>
    </dhcp>
  </ip>
</network>
```

The baremetal network provides outbound network access through the
host and also assigns DHCP addresses in the range `192.168.151.2` to
`192.168.151.254` to the virtual machines (the host itself is
`192.168.151.1`).

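If you're curious which addresses were actually handed out, libvirt can
show the current leases on the baremetal network. This is purely an
optional check; the output depends on when the guests requested their
leases:

```
$ virsh -c qemu:///system net-dhcp-leases vm-baremetal
```
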
```
$ virsh -c qemu:///system net-dumpxml vm-provisioning
<network connections='3'>
  <name>vm-provisioning</name>
  <uuid>d06de3cc-b7ca-4b09-a49d-a1458c45e072</uuid>
  <bridge name='vm0' stp='on' delay='0'/>
  <mac address='52:54:00:3e:38:a5'/>
</network>
```

The provisioning network is a private network; only the virtual
machines may communicate over it. Importantly, no DHCP server is
present on this network. The `ironic` component of the jump server will
be managing DHCP requests.

The virtual baseboard management controllers (BMCs) provided by
VirtualBMC are listening at the addresses and ports listed above on the
host. To issue an IPMI command to `vm-machine-1`, for example, the
command will be issued to `192.168.151.1:6230`, and VirtualBMC will
translate the IPMI command into libvirt calls.

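As a concrete, optional example, a power status query could be sent to
`vm-machine-1` with `ipmitool`. The username and password below are
placeholders and must match whatever credentials the VirtualBMC domains
were registered with:

```
# Placeholder credentials: use the ones configured for VirtualBMC
$ ipmitool -I lanplus -H 192.168.151.1 -p 6230 -U admin -P password power status
```
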
Now let's look at the networks from inside the virtual machines.

```
$ virsh -c qemu:///system dumpxml vm-jump
...
    <interface type='network'>
      <mac address='52:54:00:a8:97:6d'/>
      <source network='vm-baremetal' bridge='virbr3'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:80:3d:4c'/>
      <source network='vm-provisioning' bridge='vm0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='ua-net-1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
...
```

The baremetal network NIC in the jump server is the first NIC present
in the machine and, depending on the device naming scheme in place, will
be called `ens5` or `eth0`. Similarly, the provisioning network NIC will
be called `ens6` or `eth1`.

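One way to confirm the mapping is to compare the MAC addresses in the
XML above with the interfaces inside the jump server, for example (the
`-c` form of `vagrant ssh` simply runs a single command):

```
$ vagrant ssh jump -c "ip -br link show"
```

The interface whose MAC matches `52:54:00:a8:97:6d` is the baremetal
NIC, and the one matching `52:54:00:80:3d:4c` is the provisioning NIC.
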
```
$ virsh -c qemu:///system dumpxml vm-machine-1
...
    <interface type='network'>
      <mac address='52:54:00:c6:75:40'/>
      <source network='vm-provisioning' bridge='vm0'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:20:a3:0a'/>
      <source network='vm-baremetal' bridge='virbr3'/>
      <target dev='vnet4'/>
      <model type='virtio'/>
      <alias name='ua-net-1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
...
```

In contrast to the jump server, the provisioning network NIC is the
first NIC present in the machine and will be named `ens5` or `eth0`, and
the baremetal network NIC will be `ens6` or `eth1`.

The order of NICs is crucial here: the provisioning network NIC must
be the NIC that the machine PXE boots from, and the BIOS used in this
virtual machine is configured to use the first NIC in the machine. A
physical machine will typically provide this as a configuration option
in the BIOS settings.

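If you want to double-check the boot configuration of a machine, one
option is to look for the boot-related elements in its libvirt
definition. Depending on how the domain was defined, the boot order
appears either as a `<boot dev='network'/>` element under `<os>` or as
per-device `<boot order='...'/>` elements:

```
$ virsh -c qemu:///system dumpxml vm-machine-1 | grep -i '<boot'
```
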
## Install the jump server components

```
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
```

Before telling ICN to start installing the components, it must first
know which NIC is on the IPMI network and which is on the provisioning
network. Recall that in the jump server the IPMI network is
overlaid onto the baremetal network, that the baremetal network NIC
is `eth0`, and that the provisioning network NIC is `eth1`.

Edit `user_config.sh` to contain the following:

```
export IRONIC_INTERFACE="eth1"
```

Now install the jump server components.

```
root@jump:/icn# make jump_server
```

Let's walk quickly through some of the components installed. The
first, and most fundamental, is that the jump server is now a
single-node Kubernetes cluster.

```
root@jump:/icn# kubectl cluster-info
Kubernetes control plane is running at https://192.168.151.45:6443
CoreDNS is running at https://192.168.151.45:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

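A quick sanity check (output not shown here) is to list the nodes; a
single node in the `Ready` state should be reported:

```
root@jump:/icn# kubectl get nodes -o wide
```
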
Next, [Cluster API](https://cluster-api.sigs.k8s.io/) is installed, with the [Metal3](https://github.com/metal3-io/cluster-api-provider-metal3)
infrastructure provider and the Kubeadm bootstrap provider. These
components provide the base for creating clusters with ICN.

```
root@jump:/icn# kubectl get deployments -A
NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
baremetal-operator-system           baremetal-operator-controller-manager           1/1     1            1           96m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           96m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           96m
capi-system                         capi-controller-manager                         1/1     1            1           96m
capm3-system                        capm3-controller-manager                        1/1     1            1           96m
capm3-system                        capm3-ironic                                    1/1     1            1           98m
capm3-system                        ipam-controller-manager                         1/1     1            1           96m
```

A closer look at the above deployments shows two others of interest:
`baremetal-operator-controller-manager` and `capm3-ironic`. These
components are from the [Metal3](https://metal3.io/) project and are dependencies of the
Metal3 infrastructure provider.

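One optional way to see what these components bring with them is to
list the custom resources registered under the `metal3.io` API group,
which should include `baremetalhosts` among others:

```
root@jump:/icn# kubectl api-resources --api-group=metal3.io
```
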
Before moving on to the next step, let's take one last look at the
provisioning NIC we set in `user_config.sh`.

```
root@jump:/icn# ip link show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:80:3d:4c brd ff:ff:ff:ff:ff:ff
```

The `master provisioning` portion indicates that this interface is now
attached to the `provisioning` bridge. The `provisioning` bridge was
created during installation and is how the `capm3-ironic` deployment
will communicate with the machines to be provisioned when it is time
to install an operating system.

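To see which interfaces are attached to that bridge, `ip` can filter by
master device. At this point it should show at least `eth1`; additional
interfaces may appear once provisioning begins:

```
root@jump:/icn# ip link show master provisioning
```
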
## Create a cluster

```
root@jump:/icn# make vm_cluster
```

Once complete, we'll have a K8s cluster up and running on the machines
created earlier with all of the ICN addons configured and validated.

```
root@jump:/icn# clusterctl -n metal3 describe cluster icn
NAME                                                                 READY  SEVERITY  REASON  SINCE  MESSAGE
├─ClusterInfrastructure - Metal3Cluster/icn
├─ControlPlane - KubeadmControlPlane/icn                             True                     81m
│ └─Machine/icn-qhg4r                                                True                     81m
│   └─MachineInfrastructure - Metal3Machine/icn-controlplane-r8g2f
└─Workers
  └─MachineDeployment/icn                                            True                     73m
    └─Machine/icn-6b8dfc7f6f-qvrqv                                   True                     76m
      └─MachineInfrastructure - Metal3Machine/icn-workers-bxf52
```

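The Metal3 resources behind the Cluster API objects can also be
inspected. Assuming the same `metal3` namespace used with `clusterctl`
above, the bare metal hosts backing the two machines should show up as
provisioned:

```
root@jump:/icn# kubectl -n metal3 get baremetalhosts
```
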
```
root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
root@jump:/icn# kubectl --kubeconfig=icn-admin.conf cluster-info
Kubernetes control plane is running at https://192.168.151.254:6443
CoreDNS is running at https://192.168.151.254:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

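The same kubeconfig can be used for any other `kubectl` command against
the new cluster, for example to list its nodes:

```
root@jump:/icn# kubectl --kubeconfig=icn-admin.conf get nodes -o wide
```
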
At this point you may proceed with the [Installation
guide](installation-guide.md) to learn more about the hardware and
software configuration in a physical environment, or jump directly to
the [Deployment](installation-guide.md#Deployment) sub-section to
examine the cluster creation process in more detail.

<a id="org48e2dc9"></a>