To get a taste of ICN, this guide will walk through creating a simple
two-machine cluster using virtual machines.
A total of 3 virtual machines will be used, each with 8 CPUs, 24 GB
RAM, and a 30 GB disk. So grab a host machine, [install Vagrant with the
libvirt provider](https://github.com/vagrant-libvirt/vagrant-libvirt#installation), and let's get started.
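If you want to verify that the provider is in place before continuing
(an optional sanity check), Vagrant can list its installed plugins;
`vagrant-libvirt` should appear in the output.

```
$ vagrant plugin list
```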
In short, the entire process looks like this:

```
$ vagrant up --no-parallel
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
root@jump:/icn# make jump_server
root@jump:/icn# make vm_cluster
```
## Create the virtual environment
```
$ vagrant up --no-parallel
```

Now let's take a closer look at what was created.
```
$ virsh -c qemu:///system list
 Id     Name           State
----------------------------------------------------
 1208   vm-machine-1   running
 1209   vm-machine-2   running
```
```
$ virsh -c qemu:///system net-list
 Name              State    Autostart   Persistent
----------------------------------------------------------
 vm-baremetal      active   yes         yes
 vm-provisioning   active   no          yes
```
```
$ vbmc list
+--------------+---------+---------+------+
| Domain name  | Status  | Address | Port |
+--------------+---------+---------+------+
| vm-machine-1 | running | ::      | 6230 |
| vm-machine-2 | running | ::      | 6231 |
+--------------+---------+---------+------+
```
We've created a jump server and the two machines that will form the
cluster. The jump server will be responsible for creating the cluster.
We also created two networks, baremetal and provisioning, and a third
network, the IPMI network, overlaid upon the baremetal network using
[VirtualBMC](https://opendev.org/openstack/virtualbmc) for
issuing IPMI commands to the virtual machines.
It's worth looking at these networks in more detail as they will be
important during configuration of the jump server and cluster.
```
$ virsh -c qemu:///system net-dumpxml vm-baremetal
<network connections='3' ipv6='yes'>
  <name>vm-baremetal</name>
  <uuid>216db810-de49-4122-a284-13fd2e44da4b</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr3' stp='on' delay='0'/>
  <mac address='52:54:00:a3:e7:09'/>
  <ip address='192.168.151.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.151.1' end='192.168.151.254'/>
    </dhcp>
  </ip>
</network>
```
The baremetal network provides outbound network access through the
host and also assigns DHCP addresses in the range `192.168.151.2` to
`192.168.151.254` to the virtual machines (the host itself is
`192.168.151.1`).
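As an optional check, `virsh` can show the leases that have been
handed out on this network; the exact addresses and MAC addresses
will vary from run to run.

```
$ virsh -c qemu:///system net-dhcp-leases vm-baremetal
```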
```
$ virsh -c qemu:///system net-dumpxml vm-provisioning
<network connections='3'>
  <name>vm-provisioning</name>
  <uuid>d06de3cc-b7ca-4b09-a49d-a1458c45e072</uuid>
  <bridge name='vm0' stp='on' delay='0'/>
  <mac address='52:54:00:3e:38:a5'/>
</network>
```
The provisioning network is a private network; only the virtual
machines may communicate over it. Importantly, no DHCP server is
present on this network. The `ironic` component of the jump server will
be managing DHCP requests.
The virtual baseboard management controllers (BMCs) provided by
VirtualBMC are listening at the addresses and ports listed above on
the host. To issue an IPMI command to `vm-machine-1` for example, the
command will be issued to `192.168.151.1:6230`, and VirtualBMC will
translate the IPMI command into libvirt calls.
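For example, assuming `ipmitool` is installed on the host, a power
status query against `vm-machine-1` would look roughly like the
following; the username and password below are placeholders, so
substitute the credentials configured for VirtualBMC in this
environment.

```
# Placeholder credentials: use the VirtualBMC credentials configured for this setup.
$ ipmitool -I lanplus -H 192.168.151.1 -p 6230 -U admin -P password power status
```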
Now let's look at the networks from inside the virtual machines.
```
$ virsh -c qemu:///system dumpxml vm-jump
...
    <interface type='network'>
      <mac address='52:54:00:a8:97:6d'/>
      <source network='vm-baremetal' bridge='virbr3'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:80:3d:4c'/>
      <source network='vm-provisioning' bridge='vm0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='ua-net-1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
...
```
The baremetal network NIC in the jump server is the first NIC present
in the machine and, depending on the device naming scheme in place,
will be called `ens5` or `eth0`. Similarly, the provisioning network
NIC will be `ens6` or `eth1`.
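To see which names are in use on your jump server (the scheme depends
on the guest OS image, so treat this as a quick check rather than a
guarantee), list the interfaces from the host and match the MAC
addresses against the `virsh dumpxml vm-jump` output above.

```
$ vagrant ssh jump -c "ip -br link show"
```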
```
$ virsh -c qemu:///system dumpxml vm-machine-1
...
    <interface type='network'>
      <mac address='52:54:00:c6:75:40'/>
      <source network='vm-provisioning' bridge='vm0'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:20:a3:0a'/>
      <source network='vm-baremetal' bridge='virbr3'/>
      <target dev='vnet4'/>
      <model type='virtio'/>
      <alias name='ua-net-1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
...
```
In contrast to the jump server, the provisioning network NIC is the
first NIC present in the machine and will be named `ens5` or `eth0`,
and the baremetal network NIC will be `ens6` or `eth1`.
The order of NICs is crucial here: the provisioning network NIC must
be the NIC that the machine PXE boots from, and the BIOS used in this
virtual machine is configured to use the first NIC in the machine. A
physical machine will typically provide this as a configuration option
in the BIOS settings.
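If you're curious, the boot configuration of the virtual machines can
be inspected in the libvirt domain XML; a rough check (the full dump
is long, so filter it) is to look at the boot-related elements and
confirm that a network boot entry is present.

```
$ virsh -c qemu:///system dumpxml vm-machine-1 | grep -i boot
```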
## Install the jump server components
Log in to the jump server and become root.

```
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
```
Before telling ICN to start installing the components, it must first
know which NIC is on the IPMI network and which is on the provisioning
network. Recall that in the jump server the IPMI network is overlaid
onto the baremetal network, that the baremetal network NIC is `eth0`,
and that the provisioning network NIC is `eth1`.
Edit `user_config.sh` to the below.

```
export IRONIC_IPMI_INTERFACE="eth0"
export IRONIC_INTERFACE="eth1"
```
Now install the jump server components.

```
root@jump:/icn# make jump_server
```
Let's walk quickly through some of the components installed. The
first, and most fundamental, is that the jump server is now a
single-node Kubernetes cluster.
```
root@jump:/icn# kubectl cluster-info
Kubernetes control plane is running at https://192.168.151.45:6443
CoreDNS is running at https://192.168.151.45:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
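Listing the nodes is another quick confirmation; expect a single node
whose name matches the jump server's hostname (the exact name and
versions will differ per installation).

```
root@jump:/icn# kubectl get nodes
```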
The next is that [Cluster API](https://cluster-api.sigs.k8s.io/) is installed, with the [Metal3](https://github.com/metal3-io/cluster-api-provider-metal3)
infrastructure provider and the Kubeadm bootstrap provider. These
components provide the base for creating clusters with ICN.
```
root@jump:/icn# kubectl get deployments -A
NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
baremetal-operator-system           baremetal-operator-controller-manager           1/1     1            1           96m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           96m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           96m
capi-system                         capi-controller-manager                         1/1     1            1           96m
capm3-system                        capm3-controller-manager                        1/1     1            1           96m
capm3-system                        capm3-ironic                                    1/1     1            1           98m
capm3-system                        ipam-controller-manager                         1/1     1            1           96m
```
A closer look at the above deployments shows two others of interest:
`baremetal-operator-controller-manager` and `capm3-ironic`. These
components are from the [Metal3](https://metal3.io/) project and are dependencies of the
Metal3 infrastructure provider.
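The baremetal-operator manages machines to be provisioned as
`BareMetalHost` custom resources. Once hosts have been registered,
they can be listed as shown below; the `metal3` namespace is an
assumption here, matching the namespace used for the cluster later in
this guide.

```
root@jump:/icn# kubectl -n metal3 get baremetalhosts
```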
Before moving on to the next step, let's take one last look at the
provisioning NIC we set in `user_config.sh`.
```
root@jump:/icn# ip link show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:80:3d:4c brd ff:ff:ff:ff:ff:ff
```
The `master provisioning` portion indicates that this interface is now
attached to the `provisioning` bridge. The `provisioning` bridge was
created during installation and is how the `capm3-ironic` deployment
will communicate with the machines to be provisioned when it is time
to install an operating system.
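You can also look at this from the bridge's side: listing the
interfaces attached to the `provisioning` bridge should include
`eth1` (a quick check with standard iproute2 tooling).

```
root@jump:/icn# ip link show master provisioning
```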
## Create a cluster

```
root@jump:/icn# make vm_cluster
```
Once complete, we'll have a K8s cluster up and running on the machines
created earlier with all of the ICN addons configured and validated.
```
root@jump:/icn# clusterctl -n metal3 describe cluster icn
NAME                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
├─ClusterInfrastructure - Metal3Cluster/icn
├─ControlPlane - KubeadmControlPlane/icn                            True                     81m
│ └─Machine/icn-qhg4r                                               True                     81m
│   └─MachineInfrastructure - Metal3Machine/icn-controlplane-r8g2f
└─Workers
  └─MachineDeployment/icn                                           True                     73m
    └─Machine/icn-6b8dfc7f6f-qvrqv                                  True                     76m
      └─MachineInfrastructure - Metal3Machine/icn-workers-bxf52
```
```
root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
root@jump:/icn# kubectl --kubeconfig=icn-admin.conf cluster-info
Kubernetes control plane is running at https://192.168.151.254:6443
CoreDNS is running at https://192.168.151.254:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
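As a final check, the same kubeconfig can be used to list the nodes of
the new cluster; there should be one control plane node and one worker
(node names are generated and will differ from run to run).

```
root@jump:/icn# kubectl --kubeconfig=icn-admin.conf get nodes
```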
## Next steps

At this point you may proceed with the [Installation
guide](installation-guide.md) to learn more about the hardware and
software configuration in a physical environment, or jump directly to
the [Deployment](installation-guide.md#Deployment) sub-section to
examine the cluster creation process in more detail.