To get a taste of ICN, this guide will walk through creating a simple
two-machine cluster using virtual machines.

A total of 3 virtual machines will be used: each with 8 CPUs, 24 GB
RAM, and 30 GB disk. So grab a host machine, [install Vagrant with the
libvirt provider](https://github.com/vagrant-libvirt/vagrant-libvirt#installation), and let's get started.
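
Before continuing, it's worth confirming that the libvirt provider is
actually available to Vagrant. This is an optional sanity check, not a
required step; listing the installed plugins should show `vagrant-libvirt`:

    $ vagrant plugin list | grep vagrant-libvirt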

Here is the short version; the rest of this guide walks through each
step in more detail.

    $ git clone https://gerrit.akraino.org/r/icn
    $ cd icn
    $ vagrant up --no-parallel
    $ vagrant ssh jump
    vagrant@jump:~$ sudo su
    root@jump:/home/vagrant# cd /icn
    root@jump:/icn# make jump_server
    root@jump:/icn# make vm_cluster

## Create the virtual environment

    $ vagrant up --no-parallel

Now let's take a closer look at what was created.

    $ virsh -c qemu:///system list
     Id     Name           State
    ----------------------------------------------------
     1208   vm-machine-1   running
     1209   vm-machine-2   running

    $ virsh -c qemu:///system net-list
     Name              State    Autostart   Persistent
    ----------------------------------------------------------
     vm-baremetal      active   yes         yes
     vm-provisioning   active   no          yes

    $ vbmc list
    +--------------+---------+---------+------+
    | Domain name  | Status  | Address | Port |
    +--------------+---------+---------+------+
    | vm-machine-1 | running | ::      | 6230 |
    | vm-machine-2 | running | ::      | 6231 |
    +--------------+---------+---------+------+

We've created a jump server and the two machines that will form the
cluster. The jump server will be responsible for creating the cluster.

We also created two networks, baremetal and provisioning, and a third
network overlaid upon the baremetal network using [VirtualBMC](https://opendev.org/openstack/virtualbmc) for
issuing IPMI commands to the virtual machines.

It's worth looking at these networks in more detail as they will be
important during configuration of the jump server and cluster.

    $ virsh -c qemu:///system net-dumpxml vm-baremetal
    <network connections='3' ipv6='yes'>
      <name>vm-baremetal</name>
      <uuid>216db810-de49-4122-a284-13fd2e44da4b</uuid>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr3' stp='on' delay='0'/>
      <mac address='52:54:00:a3:e7:09'/>
      <ip address='192.168.151.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.151.1' end='192.168.151.254'/>
        </dhcp>
      </ip>
    </network>

The baremetal network provides outbound network access through the
host and also assigns DHCP addresses in the range `192.168.151.2` to
`192.168.151.254` to the virtual machines (the host itself is
`192.168.151.1`).
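
If you're curious which addresses have actually been handed out,
libvirt can report the current DHCP leases on the baremetal network
(the output will vary with your environment):

    $ virsh -c qemu:///system net-dhcp-leases vm-baremetal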

    $ virsh -c qemu:///system net-dumpxml vm-provisioning
    <network connections='3'>
      <name>vm-provisioning</name>
      <uuid>d06de3cc-b7ca-4b09-a49d-a1458c45e072</uuid>
      <bridge name='vm0' stp='on' delay='0'/>
      <mac address='52:54:00:3e:38:a5'/>
    </network>

The provisioning network is a private network; only the virtual
machines may communicate over it. Importantly, no DHCP server is
present on this network. The `ironic` component of the jump server
will be managing DHCP requests.

The virtual baseboard management controllers (BMCs) provided by
VirtualBMC are listening at the addresses and ports listed above on
the host. To issue an IPMI command to `vm-machine-1`, for example, the
command is issued to `192.168.151.1:6230`, and VirtualBMC translates
the IPMI command into libvirt calls.
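
As an illustration, a power status query with `ipmitool` from the host
might look like the sketch below. The username and password here are
placeholders; the actual BMC credentials are defined by the Vagrant
setup, so substitute your own values.

    # Hypothetical credentials: replace admin/password with the values from your setup.
    $ ipmitool -I lanplus -H 192.168.151.1 -p 6230 -U admin -P password power status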

Now let's look at the networks from inside the virtual machines.

    $ virsh -c qemu:///system dumpxml vm-jump
        <interface type='network'>
          <mac address='52:54:00:a8:97:6d'/>
          <source network='vm-baremetal' bridge='virbr3'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='ua-net-0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:80:3d:4c'/>
          <source network='vm-provisioning' bridge='vm0'/>
          <target dev='vnet1'/>
          <model type='virtio'/>
          <alias name='ua-net-1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </interface>

The baremetal network NIC in the jump server is the first NIC present
in the machine and depending on the device naming scheme in place will
be called `ens5` or `eth0`. Similarly, the provisioning network NIC
will be `ens6` or `eth1`.

    $ virsh -c qemu:///system dumpxml vm-machine-1
        <interface type='network'>
          <mac address='52:54:00:c6:75:40'/>
          <source network='vm-provisioning' bridge='vm0'/>
          <target dev='vnet2'/>
          <model type='virtio'/>
          <alias name='ua-net-0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:20:a3:0a'/>
          <source network='vm-baremetal' bridge='virbr3'/>
          <target dev='vnet4'/>
          <model type='virtio'/>
          <alias name='ua-net-1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </interface>

In contrast to the jump server, the provisioning network NIC is the
first NIC present in the machine and will be named `ens5` or `eth0`,
and the baremetal network NIC will be `ens6` or `eth1`.

The order of NICs is crucial here: the provisioning network NIC must
be the NIC that the machine PXE boots from, and the BIOS used in this
virtual machine is configured to use the first NIC in the machine. A
physical machine will typically provide this as a configuration option
in the BIOS settings.
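
If you want to see how this is expressed for the virtual machines, the
boot configuration is visible in the domain XML; something along these
lines should show the boot entries (exact output depends on the
Vagrantfile and libvirt version):

    $ virsh -c qemu:///system dumpxml vm-machine-1 | grep -A 5 '<os>'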

## Install the jump server components

    $ vagrant ssh jump
    vagrant@jump:~$ sudo su
    root@jump:/home/vagrant# cd /icn

Before telling ICN to start installing the components, it must first
know which is the IPMI network NIC and which is the provisioning
network NIC. Recall that in the jump server the IPMI network is
overlaid onto the baremetal network, that the baremetal network NIC is
`eth0`, and that the provisioning network NIC is `eth1`.
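
If you'd like to double-check this from inside the jump server before
editing the configuration, comparing MAC addresses against the domain
XML shown earlier works well: `eth0` should carry the baremetal
network MAC and `eth1` the provisioning network MAC.

    root@jump:/icn# ip -br link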

Edit `user_config.sh` as shown below.

    export IRONIC_INTERFACE="eth1"

Now install the jump server components.

    root@jump:/icn# make jump_server

Let's walk quickly through some of the components installed. The
first, and most fundamental, is that the jump server is now a
single-node Kubernetes cluster.

    root@jump:/icn# kubectl cluster-info
    Kubernetes control plane is running at https://192.168.151.45:6443
    CoreDNS is running at https://192.168.151.45:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
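
A quick way to see that this is a single node acting as the control
plane is to list the nodes (names, versions, and addresses will differ
in your environment):

    root@jump:/icn# kubectl get nodes -o wide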

The next is that [Cluster API](https://cluster-api.sigs.k8s.io/) is installed, with the [Metal3](https://github.com/metal3-io/cluster-api-provider-metal3)
infrastructure provider and Kubeadm bootstrap provider. These
components provide the base for creating clusters with ICN.
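
If you'd like to confirm that the providers registered their resource
types, one rough check is to look for the Cluster API and Metal3 CRDs
(the exact list varies by version):

    root@jump:/icn# kubectl get crds | grep -e cluster.x-k8s.io -e metal3.io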

    root@jump:/icn# kubectl get deployments -A
    NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
    baremetal-operator-system           baremetal-operator-controller-manager           1/1     1            1           96m
    capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           96m
    capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           96m
    capi-system                         capi-controller-manager                         1/1     1            1           96m
    capm3-system                        capm3-controller-manager                        1/1     1            1           96m
    capm3-system                        capm3-ironic                                    1/1     1            1           98m
    capm3-system                        ipam-controller-manager                         1/1     1            1           96m

A closer look at the above deployments shows two others of interest:
`baremetal-operator-controller-manager` and `capm3-ironic`. These
components are from the [Metal3](https://metal3.io/) project and are dependencies of the
Metal3 infrastructure provider.
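
A practical consequence is that machines to be provisioned are
represented as `BareMetalHost` resources managed by these components.
Once the machines have been registered (which happens as part of
cluster creation below), a command along these lines would list them;
the `metal3` namespace is an assumption here, matching what ICN uses
later in this guide:

    root@jump:/icn# kubectl -n metal3 get baremetalhosts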

Before moving on to the next step, let's take one last look at the
provisioning NIC we set in `user_config.sh`.

    root@jump:/icn# ip link show dev eth1
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
        link/ether 52:54:00:80:3d:4c brd ff:ff:ff:ff:ff:ff

The `master provisioning` portion indicates that this interface is now
attached to the `provisioning` bridge. The `provisioning` bridge was
created during installation and is how the `capm3-ironic` deployment
will communicate with the machines to be provisioned when it is time
to install an operating system.
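
To see everything currently attached to that bridge, the membership
can be listed with iproute2 (the exact set of interfaces depends on
what has been created so far):

    root@jump:/icn# ip link show master provisioning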

## Create a cluster

    root@jump:/icn# make vm_cluster

Once complete, we'll have a K8s cluster up and running on the machines
created earlier with all of the ICN addons configured and validated.

    root@jump:/icn# clusterctl -n metal3 describe cluster icn
    NAME                                                                 READY  SEVERITY  REASON  SINCE  MESSAGE
    ├─ClusterInfrastructure - Metal3Cluster/icn
    ├─ControlPlane - KubeadmControlPlane/icn                             True                     81m
    │ └─Machine/icn-qhg4r                                                True                     81m
    │   └─MachineInfrastructure - Metal3Machine/icn-controlplane-r8g2f
    └─MachineDeployment/icn                                              True                     73m
      └─Machine/icn-6b8dfc7f6f-qvrqv                                     True                     76m
        └─MachineInfrastructure - Metal3Machine/icn-workers-bxf52

    root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
    root@jump:/icn# kubectl --kubeconfig=icn-admin.conf cluster-info
    Kubernetes control plane is running at https://192.168.151.254:6443
    CoreDNS is running at https://192.168.151.254:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
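
With the retrieved kubeconfig you can poke at the new cluster
directly; for example, listing its nodes should show one control plane
and one worker machine (the names will differ):

    root@jump:/icn# kubectl --kubeconfig=icn-admin.conf get nodes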

At this point you may proceed with the [Installation
guide](installation-guide.md) to learn more about the hardware and
software configuration in a physical environment, or jump directly to
the [Deployment](installation-guide.md#Deployment) sub-section to
examine the cluster creation process in more detail.