To get a taste of ICN, this guide will walk through creating a simple
two-machine cluster using virtual machines.

A total of 3 virtual machines will be used: each with 8 CPUs, 24 GB
RAM, and 30 GB disk. So grab a host machine, [install Vagrant with the
libvirt provider](https://github.com/vagrant-libvirt/vagrant-libvirt#installation),
and let's get started.

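If you are unsure whether the libvirt provider is available, one way
to check from the host is shown below; installing it as a Vagrant
plugin is only one of the options described on the linked installation
page:

```
$ vagrant plugin install vagrant-libvirt
$ vagrant plugin list
```

With the provider in place, the entire walkthrough condenses to the
following commands, each of which is covered in more detail in the
sections below.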
```
$ git clone https://gerrit.akraino.org/r/icn
$ cd icn
$ vagrant up --no-parallel
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
root@jump:/icn# make jump_server
root@jump:/icn# make vm_cluster
```

> NOTE: `vagrant destroy` may fail due to
> https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1371. The
> workaround is to destroy the machines manually:
>
> ```
> $ virsh -c qemu:///system destroy vm-machine-1
> $ virsh -c qemu:///system undefine --nvram --remove-all-storage vm-machine-1
> $ virsh -c qemu:///system destroy vm-machine-2
> $ virsh -c qemu:///system undefine --nvram --remove-all-storage vm-machine-2
> ```
## Create the virtual environment
```
$ vagrant up --no-parallel
```
Now let's take a closer look at what was created.
```
$ virsh -c qemu:///system list --uuid --name
0582a3ab-2516-47fe-8a77-2a88c411b550 vm-jump
ab389bad-2f4a-4eba-b49e-0d649ff3d237 vm-machine-1
8d747997-dcd1-42ca-9e25-b3eedbe326aa vm-machine-2

$ virsh -c qemu:///system net-list
 Name              State    Autostart   Persistent
----------------------------------------------------------
 vagrant-libvirt   active   no          yes
 vm-baremetal      active   no          yes
 vm-provisioning   active   no          yes

$ curl --insecure -u admin:password https://192.168.121.1:8000/redfish/v1/Managers
{
    "@odata.type": "#ManagerCollection.ManagerCollection",
    "Name": "Manager Collection",
    "Members@odata.count": 3,
    "Members": [
        {
            "@odata.id": "/redfish/v1/Managers/0582a3ab-2516-47fe-8a77-2a88c411b550"
        },
        {
            "@odata.id": "/redfish/v1/Managers/8d747997-dcd1-42ca-9e25-b3eedbe326aa"
        },
        {
            "@odata.id": "/redfish/v1/Managers/ab389bad-2f4a-4eba-b49e-0d649ff3d237"
        }
    ],
    "@odata.context": "/redfish/v1/$metadata#ManagerCollection.ManagerCollection",
    "@odata.id": "/redfish/v1/Managers",
    "@Redfish.Copyright": "Copyright 2014-2017 Distributed Management Task Force, Inc. (DMTF). For the full DMTF copyright policy, see http://www.dmtf.org/about/policies/copyright."
}
```
We've created a jump server and the two machines that will form the
cluster. The jump server will be responsible for provisioning the two
machines and creating the cluster on them.

We also created two networks, baremetal and provisioning. The [Virtual
Redfish BMC](https://docs.openstack.org/sushy-tools/latest/user/dynamic-emulator.html)
used for issuing Redfish requests to the virtual machines is overlaid
on the vagrant-libvirt network.

It's worth looking at these networks in more detail as they will be
important during the configuration of the jump server and cluster.
```
$ virsh -c qemu:///system net-dumpxml vm-baremetal
<network connections='3'>
  <name>vm-baremetal</name>
  <uuid>216db810-de49-4122-a284-13fd2e44da4b</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='vm0' stp='on' delay='0'/>
  <mac address='52:54:00:a3:e7:09'/>
  <ip address='192.168.151.1' netmask='255.255.255.0'>
  </ip>
</network>
```
The baremetal network provides outbound network access through the
host. No DHCP server is present on this network. Address assignment to
the virtual machines is done using the
[Metal3 IPAM](https://metal3.io/blog/2020/07/06/IP_address_manager.html)
while the host itself is `192.168.151.1`.

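Once the jump server components are installed and the cluster is
created later in this walkthrough, those address assignments become
visible as Kubernetes resources on the jump server. A sketch, assuming
the `metal3` namespace used by the other commands in this guide:

```
root@jump:/icn# kubectl -n metal3 get ippools.ipam.metal3.io   # namespace is an assumption
```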
```
$ virsh -c qemu:///system net-dumpxml vm-provisioning
<network connections='3'>
  <name>vm-provisioning</name>
  <uuid>d06de3cc-b7ca-4b09-a49d-a1458c45e072</uuid>
  <bridge name='vm1' stp='on' delay='0'/>
  <mac address='52:54:00:3e:38:a5'/>
</network>
```
The provisioning network is a private network; only the virtual
machines may communicate over it. Importantly, the network itself
provides no DHCP server: the `ironic` component of the jump server
will manage DHCP requests on this network.

The virtual baseboard management controller provided by the Virtual
Redfish BMC is listening at the address and port listed in the curl
command above. To issue a Redfish request to `vm-machine-1` for
example, the request will be issued to `192.168.121.1:8000`, and the
Virtual Redfish BMC will translate the request into libvirt calls.

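For example, the manager resource backing `vm-machine-1` can be
fetched directly, using its UUID from the `virsh list` output and the
same credentials as the earlier curl command:

```
$ curl --insecure -u admin:password \
    https://192.168.121.1:8000/redfish/v1/Managers/ab389bad-2f4a-4eba-b49e-0d649ff3d237
```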
Now let's look at the networks from inside the virtual machines.
```
$ virsh -c qemu:///system dumpxml vm-jump
...
    <interface type='network'>
      <mac address='52:54:00:fc:a8:01'/>
      <source network='vagrant-libvirt' bridge='virbr1'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:a8:97:6d'/>
      <source network='vm-baremetal' bridge='vm0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='ua-net-1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:80:3d:4c'/>
      <source network='vm-provisioning' bridge='vm1'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='ua-net-2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </interface>
...
```
The baremetal network NIC in the jump server is the second NIC present
in the machine and, depending on the device naming scheme in place,
will be called `ens6` or `eth1`. Similarly, the provisioning network
NIC will be called `ens7` or `eth2`.

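To cross-check which names were actually chosen, you can run a command
in the jump server over SSH from the host (from inside the cloned `icn`
directory); the names you see depend on the guest's interface-naming
scheme:

```
$ vagrant ssh jump -c 'ip -o link'
```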
```
$ virsh -c qemu:///system dumpxml vm-machine-1
...
    <interface type='network'>
      <mac address='52:54:00:c6:75:40'/>
      <source network='vm-provisioning' bridge='vm1'/>
      <target dev='vnet3'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:20:a3:0a'/>
      <source network='vm-baremetal' bridge='vm0'/>
      <target dev='vnet4'/>
      <model type='virtio'/>
      <alias name='ua-net-1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </interface>
...
```
In contrast to the jump server, the provisioning network NIC is the
first NIC present in the machine and will be named `ens5` or `eth0`;
the baremetal network NIC will be `ens6` or `eth1`.

The order of the NICs is crucial here: the provisioning network NIC
must be the NIC that the machine PXE boots from, and the BIOS used in
this virtual machine is configured to boot from the first NIC in the
machine. A physical machine will typically provide this as a
configuration option in the BIOS settings.

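One way to confirm the boot configuration from the host is to search
the domain XML; depending on how the machine was defined, network boot
appears either as a `<boot dev='network'/>` element under `<os>` or as
a per-device `<boot order='...'/>` element on the provisioning
interface:

```
$ virsh -c qemu:///system dumpxml vm-machine-1 | grep -i '<boot'
```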
## Install the jump server components
```
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
```
Before telling ICN to start installing the components, it must first
be told which NIC is attached to the provisioning network. Recall that
in the jump server the provisioning network NIC is `eth2`.

Edit `user_config.sh` as shown below.

```
export IRONIC_INTERFACE="eth2"
```
Now install the jump server components.

```
root@jump:/icn# make jump_server
```
Let's walk quickly through some of the components installed. The
first, and most fundamental, is that the jump server is now a
single-node Kubernetes cluster.
```
root@jump:/icn# kubectl cluster-info
Kubernetes control plane is running at https://192.168.121.126:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
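Because this is a single-node cluster, the only node is the jump
server itself. A quick way to confirm this (the node name, addresses,
and versions will reflect your environment):

```
root@jump:/icn# kubectl get nodes -o wide
```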
The next is that [Cluster API](https://cluster-api.sigs.k8s.io/) is
installed, with the
[Metal3](https://github.com/metal3-io/cluster-api-provider-metal3)
infrastructure provider and Kubeadm bootstrap provider. These
components provide the base for creating clusters with ICN.
```
root@jump:/icn# kubectl get deployments -A
NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
baremetal-operator-system           baremetal-operator-controller-manager           1/1     1            1           96m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           96m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           96m
capi-system                         capi-controller-manager                         1/1     1            1           96m
capm3-system                        capm3-controller-manager                        1/1     1            1           96m
capm3-system                        capm3-ironic                                    1/1     1            1           98m
capm3-system                        ipam-controller-manager                         1/1     1            1           96m
```
A closer look at the above deployments shows two others of interest:
`baremetal-operator-controller-manager` and `capm3-ironic`. These
components are from the [Metal3](https://metal3.io/) project and are
dependencies of the Metal3 infrastructure provider.

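One way to see what the Bare Metal Operator contributes is to list the
custom resource types in the `metal3.io` API group; the exact set
depends on the installed version:

```
root@jump:/icn# kubectl api-resources --api-group=metal3.io
```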
Before moving on to the next step, let's take one last look at the
provisioning NIC we set in `user_config.sh`.
```
root@jump:/icn# ip link show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:80:3d:4c brd ff:ff:ff:ff:ff:ff
```
The `master provisioning` portion indicates that this interface is now
attached to the `provisioning` bridge. The `provisioning` bridge was
created during installation and is how the `capm3-ironic` deployment
will communicate with the machines to be provisioned when it is time
to install an operating system.

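To list everything currently attached to that bridge (the set of
interfaces may grow as Ironic sets up its provisioning plumbing), ask
for all links whose master is `provisioning`:

```
root@jump:/icn# ip link show master provisioning
```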
## Create the cluster

```
root@jump:/icn# make vm_cluster
```
Once complete, we'll have a K8s cluster up and running on the machines
created earlier, with all of the ICN addons configured and validated.

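While `make vm_cluster` runs, one way to follow progress is to watch
the Metal3 `BareMetalHost` resources move through their provisioning
states. This assumes, as the `clusterctl` command below does, that the
machines are registered in the `metal3` namespace:

```
root@jump:/icn# kubectl -n metal3 get baremetalhosts -w   # -w watches for state changes
```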
```
root@jump:/icn# clusterctl -n metal3 describe cluster icn
NAME                                                                 READY  SEVERITY  REASON  SINCE  MESSAGE
├─ClusterInfrastructure - Metal3Cluster/icn
├─ControlPlane - KubeadmControlPlane/icn                             True                     81m
│ └─Machine/icn-qhg4r                                                True                     81m
│   └─MachineInfrastructure - Metal3Machine/icn-controlplane-r8g2f
└─MachineDeployment/icn                                              True                     73m
  └─Machine/icn-6b8dfc7f6f-qvrqv                                     True                     76m
    └─MachineInfrastructure - Metal3Machine/icn-workers-bxf52
```

```
root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
root@jump:/icn# kubectl --kubeconfig=icn-admin.conf cluster-info
Kubernetes control plane is running at https://192.168.151.254:6443
CoreDNS is running at https://192.168.151.254:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
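As a final check, the same kubeconfig can be used to list the nodes of
the new cluster; the node addresses will reflect the Metal3 IPAM
assignments on the baremetal network:

```
root@jump:/icn# kubectl --kubeconfig=icn-admin.conf get nodes -o wide
```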
## Next steps

At this point you may proceed with the [Installation
guide](installation-guide.md) to learn more about the hardware and
software configuration in a physical environment, or jump directly to
the [Deployment](installation-guide.md#Deployment) sub-section to
examine the cluster creation process in more detail.

<a id="org48e2dc9"></a>