To get a taste of ICN, this guide will walk through creating a simple two-machine cluster using virtual machines.

A total of 3 virtual machines will be used, each with 8 CPUs, 24 GB of RAM, and 30 GB of disk. So grab a host machine, [install Vagrant with the libvirt provider](https://github.com/vagrant-libvirt/vagrant-libvirt#installation), and let's get started.

    $ git clone https://gerrit.akraino.org/r/icn
    $ cd icn
    $ vagrant up --no-parallel
    $ vagrant ssh jump
    vagrant@jump:~$ sudo su
    root@jump:/home/vagrant# cd /icn
    root@jump:/icn# make jump_server
    root@jump:/icn# make vm_cluster

> NOTE: `vagrant destroy` may fail due to
> https://github.com/vagrant-libvirt/vagrant-libvirt/issues/1371. The
> workaround is to destroy the machines manually:
>
>     $ virsh -c qemu:///system destroy vm-machine-1
>     $ virsh -c qemu:///system undefine --nvram --remove-all-storage vm-machine-1
>     $ virsh -c qemu:///system destroy vm-machine-2
>     $ virsh -c qemu:///system undefine --nvram --remove-all-storage vm-machine-2

## Create the virtual environment

    $ vagrant up --no-parallel

Now let's take a closer look at what was created.

    $ virsh -c qemu:///system list --uuid --name
    0582a3ab-2516-47fe-8a77-2a88c411b550 vm-jump
    ab389bad-2f4a-4eba-b49e-0d649ff3d237 vm-machine-1
    8d747997-dcd1-42ca-9e25-b3eedbe326aa vm-machine-2

    $ virsh -c qemu:///system net-list
     Name              State    Autostart   Persistent
    ----------------------------------------------------------
     vm-baremetal      active   yes         yes
     vm-provisioning   active   no          yes

    $ curl --insecure -u admin:password https://192.168.121.1:8000/redfish/v1/Managers
    {
        "@odata.type": "#ManagerCollection.ManagerCollection",
        "Name": "Manager Collection",
        "Members@odata.count": 3,
        "Members": [
            {
                "@odata.id": "/redfish/v1/Managers/0582a3ab-2516-47fe-8a77-2a88c411b550"
            },
            {
                "@odata.id": "/redfish/v1/Managers/8d747997-dcd1-42ca-9e25-b3eedbe326aa"
            },
            {
                "@odata.id": "/redfish/v1/Managers/ab389bad-2f4a-4eba-b49e-0d649ff3d237"
            }
        ],
        "@odata.context": "/redfish/v1/$metadata#ManagerCollection.ManagerCollection",
        "@odata.id": "/redfish/v1/Managers",
        "@Redfish.Copyright": "Copyright 2014-2017 Distributed Management Task Force, Inc. (DMTF). For the full DMTF copyright policy, see http://www.dmtf.org/about/policies/copyright."
    }

We've created a jump server and the two machines that will form the cluster. The jump server will be responsible for creating the cluster.

We also created two networks, baremetal and provisioning, and a third network overlaid upon the baremetal network using [Virtual Redfish BMC](https://docs.openstack.org/sushy-tools/latest/user/dynamic-emulator.html) for issuing Redfish requests to the virtual machines.

It's worth looking at these networks in more detail, as they will be important during configuration of the jump server and cluster.

    $ virsh -c qemu:///system net-dumpxml vm-baremetal
    <network connections='3' ipv6='yes'>
      <name>vm-baremetal</name>
      <uuid>216db810-de49-4122-a284-13fd2e44da4b</uuid>
      <forward mode='nat'>
        <nat>
          <port start='1024' end='65535'/>
        </nat>
      </forward>
      <bridge name='virbr3' stp='on' delay='0'/>
      <mac address='52:54:00:a3:e7:09'/>
      <ip address='192.168.151.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.151.1' end='192.168.151.254'/>
        </dhcp>
      </ip>
    </network>

The baremetal network provides outbound network access through the host and also assigns DHCP addresses in the range `192.168.151.2` to `192.168.151.254` to the virtual machines (the host itself is `192.168.151.1`).

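As a quick, optional check, the DHCP leases handed out on this network can be queried from the host once the machines are up:

    $ virsh -c qemu:///system net-dhcp-leases vm-baremetal
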
    $ virsh -c qemu:///system net-dumpxml vm-provisioning
    <network connections='3'>
      <name>vm-provisioning</name>
      <uuid>d06de3cc-b7ca-4b09-a49d-a1458c45e072</uuid>
      <bridge name='vm0' stp='on' delay='0'/>
      <mac address='52:54:00:3e:38:a5'/>
    </network>

The provisioning network is a private network: only the virtual machines may communicate over it. Importantly, no DHCP server is present on this network; the `ironic` component of the jump server will manage DHCP requests instead.

The virtual baseboard management controller provided by the Virtual Redfish BMC is listening at the address and port shown in the curl command above. To issue a Redfish request to `vm-machine-1`, for example, the request is sent to `192.168.121.1:8000` and the Virtual Redfish BMC translates it into libvirt calls.

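For example, here is a sketch of reading the emulated System resource for `vm-machine-1`, assuming the default `admin:password` credentials used above and that the emulator keys resources by the libvirt domain UUID:

    $ curl --insecure -u admin:password \
        https://192.168.121.1:8000/redfish/v1/Systems/ab389bad-2f4a-4eba-b49e-0d649ff3d237

The response should include, among other things, the machine's power state, which is what Ironic will manipulate through the BMC when it provisions the machine.
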
Now let's look at the networks from inside the virtual machines.

    $ virsh -c qemu:///system dumpxml vm-jump
    ...
        <interface type='network'>
          <mac address='52:54:00:a8:97:6d'/>
          <source network='vm-baremetal' bridge='virbr3'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='ua-net-0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:80:3d:4c'/>
          <source network='vm-provisioning' bridge='vm0'/>
          <target dev='vnet1'/>
          <model type='virtio'/>
          <alias name='ua-net-1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </interface>
    ...

The baremetal network NIC in the jump server is the first NIC present in the machine and, depending on the device naming scheme in place, will be called `ens5` or `eth0`. Similarly, the provisioning network NIC will be `ens6` or `eth1`.

    $ virsh -c qemu:///system dumpxml vm-machine-1
    ...
        <interface type='network'>
          <mac address='52:54:00:c6:75:40'/>
          <source network='vm-provisioning' bridge='vm0'/>
          <target dev='vnet2'/>
          <model type='virtio'/>
          <alias name='ua-net-0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </interface>
        <interface type='network'>
          <mac address='52:54:00:20:a3:0a'/>
          <source network='vm-baremetal' bridge='virbr3'/>
          <target dev='vnet4'/>
          <model type='virtio'/>
          <alias name='ua-net-1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </interface>
    ...

In contrast to the jump server, the provisioning network NIC is the first NIC present in the machine, so it will be named `ens5` or `eth0`, and the baremetal network NIC will be `ens6` or `eth1`.

The order of NICs is crucial here: the provisioning network NIC must be the NIC that the machine PXE boots from, and the BIOS used in this virtual machine is configured to use the first NIC in the machine. A physical machine will typically provide this as a configuration option in the BIOS settings.

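A quick way to see how this is expressed for the virtual machines is to look for the boot configuration in the domain XML; depending on how the domain was defined, it may appear under `<os>` or as per-device `<boot order='...'/>` elements:

    $ virsh -c qemu:///system dumpxml vm-machine-1 | grep -i boot
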
## Install the jump server components

    $ vagrant ssh jump
    vagrant@jump:~$ sudo su
    root@jump:/home/vagrant# cd /icn

Before telling ICN to start installing the components, it must first know which NIC is attached to the provisioning network. Recall that in the jump server the provisioning network NIC is `eth1`.

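If you ever need to confirm this, one approach is to match the interfaces on the jump server against the MAC address of its provisioning-network NIC from the domain XML above (`52:54:00:80:3d:4c`); the interface that reports this MAC is the value to use for `IRONIC_INTERFACE`:

    root@jump:/icn# ip -o link | grep -i '52:54:00:80:3d:4c'
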
Edit `user_config.sh` as shown below.

    export IRONIC_INTERFACE="eth1"

Now install the jump server components.

    root@jump:/icn# make jump_server

Let's walk quickly through some of the components installed. The first, and most fundamental, is that the jump server is now a single-node Kubernetes cluster.

    root@jump:/icn# kubectl cluster-info
    Kubernetes control plane is running at https://192.168.151.45:6443

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

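Listing the nodes shows the same thing from another angle; the single node listed should be the jump server itself:

    root@jump:/icn# kubectl get nodes -o wide
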
Next, [Cluster API](https://cluster-api.sigs.k8s.io/) is installed, along with the [Metal3](https://github.com/metal3-io/cluster-api-provider-metal3) infrastructure provider and the Kubeadm bootstrap provider. These components provide the base for creating clusters with ICN.

    root@jump:/icn# kubectl get deployments -A
    NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
    baremetal-operator-system           baremetal-operator-controller-manager           1/1     1            1           96m
    capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           96m
    capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           96m
    capi-system                         capi-controller-manager                         1/1     1            1           96m
    capm3-system                        capm3-controller-manager                        1/1     1            1           96m
    capm3-system                        capm3-ironic                                    1/1     1            1           98m
    capm3-system                        ipam-controller-manager                         1/1     1            1           96m

A closer look at the above deployments shows two others of interest: `baremetal-operator-controller-manager` and `capm3-ironic`. These components are from the [Metal3](https://metal3.io/) project and are dependencies of the Metal3 infrastructure provider.

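One way to see what the Bare Metal Operator brings with it is to list the `metal3.io` custom resource definitions now present in the cluster, most notably `baremetalhosts.metal3.io`, which represents the machines to be provisioned:

    root@jump:/icn# kubectl get crds | grep metal3.io
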
Before moving on to the next step, let's take one last look at the provisioning NIC we set in `user_config.sh`.

    root@jump:/icn# ip link show dev eth1
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
        link/ether 52:54:00:80:3d:4c brd ff:ff:ff:ff:ff:ff

The `master provisioning` portion indicates that this interface is now attached to the `provisioning` bridge. The `provisioning` bridge was created during installation and is how the `capm3-ironic` deployment will communicate with the machines to be provisioned when it is time to install an operating system.

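To see which interfaces are attached to that bridge, an optional check is:

    root@jump:/icn# ip link show master provisioning
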
## Create a cluster

    root@jump:/icn# make vm_cluster

Once complete, we'll have a K8s cluster up and running on the machines created earlier, with all of the ICN addons configured and validated.

    root@jump:/icn# clusterctl -n metal3 describe cluster icn
    NAME                                                                  READY  SEVERITY  REASON  SINCE  MESSAGE
    ├─ClusterInfrastructure - Metal3Cluster/icn
    ├─ControlPlane - KubeadmControlPlane/icn                              True                     81m
    │ └─Machine/icn-qhg4r                                                 True                     81m
    │   └─MachineInfrastructure - Metal3Machine/icn-controlplane-r8g2f
    └─MachineDeployment/icn                                               True                     73m
      └─Machine/icn-6b8dfc7f6f-qvrqv                                      True                     76m
        └─MachineInfrastructure - Metal3Machine/icn-workers-bxf52

    root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
    root@jump:/icn# kubectl --kubeconfig=icn-admin.conf cluster-info
    Kubernetes control plane is running at https://192.168.151.254:6443
    CoreDNS is running at https://192.168.151.254:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

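As a final check, the nodes of the new cluster can be listed with the same kubeconfig; there should be one control plane node and one worker, corresponding to the two machines created at the start:

    root@jump:/icn# kubectl --kubeconfig=icn-admin.conf get nodes -o wide
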
At this point you may proceed with the [Installation guide](installation-guide.md) to learn more about the hardware and software configuration in a physical environment, or jump directly to the [Deployment](installation-guide.md#Deployment) sub-section to examine the cluster creation process in more detail.