Due to the almost limitless number of possible hardware
configurations, this installation guide has chosen a concrete
configuration to use in the examples that follow.

> NOTE: The example configuration's BMC does not support Redfish
> virtual media, and therefore IPMI is used instead. When supported
> by the BMC, it is recommended to use the more secure Redfish virtual
> media option as shown in the [Quick start guide](quick-start.md).

The configuration contains the following three machines.

<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<thead>
<tr>
<th scope="col">Hostname</th>
<th scope="col">CPU Model</th>
<th scope="col">Memory</th>
<th scope="col">Storage</th>
<th scope="col">IPMI: IP/MAC, U/P</th>
<th scope="col">1GbE: NIC#, IP, MAC, VLAN, Network</th>
<th scope="col">10GbE: NIC#, IP, MAC, VLAN, Network</th>
</tr>
</thead>
<tbody>
<tr>
<td>pod11-node5</td>
<td>2xE5-2699</td>
<td>64GB</td>
<td>3TB (SATA)<br/>180GB (SSD)</td>
<td>IF0: 10.10.110.15 00:1e:67:fc:ff:18<br/>U/P: root/root</td>
<td>IF0: 10.10.110.25 00:1e:67:fc:ff:16 VLAN 110<br/>IF1: 172.22.0.1 00:1e:67:fc:ff:17 VLAN 111</td>
<td></td>
</tr>
<tr>
<td>pod11-node3</td>
<td>2xE5-2699</td>
<td>64GB</td>
<td>3TB (SATA)<br/>180GB (SSD)</td>
<td>IF0: 10.10.110.13 00:1e:67:f1:5b:92<br/>U/P: root/root</td>
<td>IF0: 10.10.110.23 00:1e:67:f1:5b:90 VLAN 110<br/>IF1: 172.22.0.0/24 00:1e:67:f1:5b:91 VLAN 111</td>
<td>IF3: 10.10.113.4 00:1e:67:f8:69:81 VLAN 113</td>
</tr>
<tr>
<td>pod11-node2</td>
<td>2xE5-2699</td>
<td>64GB</td>
<td>3TB (SATA)<br/>180GB (SSD)</td>
<td>IF0: 10.10.110.12 00:1e:67:fe:f4:1b<br/>U/P: root/root</td>
<td>IF0: 10.10.110.22 00:1e:67:fe:f4:19 VLAN 110<br/>IF1: 172.22.0.0/24 00:1e:67:fe:f4:1a VLAN 111</td>
<td>IF3: 10.10.113.3 00:1e:67:f8:6a:41 VLAN 113</td>
</tr>
</tbody>
</table>

`pod11-node5` will be the Local Controller or *jump server*. The other
two machines will form a two-node K8s cluster.

Recommended hardware requirements are servers with 64GB memory, 32
CPUs, and SR-IOV network cards.

The machines are connected in the following topology.

![img](./pod11-topology.png "Topology")

There are three networks required by ICN:

- The `baremetal` network, used as the control plane for K8s and for
  general connectivity to the cluster machines.
- The `provisioning` network, used during the infrastructure
  provisioning (OS installation) phase.
- The `IPMI` network, also used during the infrastructure provisioning
  (OS installation) phase.

In this configuration, the IPMI and baremetal interfaces share the
same port and network. Care has been taken to ensure that the IP
addresses do not conflict between the two interfaces.

There is an additional network connected to a high-speed switch:

- The `sriov` network, available for the application data plane.

#### Baseboard Management Controller (BMC) configuration

The BMC IP address should be statically assigned using the machine's
BMC tool or application. Configuration of the pod11-node3 machine is
shown in [Appendix A](#bmc-configuration).

To verify IPMI is configured correctly for each cluster machine, use
`ipmitool`:

```
# ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
```

If the ipmitool output looks like the following, enable the *RMCP+
Cipher Suite3 Configuration* using the machine's BMC tool or application.

```
# ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
Error in open session response message : insufficient resources for session
Error: Unable to establish IPMI v2 / RMCP+ session
```

If the ipmitool output looks like the following, enable *IPMI over lan*
using the machine's BMC tool or application.

```
# ipmitool -I lan -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
Error: Unable to establish LAN session
```

Additional information on ipmitool may be found at [Configuring IPMI
under Linux using ipmitool](https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool).

#### PXE Boot configuration

Each cluster machine must be configured to PXE boot from the interface
attached to the `provisioning` network. Configuration of the
pod11-node3 machine is shown in [Appendix A](#pxe-boot-configuration-1).

One method of verifying PXE boot is configured correctly is to access
the remote console of the machine and observe the boot process. If
the machine is not attempting PXE boot, or it is attempting to PXE boot
on the wrong interface, reboot the machine into the BIOS and select
the correct interface in the boot options.

Additional verification can be done on the jump server using the
tcpdump tool. The following command looks for DHCP or TFTP traffic
arriving on any interface. Replace `any` with the interface attached to
the provisioning network to verify end-to-end connectivity between the
jump server and cluster machine.

```
# tcpdump -i any port 67 or port 68 or port 69
```

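For example, on the jump server used in this guide the provisioning
network is attached to `enp4s0f3` (see [Configure the jump
server](#configure-the-jump-server)), so the end-to-end check becomes:

```
# tcpdump -i enp4s0f3 port 67 or port 68 or port 69
```
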
If tcpdump does not show any traffic, verify that any switches in the
path are configured properly to forward PXE boot requests (i.e. VLAN
configuration).

#### Additional BIOS configuration

Each cluster machine should also be configured to enable any desired
features such as virtualization support. Configuration of the
pod11-node3 machine is shown in [Appendix
A](#additional-bios-configuration-1).

### Configure the jump server

The jump server must be pre-installed with an OS; ICN supports Ubuntu
20.04.

Before provisioning the jump server, first edit `user_config.sh` to
provide the name of the interface connected to the provisioning
network.

```
# ip --brief link show
enp4s0f3         UP             00:1e:67:fc:ff:17 <BROADCAST,MULTICAST,UP,LOWER_UP>
```

In this example the provisioning interface is `enp4s0f3`, so
`user_config.sh` contains:

```
export IRONIC_INTERFACE="enp4s0f3"
```

### Install the jump server components

The jump server components are installed with the Makefile in the ICN
repository; a previous installation can be removed with:

```
make clean_jump_server
```

Before proceeding with the configuration, a basic understanding of the
essential components used in ICN is required.

![img](./sw-diagram.png "Software Overview")

#### Flux

[Flux](https://fluxcd.io/) is a tool for implementing GitOps workflows where infrastructure
and application configuration is committed to source control and
continuously deployed in a K8s cluster.

The important Flux resources ICN uses are:

- GitRepository, which describes where configuration data is committed
- HelmRelease, which describes an installation of a Helm chart
- Kustomization, which describes application of K8s resources
  customized with a kustomization file (see the sketch below)

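As a brief illustration, a Kustomization that applies a site definition
from a GitRepository might look like the following sketch (the name and
path are illustrative; GitRepository and HelmRelease examples appear
later in this guide):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: icn-site-example          # illustrative name
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/site/pod11       # directory in the Git repository to apply
  prune: true
  sourceRef:
    kind: GitRepository
    name: icn-master
```
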
#### Cluster API (CAPI)

[Cluster API](https://cluster-api.sigs.k8s.io/) provides declarative APIs and tooling for provisioning,
upgrading, and operating K8s clusters.

There are a number of important CAPI resources that ICN uses. To ease
deployment, ICN captures the resources into a Helm chart.

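Once a site has been deployed, these resources can be examined on the
jump server with `kubectl`; for example (assuming the `metal3`
namespace used later in this guide):

```
# kubectl -n metal3 get clusters,kubeadmcontrolplanes,machinedeployments,machines
```
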
#### Bare Metal Operator (BMO)

Central to CAPI are the infrastructure and bootstrap providers: the
pluggable components responsible for configuring the OS and the K8s
installation, respectively.

ICN uses the [Cluster API Provider Metal3 for Managed Bare Metal
Hardware](https://github.com/metal3-io/cluster-api-provider-metal3) for infrastructure provisioning, which in turn depends on the
[Metal3 Bare Metal Operator](https://github.com/metal3-io/baremetal-operator) to do the actual work. The Bare Metal
Operator uses [Ironic](https://ironicbaremetal.org/) to execute the low-level provisioning tasks.

Similar to the CAPI resources it uses, ICN captures the Bare Metal
Operator resources into a Helm chart.

> NOTE: To assist in migrating from the nodes.json file and the
> Provisioning resource used by R5 and earlier releases to the site
> YAML described below, a helper script is provided at
> `tools/migration/to_r6.sh`.

#### Define the compute cluster

The first step in provisioning a site with ICN is to define the
desired day-0 configuration of the workload clusters.

A [configuration](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=tree;f=deploy/site/cluster-icn) containing all supported ICN components is available
in the ICN repository. End-users may use this as a base and add or
remove components as desired. Each YAML file in this configuration is
one of the Flux resources described in the overview: GitRepository,
HelmRelease, or Kustomization.

#### Define the site

A site definition is composed of BMO and CAPI resources, describing
machines and clusters respectively. These resources are captured into
the ICN machine and cluster Helm charts. Defining the site is
therefore a matter of specifying the values needed by the charts.

##### Site-specific Considerations

Documentation for the machine chart may be found in its [values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/machine/values.yaml),
and documentation for the cluster chart may be found in its
[values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/cluster/values.yaml). Please review those for more information; what follows
are some site-specific considerations to be aware of.

Note that there are a large number of ways to configure machines and
especially clusters; for large-scale deployments it may make sense to
create custom charts to eliminate duplication in the values provided
to the charts.

###### Control plane endpoint

The K8s control plane endpoint address must be provided to the cluster
chart.

For a highly-available control plane, this would typically be a
load-balanced virtual IP address. Configuration of an external load
balancer is out of scope for this document. The chart also provides
another mechanism to accomplish this, using the VRRP protocol to assign
the control plane endpoint among the selected control plane nodes; see
the `keepalived` dictionary in the cluster chart values.

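As a rough sketch only (the key names and defaults are defined in the
cluster chart's values.yaml and may differ), a VRRP-based endpoint
might be configured with something like:

```yaml
# Hypothetical illustration; consult deploy/cluster/values.yaml for the
# actual schema and key names.
controlPlaneEndpoint: 10.10.110.30   # virtual IP claimed by the active control plane node
keepalived:
  interface: ens785f0                # interface on which keepalived announces the VIP
  routerId: 51                       # VRRP virtual router ID, unique per L2 segment
```
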
For a single control plane node with a static IP address, some care
must be taken to ensure that CAPI chooses the correct machine to
provision as the control plane node. To do this, add a label to the
`machineLabels` dictionary in the machine chart and specify a K8s match
expression in the `controlPlaneHostSelector` dictionary of the cluster
chart. Once done, the IP address of the labeled and selected machine
can be used as the control plane endpoint address.

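For example, assuming the label key `machine` is used (the key itself
is arbitrary) and that `controlPlaneHostSelector` follows the standard
K8s label selector format, the two charts' values might be related as
follows:

```yaml
# machine chart values (excerpt)
machineLabels:
  machine: pod11-node3

# cluster chart values (excerpt)
controlPlaneEndpoint: 10.10.110.23   # baremetal IP address of pod11-node3
controlPlaneHostSelector:
  matchExpressions:
  - key: machine
    operator: In
    values:
    - pod11-node3
```
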
###### Static or dynamic baremetal network IPAM

The cluster and machine charts support either static or dynamic IPAM
in the baremetal network.

Dynamic IPAM is configured by specifying the `networks` dictionary in
the cluster chart. At least two entries must be included, the
`baremetal` and `provisioning` networks. Under each entry, provide the
predictable network interface name as the value of the `interface` key.

Note that this is in the cluster chart and therefore is in the form of
a template for each machine used in the cluster. If the machines are
sufficiently different such that the same interface name is not used
on each machine, then the static approach below must be used instead.

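For example, a dynamic IPAM configuration in the cluster chart values
might look like the following sketch (the interface names are
illustrative):

```yaml
# cluster chart values (excerpt); the same interface names are used as
# a template for every machine in the cluster
networks:
  baremetal:
    interface: ens785f0
  provisioning:
    interface: ens785f1
```
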
Static IPAM is configured by specifying the `networks` dictionary in the
machine chart. At least two entries must be included, the `baremetal`
and `provisioning` networks. From the chart example values:

```yaml
networks:
  baremetal:
    macAddress: 00:1e:67:fe:f4:19
    # type is either ipv4 or ipv4_dhcp
    type: ipv4
    # ipAddress is only valid for type ipv4
    ipAddress: 10.10.110.21/24
    # gateway is only valid for type ipv4
    gateway: 10.10.110.1
    # nameservers is an array of DNS servers; only valid for type ipv4
    nameservers: ["8.8.8.8"]
  provisioning:
    macAddress: 00:1e:67:fe:f4:1a
    type: ipv4_dhcp
```

The provisioning network must always be type `ipv4_dhcp`.

In either the static or dynamic case, additional networks may be
included; however, the static assignment option for an individual
network exists only when the machine chart approach is used.

The first thing to do is create a `site.yaml` file containing a
Namespace to hold the site resources and a GitRepository pointing to
the ICN repository where the machine and cluster Helm charts are
located.

Note that when defining multiple sites it is only necessary to apply
the Namespace and GitRepository once on the jump server managing the
sites.

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: metal3
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: icn
  namespace: metal3
spec:
  gitImplementation: go-git
  interval: 1m0s
  ref:
    branch: master
  url: https://gerrit.akraino.org/r/icn
```

##### Define a machine

Important values in the machine definition include:

- **machineName:** the host name of the machine
- **bmcAddress, bmcUsername, bmcPassword:** the bare metal controller
  (e.g. IPMI) access values

Capture each machine's values into a HelmRelease in the site YAML. The
example below is abbreviated; refer to the [example site
definitions](#example-site-definitions) below for complete resources.
Network names other than `baremetal` and `provisioning` are
site-specific; the names used here are illustrative.

```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: pod11-node2
  namespace: metal3
spec:
  chart:
    spec:
      chart: deploy/machine
      sourceRef:
        kind: GitRepository
        name: icn
  values:
    machineName: pod11-node2
    bmcAddress: ipmi://10.10.110.12
    bmcUsername: root
    bmcPassword: root
    networks:
      baremetal:
        macAddress: 00:1e:67:fe:f4:19
        type: ipv4
        ipAddress: 10.10.110.22/24
      provisioning:
        macAddress: 00:1e:67:fe:f4:1a
        type: ipv4_dhcp
      sriov0:
        macAddress: 00:1e:67:f8:6a:40
        type: ipv4
        ipAddress: 10.10.112.3/24
      sriov1:
        macAddress: 00:1e:67:f8:6a:41
        type: ipv4
        ipAddress: 10.10.113.3/24
```

##### Define a cluster

Important values in the cluster definition include:

- **clusterName:** the name of the cluster
- **numControlPlaneMachines:** the number of control plane nodes
- **numWorkerMachines:** the number of worker nodes
- **controlPlaneEndpoint:** see [Site-specific Considerations](#site-specific-considerations) above
- **userData:** dictionary containing the default user name, password,
  and SSH authorized key
- **flux:** dictionary containing the location of the day-0 configuration
  of the cluster; see [Define the compute cluster](#define-the-compute-cluster) above

Capture each cluster's values into a HelmRelease in the site YAML. As
above, the example is abbreviated; refer to the [example site
definitions](#example-site-definitions) below for complete resources.

```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cluster-icn
  namespace: metal3
spec:
  chart:
    spec:
      chart: deploy/cluster
      sourceRef:
        kind: GitRepository
        name: icn
  values:
    clusterName: icn
    numControlPlaneMachines: 1
    numWorkerMachines: 1
    controlPlaneEndpoint: 10.10.110.23
    controlPlaneHostSelector:
      ...
    userData:
      hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
      sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
    flux:
      url: https://gerrit.akraino.org/r/icn
      branch: master
      path: ./deploy/site/cluster-icn
```

##### Encrypt secrets in site definition

This step is optional, but recommended to protect sensitive
information stored in the site definition. The site script is
configured to protect the `bmcPassword` and `hashedPassword` values.

Use an existing GPG key pair or create a new one, then encrypt the
secrets contained in the site YAML using site.sh. The public key and
SOPS configuration are created in the site YAML directory; these may be
used to encrypt (but not decrypt) future secrets.

```
# ./deploy/site/site.sh create-gpg-key site-secrets-key
# ./deploy/site/site.sh sops-encrypt-site site.yaml site-secrets-key
```

##### Example site definitions

Refer to the [pod11 site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/pod11/site.yaml) and the [vm site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/vm/site.yaml) for complete
examples of site definitions for a static and dynamic baremetal
network respectively. These site definitions are for the simple
two-machine clusters used in ICN testing.

#### Inform the Flux controllers of the site definition

The final step is to inform the jump server Flux controllers of the
site definition by creating three resources:

- a GitRepository containing the location where the site definition is
  stored
- a Secret holding the GPG private key used to encrypt the secrets in
  the site definition
- a Kustomization referencing the GitRepository, Secret, and path in
  the repository where the site definition is located

This may be done with the help of the `site.sh` script:

```
# ./deploy/site/site.sh flux-create-site URL BRANCH PATH KEY_NAME
```

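For example, assuming the pod11 site definition has been committed to
the master branch of the ICN repository and encrypted with the key
created above, the invocation might be:

```
# ./deploy/site/site.sh flux-create-site https://gerrit.akraino.org/r/icn master deploy/site/pod11 site-secrets-key
```
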
#### Monitoring progress

The overall status of the cluster deployment can be monitored with
`clusterctl`.

```
# clusterctl -n metal3 describe cluster icn
NAME                                                                 READY  SEVERITY  REASON                           SINCE  MESSAGE
/icn                                                                 False  Warning   ScalingUp                        4m14s  Scaling up control plane to 1 replicas (actual 0)
├─ClusterInfrastructure - Metal3Cluster/icn
├─ControlPlane - KubeadmControlPlane/icn                             False  Warning   ScalingUp                        4m14s  Scaling up control plane to 1 replicas (actual 0)
│ └─Machine/icn-9sp7z                                                False  Info      WaitingForInfrastructure         4m17s  1 of 2 completed
│   └─MachineInfrastructure - Metal3Machine/icn-controlplane-khtsk
└─Workers
  └─MachineDeployment/icn                                            False  Warning   WaitingForAvailableMachines      4m49s  Minimum availability requires 1 replicas, current 0 available
    └─Machine/icn-6b8dfc7f6f-tmgv7                                   False  Info      WaitingForInfrastructure         4m49s  0 of 2 completed
      ├─BootstrapConfig - KubeadmConfig/icn-workers-79pl9            False  Info      WaitingForControlPlaneAvailable  4m19s
      └─MachineInfrastructure - Metal3Machine/icn-workers-m7vb8
```

The status of OS provisioning can be monitored by inspecting the
`BareMetalHost` resources.

```
# kubectl -n metal3 get bmh
NAME          STATE        CONSUMER   ONLINE   ERROR   AGE
pod11-node2   inspecting              true             5m15s
pod11-node3   inspecting              true             5m14s
```

Once the OS is installed, the status of K8s provisioning can be
monitored by logging into the machine using the credentials from the
`userData` section of the site values and inspecting the cloud-init
output:

```
root@pod11-node2:~# tail -f /var/log/cloud-init-output.log
...
Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Wed, 05 Jan 2022 01:34:41 +0000. Up 131.66 seconds.
Cloud-init v. 21.4-0ubuntu1~20.04.1 finished at Wed, 05 Jan 2022 01:34:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sda2]. Up 132.02 seconds
```

Once the cluster's control plane is ready, its kubeconfig can be
obtained with `clusterctl` and the status of the cluster can be
monitored with `kubectl`.

```
# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
# kubectl --kubeconfig=icn-admin.conf get pods -A
NAMESPACE   NAME              READY   STATUS    RESTARTS   AGE
emco        db-emco-mongo-0   1/1     Running   0          15h
emco        emco-etcd-0       1/1     Running   0          15h
...
```

#### Examining the deployment process

The deployment resources can be examined with the kubectl and helm
tools. The example below provides pointers to the resources on the
jump server.

```
# kubectl -n flux-system get GitRepository
NAME         URL                                READY   STATUS                                                              AGE
icn-master   https://gerrit.akraino.org/r/icn   True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h

# kubectl -n flux-system get Kustomization
NAME                    READY   STATUS                                                              AGE
icn-master-site-pod11   True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   7m4s

# kubectl -n metal3 get GitRepository
NAME   URL                                READY   STATUS                                                              AGE
icn    https://gerrit.akraino.org/r/icn   True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   7m22s

# kubectl -n metal3 get HelmRelease
NAME          READY   STATUS                             AGE
cluster-icn   True    Release reconciliation succeeded   7m54s
pod11-node2   True    Release reconciliation succeeded   7m54s
pod11-node3   True    Release reconciliation succeeded   7m54s

# kubectl -n metal3 get HelmChart
NAME                 CHART            VERSION   SOURCE KIND     SOURCE NAME   READY   STATUS                                 AGE
metal3-cluster-icn   deploy/cluster   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   8m9s
metal3-pod11-node2   deploy/machine   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   8m9s
metal3-pod11-node3   deploy/machine   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   8m9s

# helm -n metal3 ls
NAME          NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
cluster-icn   metal3      2          2022-01-05 01:03:51.075860871 +0000 UTC   deployed   cluster-0.1.0
pod11-node2   metal3      2          2022-01-05 01:03:49.365432 +0000 UTC      deployed   machine-0.1.0
pod11-node3   metal3      2          2022-01-05 01:03:49.463726617 +0000 UTC   deployed   machine-0.1.0
```

```
# helm -n metal3 get values --all cluster-icn
COMPUTED VALUES:
...
containerRuntime: containerd
containerdVersion: 1.4.11-1
controlPlaneEndpoint: 10.10.110.23
controlPlaneHostSelector:
  ...
controlPlanePrefix: 24
dockerVersion: 5:20.10.10~3-0~ubuntu-focal
flux:
  ...
  path: ./deploy/site/cluster-icn
  ...
  url: https://gerrit.akraino.org/r/icn
imageName: focal-server-cloudimg-amd64.img
...
kubeVersion: 1.21.6-00
numControlPlaneMachines: 1
...
podCidr: 10.244.64.0/18
userData:
  hashedPassword: $6$rounds=10000$bhRsNADLl$BzCcBaQ7Tle9AizUHcMKN2fygyPMqBebOuvhApI8B.pELWyFUaAWRasPOz.5Gf9bvCihakRnBTwsi217n2qQs1
  ...
  sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
...
```

```
# helm -n metal3 get values --all pod11-node2
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.12
bmcPassword: root
bmcUsername: root
...
machineName: pod11-node2
networks:
  baremetal:
    ...
    ipAddress: 10.10.110.22/24
    macAddress: 00:1e:67:fe:f4:19
    ...
  provisioning:
    macAddress: 00:1e:67:fe:f4:1a
    type: ipv4_dhcp
  ...
    ipAddress: 10.10.113.3/24
    macAddress: 00:1e:67:f8:6a:41
    ...
```

```
# helm -n metal3 get values --all pod11-node3
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.13
bmcPassword: root
bmcUsername: root
...
machineName: pod11-node3
networks:
  baremetal:
    ...
    ipAddress: 10.10.110.23/24
    macAddress: 00:1e:67:f1:5b:90
    ...
  provisioning:
    macAddress: 00:1e:67:f1:5b:91
    type: ipv4_dhcp
  ...
    ipAddress: 10.10.113.4/24
    macAddress: 00:1e:67:f8:69:81
    ...
```

Once the workload cluster is ready, the deployment resources may be
examined there as well.

```
root@jump:/icn# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get GitRepository -A
NAMESPACE     NAME   URL                                        READY   STATUS                                                                          AGE
emco          emco   https://github.com/open-ness/EMCO          True    Fetched revision: openness-21.03.06/18ec480f755119d54aa42c1bc3bd248dfd477165   16h
flux-system   icn    https://gerrit.akraino.org/r/icn           True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50              16h
kud           kud    https://gerrit.onap.org/r/multicloud/k8s   True    Fetched revision: master/8157bf63753839ce4e9006978816fad3f63ca2de              16h
```

```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get Kustomization -A
NAMESPACE     NAME            READY   STATUS                                                              AGE
flux-system   icn-flux-sync   True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
flux-system   kata            True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
```

```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmRelease -A
NAMESPACE   NAME                     READY   STATUS                             AGE
emco        db                       True    Release reconciliation succeeded   16h
emco        monitor                  True    Release reconciliation succeeded   16h
emco        podsecurity              True    Release reconciliation succeeded   16h
emco        services                 True    Release reconciliation succeeded   16h
emco        tools                    True    Release reconciliation succeeded   16h
kud         cdi                      True    Release reconciliation succeeded   16h
kud         cdi-operator             True    Release reconciliation succeeded   16h
kud         cpu-manager              True    Release reconciliation succeeded   16h
kud         kubevirt                 True    Release reconciliation succeeded   16h
kud         kubevirt-operator        True    Release reconciliation succeeded   16h
kud         multus-cni               True    Release reconciliation succeeded   16h
kud         node-feature-discovery   True    Release reconciliation succeeded   16h
kud         ovn4nfv                  True    Release reconciliation succeeded   16h
kud         ovn4nfv-network          True    Release reconciliation succeeded   16h
kud         podsecurity              True    Release reconciliation succeeded   16h
kud         qat-device-plugin        True    Release reconciliation succeeded   16h
kud         sriov-network            True    Release reconciliation succeeded   16h
kud         sriov-network-operator   True    Release reconciliation succeeded   16h
```

```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmChart -A
NAMESPACE     NAME                         CHART                                               VERSION   SOURCE KIND     SOURCE NAME   READY   STATUS                                 AGE
emco          emco-db                      deployments/helm/emcoOpenNESS/emco-db               *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
emco          emco-monitor                 deployments/helm/monitor                            *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
emco          emco-services                deployments/helm/emcoOpenNESS/emco-services         *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
emco          emco-tools                   deployments/helm/emcoOpenNESS/emco-tools            *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
flux-system   emco-podsecurity             deploy/podsecurity                                  *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   16h
flux-system   kud-podsecurity              deploy/podsecurity                                  *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-cdi                      kud/deployment_infra/helm/cdi                       *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-cdi-operator             kud/deployment_infra/helm/cdi-operator              *         GitRepository   kud           True    Fetched and packaged revision: 0.1.1   16h
kud           kud-cpu-manager              kud/deployment_infra/helm/cpu-manager               *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-kubevirt                 kud/deployment_infra/helm/kubevirt                  *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-kubevirt-operator        kud/deployment_infra/helm/kubevirt-operator         *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-multus-cni               kud/deployment_infra/helm/multus-cni                *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-node-feature-discovery   kud/deployment_infra/helm/node-feature-discovery    *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-ovn4nfv                  kud/deployment_infra/helm/ovn4nfv                   *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-ovn4nfv-network          kud/deployment_infra/helm/ovn4nfv-network           *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-qat-device-plugin        kud/deployment_infra/helm/qat-device-plugin         *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-sriov-network            kud/deployment_infra/helm/sriov-network             *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-sriov-network-operator   kud/deployment_infra/helm/sriov-network-operator    *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
```

```
root@pod11-node5:# helm --kubeconfig=icn-admin.conf ls -A
NAME                     NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                          APP VERSION
cdi                      kud         2          2022-01-05 01:54:28.39195226 +0000 UTC    deployed   cdi-0.1.0                      v1.34.1
cdi-operator             kud         2          2022-01-05 01:54:04.904465491 +0000 UTC   deployed   cdi-operator-0.1.1             v1.34.1
cpu-manager              kud         2          2022-01-05 01:54:01.911819055 +0000 UTC   deployed   cpu-manager-0.1.0              v1.4.1-no-taint
db                       emco        2          2022-01-05 01:53:36.096690949 +0000 UTC   deployed   emco-db-0.1.0
kubevirt                 kud         2          2022-01-05 01:54:12.563840437 +0000 UTC   deployed   kubevirt-0.1.0                 v0.41.0
kubevirt-operator        kud         2          2022-01-05 01:53:59.190388299 +0000 UTC   deployed   kubevirt-operator-0.1.0        v0.41.0
monitor                  emco        2          2022-01-05 01:53:36.085180458 +0000 UTC   deployed   monitor-0.1.0                  1.16.0
multus-cni               kud         2          2022-01-05 01:54:03.494462704 +0000 UTC   deployed   multus-cni-0.1.0               v3.7
node-feature-discovery   kud         2          2022-01-05 01:53:58.489616047 +0000 UTC   deployed   node-feature-discovery-0.1.0   v0.7.0
ovn4nfv                  kud         2          2022-01-05 01:54:07.488105774 +0000 UTC   deployed   ovn4nfv-0.1.0                  v3.0.0
ovn4nfv-network          kud         2          2022-01-05 01:54:31.79127155 +0000 UTC    deployed   ovn4nfv-network-0.1.0          v2.2.0
podsecurity              kud         2          2022-01-05 01:53:37.400019369 +0000 UTC   deployed   podsecurity-0.1.0
podsecurity              emco        2          2022-01-05 01:53:35.993351972 +0000 UTC   deployed   podsecurity-0.1.0
qat-device-plugin        kud         2          2022-01-05 01:54:03.598022943 +0000 UTC   deployed   qat-device-plugin-0.1.0        0.19.0-kerneldrv
sriov-network            kud         2          2022-01-05 01:54:31.695963579 +0000 UTC   deployed   sriov-network-0.1.0            4.8.0
sriov-network-operator   kud         2          2022-01-05 01:54:07.787596951 +0000 UTC   deployed   sriov-network-operator-0.1.0   4.8.0
tools                    emco        2          2022-01-05 01:53:58.317119097 +0000 UTC   deployed   emco-tools-0.1.0
```

```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get pods -A -o wide
NAMESPACE   NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
emco        db-emco-mongo-0                    1/1     Running   0          16h   10.244.65.53   pod11-node2   <none>           <none>
emco        emco-etcd-0                        1/1     Running   0          16h   10.244.65.57   pod11-node2   <none>           <none>
emco        monitor-monitor-74649c5c64-dxhfn   1/1     Running   0          16h   10.244.65.65   pod11-node2   <none>           <none>
emco        services-clm-7ff876dfc-vgncs       1/1     Running   3          16h   10.244.65.58   pod11-node2   <none>           <none>
...
```

Basic self-tests of Kata, EMCO, and the other addons may be performed
with the `kata.sh` and `addons.sh` test scripts once the workload cluster
is ready.

```
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/kata/kata.sh test
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/addons/addons.sh test
```

To destroy the workload cluster and deprovision its machines, it is
only necessary to delete the site Kustomization. Uninstallation
progress can be monitored similarly to deployment with `clusterctl`,
by examining the `BareMetalHost` resources, etc.

```
root@pod11-node5:# kubectl -n flux-system delete Kustomization icn-master-site-pod11
```

## Appendix A: BMC and BIOS configuration of pod11-node3

The BMC and BIOS configuration will vary depending on the vendor. The
following is intended only to provide some guidance on what to look
for in the hardware used in the chosen configuration.

### BMC configuration

The BMC IP address configured in the BIOS.

![img](./pod11-node3-bios-bmc-configuration.png "BMC LAN Configuration")

The BMC IP address configured in the web console.

![img](./pod11-node3-ip-configuration.png "BMC LAN Configuration")

IPMI configuration. Not shown is the cipher suite configuration.

![img](./pod11-node3-ipmi-over-lan.png "IPMI over LAN")

### PXE boot configuration

The screens below show enabling PXE boot for the specified NIC and
ensuring it is first in the boot order.

![img](./pod11-node3-bios-enable-pxe.png "Enable PXE boot")

![img](./pod11-node3-bios-nic-boot-order.png "NIC boot order")

### Additional BIOS configuration

The screens below show enabling virtualization options in the BIOS.

![img](./pod11-node3-bios-vt-x.png "Enable Intel VT-x")

![img](./pod11-node3-bios-vt-d.png "Enable Intel VT-d")