Due to the almost limitless number of possible hardware
configurations, this installation guide has chosen a concrete
configuration to use in the examples that follow.
> NOTE: The example configuration's BMC does not support Redfish
> virtual media, and therefore IPMI is used instead. When supported
> by the BMC, it is recommended to use the more secure Redfish virtual
> media option, as shown in the [Quick start guide](quick-start.md).
The configuration contains the following three machines.
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-right" />
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Hostname</th>
<th scope="col" class="org-right">CPU Model</th>
<th scope="col" class="org-left">Memory</th>
<th scope="col" class="org-left">Storage</th>
<th scope="col" class="org-left">IPMI: IP/MAC, U/P</th>
<th scope="col" class="org-left">1GbE: NIC#, IP, MAC, VLAN, Network</th>
<th scope="col" class="org-left">10GbE: NIC#, IP, MAC, VLAN, Network</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">pod11-node5</td>
<td class="org-right">2xE5-2699</td>
<td class="org-left">64GB</td>
<td class="org-left">3TB (SATA)<br/>180GB (SSD)</td>
<td class="org-left">IF0: 10.10.110.15 00:1e:67:fc:ff:18<br/>U/P: root/root</td>
<td class="org-left">IF0: 10.10.110.25 00:1e:67:fc:ff:16 VLAN 110<br/>IF1: 172.22.0.1 00:1e:67:fc:ff:17 VLAN 111</td>
<td class="org-left">&#xa0;</td>
</tr>
<tr>
<td class="org-left">pod11-node3</td>
<td class="org-right">2xE5-2699</td>
<td class="org-left">64GB</td>
<td class="org-left">3TB (SATA)<br/>180GB (SSD)</td>
<td class="org-left">IF0: 10.10.110.13 00:1e:67:f1:5b:92<br/>U/P: root/root</td>
<td class="org-left">IF0: 10.10.110.23 00:1e:67:f1:5b:90 VLAN 110<br/>IF1: 172.22.0.0/24 00:1e:67:f1:5b:91 VLAN 111</td>
<td class="org-left">IF3: 10.10.113.4 00:1e:67:f8:69:81 VLAN 113</td>
</tr>
<tr>
<td class="org-left">pod11-node2</td>
<td class="org-right">2xE5-2699</td>
<td class="org-left">64GB</td>
<td class="org-left">3TB (SATA)<br/>180GB (SSD)</td>
<td class="org-left">IF0: 10.10.110.12 00:1e:67:fe:f4:1b<br/>U/P: root/root</td>
<td class="org-left">IF0: 10.10.110.22 00:1e:67:fe:f4:19 VLAN 110<br/>IF1: 172.22.0.0/24 00:1e:67:fe:f4:1a VLAN 111</td>
<td class="org-left">IF3: 10.10.113.3 00:1e:67:f8:6a:41 VLAN 113</td>
</tr>
</tbody>
</table>
`pod11-node5` will be the Local Controller or *jump server*. The other
two machines will form a two-node K8s cluster.
The recommended hardware is servers with 64GB of memory, 32 CPUs, and
SR-IOV network cards.
The machines are connected in the following topology.

![img](./pod11-topology.png "Topology")
There are three networks required by ICN:

- The `baremetal` network, used as the control plane for K8s and for
  overlay networking.
- The `provisioning` network, used during the infrastructure
  provisioning (OS installation) phase.
- The `IPMI` network, also used during the infrastructure provisioning
  phase.
In this configuration, the IPMI and baremetal interfaces share the
same port and network. Care has been taken to ensure that the IP
addresses do not conflict between the two interfaces.
There is an additional network connected to a high-speed switch:

- The `sriov` network, available for the application data plane.
#### Baseboard Management Controller (BMC) configuration

The BMC IP address should be statically assigned using the machine's
BMC tool or application. Configuration of the pod11-node3 machine is
shown in [Appendix A](#bmc-configuration).
To verify IPMI is configured correctly for each cluster machine, use
ipmitool:

    # ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
If the ipmitool output looks like the following, enable the *RMCP+
Cipher Suite 3 Configuration* using the machine's BMC tool or application.

    # ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
    Error in open session response message : insufficient resources for session
    Error: Unable to establish IPMI v2 / RMCP+ session
If the ipmitool output looks like the following, enable *IPMI over LAN*
using the machine's BMC tool or application.

    # ipmitool -I lan -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
    Error: Unable to establish LAN session
Additional information on ipmitool may be found at [Configuring IPMI under Linux using
ipmitool](https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool).
#### PXE Boot configuration

Each cluster machine must be configured to PXE boot from the interface
attached to the `provisioning` network. Configuration of the
pod11-node3 machine is shown in [Appendix A](#pxe-boot-configuration-1).
One method of verifying PXE boot is configured correctly is to access
the remote console of the machine and observe the boot process. If
the machine is not attempting PXE boot or it is attempting to PXE boot
on the wrong interface, reboot the machine into the BIOS and select
the correct interface in the boot options.
Additional verification can be done on the jump server using the
tcpdump tool. The following command looks for DHCP or TFTP traffic
arriving on any interface. Replace `any` with the interface attached to
the provisioning network to verify end-to-end connectivity between the
jump server and cluster machine.

    # tcpdump -i any port 67 or port 68 or port 69
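For example, on the jump server used in this guide, `enp4s0f3` is the
interface attached to the provisioning network (as configured in
`user_config.sh` below):

    # tcpdump -i enp4s0f3 port 67 or port 68 or port 69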
If tcpdump does not show any traffic, verify that any switches in the
path are configured properly to forward PXE boot requests (i.e. VLAN
configuration).
#### Additional BIOS configuration

Each cluster machine should also be configured to enable any desired
features such as virtualization support. Configuration of the
pod11-node3 machine is shown in [Appendix A](#additional-bios-configuration-1).
### Configure the jump server

The jump server must be pre-installed with an OS. ICN supports Ubuntu
20.04.
Before provisioning the jump server, first edit `user_config.sh` to
provide the name of the interface connected to the provisioning
network. The interface can be identified by its MAC address:

    # ip --brief link show
    enp4s0f3         UP             00:1e:67:fc:ff:17 <BROADCAST,MULTICAST,UP,LOWER_UP>

Then set the interface name in `user_config.sh`:

    export IRONIC_INTERFACE="enp4s0f3"
### Install the jump server components

    make jump_server

To uninstall the jump server components:

    make clean_jump_server
Before proceeding with the configuration, a basic understanding of the
essential components used in ICN is required.

![img](./sw-diagram.png "Software Overview")
#### Flux

[Flux](https://fluxcd.io/) is a tool for implementing GitOps workflows, where infrastructure
and application configuration is committed to source control and
continuously deployed in a K8s cluster.
The important Flux resources ICN uses are:

- GitRepository, which describes where configuration data is committed
- HelmRelease, which describes an installation of a Helm chart
- Kustomization, which describes the application of K8s resources
  customized with a kustomization file
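To illustrate how these three resources fit together, here is a minimal
sketch; the names, URL, and paths are illustrative and not part of the
ICN configuration, and the Kustomization API version may differ between
Flux releases:

    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: example
    spec:
      # Where the configuration is committed.
      url: https://example.com/config-repo.git
      ref:
        branch: main
      interval: 1m
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: example-release
    spec:
      # Install a Helm chart stored in the GitRepository above.
      chart:
        spec:
          chart: ./path/to/chart
          sourceRef:
            kind: GitRepository
            name: example
      interval: 5m
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2  # API version varies by Flux release
    kind: Kustomization
    metadata:
      name: example-kustomization
    spec:
      # Apply K8s resources from a directory holding a kustomization file.
      path: ./path/to/overlay
      sourceRef:
        kind: GitRepository
        name: example
      prune: true
      interval: 5m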
#### Cluster API (CAPI)

[Cluster API](https://cluster-api.sigs.k8s.io/) provides declarative APIs and tooling for provisioning,
upgrading, and operating K8s clusters.
There are a number of important CAPI resources that ICN uses. To ease
deployment, ICN captures the resources into a Helm chart.
#### Bare Metal Operator (BMO)

Central to CAPI are the infrastructure and bootstrap providers. These are
pluggable components for configuring the OS and the K8s installation,
respectively.

ICN uses the [Cluster API Provider Metal3 for Managed Bare Metal
Hardware](https://github.com/metal3-io/cluster-api-provider-metal3) for infrastructure provisioning, which in turn depends on the
[Metal3 Bare Metal Operator](https://github.com/metal3-io/baremetal-operator) to do the actual work. The Bare Metal
Operator uses [Ironic](https://ironicbaremetal.org/) to execute the low-level provisioning tasks.
Similar to the CAPI resources that ICN uses, ICN captures the Bare
Metal Operator resources it uses into a Helm chart.
### Configure a site

> NOTE: To assist in migrating from the R5 and earlier releases' use of
> nodes.json and the Provisioning resource to the site YAML described
> below, a helper script is provided at `tools/migration/to_r6.sh`.
#### Define the compute cluster

The first step in provisioning a site with ICN is to define the
desired day-0 configuration of the workload clusters.
A [configuration](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=tree;f=deploy/site/cluster-icn) containing all supported ICN components is available
in the ICN repository. End-users may use this as a base and add or
remove components as desired. Each YAML file in this configuration is
one of the Flux resources described in the overview: GitRepository,
HelmRelease, or Kustomization.
#### Define a site

A site definition is composed of BMO and CAPI resources, describing
machines and clusters respectively. These resources are captured into
the ICN machine and cluster Helm charts. Defining the site is
therefore a matter of specifying the values needed by the charts.
##### Site-specific Considerations

Documentation for the machine chart may be found in its [values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/machine/values.yaml),
and documentation for the cluster chart may be found in its
[values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/cluster/values.yaml). Please review those for more information; what follows
are some site-specific considerations to be aware of.
Note that there are a large number of ways to configure machines and
especially clusters; for large-scale deployments it may make sense to
create custom charts to eliminate duplication in the values files.
###### Control plane endpoint

The K8s control plane endpoint address must be provided to the cluster
chart.
278 load-balanced virtual IP address. Configuration of an external load
279 balancer is out of scope for this document. The chart also provides
280 another mechanism to accomplish this using the VRRP protocol to assign
281 the control plane endpoint among the selected control plane nodes; see
282 the `keepalived` dictionary in the cluster chart values.
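As a sketch, that dictionary might be populated along these lines; the
key names here are assumptions, so verify them against the cluster
chart's values.yaml:

    keepalived:
      # Interface on which the virtual IP is claimed (assumed key name).
      interface: ens6
      # VRRP virtual router ID, unique per network segment (assumed key name).
      routerId: 3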
For a single control plane node with a static IP address, some care
must be taken to ensure that CAPI chooses the correct machine to
provision as the control plane node. To do this, add a label to the
`machineLabels` dictionary in the machine chart and specify a K8s match
expression in the `controlPlaneHostSelector` dictionary of the cluster
chart. Once done, the IP address of the labeled and selected machine
can be used as the control plane endpoint address.
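For the example configuration, this pairing might look like the
following sketch (the label key and value `machine: pod11-node3` are an
illustrative choice, not a chart requirement):

    # Machine chart values for the HelmRelease of pod11-node3:
    machineLabels:
      machine: pod11-node3

    # Cluster chart values:
    controlPlaneEndpoint: 10.10.110.23   # baremetal IP of the selected machine
    controlPlaneHostSelector:
      matchLabels:
        machine: pod11-node3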
###### Static or dynamic baremetal network IPAM

The cluster and machine charts support either static or dynamic IPAM
in the baremetal network.
Dynamic IPAM is configured by specifying IP pools containing the
address ranges to assign and the interface and network mapping to the
pools. The IP pools are specified with the `ipPools` dictionary in
the cluster chart. From the chart example values:
    ipPools:
      baremetal:
        # start is the beginning of the address range in the pool.
        start: 192.168.151.10
        # end is the end of the address range in the pool.
        end: 192.168.151.20
        # prefix is the network prefix of addresses in the range.
        prefix: 24
        # gateway is optional.
        #gateway: 192.168.151.1
        # preAllocations are optional. Note that if the pool overlaps
        # with the gateway, then a pre-allocation is required.
        #preAllocations:
        #  controlPlane: 192.168.151.254
The interface and network mapping is specified with the `networkData`
dictionary in the cluster chart. From the chart example values, the
mapping of the `baremetal` network to its IP pool looks like this:

    networks:
      ipv4:
        baremetal:
          # link is optional and defaults to the network name.
          #link: baremetal
          fromIPPool: baremetal
At least two entries must be included, the `baremetal` and
`provisioning` networks, and the provisioning network must always be of
type `ipv4DHCP`. Under each entry, provide the predictable network
interface name as the value of the `interface` key, as illustrated
below.
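For illustration, a `networkData` template covering just the two
required entries might look like the following sketch (the interface
names `eno1` and `eno2` are hypothetical placeholders for the machines'
predictable interface names):

    networkData:
      links:
        ethernets:
          baremetal:
            interface: eno1   # hypothetical interface name, same on every machine
          provisioning:
            interface: eno2   # hypothetical interface name, same on every machine
      networks:
        ipv4DHCP:
          # The provisioning network must always be type ipv4DHCP.
          provisioning: {}
        ipv4:
          baremetal:
            fromIPPool: baremetal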
Note that this is in the cluster chart and therefore is in the form of
a template applied to each machine used in the cluster. If the
machines are sufficiently different that the same interface name is
not used on each machine, then the static approach below must be used
instead.
Static IPAM is configured similarly to dynamic IPAM. Instead of
providing a template with the cluster chart, specific values are
provided with the machine chart. From the chart example values:

    networkData:
      links:
        ethernets:
          baremetal:
            macAddress: 00:1e:67:fe:f4:19
          provisioning:
            macAddress: 00:1e:67:fe:f4:1a
          private:
            macAddress: 00:1e:67:f8:6a:40
          storage:
            macAddress: 00:1e:67:f8:6a:41
      networks:
        ipv4DHCP:
          provisioning: {}
        ipv4:
          baremetal:
            # link is optional and defaults to the network name.
            #link: baremetal
            ipAddress: 10.10.110.21/24
          private:
            ipAddress: 10.10.112.2/24
          storage:
            ipAddress: 10.10.113.2/24
Again, at least two entries must be included, the `baremetal` and
`provisioning` networks, and the provisioning network must always be of
type `ipv4DHCP`.

In either the static or dynamic case additional networks may be
included; however, the static assignment option for an individual
network exists only when the machine chart approach is used.
For additional information on configuring IPv4/IPv6 dual-stack
operation, refer to [Appendix B](#appendix-b-ipv4ipv6-dual-stack).
The first step is to create a `site.yaml` file containing a Namespace
to hold the site resources and a GitRepository pointing to the ICN
repository, where the machine and cluster Helm charts are located.

Note that when defining multiple sites it is only necessary to apply
the Namespace and GitRepository once on the jump server managing the
sites.
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: metal3
    ---
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: icn
      namespace: metal3
    spec:
      gitImplementation: go-git
      interval: 1m0s
      ref:
        branch: master
      url: https://gerrit.akraino.org/r/icn
##### Define a machine

Important values in the machine definition include:

- **machineName:** the host name of the machine
- **bmcAddress, bmcUsername, bmcPassword:** the bare metal controller
  (e.g. IPMI) access values
Capture each machine's values into a HelmRelease in the site YAML:
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: pod11-node2
      namespace: metal3
    spec:
      interval: 5m
      chart:
        spec:
          chart: deploy/machine
          sourceRef:
            kind: GitRepository
            name: icn
      values:
        machineName: pod11-node2
        machineLabels:
          machine: pod11-node2
        bmcAddress: ipmi://10.10.110.12
        bmcUsername: root
        bmcPassword: root
        networkData:
          links:
            ethernets:
              baremetal:
                macAddress: 00:1e:67:fe:f4:19
              provisioning:
                macAddress: 00:1e:67:fe:f4:1a
              private:
                macAddress: 00:1e:67:f8:6a:40
              storage:
                macAddress: 00:1e:67:f8:6a:41
          networks:
            ipv4DHCP:
              provisioning: {}
            ipv4:
              baremetal:
                ipAddress: 10.10.110.22/24
              private:
                ipAddress: 10.10.112.3/24
              storage:
                ipAddress: 10.10.113.3/24
##### Define a cluster

Important values in the cluster definition include:

- **clusterName:** the name of the cluster
- **numControlPlaneMachines:** the number of control plane nodes
- **numWorkerMachines:** the number of worker nodes
- **controlPlaneEndpoint:** see [Site-specific Considerations](#site-specific-considerations) above
- **userData:** dictionary containing the default username, password,
  and SSH authorized key for login to the machines
- **flux:** dictionary containing the location of the day-0 configuration
  of the cluster; see [Define the compute cluster](#define-the-compute-cluster) above
Capture each cluster's values into a HelmRelease in the site YAML:
    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: cluster-icn
      namespace: metal3
    spec:
      interval: 5m
      chart:
        spec:
          chart: deploy/cluster
          sourceRef:
            kind: GitRepository
            name: icn
      values:
        clusterName: icn
        numControlPlaneMachines: 1
        numWorkerMachines: 1
        controlPlaneEndpoint: 10.10.110.23
        controlPlaneHostSelector:
          matchLabels:
            machine: pod11-node3
        userData:
          hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
          sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
        flux:
          url: https://gerrit.akraino.org/r/icn
          branch: master
          path: ./deploy/site/cluster-icn
##### Encrypt secrets in site definition

This step is optional but recommended to protect sensitive
information stored in the site definition. The site script is
configured to protect the `bmcPassword` and `hashedPassword` values.

Use an existing GPG key pair or create a new one, then encrypt the
secrets contained in the site YAML using site.sh. The public key and
SOPS configuration are created in the site YAML directory; these may be
used to encrypt (but not decrypt) future secrets.
    # ./deploy/site/site.sh create-gpg-key site-secrets-key
    # ./deploy/site/site.sh sops-encrypt-site site.yaml site-secrets-key
##### Example site definitions

Refer to the [pod11 site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/pod11/site.yaml) and the [vm site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/vm/site.yaml) for complete
examples of site definitions for a static and a dynamic baremetal
network, respectively. These site definitions are for the simple
two-machine clusters used in ICN testing.
#### Inform the Flux controllers of the site definition

The final step is to inform the jump server Flux controllers of the site
definition by creating three resources:

- a GitRepository containing the location where the site definition is
  committed
- a Secret holding the GPG private key used to encrypt the secrets in
  the site definition
- a Kustomization referencing the GitRepository, Secret, and path in
  the repository where the site definition is located
This may be done with the help of the `site.sh` script:

    # ./deploy/site/site.sh flux-create-site URL BRANCH PATH KEY_NAME
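For the example configuration in this guide, with the site definition
committed at `deploy/site/pod11` on the master branch of the ICN
repository and the GPG key created above, the invocation would look
like:

    # ./deploy/site/site.sh flux-create-site https://gerrit.akraino.org/r/icn master deploy/site/pod11 site-secrets-key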
### Deploy the site
#### Monitoring progress

The overall status of the cluster deployment can be monitored with
`clusterctl`.
    # clusterctl -n metal3 describe cluster icn
    NAME                                                                 READY  SEVERITY  REASON                           SINCE  MESSAGE
    /icn                                                                 False  Warning   ScalingUp                        4m14s  Scaling up control plane to 1 replicas (actual 0)
    ├─ClusterInfrastructure - Metal3Cluster/icn
    ├─ControlPlane - KubeadmControlPlane/icn                             False  Warning   ScalingUp                        4m14s  Scaling up control plane to 1 replicas (actual 0)
    │ └─Machine/icn-9sp7z                                                False  Info      WaitingForInfrastructure         4m17s  1 of 2 completed
    │   └─MachineInfrastructure - Metal3Machine/icn-controlplane-khtsk
    └─Workers
      └─MachineDeployment/icn                                            False  Warning   WaitingForAvailableMachines      4m49s  Minimum availability requires 1 replicas, current 0 available
        └─Machine/icn-6b8dfc7f6f-tmgv7                                   False  Info      WaitingForInfrastructure         4m49s  0 of 2 completed
          ├─BootstrapConfig - KubeadmConfig/icn-workers-79pl9            False  Info      WaitingForControlPlaneAvailable  4m19s
          └─MachineInfrastructure - Metal3Machine/icn-workers-m7vb8
The status of OS provisioning can be monitored by inspecting the
`BareMetalHost` resources.
    # kubectl -n metal3 get bmh
    NAME          STATE        CONSUMER   ONLINE   ERROR   AGE
    pod11-node2   inspecting              true             5m15s
    pod11-node3   inspecting              true             5m14s
Once the OS is installed, the status of K8s provisioning can be
monitored by logging into the machine using the credentials from the
`userData` section of the site values and inspecting the cloud-init
logs.

    root@pod11-node2:~# tail -f /var/log/cloud-init-output.log
    Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Wed, 05 Jan 2022 01:34:41 +0000. Up 131.66 seconds.
    Cloud-init v. 21.4-0ubuntu1~20.04.1 finished at Wed, 05 Jan 2022 01:34:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sda2]. Up 132.02 seconds
Once the cluster's control plane is ready, its kubeconfig can be
obtained with `clusterctl` and the status of the cluster can be
monitored with `kubectl`.
    # clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
    # kubectl --kubeconfig=icn-admin.conf get pods -A
    NAMESPACE   NAME              READY   STATUS    RESTARTS   AGE
    emco        db-emco-mongo-0   1/1     Running   0          15h
    emco        emco-etcd-0       1/1     Running   0          15h
#### Examining the deployment process

The deployment resources can be examined with the kubectl and helm
tools. The example below provides pointers to the resources on the
jump server.
    # kubectl -n flux-system get GitRepository
    NAME         URL                                READY   STATUS                                                               AGE
    icn-master   https://gerrit.akraino.org/r/icn   True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h

    # kubectl -n flux-system get Kustomization
    NAME                    READY   STATUS                                                               AGE
    icn-master-site-pod11   True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   7m4s

    # kubectl -n metal3 get GitRepository
    NAME   URL                                READY   STATUS                                                               AGE
    icn    https://gerrit.akraino.org/r/icn   True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   7m22s

    # kubectl -n metal3 get HelmRelease
    NAME          READY   STATUS                             AGE
    cluster-icn   True    Release reconciliation succeeded   7m54s
    pod11-node2   True    Release reconciliation succeeded   7m54s
    pod11-node3   True    Release reconciliation succeeded   7m54s

    # kubectl -n metal3 get HelmChart
    NAME                 CHART            VERSION   SOURCE KIND     SOURCE NAME   READY   STATUS                                   AGE
    metal3-cluster-icn   deploy/cluster   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0     8m9s
    metal3-pod11-node2   deploy/machine   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0     8m9s
    metal3-pod11-node3   deploy/machine   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0     8m9s
    # helm -n metal3 ls
    NAME          NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
    cluster-icn   metal3      2          2022-01-05 01:03:51.075860871 +0000 UTC   deployed   cluster-0.1.0
    pod11-node2   metal3      2          2022-01-05 01:03:49.365432 +0000 UTC      deployed   machine-0.1.0
    pod11-node3   metal3      2          2022-01-05 01:03:49.463726617 +0000 UTC   deployed   machine-0.1.0
    # helm -n metal3 get values --all cluster-icn
    COMPUTED VALUES:
    containerRuntime: containerd
    containerdVersion: 1.4.11-1
    controlPlaneEndpoint: 10.10.110.23
    controlPlaneHostSelector:
    controlPlanePrefix: 24
    decryptionSecret: # ...
    path: ./deploy/site/pod11/cluster/icn
    url: https://github.com/malsbat/icn
    imageName: focal-server-cloudimg-amd64.img
    kubeVersion: 1.21.6-00
    numControlPlaneMachines: 1
    hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
    sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
    # helm -n metal3 get values --all pod11-node2
    COMPUTED VALUES:
    bmcAddress: ipmi://10.10.110.12
    bmcDisableCertificateVerification: false
    machineName: pod11-node2
    macAddress: 00:1e:67:fe:f4:19
    macAddress: 00:1e:67:fe:f4:1a
    macAddress: 00:1e:67:f8:6a:41
    ipAddress: 10.10.110.22/24
    ipAddress: 10.10.113.3/24
    # helm -n metal3 get values --all pod11-node3
    COMPUTED VALUES:
    bmcAddress: ipmi://10.10.110.13
    bmcDisableCertificateVerification: false
    machineName: pod11-node3
    macAddress: 00:1e:67:f1:5b:90
    macAddress: 00:1e:67:f1:5b:91
    macAddress: 00:1e:67:f8:69:81
    ipAddress: 10.10.110.23/24
    ipAddress: 10.10.113.4/24
Once the workload cluster is ready, the deployment resources may be
examined in the same way.

    root@pod11-node5:# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
    root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get GitRepository -A
    NAMESPACE     NAME   URL                                        READY   STATUS                                                                          AGE
    emco          emco   https://github.com/open-ness/EMCO          True    Fetched revision: openness-21.03.06/18ec480f755119d54aa42c1bc3bd248dfd477165   16h
    flux-system   icn    https://gerrit.akraino.org/r/icn           True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50              16h
    kud           kud    https://gerrit.onap.org/r/multicloud/k8s   True    Fetched revision: master/8157bf63753839ce4e9006978816fad3f63ca2de              16h
    root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get Kustomization -A
    NAMESPACE     NAME            READY   STATUS                                                               AGE
    flux-system   icn-flux-sync   True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
    flux-system   kata            True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
    root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmRelease -A
    NAMESPACE   NAME                     READY   STATUS                             AGE
    emco        db                       True    Release reconciliation succeeded   16h
    emco        monitor                  True    Release reconciliation succeeded   16h
    emco        podsecurity              True    Release reconciliation succeeded   16h
    emco        services                 True    Release reconciliation succeeded   16h
    emco        tools                    True    Release reconciliation succeeded   16h
    kud         cdi                      True    Release reconciliation succeeded   16h
    kud         cdi-operator             True    Release reconciliation succeeded   16h
    kud         cpu-manager              True    Release reconciliation succeeded   16h
    kud         kubevirt                 True    Release reconciliation succeeded   16h
    kud         kubevirt-operator        True    Release reconciliation succeeded   16h
    kud         multus-cni               True    Release reconciliation succeeded   16h
    kud         node-feature-discovery   True    Release reconciliation succeeded   16h
    kud         ovn4nfv                  True    Release reconciliation succeeded   16h
    kud         ovn4nfv-network          True    Release reconciliation succeeded   16h
    kud         podsecurity              True    Release reconciliation succeeded   16h
    kud         qat-device-plugin        True    Release reconciliation succeeded   16h
    kud         sriov-network            True    Release reconciliation succeeded   16h
    kud         sriov-network-operator   True    Release reconciliation succeeded   16h
    root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmChart -A
    NAMESPACE     NAME                         CHART                                               VERSION   SOURCE KIND     SOURCE NAME   READY   STATUS                                   AGE
    emco          emco-db                      deployments/helm/emcoOpenNESS/emco-db               *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0     16h
    emco          emco-monitor                 deployments/helm/monitor                            *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0     16h
    emco          emco-services                deployments/helm/emcoOpenNESS/emco-services         *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0     16h
    emco          emco-tools                   deployments/helm/emcoOpenNESS/emco-tools            *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0     16h
    flux-system   emco-podsecurity             deploy/podsecurity                                  *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0     16h
    flux-system   kud-podsecurity              deploy/podsecurity                                  *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-cdi                      kud/deployment_infra/helm/cdi                       *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-cdi-operator             kud/deployment_infra/helm/cdi-operator              *         GitRepository   kud           True    Fetched and packaged revision: 0.1.1     16h
    kud           kud-cpu-manager              kud/deployment_infra/helm/cpu-manager               *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-kubevirt                 kud/deployment_infra/helm/kubevirt                  *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-kubevirt-operator        kud/deployment_infra/helm/kubevirt-operator         *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-multus-cni               kud/deployment_infra/helm/multus-cni                *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-node-feature-discovery   kud/deployment_infra/helm/node-feature-discovery    *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-ovn4nfv                  kud/deployment_infra/helm/ovn4nfv                   *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-ovn4nfv-network          kud/deployment_infra/helm/ovn4nfv-network           *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-qat-device-plugin        kud/deployment_infra/helm/qat-device-plugin         *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-sriov-network            kud/deployment_infra/helm/sriov-network             *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    kud           kud-sriov-network-operator   kud/deployment_infra/helm/sriov-network-operator    *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0     16h
    root@pod11-node5:# helm --kubeconfig=icn-admin.conf ls -A
    NAME                     NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                          APP VERSION
    cdi                      kud         2          2022-01-05 01:54:28.39195226 +0000 UTC    deployed   cdi-0.1.0                      v1.34.1
    cdi-operator             kud         2          2022-01-05 01:54:04.904465491 +0000 UTC   deployed   cdi-operator-0.1.1             v1.34.1
    cpu-manager              kud         2          2022-01-05 01:54:01.911819055 +0000 UTC   deployed   cpu-manager-0.1.0              v1.4.1-no-taint
    db                       emco        2          2022-01-05 01:53:36.096690949 +0000 UTC   deployed   emco-db-0.1.0
    kubevirt                 kud         2          2022-01-05 01:54:12.563840437 +0000 UTC   deployed   kubevirt-0.1.0                 v0.41.0
    kubevirt-operator        kud         2          2022-01-05 01:53:59.190388299 +0000 UTC   deployed   kubevirt-operator-0.1.0        v0.41.0
    monitor                  emco        2          2022-01-05 01:53:36.085180458 +0000 UTC   deployed   monitor-0.1.0                  1.16.0
    multus-cni               kud         2          2022-01-05 01:54:03.494462704 +0000 UTC   deployed   multus-cni-0.1.0               v3.7
    node-feature-discovery   kud         2          2022-01-05 01:53:58.489616047 +0000 UTC   deployed   node-feature-discovery-0.1.0   v0.7.0
    ovn4nfv                  kud         2          2022-01-05 01:54:07.488105774 +0000 UTC   deployed   ovn4nfv-0.1.0                  v3.0.0
    ovn4nfv-network          kud         2          2022-01-05 01:54:31.79127155 +0000 UTC    deployed   ovn4nfv-network-0.1.0          v2.2.0
    podsecurity              kud         2          2022-01-05 01:53:37.400019369 +0000 UTC   deployed   podsecurity-0.1.0
    podsecurity              emco        2          2022-01-05 01:53:35.993351972 +0000 UTC   deployed   podsecurity-0.1.0
    qat-device-plugin        kud         2          2022-01-05 01:54:03.598022943 +0000 UTC   deployed   qat-device-plugin-0.1.0        0.19.0-kerneldrv
    sriov-network            kud         2          2022-01-05 01:54:31.695963579 +0000 UTC   deployed   sriov-network-0.1.0            4.8.0
    sriov-network-operator   kud         2          2022-01-05 01:54:07.787596951 +0000 UTC   deployed   sriov-network-operator-0.1.0   4.8.0
    tools                    emco        2          2022-01-05 01:53:58.317119097 +0000 UTC   deployed   emco-tools-0.1.0
    root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get pods -A -o wide
    NAMESPACE   NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
    emco        db-emco-mongo-0                    1/1     Running   0          16h   10.244.65.53   pod11-node2   <none>           <none>
    emco        emco-etcd-0                        1/1     Running   0          16h   10.244.65.57   pod11-node2   <none>           <none>
    emco        monitor-monitor-74649c5c64-dxhfn   1/1     Running   0          16h   10.244.65.65   pod11-node2   <none>           <none>
    emco        services-clm-7ff876dfc-vgncs       1/1     Running   3          16h   10.244.65.58   pod11-node2   <none>           <none>
Basic self-tests of Kata, EMCO, and the other addons may be performed
with the `kata.sh` and `addons.sh` test scripts once the workload cluster
is ready.

    root@pod11-node5:# CLUSTER_NAME=icn ./deploy/kata/kata.sh test
    root@pod11-node5:# CLUSTER_NAME=icn ./deploy/addons/addons.sh test
To destroy the workload cluster and deprovision its machines, it is
only necessary to delete the site Kustomization. Uninstallation
progress can be monitored similarly to deployment: with `clusterctl`,
by examining the `BareMetalHost` resources, etc.

    root@pod11-node5:# kubectl -n flux-system delete Kustomization icn-master-site-pod11
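For example, to watch the `BareMetalHost` resources while the machines
are deprovisioned:

    root@pod11-node5:# kubectl -n metal3 get bmh -w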
## Appendix A: BMC and BIOS configuration of pod11-node3

The BMC and BIOS configuration will vary depending on the vendor. The
following is intended only to provide some guidance on what to look for
in the hardware used in the chosen configuration.
### BMC configuration

BMC IP address configured in the BIOS.

![img](./pod11-node3-bios-bmc-configuration.png "BMC LAN Configuration")

BMC IP address configured in the web console.

![img](./pod11-node3-ip-configuration.png "BMC LAN Configuration")

IPMI configuration. Not shown is the cipher suite configuration.

![img](./pod11-node3-ipmi-over-lan.png "IPMI over LAN")
### PXE boot configuration

The screens below show enabling PXE boot for the specified NIC and
ensuring it is first in the boot order.

![img](./pod11-node3-bios-enable-pxe.png "Enable PXE boot")

![img](./pod11-node3-bios-nic-boot-order.png "NIC boot order")
### Additional BIOS configuration

The screens below show enabling virtualization options in the BIOS.

![img](./pod11-node3-bios-vt-x.png "Enable Intel VT-x")

![img](./pod11-node3-bios-vt-d.png "Enable Intel VT-d")
## Appendix B: IPv4/IPv6 dual-stack

To enable dual-stack with dynamic IPAM, create an additional IP pool
of IPv6 addresses and reference it in the `networkData` dictionary.
Note the use of the `link` value to assign the IPv6 address to the
correct interface.

    ipPools:
      baremetal6:
        start: 2001:db8:0::10
        end: 2001:db8:0::20
        prefix: 64
        gateway: 2001:db8:0::1

    networkData:
      networks:
        ipv4:
          baremetal:
            fromIPPool: baremetal
        ipv6:
          baremetal6:
            link: baremetal
            fromIPPool: baremetal6
To enable dual-stack with static IPAM, assign the addresses in the
`networkData` dictionary. Note the use of the `link` value to assign
the IPv6 address to the correct interface.

    networkData:
      links:
        ethernets:
          baremetal:
            macAddress: 00:1e:67:fe:f4:19
          provisioning:
            macAddress: 00:1e:67:fe:f4:1a
      networks:
        ipv4:
          baremetal:
            ipAddress: 10.10.110.21/24
        ipv6:
          baremetal6:
            link: baremetal
            ipAddress: 2001:db8:0::21/64
            gateway: 2001:db8:0::1
The last change needed is in the cluster chart values: configure
dual-stack support and define the IPv6 CIDR blocks for pods and
services, as sketched below.
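The following is only a hedged sketch of what those values could look
like; the key names and CIDR blocks are illustrative assumptions, so
consult the cluster chart's [values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/cluster/values.yaml) for the authoritative schema:

    networking:
      # Assumed key enabling IPv6 in addition to IPv4.
      dualStack: true
      pods:
        cidrBlocks:
        - 10.244.64.0/18     # illustrative IPv4 pod network
        - 2001:db8:1::/64    # illustrative IPv6 pod network
      services:
        cidrBlocks:
        - 10.96.0.0/12       # illustrative IPv4 service network
        - 2001:db8:2::/108   # illustrative IPv6 service network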