Due to the almost limitless number of possible hardware
configurations, this installation guide has chosen a concrete
configuration to use in the examples that follow.

The configuration contains the following three machines.
| Hostname | CPU Model | Memory | Storage | IPMI: IP/MAC, U/P | 1GbE: NIC#, IP, MAC, VLAN, Network | 10GbE: NIC#, IP, MAC, VLAN, Network |
|----------|-----------|--------|---------|-------------------|------------------------------------|-------------------------------------|
| pod11-node5 | 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | IF0: 10.10.110.15 00:1e:67:fc:ff:18<br/>U/P: root/root | IF0: 10.10.110.25 00:1e:67:fc:ff:16 VLAN 110<br/>IF1: 172.22.0.1 00:1e:67:fc:ff:17 VLAN 111 | |
| pod11-node3 | 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | IF0: 10.10.110.13 00:1e:67:f1:5b:92<br/>U/P: root/root | IF0: 10.10.110.23 00:1e:67:f1:5b:90 VLAN 110<br/>IF1: 172.22.0.0/24 (DHCP) 00:1e:67:f1:5b:91 VLAN 111 | IF3: 10.10.113.4 00:1e:67:f8:69:81 VLAN 113 |
| pod11-node2 | 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | IF0: 10.10.110.12 00:1e:67:fe:f4:1b<br/>U/P: root/root | IF0: 10.10.110.22 00:1e:67:fe:f4:19 VLAN 110<br/>IF1: 172.22.0.0/24 (DHCP) 00:1e:67:fe:f4:1a VLAN 111 | IF3: 10.10.113.3 00:1e:67:f8:6a:41 VLAN 113 |
`pod11-node5` will be the Local Controller or *jump server*. The other
two machines will form a two-node K8s cluster.
The recommended hardware is a server with 64GB of memory, 32 CPUs,
and SR-IOV network cards.
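A quick way to check a machine against these recommendations from a
running OS (a convenience sketch; the exact SR-IOV capability string
reported by `lspci` may vary by device):

```
# lscpu | grep '^CPU(s):'
# free -h | grep '^Mem:'
# lspci -vvv | grep -i 'single root i/o virtualization'
```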
The machines are connected in the following topology.

![img](./pod11-topology.png "Topology")
There are three networks required by ICN:

- The `baremetal` network, used as the control plane for K8s and for
  management access to the machines.
- The `provisioning` network, used during the infrastructure
  provisioning (OS installation) phase.
- The `IPMI` network, also used during the infrastructure provisioning
  phase.

In this configuration, the IPMI and baremetal interfaces share the
same port and network. Care has been taken to ensure that the IP
addresses do not conflict between the two interfaces.

There is an additional network connected to a high-speed switch:

- The `sriov` network, available for the application data plane.
#### Baseboard Management Controller (BMC) configuration

The BMC IP address should be statically assigned using the machine's
BMC tool or application.
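If the vendor's BMC tool is not at hand, the address can usually also
be set from the machine's running OS with ipmitool's `lan set`
subcommands. A sketch, assuming BMC LAN channel 1 and a 10.10.110.1
gateway (both are site-specific assumptions):

```
# ipmitool lan set 1 ipsrc static
# ipmitool lan set 1 ipaddr 10.10.110.13
# ipmitool lan set 1 netmask 255.255.255.0
# ipmitool lan set 1 defgw ipaddr 10.10.110.1
```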
To verify IPMI is configured correctly for each cluster machine, use
the `ipmitool` utility:

```
# ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
```
If the ipmitool output looks like the following, enable the *RMCP+
Cipher Suite 3 Configuration* using the machine's BMC tool or
application.

```
# ipmitool -I lanplus -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
Error in open session response message : insufficient resources for session
Error: Unable to establish IPMI v2 / RMCP+ session
```
If the ipmitool output looks like the following, enable *IPMI over LAN*
using the machine's BMC tool or application.

```
# ipmitool -I lan -H 10.10.110.13 -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
Error: Unable to establish LAN session
```
Additional information on ipmitool may be found at [Configuring IPMI
under Linux using
ipmitool](https://www.thomas-krenn.com/en/wiki/Configuring_IPMI_under_Linux_using_ipmitool).
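Since every cluster machine must pass this check, a small loop over
the IPMI addresses from the table above can save some typing (a
convenience sketch; adjust the addresses and credentials to your
site):

```
for ip in 10.10.110.12 10.10.110.13; do
    ipmitool -I lanplus -H "$ip" -L ADMINISTRATOR -U root -R 7 -N 5 -P root power status
done
```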
#### PXE Boot configuration

Each cluster machine must be configured to PXE boot from the interface
attached to the `provisioning` network.

One method of verifying PXE boot is configured correctly is to access
the remote console of the machine and observe the boot process. If
the machine is not attempting PXE boot or it is attempting to PXE boot
on the wrong interface, reboot the machine into the BIOS and select
the correct interface in the boot options.
Additional verification can be done on the jump server using the
tcpdump tool. The following command looks for DHCP or TFTP traffic
arriving on any interface. Replace `any` with the interface attached to
the provisioning network to verify end-to-end connectivity between the
jump server and cluster machine.

```
# tcpdump -i any port 67 or port 68 or port 69
```
If tcpdump does not show any traffic, verify that any switches in the
path are configured to forward PXE boot requests (i.e. check the VLAN
configuration).
### Configure the jump server

The jump server is required to be pre-installed with an OS. ICN
supports Ubuntu 20.04.
Before provisioning the jump server, first edit `user_config.sh` to
provide the name of the interface connected to the provisioning
network.

```
# ip --brief link show
enp4s0f3         UP             00:1e:67:fc:ff:17 <BROADCAST,MULTICAST,UP,LOWER_UP>
```

In `user_config.sh`:

```
export IRONIC_INTERFACE="enp4s0f3"
```
### Install the jump server components

```
# make jump_server
```

To uninstall the jump server components, run:

```
# make clean_jump_server
```
Before proceeding with the configuration, a basic understanding of the
essential components used in ICN is required.

![img](./sw-diagram.png "Software Overview")
#### Flux

[Flux](https://fluxcd.io/) is a tool for implementing GitOps workflows, where infrastructure
and application configuration is committed to source control and
continuously deployed in a K8s cluster.
The important Flux resources ICN uses are:

- GitRepository, which describes where configuration data is committed
- HelmRelease, which describes an installation of a Helm chart
- Kustomization, which describes the application of K8s resources
  customized with a kustomization file
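On a running jump server these resources can be listed with the Flux
command-line client (a convenience sketch, assuming the `flux` CLI is
installed):

```
# flux get sources git -A
# flux get kustomizations -A
# flux get helmreleases -A
```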
#### Cluster API (CAPI)

[Cluster API](https://cluster-api.sigs.k8s.io/) provides declarative APIs and tooling for provisioning,
upgrading, and operating K8s clusters.
There are a number of important CAPI resources that ICN uses. To ease
deployment, ICN captures the resources into a Helm chart.
#### Bare Metal Operator (BMO)

Central to CAPI are infrastructure and bootstrap providers: pluggable
components for configuring the OS and K8s installation, respectively.

ICN uses the [Cluster API Provider Metal3 for Managed Bare Metal
Hardware](https://github.com/metal3-io/cluster-api-provider-metal3) for infrastructure provisioning, which in turn depends on the
[Metal3 Bare Metal Operator](https://github.com/metal3-io/baremetal-operator) to do the actual work. The Bare Metal
Operator uses [Ironic](https://ironicbaremetal.org/) to execute the low-level provisioning tasks.
Similar to the CAPI resources that ICN uses, ICN captures the Bare
Metal Operator resources it uses into a Helm chart.
> NOTE: To assist in migrating R5 and earlier releases' use of
> nodes.json and the Provisioning resource to the site YAML described
> below, a helper script is provided at `tools/migration/to_r6.sh`.
#### Define the compute cluster

The first step in provisioning a site with ICN is to define the
desired day-0 configuration of the workload clusters.
A [configuration](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=tree;f=deploy/site/cluster-icn) containing all supported ICN components is available
in the ICN repository. End-users may use this as a base and add or
remove components as desired. Each YAML file in this configuration is
one of the Flux resources described in the overview: GitRepository,
HelmRelease, or Kustomization.
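One possible workflow for customizing the day-0 configuration is to
copy this directory and delete or add component YAML files there (a
sketch; the directory name and the removed file are hypothetical):

```
# cp -r deploy/site/cluster-icn deploy/site/cluster-custom
# rm deploy/site/cluster-custom/podsecurity.yaml   # hypothetical component to drop
```

The cluster chart's `flux` path value (described below) would then
point at the new directory.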
#### Define the site

A site definition is composed of BMO and CAPI resources, describing
machines and clusters respectively. These resources are captured into
the ICN machine and cluster Helm charts. Defining the site is
therefore a matter of specifying the values needed by the charts.
##### Site-specific Considerations

Documentation for the machine chart may be found in its [values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/machine/values.yaml),
and documentation for the cluster chart may be found in its
[values.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/cluster/values.yaml). Please review those for more information; what follows
are some site-specific considerations to be aware of.
Note that there are a large number of ways to configure machines and
especially clusters; for large scale deployments it may make sense to
create custom charts to eliminate duplication in the values files.
###### Control plane endpoint

The K8s control plane endpoint address must be provided to the cluster
chart.
For a highly-available control plane, this would typically be a
load-balanced virtual IP address. Configuration of an external load
balancer is out of scope for this document. The chart also provides
another mechanism to accomplish this, using the VRRP protocol to assign
the control plane endpoint among the selected control plane nodes; see
the `keepalived` dictionary in the cluster chart values.
For a single control plane node with a static IP address, some care
must be taken to ensure that CAPI chooses the correct machine to
provision as the control plane node. To do this, add a label to the
`machineLabels` dictionary in the machine chart and specify a K8s match
expression in the `controlPlaneHostSelector` dictionary of the cluster
chart. Once done, the IP address of the labeled and selected machine
can be used as the control plane endpoint address.
###### Static or dynamic baremetal network IPAM

The cluster and machine charts support either static or dynamic IPAM
in the baremetal network.

Dynamic IPAM is configured by specifying the `networks` dictionary in
the cluster chart. At least two entries must be included, the
`baremetal` and `provisioning` networks. Under each entry, provide the
predictable network interface name as the value of the `interface` key.
Note that this is in the cluster chart and therefore is in the form of
a template for each machine used in the cluster. If the machines are
sufficiently different such that the same interface name is not used
on each machine, then the static approach below must be used instead.
Static IPAM is configured by specifying the `networks` dictionary in the
machine chart. At least two entries must be included, the `baremetal`
and `provisioning` networks. From the chart example values:

```
networks:
  baremetal:
    macAddress: 00:1e:67:fe:f4:19
    # type is either ipv4 or ipv4_dhcp
    type: ipv4
    # ipAddress is only valid for type ipv4
    ipAddress: 10.10.110.21/24
    # gateway is only valid for type ipv4
    gateway: 10.10.110.1
    # nameservers is an array of DNS servers; only valid for type ipv4
    nameservers: ["8.8.8.8"]
  provisioning:
    macAddress: 00:1e:67:fe:f4:1a
    type: ipv4_dhcp
```

The provisioning network must always be type `ipv4_dhcp`.
In either the static or dynamic case additional networks may be
included; however, the static assignment option for an individual
network exists only when the machine chart approach is used.
The first thing done is to create a `site.yaml` file containing a
Namespace to hold the site resources and a GitRepository pointing to
the ICN repository where the machine and cluster Helm charts are
located.

Note that when defining multiple sites it is only necessary to apply
the Namespace and GitRepository once on the jump server managing the
sites.

```
apiVersion: v1
kind: Namespace
metadata:
  name: metal3
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: icn
  namespace: metal3
spec:
  gitImplementation: go-git
  interval: 60m
  ref:
    branch: master
  url: https://gerrit.akraino.org/r/icn
```
##### Define a machine

Important values in machine definition include:

- **machineName:** the host name of the machine
- **bmcAddress, bmcUsername, bmcPassword:** the bare metal controller
  (e.g. IPMI) access values

Capture each machine's values into a HelmRelease in the site YAML:
```
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: pod11-node2
  namespace: metal3
spec:
  interval: 5m
  chart:
    spec:
      chart: deploy/machine
      sourceRef:
        kind: GitRepository
        name: icn
  values:
    machineName: pod11-node2
    machineLabels:
      machine: pod11-node2
    bmcAddress: ipmi://10.10.110.12
    bmcUsername: root
    bmcPassword: root
    networks:
      baremetal:
        macAddress: 00:1e:67:fe:f4:19
        type: ipv4
        ipAddress: 10.10.110.22/24
        gateway: 10.10.110.1
        nameservers: ["8.8.8.8"]
      provisioning:
        macAddress: 00:1e:67:fe:f4:1a
        type: ipv4_dhcp
      private:
        macAddress: 00:1e:67:f8:6a:40
        type: ipv4
        ipAddress: 10.10.112.3/24
      sriov:
        macAddress: 00:1e:67:f8:6a:41
        type: ipv4
        ipAddress: 10.10.113.3/24
```
##### Define a cluster

Important values in cluster definition include:

- **clusterName:** the name of the cluster
- **numControlPlaneMachines:** the number of control plane nodes
- **numWorkerMachines:** the number of worker nodes
- **controlPlaneEndpoint:** see [Site-specific Considerations](#site-specific-considerations) above
- **userData:** dictionary containing the default username, password, and
  SSH authorized key for the machines
- **flux:** dictionary containing the location of the day-0 configuration of
  the cluster; see [Define the compute cluster](#define-the-compute-cluster) above

Capture each cluster's values into a HelmRelease in the site YAML:
```
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cluster-icn
  namespace: metal3
spec:
  interval: 5m
  chart:
    spec:
      chart: deploy/cluster
      sourceRef:
        kind: GitRepository
        name: icn
  values:
    clusterName: icn
    numControlPlaneMachines: 1
    numWorkerMachines: 1
    controlPlaneEndpoint: 10.10.110.23
    controlPlaneHostSelector:
      matchLabels:
        machine: pod11-node3
    userData:
      hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
      sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
    flux:
      url: https://gerrit.akraino.org/r/icn
      branch: master
      path: ./deploy/site/cluster-icn
```
##### Encrypt secrets in site definition

This step is optional, but recommended to protect sensitive
information stored in the site definition. The site script is
configured to protect the `bmcPassword` and `hashedPassword` values.

Use an existing GPG key pair or create a new one, then encrypt the
secrets contained in the site YAML using `site.sh`. The public key and
SOPS configuration are created in the site YAML directory; these may be
used to encrypt (but not decrypt) future secrets.
```
# ./deploy/site/site.sh create-gpg-key site-secrets-key
# ./deploy/site/site.sh sops-encrypt-site site.yaml site-secrets-key
```
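SOPS encrypts only the values and leaves the YAML structure readable,
so a quick scan shows whether the protected fields were in fact
encrypted; after running the commands above, the `bmcPassword` and
`hashedPassword` values should appear as `ENC[...]` strings:

```
# grep -E 'bmcPassword|hashedPassword' site.yaml
```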
##### Example site definitions

Refer to the [pod11 site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/pod11/site.yaml) and the [vm site.yaml](https://gerrit.akraino.org/r/gitweb?p=icn.git;a=blob;f=deploy/site/vm/site.yaml) for complete
examples of site definitions for a static and a dynamic baremetal
network respectively. These site definitions are for the simple
two-machine clusters used in ICN testing.
#### Inform the Flux controllers of the site definition

The final step is to inform the jump server Flux controllers of the
site definition by creating three resources:

- a GitRepository containing the location where the site definition is
  stored
- a Secret holding the GPG private key used to encrypt the secrets in
  the site definition
- a Kustomization referencing the GitRepository, Secret, and path in
  the repository where the site definition is located
This may be done with the help of the `site.sh` script:

```
# ./deploy/site/site.sh flux-create-site URL BRANCH PATH KEY_NAME
```
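As a concrete example, for the pod11 site definition used throughout
this guide, stored on the master branch of the ICN repository and
encrypted with the key created above, the invocation would look like
the following (a sketch; substitute the repository and path holding
your own site definition):

```
# ./deploy/site/site.sh flux-create-site https://gerrit.akraino.org/r/icn master deploy/site/pod11 site-secrets-key
```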
#### Monitoring progress

The overall status of the cluster deployment can be monitored with
`clusterctl`.

```
# clusterctl -n metal3 describe cluster icn
NAME                                                                READY  SEVERITY  REASON                           SINCE  MESSAGE
/icn                                                                False  Warning   ScalingUp                        4m14s  Scaling up control plane to 1 replicas (actual 0)
├─ClusterInfrastructure - Metal3Cluster/icn
├─ControlPlane - KubeadmControlPlane/icn                            False  Warning   ScalingUp                        4m14s  Scaling up control plane to 1 replicas (actual 0)
│ └─Machine/icn-9sp7z                                               False  Info      WaitingForInfrastructure         4m17s  1 of 2 completed
│   └─MachineInfrastructure - Metal3Machine/icn-controlplane-khtsk
└─Workers
  └─MachineDeployment/icn                                           False  Warning   WaitingForAvailableMachines      4m49s  Minimum availability requires 1 replicas, current 0 available
    └─Machine/icn-6b8dfc7f6f-tmgv7                                  False  Info      WaitingForInfrastructure         4m49s  0 of 2 completed
      ├─BootstrapConfig - KubeadmConfig/icn-workers-79pl9           False  Info      WaitingForControlPlaneAvailable  4m19s
      └─MachineInfrastructure - Metal3Machine/icn-workers-m7vb8
```
The status of OS provisioning can be monitored by inspecting the
`BareMetalHost` resources.

```
# kubectl -n metal3 get bmh
NAME          STATE        CONSUMER   ONLINE   ERROR   AGE
pod11-node2   inspecting              true             5m15s
pod11-node3   inspecting              true             5m14s
```
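The hosts pass through several states before the OS install completes;
kubectl's standard watch flag is a convenient way to follow the
transitions as they happen:

```
# kubectl -n metal3 get bmh -w
```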
Once the OS is installed, the status of K8s provisioning can be
monitored by logging into the machine using the credentials from the
`userData` section of the site values and inspecting the cloud-init
logs.

```
root@pod11-node2:~# tail -f /var/log/cloud-init-output.log
Cloud-init v. 21.4-0ubuntu1~20.04.1 running 'modules:final' at Wed, 05 Jan 2022 01:34:41 +0000. Up 131.66 seconds.
Cloud-init v. 21.4-0ubuntu1~20.04.1 finished at Wed, 05 Jan 2022 01:34:41 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sda2]. Up 132.02 seconds
```
Once the cluster's control plane is ready, its kubeconfig can be
obtained with `clusterctl` and the status of the cluster can be
monitored with `kubectl`.

```
# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
# kubectl --kubeconfig=icn-admin.conf get pods -A
NAMESPACE   NAME              READY   STATUS    RESTARTS   AGE
emco        db-emco-mongo-0   1/1     Running   0          15h
emco        emco-etcd-0       1/1     Running   0          15h
```
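Before digging into individual pods, a quick check that both machines
have joined the cluster and are `Ready`:

```
# kubectl --kubeconfig=icn-admin.conf get nodes
```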
#### Examining the deployment process

The deployment resources can be examined with the kubectl and helm
tools. The examples below show the resources on the jump server.

```
# kubectl -n flux-system get GitRepository
NAME         URL                                READY   STATUS                                                              AGE
icn-master   https://gerrit.akraino.org/r/icn   True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
```
```
# kubectl -n flux-system get Kustomization
NAME                    READY   STATUS                                                               AGE
icn-master-site-pod11   True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   7m4s
```

```
# kubectl -n metal3 get GitRepository
NAME   URL                                READY   STATUS                                                              AGE
icn    https://gerrit.akraino.org/r/icn   True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   7m22s
```

```
# kubectl -n metal3 get HelmRelease
NAME          READY   STATUS                             AGE
cluster-icn   True    Release reconciliation succeeded   7m54s
pod11-node2   True    Release reconciliation succeeded   7m54s
pod11-node3   True    Release reconciliation succeeded   7m54s
```

```
# kubectl -n metal3 get HelmChart
NAME                 CHART            VERSION   SOURCE KIND     SOURCE NAME   READY   STATUS                                 AGE
metal3-cluster-icn   deploy/cluster   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   8m9s
metal3-pod11-node2   deploy/machine   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   8m9s
metal3-pod11-node3   deploy/machine   *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   8m9s
```
```
# helm -n metal3 ls
NAME          NAMESPACE   REVISION   UPDATED                                   STATUS     CHART           APP VERSION
cluster-icn   metal3      2          2022-01-05 01:03:51.075860871 +0000 UTC   deployed   cluster-0.1.0
pod11-node2   metal3      2          2022-01-05 01:03:49.365432 +0000 UTC      deployed   machine-0.1.0
pod11-node3   metal3      2          2022-01-05 01:03:49.463726617 +0000 UTC   deployed   machine-0.1.0
```
```
# helm -n metal3 get values --all cluster-icn
COMPUTED VALUES:
clusterName: icn
containerRuntime: containerd
containerdVersion: 1.4.11-1
controlPlaneEndpoint: 10.10.110.23
controlPlaneHostSelector:
  matchLabels:
    machine: pod11-node3
controlPlanePrefix: 24
dockerVersion: 5:20.10.10~3-0~ubuntu-focal
flux:
  branch: master
  path: ./deploy/site/cluster-icn
  url: https://gerrit.akraino.org/r/icn
imageName: focal-server-cloudimg-amd64.img
kubeVersion: 1.21.6-00
numControlPlaneMachines: 1
numWorkerMachines: 1
podCidr: 10.244.64.0/18
userData:
  hashedPassword: $6$rounds=10000$bhRsNADLl$BzCcBaQ7Tle9AizUHcMKN2fygyPMqBebOuvhApI8B.pELWyFUaAWRasPOz.5Gf9bvCihakRnBTwsi217n2qQs1
  sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
```
```
# helm -n metal3 get values --all pod11-node2
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.12
bmcPassword: root
bmcUsername: root
machineLabels:
  machine: pod11-node2
machineName: pod11-node2
networks:
  baremetal:
    gateway: 10.10.110.1
    ipAddress: 10.10.110.22/24
    macAddress: 00:1e:67:fe:f4:19
    nameservers:
    - 8.8.8.8
    type: ipv4
  provisioning:
    macAddress: 00:1e:67:fe:f4:1a
    type: ipv4_dhcp
  sriov:
    ipAddress: 10.10.113.3/24
    macAddress: 00:1e:67:f8:6a:41
    type: ipv4
```
```
# helm -n metal3 get values --all pod11-node3
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.13
bmcPassword: root
bmcUsername: root
machineLabels:
  machine: pod11-node3
machineName: pod11-node3
networks:
  baremetal:
    gateway: 10.10.110.1
    ipAddress: 10.10.110.23/24
    macAddress: 00:1e:67:f1:5b:90
    nameservers:
    - 8.8.8.8
    type: ipv4
  provisioning:
    macAddress: 00:1e:67:f1:5b:91
    type: ipv4_dhcp
  sriov:
    ipAddress: 10.10.113.4/24
    macAddress: 00:1e:67:f8:69:81
    type: ipv4
```
Once the workload cluster is ready, the deployment resources may be
examined on it as well.

```
root@pod11-node5:# clusterctl -n metal3 get kubeconfig icn >icn-admin.conf
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get GitRepository -A
NAMESPACE     NAME   URL                                        READY   STATUS                                                                         AGE
emco          emco   https://github.com/open-ness/EMCO          True    Fetched revision: openness-21.03.06/18ec480f755119d54aa42c1bc3bd248dfd477165   16h
flux-system   icn    https://gerrit.akraino.org/r/icn           True    Fetched revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50              16h
kud           kud    https://gerrit.onap.org/r/multicloud/k8s   True    Fetched revision: master/8157bf63753839ce4e9006978816fad3f63ca2de              16h
```
```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get Kustomization -A
NAMESPACE     NAME            READY   STATUS                                                               AGE
flux-system   icn-flux-sync   True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
flux-system   kata            True    Applied revision: master/0e93643e74f26bfc062a81c2f05ad947550f8d50   16h
```
```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmRelease -A
NAMESPACE   NAME                     READY   STATUS                             AGE
emco        db                       True    Release reconciliation succeeded   16h
emco        monitor                  True    Release reconciliation succeeded   16h
emco        podsecurity              True    Release reconciliation succeeded   16h
emco        services                 True    Release reconciliation succeeded   16h
emco        tools                    True    Release reconciliation succeeded   16h
kud         cdi                      True    Release reconciliation succeeded   16h
kud         cdi-operator             True    Release reconciliation succeeded   16h
kud         cpu-manager              True    Release reconciliation succeeded   16h
kud         kubevirt                 True    Release reconciliation succeeded   16h
kud         kubevirt-operator        True    Release reconciliation succeeded   16h
kud         multus-cni               True    Release reconciliation succeeded   16h
kud         node-feature-discovery   True    Release reconciliation succeeded   16h
kud         ovn4nfv                  True    Release reconciliation succeeded   16h
kud         ovn4nfv-network          True    Release reconciliation succeeded   16h
kud         podsecurity              True    Release reconciliation succeeded   16h
kud         qat-device-plugin        True    Release reconciliation succeeded   16h
kud         sriov-network            True    Release reconciliation succeeded   16h
kud         sriov-network-operator   True    Release reconciliation succeeded   16h
```
```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get HelmChart -A
NAMESPACE     NAME                         CHART                                               VERSION   SOURCE KIND     SOURCE NAME   READY   STATUS                                 AGE
emco          emco-db                      deployments/helm/emcoOpenNESS/emco-db               *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
emco          emco-monitor                 deployments/helm/monitor                            *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
emco          emco-services                deployments/helm/emcoOpenNESS/emco-services         *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
emco          emco-tools                   deployments/helm/emcoOpenNESS/emco-tools            *         GitRepository   emco          True    Fetched and packaged revision: 0.1.0   16h
flux-system   emco-podsecurity             deploy/podsecurity                                  *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   16h
flux-system   kud-podsecurity              deploy/podsecurity                                  *         GitRepository   icn           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-cdi                      kud/deployment_infra/helm/cdi                       *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-cdi-operator             kud/deployment_infra/helm/cdi-operator              *         GitRepository   kud           True    Fetched and packaged revision: 0.1.1   16h
kud           kud-cpu-manager              kud/deployment_infra/helm/cpu-manager               *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-kubevirt                 kud/deployment_infra/helm/kubevirt                  *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-kubevirt-operator        kud/deployment_infra/helm/kubevirt-operator         *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-multus-cni               kud/deployment_infra/helm/multus-cni                *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-node-feature-discovery   kud/deployment_infra/helm/node-feature-discovery    *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-ovn4nfv                  kud/deployment_infra/helm/ovn4nfv                   *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-ovn4nfv-network          kud/deployment_infra/helm/ovn4nfv-network           *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-qat-device-plugin        kud/deployment_infra/helm/qat-device-plugin         *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-sriov-network            kud/deployment_infra/helm/sriov-network             *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
kud           kud-sriov-network-operator   kud/deployment_infra/helm/sriov-network-operator    *         GitRepository   kud           True    Fetched and packaged revision: 0.1.0   16h
```
```
root@pod11-node5:# helm --kubeconfig=icn-admin.conf ls -A
NAME                     NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                          APP VERSION
cdi                      kud         2          2022-01-05 01:54:28.39195226 +0000 UTC    deployed   cdi-0.1.0                      v1.34.1
cdi-operator             kud         2          2022-01-05 01:54:04.904465491 +0000 UTC   deployed   cdi-operator-0.1.1             v1.34.1
cpu-manager              kud         2          2022-01-05 01:54:01.911819055 +0000 UTC   deployed   cpu-manager-0.1.0              v1.4.1-no-taint
db                       emco        2          2022-01-05 01:53:36.096690949 +0000 UTC   deployed   emco-db-0.1.0
kubevirt                 kud         2          2022-01-05 01:54:12.563840437 +0000 UTC   deployed   kubevirt-0.1.0                 v0.41.0
kubevirt-operator        kud         2          2022-01-05 01:53:59.190388299 +0000 UTC   deployed   kubevirt-operator-0.1.0        v0.41.0
monitor                  emco        2          2022-01-05 01:53:36.085180458 +0000 UTC   deployed   monitor-0.1.0                  1.16.0
multus-cni               kud         2          2022-01-05 01:54:03.494462704 +0000 UTC   deployed   multus-cni-0.1.0               v3.7
node-feature-discovery   kud         2          2022-01-05 01:53:58.489616047 +0000 UTC   deployed   node-feature-discovery-0.1.0   v0.7.0
ovn4nfv                  kud         2          2022-01-05 01:54:07.488105774 +0000 UTC   deployed   ovn4nfv-0.1.0                  v3.0.0
ovn4nfv-network          kud         2          2022-01-05 01:54:31.79127155 +0000 UTC    deployed   ovn4nfv-network-0.1.0          v2.2.0
podsecurity              kud         2          2022-01-05 01:53:37.400019369 +0000 UTC   deployed   podsecurity-0.1.0
podsecurity              emco        2          2022-01-05 01:53:35.993351972 +0000 UTC   deployed   podsecurity-0.1.0
qat-device-plugin        kud         2          2022-01-05 01:54:03.598022943 +0000 UTC   deployed   qat-device-plugin-0.1.0        0.19.0-kerneldrv
sriov-network            kud         2          2022-01-05 01:54:31.695963579 +0000 UTC   deployed   sriov-network-0.1.0            4.8.0
sriov-network-operator   kud         2          2022-01-05 01:54:07.787596951 +0000 UTC   deployed   sriov-network-operator-0.1.0   4.8.0
tools                    emco        2          2022-01-05 01:53:58.317119097 +0000 UTC   deployed   emco-tools-0.1.0
```
```
root@pod11-node5:# kubectl --kubeconfig=icn-admin.conf get pods -A -o wide
NAMESPACE   NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
emco        db-emco-mongo-0                    1/1     Running   0          16h   10.244.65.53   pod11-node2   <none>           <none>
emco        emco-etcd-0                        1/1     Running   0          16h   10.244.65.57   pod11-node2   <none>           <none>
emco        monitor-monitor-74649c5c64-dxhfn   1/1     Running   0          16h   10.244.65.65   pod11-node2   <none>           <none>
emco        services-clm-7ff876dfc-vgncs       1/1     Running   3          16h   10.244.65.58   pod11-node2   <none>           <none>
```
Basic self-tests of Kata, EMCO, and the other addons may be performed
with the `kata.sh` and `addons.sh` test scripts once the workload
cluster is ready.

```
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/kata/kata.sh test
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/addons/addons.sh test
```
To destroy the workload cluster and deprovision its machines, it is
only necessary to delete the site Kustomization. Uninstallation
progress can be monitored similarly to deployment: with `clusterctl`,
by examining the `BareMetalHost` resources, and so on.

```
root@pod11-node5:# kubectl -n flux-system delete Kustomization icn-master-site-pod11
```