-4. Metal3 is launched with IPMI configuration as configured in
- "user_config.sh" and provisions the bare metal servers using IPMI
- LAN network. For more information refer to the [Debugging
- Failures](#debugging-failures) section.
-5. Metal3 launch verification runs with a timeout of 60 mins by
- checking the status of all the servers being provisioned or not.
- 1. All servers are provisioned in parallel. For example, if your
- deployment is having 10 servers in the edge location, all the 10
- servers are provisioned at the same time.
- 2. Metal3 launch verification takes care of checking all the
- servers are provisioned, the network interfaces are up and
- provisioned with a provider network gateway and DNS server.
- 3. Metal3 launch verification checks the status of all servers
- given in user_config.sh to make sure all the servers are
- provisioned. For example, if 8 servers are provisioned and 2
- servers are not provisioned, launch verification makes sure all
- servers are provisioned before launch k8s clusters on those
- servers.
-6. BPA bare metal components are invoked with the MAC address of the
- servers provisioned by Metal3, BPA bare metal components decide the
- cluster size and also the number of clusters required in the edge
- location.
-7. BPA bare metal runs the containerized Kuberenetes Reference
- Deployment (KUD) as a job for each cluster. KUD installs the k8s
- cluster on the slice of servers and install ONAP4K8S and all other
- default plugins such as Multus, OVN, OVN4NFV, NFD, Virtlet and
- SRIOV.
-8. BPA REST API agent installed in the bootstrap cluster or jump
- server, and this install rest-api, rook/ceph, MinIO as the cloud
- storage. This provides a way for user to upload their own software,
- container images or OS image to jump server.
+4. The Cluster API controllers and the bootstrap and infrastructure
+   providers are configured and installed.
+5. The Flux controllers are installed.
+
+#### Creating a compute cluster
+A compute cluster is composed of installations of two types of Helm
+charts: machine and cluster. The specific installations of these Helm
+charts are defined in HelmRelease resources consumed by the Flux
+controllers in the jump server. The user is required to provide the
+machine- and cluster-specific values in the HelmRelease resources.
+
+##### Preconfiguration for the compute cluster in Jump Server
+The user is required to provide the IPMI information of the servers
+connected to the Local Controller, along with the machine and cluster
+values of the compute cluster to be created.
+
+If the baremetal network provides a DHCP server with gateway and DNS
+server information, and each server has identical hardware, then a
+cluster template can be used. Otherwise, these values must be
+provided individually for each server. Refer to the machine chart in
+icn/deploy/machine for more details. In the example below, no DHCP
+server is present in the baremetal network.
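
By contrast, when the baremetal network does provide DHCP with gateway
and DNS information, the per-machine network values reduce to a single
dynamic entry. A sketch only, reusing node1's values from the example
below:

``` yaml
values:
  machineName: node1
  machineLabels:
    machine: node1
  bmcAddress: ipmi://10.10.110.11
  bmcUsername: admin
  bmcPassword: password
  networks:
    baremetal:
      # With a DHCP server on the baremetal network, no static
      # ipAddress, gateway, or nameservers entries are needed
      macAddress: 00:1e:67:fe:f4:19
      type: ipv4_dhcp
```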
+
+`site.yaml`
+``` yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: metal3
+---
+apiVersion: source.toolkit.fluxcd.io/v1beta1
+kind: GitRepository
+metadata:
+ name: icn
+ namespace: metal3
+spec:
+ gitImplementation: go-git
+ interval: 1m0s
+ ref:
+ branch: master
+ timeout: 20s
+ url: https://gerrit.akraino.org/r/icn
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: machine-node1
+ namespace: metal3
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: deploy/machine
+ sourceRef:
+ kind: GitRepository
+ name: icn
+ interval: 1m
+ values:
+ machineName: node1
+ machineLabels:
+ machine: node1
+ bmcAddress: ipmi://10.10.110.11
+ bmcUsername: admin
+ bmcPassword: password
+ networks:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ type: ipv4
+ ipAddress: 10.10.110.21/24
+ gateway: 10.10.110.1
+ nameservers: ["8.8.8.8"]
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ type: ipv4_dhcp
+ sriov:
+ macAddress: 00:1e:67:f8:6a:41
+ type: ipv4
+ ipAddress: 10.10.113.3/24
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: machine-node2
+ namespace: metal3
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: deploy/machine
+ sourceRef:
+ kind: GitRepository
+ name: icn
+ interval: 1m
+ values:
+ machineName: node2
+ machineLabels:
+ machine: node2
+ bmcAddress: ipmi://10.10.110.12
+ bmcUsername: admin
+ bmcPassword: password
+ networks:
+ baremetal:
+ macAddress: 00:1e:67:f1:5b:90
+ type: ipv4
+ ipAddress: 10.10.110.22/24
+ gateway: 10.10.110.1
+ nameservers: ["8.8.8.8"]
+ provisioning:
+ macAddress: 00:1e:67:f1:5b:91
+ type: ipv4_dhcp
+ sriov:
+ macAddress: 00:1e:67:f8:69:81
+ type: ipv4
+ ipAddress: 10.10.113.4/24
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: cluster-compute
+ namespace: metal3
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: deploy/cluster
+ sourceRef:
+ kind: GitRepository
+ name: icn
+ interval: 1m
+ values:
+ clusterName: compute
+ controlPlaneEndpoint: 10.10.110.21
+ controlPlaneHostSelector:
+ matchLabels:
+ machine: node1
+ workersHostSelector:
+ matchLabels:
+ machine: node2
+ userData:
+ hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
+ sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCrxu+fSrU51vgAO5zP5xWcTU8uLv4MkUZptE2m1BJE88JdQ80kz9DmUmq2AniMkVTy4pNeUW5PsmGJa+anN3MPM99CR9I37zRqy5i6rUDQgKjz8W12RauyeRMIBrbdy7AX1xasoTRnd6Ta47bP0egiFb+vUGnlTFhgfrbYfjbkJhVfVLCTgRw8Yj0NSK16YEyhYLbLXpix5udRpXSiFYIyAEWRCCsWJWljACr99P7EF82vCGI0UDGCCd/1upbUwZeTouD/FJBw9qppe6/1eaqRp7D36UYe3KzLpfHQNgm9AzwgYYZrD4tNN6QBMq/VUIuam0G1aLgG8IYRLs41HYkJ root@jump0
+ flux:
+ url: https://gerrit.akraino.org/r/icn
+ branch: master
+ path: ./deploy/site/cluster-e2etest
+```
+
+A brief overview of the values is below. Refer to the machine and
+cluster charts in deploy/machine and deploy/cluster respectively for
+more details.
+
+- *machineName*: This will be the hostname for the machine, once it is
+ provisioned by Metal3.
+- *bmcUsername*: The BMC username, required by Ironic.
+- *bmcPassword*: The BMC password, required by Ironic.
+- *bmcAddress*: The IPMI LAN IP address of the server's BMC.
+- *networks*: A dictionary of the networks used by ICN. For more
+ information, refer to the *networkData* field of the BareMetalHost
+ resource definition.
+ - *macAddress*: The MAC address of the interface.
+ - *type*: The type of network, either dynamic ("ipv4_dhcp") or
+ static ("ipv4").
+ - *ipAddress*: Only valid for type "ipv4"; the IP address of the
+ interface.
+ - *gateway*: Only valid for type "ipv4"; the gateway of this
+ network.
+ - *nameservers*: Only valid for type "ipv4"; an array of DNS
+ servers.
+- *clusterName*: The name of the cluster.
+- *controlPlaneEndpoint*: The K8s control plane endpoint. This works
+ in cooperation with the *controlPlaneHostSelector* to ensure that it
+ addresses the control plane node.
+- *controlPlaneHostSelector*: A K8s match expression against labels on
+ the *BareMetalHost* machine resource (from the *machineLabels* value
+ of the machine Helm chart). This will be used by Cluster API to
+ select machines for the control plane.
+- *workersHostSelector*: A K8s match expression selecting worker
+ machines.
+- *userData*: User data values to be provisioned into each machine in
+ the cluster.
+ - *hashedPassword*: The hashed password of the default user on each
+ machine.
+ - *sshAuthorizedKey*: An authorized public key of the *root* user on
+ each machine.
+- *flux*: An optional repository to continuously reconcile the created
+ K8s cluster against.
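
A value for *hashedPassword* can be produced with standard tools; a
sketch (`openssl passwd -6` emits a SHA-512 crypt hash with the
default round count, while `mkpasswd` from the Debian/Ubuntu whois
package accepts `--rounds=10000` as used in the example above):

``` shell
# Generate a SHA-512 crypt ("$6$...") hash suitable for hashedPassword.
# To match the rounds=10000 form shown in the example above, use
# instead: mkpasswd --method=sha-512 --rounds=10000
HASHED_PASSWORD=$(openssl passwd -6 'example-password')
echo "$HASHED_PASSWORD"
```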
+
+#### Running
+After configuring the machine and cluster site values, the next steps
+are to encrypt the secrets contained in the file, commit the file to
+source control, and create the Flux resources on the jump server
+pointing to the committed files.
+
+1. Create a key to protect the secrets in the site values if one does
+   not already exist. The key created below will be named
+   "site-secrets".
+
+``` shell
+root@jump0:# ./deploy/site/site.sh create-gpg-key site-secrets
+```
+
+2. Encrypt the secrets in the site values.
+
+``` shell
+root@jump0:# ./deploy/site/site.sh sops-encrypt-site site.yaml site-secrets
+```
+
+3. Commit the site.yaml and additional files (sops.pub.asc,
+ .sops.yaml) created by sops-encrypt-site to a Git repository. For
+ the purposes of the next step, site.yaml will be committed to a Git
+ repository hosted at URL, on the specified BRANCH, and at location
+ PATH inside the source tree.
+
+4. Create the Flux resources to deploy the resources described by the
+ repository in step 3. This creates a GitRepository resource
+ containing the URL and BRANCH to synchronize, a Secret resource
+ containing the private key used to decrypt the secrets in the site
+ values, and a Kustomization resource with the PATH to the site.yaml
+ file at the GitRepository.
+
+```shell
+root@jump0:# ./deploy/site/site.sh flux-create-site URL BRANCH PATH site-secrets
+```
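
The resources created by flux-create-site are roughly equivalent to
the following. This is a sketch only: the resource names, namespace,
and Secret name are assumptions, and the authoritative behavior is in
deploy/site/site.sh.

``` yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: site               # assumed name
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: BRANCH
  url: URL
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: site               # assumed name
  namespace: flux-system
spec:
  interval: 5m
  path: PATH
  prune: true
  sourceRef:
    kind: GitRepository
    name: site
  decryption:
    provider: sops
    secretRef:
      name: site-secrets-gpg   # assumed Secret holding the private key
```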
+
+The progress of the deployment may be monitored in a number of ways:
+
+``` shell
+root@jump0:# kubectl -n metal3 get baremetalhost
+root@jump0:# kubectl -n metal3 get cluster compute
+root@jump0:# clusterctl -n metal3 describe cluster compute
+```
+
+When the control plane is ready, the kubeconfig can be obtained with
+clusterctl and used to access the compute cluster:
+
+``` shell
+root@jump0:# clusterctl -n metal3 get kubeconfig compute >compute-admin.conf
+root@jump0:# kubectl --kubeconfig=compute-admin.conf cluster-info
+```