# Installation guide
-
## Hardware
-
### Overview
Due to the almost limitless number of possible hardware
configurations, this installation guide has chosen a concrete
configuration to use in the examples that follow.
+> NOTE: The example configuration's BMC does not support Redfish
+> virtual media, and therefore IPMI is used instead. When supported
+> by the BMC, it is recommended to use the more secure Redfish virtual
+> media option as shown in the [Quick start guide](quick-start.md).
+
The configuration contains the following three machines.
| Hostname    | CPU       | Network interface                           |
|-------------|-----------|---------------------------------------------|
| pod11-node3 | 2xE5-2699 | IF3: 10.10.113.4 00:1e:67:f8:69:81 VLAN 113 |
| pod11-node2 | 2xE5-2699 |                                             |
- The `sriov` network, available for the application data plane.
-
### Configuration
#### Baseboard Management Controller (BMC) configuration
The BMC IP address should be statically assigned using the machine's
-BMC tool or application.
+BMC tool or application. Configuration of the pod11-node3 machine is
+shown in [Appendix A](#bmc-configuration).
To verify IPMI is configured correctly for each cluster machine, use
ipmitool:
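
For example, the power status of the pod11-node3 machine can be
queried over the LAN interface (a sketch using the BMC address and
credentials from this guide's example configuration; substitute the
values of each machine):

    ipmitool -I lanplus -H 10.10.110.13 -U root -P root power status

A successful reply verifies that IPMI over LAN is enabled and that the
credentials are valid.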
#### PXE Boot configuration
Each cluster machine must be configured to PXE boot from the interface
-attached to the `provisioning` network.
+attached to the `provisioning` network. Configuration of the
+pod11-node3 machine is shown in [Appendix A](#pxe-boot-configuration-1).
One method of verifying that PXE boot is configured correctly is to
access the remote console of the machine and observe the boot process.
The attached network must also be configured properly to forward PXE
boot requests (i.e. VLAN configuration).
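
The PXE path can also be exercised from the command line by requesting
a one-time network boot over IPMI and power cycling the machine (a
sketch using the pod11-node3 BMC address and credentials from this
guide's example configuration):

    ipmitool -I lanplus -H 10.10.110.13 -U root -P root chassis bootdev pxe
    ipmitool -I lanplus -H 10.10.110.13 -U root -P root power cycle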
+### Additional BIOS configuration
-## Jump server
+Each cluster machine should also be configured to enable any desired
+features such as virtualization support. Configuration of the
+pod11-node3 machine is shown in [Appendix
+A](#additional-bios-configuration-1).
+## Jump server
### Configure the jump server
make jump_server
-
### Uninstallation
make clean_jump_server
-
## Compute clusters
-
### Overview
Before proceeding with the configuration, a basic understanding of the
Similar to the CAPI resources that ICN uses, ICN captures the Bare
Metal Operator resources it uses into a Helm chart.
-
### Configuration
-> NOTE:/ To assist in the migration of R5 and earlier release's use from
+> NOTE: To assist in migrating from the nodes.json and Provisioning
> resource used by R5 and earlier releases to the site YAML described
> below, a helper script is provided at tools/migration/to_r6.sh.
The cluster and machine charts support either static or dynamic IPAM
in the baremetal network.
-Dynamic IPAM is configured by specifying the `networks` dictionary in
-the cluster chart. At least two entries must be included, the
-`baremetal` and `provisioning` networks. Under each entry, provide the
-predictable network interface name as the value of `interface` key.
-
+Dynamic IPAM is configured by specifying IP pools containing the
+address ranges to assign, together with the mapping of interfaces and
+networks to those pools. The IP pools are specified with the `ipPools`
+dictionary in the cluster chart. From the chart example values:
+
+ ipPools:
+ baremetal:
+ # start is the beginning of the address range in the pool.
+ start: 192.168.151.10
+ # end is the end of the address range in the pool.
+ end: 192.168.151.20
+ # prefix is the network prefix of addresses in the range.
+ prefix: 24
+ # gateway is optional.
+ #gateway: 192.168.151.1
+ # preAllocations are optional. Note that if the pool overlaps
+ # with the gateway, then a pre-allocation is required.
+ #preAllocations:
+ # controlPlane: 192.168.151.254
+
+The interface and network mapping is specified with the `networkData`
+dictionary in the cluster chart. From the chart example values:
+
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ interface: ens6
+ provisioning:
+ interface: ens5
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ # link is optional and defaults to the network name.
+ #link: baremetal
+ fromIPPool: baremetal
+ services:
+ dns:
+ - 8.8.8.8
+
+At least two entries must be included: the `baremetal` and
+`provisioning` networks. The `provisioning` network must always be of
+type `ipv4DHCP`. Under each entry, provide the predictable network
+interface name as the value of the `interface` key.
+
Note that this is in the cluster chart and therefore is in the form of
a template for each machine used in the cluster. If the machines are
sufficiently different such that the same interface name is not used
on each machine, then the static approach below must be used instead.
-
-Static IPAM is configured by specifying the `networks` dictionary in the
-machine chart. At least two entries must be included, the `baremetal`
-and `provisioning` networks. From the chart example values:
-
- networks:
- baremetal:
- macAddress: 00:1e:67:fe:f4:19
- # type is either ipv4 or ipv4_dhcp
- type: ipv4
- # ipAddress is only valid for type ipv4
- ipAddress: 10.10.110.21/24
- # gateway is only valid for type ipv4
- gateway: 10.10.110.1
- # nameservers is an array of DNS servers; only valid for type ipv4
- nameservers: ["8.8.8.8"]
- provisioning:
- macAddress: 00:1e:67:fe:f4:1a
- type: ipv4_dhcp
-
-The provisioning network must always be type `ipv4_dhcp`.
+
+Static IPAM is configured similarly to dynamic IPAM. Instead of
+providing a template with the cluster chart, specific values are
+provided with the machine chart. From the chart example values:
+
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ private:
+ macAddress: 00:1e:67:f8:6a:40
+ storage:
+ macAddress: 00:1e:67:f8:6a:41
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ # link is optional and defaults to the network name.
+ #link: baremetal
+ ipAddress: 10.10.110.21/24
+ gateway: 10.10.110.1
+ private:
+ ipAddress: 10.10.112.2/24
+ storage:
+ ipAddress: 10.10.113.2/24
+ services:
+ dns: ["8.8.8.8"]
+
+Again, at least two entries must be included: the `baremetal` and
+`provisioning` networks. The `provisioning` network must always be of
+type `ipv4DHCP`.
In either the static or dynamic case, additional networks may be
included; however, the static assignment option for an individual
network exists only when the machine chart approach is used.
-
+
+For additional information on configuring IPv4/IPv6 dual-stack
+operation, refer to [Appendix B](#appendix-b-ipv4ipv6-dual-stack).
+
##### Prerequisites
The first step is to create a `site.yaml` file containing a
bmcAddress: ipmi://10.10.110.12
bmcUsername: root
bmcPassword: root
- networks:
- baremetal:
- macAddress: 00:1e:67:fe:f4:19
- type: ipv4
- ipAddress: 10.10.110.22/24
- gateway: 10.10.110.1
- nameservers:
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ private:
+ macAddress: 00:1e:67:f8:6a:40
+ storage:
+ macAddress: 00:1e:67:f8:6a:41
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ ipAddress: 10.10.110.22/24
+ gateway: 10.10.110.1
+ private:
+ ipAddress: 10.10.112.3/24
+ storage:
+ ipAddress: 10.10.113.3/24
+ services:
+ dns:
- 8.8.8.8
- provisioning:
- macAddress: 00:1e:67:fe:f4:1a
- type: ipv4_dhcp
- private:
- macAddress: 00:1e:67:f8:6a:40
- type: ipv4
- ipAddress: 10.10.112.3/24
- storage:
- macAddress: 00:1e:67:f8:6a:41
- type: ipv4
- ipAddress: 10.10.113.3/24
-
+
##### Define a cluster
Important values in the cluster definition include:
# ./deploy/site/site.sh flux-create-site URL BRANCH PATH KEY_NAME
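
For the example site in this guide, a concrete invocation might look
like the following (a sketch: the repository URL and branch match the
flux values shown in the Deployment section, while the site path and
key name are hypothetical placeholders to adapt to your site):

    ./deploy/site/site.sh flux-create-site https://github.com/malsbat/icn master deploy/site/pod11 site-secrets-key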
-
### Deployment
provider: icn
site: pod11
clusterName: icn
- cni: flannel
+ cni: calico
containerRuntime: containerd
containerdVersion: 1.4.11-1
controlPlaneEndpoint: 10.10.110.23
matchLabels:
machine: pod11-node3
controlPlanePrefix: 24
- dockerVersion: 5:20.10.10~3-0~ubuntu-focal
flux:
branch: master
- path: ./deploy/site/cluster-icn
+ decryptionSecret: # ...
+ path: ./deploy/site/pod11/cluster/icn
repositoryName: icn
- url: https://gerrit.akraino.org/r/icn
+ url: https://github.com/malsbat/icn
imageName: focal-server-cloudimg-amd64.img
+ ipam: ipv4
k8sVersion: v1.21.6
kubeVersion: 1.21.6-00
numControlPlaneMachines: 1
numWorkerMachines: 1
- podCidr: 10.244.64.0/18
+ podCidrBlocks:
+ - 10.244.64.0/18
+ serviceCidrBlocks:
+ - 10.244.0.0/18
userData:
- hashedPassword: $6$rounds=10000$bhRsNADLl$BzCcBaQ7Tle9AizUHcMKN2fygyPMqBebOuvhApI8B.pELWyFUaAWRasPOz.5Gf9bvCihakRnBTwsi217n2qQs1
+ hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
name: ubuntu
sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
workersHostSelector:
matchLabels:
machine: pod11-node2
-
+
# helm -n metal3 get values --all pod11-node2
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.12
+ bmcDisableCertificateVerification: false
bmcPassword: root
bmcUsername: root
machineLabels:
machine: pod11-node2
machineName: pod11-node2
- networks:
- baremetal:
- gateway: 10.10.110.1
- ipAddress: 10.10.110.22/24
- macAddress: 00:1e:67:fe:f4:19
- nameservers:
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ sriov:
+ macAddress: 00:1e:67:f8:6a:41
+ networks:
+ ipv4:
+ baremetal:
+ gateway: 10.10.110.1
+ ipAddress: 10.10.110.22/24
+ sriov:
+ ipAddress: 10.10.113.3/24
+ ipv4DHCP:
+ provisioning: {}
+ services:
+ dns:
- 8.8.8.8
- type: ipv4
- provisioning:
- macAddress: 00:1e:67:fe:f4:1a
- type: ipv4_dhcp
- sriov:
- ipAddress: 10.10.113.3/24
- macAddress: 00:1e:67:f8:6a:41
- type: ipv4
# helm -n metal3 get values --all pod11-node3
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.13
+ bmcDisableCertificateVerification: false
bmcPassword: root
bmcUsername: root
machineLabels:
machine: pod11-node3
machineName: pod11-node3
- networks:
- baremetal:
- gateway: 10.10.110.1
- ipAddress: 10.10.110.23/24
- macAddress: 00:1e:67:f1:5b:90
- nameservers:
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:f1:5b:90
+ provisioning:
+ macAddress: 00:1e:67:f1:5b:91
+ sriov:
+ macAddress: 00:1e:67:f8:69:81
+ networks:
+ ipv4:
+ baremetal:
+ gateway: 10.10.110.1
+ ipAddress: 10.10.110.23/24
+ sriov:
+ ipAddress: 10.10.113.4/24
+ ipv4DHCP:
+ provisioning: {}
+ services:
+ dns:
- 8.8.8.8
- type: ipv4
- provisioning:
- macAddress: 00:1e:67:f1:5b:91
- type: ipv4_dhcp
- sriov:
- ipAddress: 10.10.113.4/24
- macAddress: 00:1e:67:f8:69:81
- type: ipv4
Once the workload cluster is ready, the deployment resources may be
examined similarly.
emco services-clm-7ff876dfc-vgncs 1/1 Running 3 16h 10.244.65.58 pod11-node2 <none> <none>
...
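
The listing above can be reproduced by pointing kubectl at the
workload cluster, for example (a sketch: retrieving the kubeconfig
with clusterctl and the use of the metal3 namespace are assumptions;
adjust to how the cluster was created):

    clusterctl get kubeconfig icn -n metal3 > icn-admin.conf
    kubectl --kubeconfig icn-admin.conf get pods -A -o wide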
-
### Verification
Basic self-tests of Kata, EMCO, and the other addons may be performed
as follows:
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/kata/kata.sh test
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/addons/addons.sh test
-
### Uninstallation
To destroy the workload cluster and deprovision its machines, it is
only necessary to delete the site Kustomization:
root@pod11-node5:# kubectl -n flux-system delete Kustomization icn-master-site-pod11
+## Appendix A: BMC and BIOS configuration of pod11-node3
+The BMC and BIOS configuration will vary depending on the vendor. The
+screenshots below are intended only to provide guidance on what to
+look for in the hardware used in the chosen configuration.
+
+### BMC configuration
+
+BMC IP address configured in the BIOS.
+
+![img](./pod11-node3-bios-bmc-configuration.png "BMC LAN Configuration")
+
+BMC IP address configured in the web console.
+
+![img](./pod11-node3-ip-configuration.png "BMC LAN Configuration")
+
+IPMI configuration. Not shown is the cipher suite configuration.
+
+![img](./pod11-node3-ipmi-over-lan.png "IPMI over LAN")
+
+### PXE boot configuration
+
+The screens below show enabling PXE boot for the specified NIC and
+ensuring it is first in the boot order.
+
+![img](./pod11-node3-bios-enable-pxe.png "Enable PXE boot")
+
+![img](./pod11-node3-bios-nic-boot-order.png "NIC boot order")
+
+### Additional BIOS configuration
+
+The screens below show enabling virtualization options in the BIOS.
+
+![img](./pod11-node3-bios-vt-x.png "Enable Intel VT-x")
+
+![img](./pod11-node3-bios-vt-d.png "Enable Intel VT-d")
+
+## Appendix B: IPv4/IPv6 dual-stack
+
+To enable dual-stack with dynamic IPAM, create an additional IP pool
+of IPv6 addresses and reference it in the `networkData` dictionary.
+Note the use of the `link` value to assign the IPv6 address to the
+correct interface.
+
+ ipPools:
+ baremetal:
+ ...
+ baremetal6:
+ start: 2001:db8:0::10
+ end: 2001:db8:0::20
+ prefix: 64
+ gateway: 2001:db8:0::1
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ interface: ens6
+ provisioning:
+ interface: ens5
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ fromIPPool: baremetal
+ ipv6:
+ baremetal6:
+ link: baremetal
+ fromIPPool: baremetal6
+ services:
+ ...
+
+To enable dual-stack with static IPAM, assign the addresses in the
+`networkData` dictionary. Note the use of the `link` value to assign
+the IPv6 address to the correct interface.
+
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ ipAddress: 10.10.110.21/24
+ gateway: 10.10.110.1
+ ipv6:
+ baremetal6:
+ link: baremetal
+ ipAddress: 2001:db8:0::21/64
+ gateway: 2001:db8:0::1
+ services:
+        ...
+
+The last change needed is in the cluster chart values to configure
+dual-stack support and define the IPv6 CIDR blocks for pods and
+services.
+
+ ipam: dualstack
+ podCidrBlocks:
+ - 10.244.64.0/18
+ - 2001:db8:1::/64
+ serviceCidrBlocks:
+ - 10.244.0.0/18
+ - 2001:db8:2::/64
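+
+Once a dual-stack cluster is deployed, the assignment can be verified
+by listing the pod CIDRs of each node (a sketch; run against the
+workload cluster's kubeconfig):
+
+    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
+
+Each node should report both an IPv4 and an IPv6 pod CIDR.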