# Installation guide
-
## Hardware
-
### Overview
Due to the almost limitless number of possible hardware
configurations, this guide describes the specific hardware used in the
example deployment below.
<table id="orgf44d94a" border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
-
<colgroup>
<col class="org-left" />
-
<col class="org-right" />
-
<col class="org-left" />
-
<col class="org-left" />
-
<col class="org-left" />
-
<col class="org-left" />
-
<col class="org-left" />
</colgroup>
+
<thead>
<tr>
<th scope="col" class="org-left">Hostname</th>
<td class="org-left"> </td>
</tr>
-
<tr>
<td class="org-left">pod11-node3</td>
<td class="org-right">2xE5-2699</td>
<td class="org-left">IF3: 10.10.113.4 00:1e:67:f8:69:81 VLAN 113</td>
</tr>
-
<tr>
<td class="org-left">pod11-node2</td>
<td class="org-right">2xE5-2699</td>
- The `sriov` network, available for the application data plane.
-
### Configuration
#### Baseboard Management Controller (BMC) configuration
## Jump server
-
### Configure the jump server
The jump server is required to be pre-installed with an OS. The ICN
jump server components are then installed with:

    make jump_server
-
### Uninstallation
    make clean_jump_server
-
## Compute clusters
-
### Overview
Before proceeding with the configuration, a basic understanding of the
underlying Cluster API (CAPI) and Bare Metal Operator resources is helpful.
Similar to the CAPI resources that ICN uses, ICN captures the Bare
Metal Operator resources it uses into a Helm chart.
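
To make this concrete, the machine chart ultimately renders Bare Metal
Operator resources roughly along the following lines. This is only a
sketch: the field names come from the `metal3.io/v1alpha1`
`BareMetalHost` CRD, the values mirror the pod11 examples used later in
this guide, and the Secret names are hypothetical (the chart generates
its own).

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: pod11-node2
      namespace: metal3
    spec:
      online: true
      # MAC address of the provisioning NIC.
      bootMACAddress: 00:1e:67:fe:f4:1a
      bmc:
        address: ipmi://10.10.110.12
        # Secret holding bmcUsername/bmcPassword (hypothetical name).
        credentialsName: pod11-node2-bmc-secret
        disableCertificateVerification: false
      # Secret rendered from the networkData values (hypothetical name).
      networkData:
        name: pod11-node2-network-data
        namespace: metal3
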
-
### Configuration
> NOTE: To assist in the migration of R5 and earlier releases' use from
The cluster and machine charts support either static or dynamic IPAM
in the baremetal network.
-Dynamic IPAM is configured by specifying the `networks` dictionary in
-the cluster chart. At least two entries must be included, the
-`baremetal` and `provisioning` networks. Under each entry, provide the
-predictable network interface name as the value of `interface` key.
-
+Dynamic IPAM is configured by specifying IP pools that contain the
+address ranges to assign, together with a mapping of interfaces and
+networks to those pools. The IP pools are specified with the `ipPools`
+dictionary in the cluster chart. From the chart example values:
+
+ ipPools:
+ baremetal:
+ # start is the beginning of the address range in the pool.
+ start: 192.168.151.10
+ # end is the end of the address range in the pool.
+ end: 192.168.151.20
+ # prefix is the network prefix of addresses in the range.
+ prefix: 24
+ # gateway is optional.
+ #gateway: 192.168.151.1
+ # preAllocations are optional. Note that if the pool overlaps
+ # with the gateway, then a pre-allocation is required.
+ #preAllocations:
+ # controlPlane: 192.168.151.254
+
+The interface and network mapping is specified with the `networkData`
+dictionary in the cluster chart. From the chart example values:
+
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ interface: ens6
+ provisioning:
+ interface: ens5
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ # link is optional and defaults to the network name.
+ #link: baremetal
+ fromIPPool: baremetal
+ services:
+ dns:
+ - 8.8.8.8
+
+At least two entries must be included: the `baremetal` and
+`provisioning` networks. The provisioning network must always be of
+type `ipv4DHCP`. Under each entry, provide the predictable network
+interface name as the value of the `interface` key.
+
Note that this is in the cluster chart and therefore is in the form of
a template for each machine used in the cluster. If the machines are
sufficiently different such that the same interface name is not used
on each machine, then the static approach below must be used instead.
-
-Static IPAM is configured by specifying the `networks` dictionary in the
-machine chart. At least two entries must be included, the `baremetal`
-and `provisioning` networks. From the chart example values:
-
- networks:
- baremetal:
- macAddress: 00:1e:67:fe:f4:19
- # type is either ipv4 or ipv4_dhcp
- type: ipv4
- # ipAddress is only valid for type ipv4
- ipAddress: 10.10.110.21/24
- # gateway is only valid for type ipv4
- gateway: 10.10.110.1
- # nameservers is an array of DNS servers; only valid for type ipv4
- nameservers: ["8.8.8.8"]
- provisioning:
- macAddress: 00:1e:67:fe:f4:1a
- type: ipv4_dhcp
-
-The provisioning network must always be type `ipv4_dhcp`.
+
+Static IPAM is configured similarly to dynamic IPAM. Instead of
+providing a template with the cluster chart, specific values are
+provided with the machine chart. From the chart example values:
+
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ private:
+ macAddress: 00:1e:67:f8:6a:40
+ storage:
+ macAddress: 00:1e:67:f8:6a:41
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ # link is optional and defaults to the network name.
+ #link: baremetal
+ ipAddress: 10.10.110.21/24
+ gateway: 10.10.110.1
+ private:
+ ipAddress: 10.10.112.2/24
+ storage:
+ ipAddress: 10.10.113.2/24
+ services:
+ dns: ["8.8.8.8"]
+
+Again, at least two entries must be included: the `baremetal` and
+`provisioning` networks. The provisioning network must always be of
+type `ipv4DHCP`.
+
In either the static or dynamic case, additional networks may be
included; however, static address assignment for an individual network
is available only when the machine chart approach is used.
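
For illustration, an additional statically assigned network would be
added to the machine chart's `networkData` using the same pattern as
the `private` and `storage` entries above. The network name, MAC
address, and subnet below are hypothetical:

    networkData:
      links:
        ethernets:
          management:
            macAddress: 00:1e:67:00:00:0a
      networks:
        ipv4:
          management:
            ipAddress: 10.10.115.2/24
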
-
+
+For additional information on configuring IPv4/IPv6 dual-stack
+operation, refer to [Appendix B](#appendix-b-ipv4ipv6-dual-stack).
+
##### Prerequisites
The first step is to create a `site.yaml` file containing a
bmcAddress: ipmi://10.10.110.12
bmcUsername: root
bmcPassword: root
- networks:
- baremetal:
- macAddress: 00:1e:67:fe:f4:19
- type: ipv4
- ipAddress: 10.10.110.22/24
- gateway: 10.10.110.1
- nameservers:
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ private:
+ macAddress: 00:1e:67:f8:6a:40
+ storage:
+ macAddress: 00:1e:67:f8:6a:41
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ ipAddress: 10.10.110.22/24
+ gateway: 10.10.110.1
+ private:
+ ipAddress: 10.10.112.3/24
+ storage:
+ ipAddress: 10.10.113.3/24
+ services:
+ dns:
- 8.8.8.8
- provisioning:
- macAddress: 00:1e:67:fe:f4:1a
- type: ipv4_dhcp
- private:
- macAddress: 00:1e:67:f8:6a:40
- type: ipv4
- ipAddress: 10.10.112.3/24
- storage:
- macAddress: 00:1e:67:f8:6a:41
- type: ipv4
- ipAddress: 10.10.113.3/24
-
+
##### Define a cluster
Important values in the cluster definition include:
# ./deploy/site/site.sh flux-create-site URL BRANCH PATH KEY_NAME
-
<a id="org6324e82"></a>
### Deployment
provider: icn
site: pod11
clusterName: icn
- cni: flannel
+ cni: calico
containerRuntime: containerd
containerdVersion: 1.4.11-1
controlPlaneEndpoint: 10.10.110.23
matchLabels:
machine: pod11-node3
controlPlanePrefix: 24
- dockerVersion: 5:20.10.10~3-0~ubuntu-focal
flux:
branch: master
- path: ./deploy/site/cluster-icn
+ decryptionSecret: # ...
+ path: ./deploy/site/pod11/cluster/icn
repositoryName: icn
- url: https://gerrit.akraino.org/r/icn
+ url: https://github.com/malsbat/icn
imageName: focal-server-cloudimg-amd64.img
+ ipam: ipv4
k8sVersion: v1.21.6
kubeVersion: 1.21.6-00
numControlPlaneMachines: 1
numWorkerMachines: 1
- podCidr: 10.244.64.0/18
+ podCidrBlocks:
+ - 10.244.64.0/18
+ serviceCidrBlocks:
+ - 10.244.0.0/18
userData:
- hashedPassword: $6$rounds=10000$bhRsNADLl$BzCcBaQ7Tle9AizUHcMKN2fygyPMqBebOuvhApI8B.pELWyFUaAWRasPOz.5Gf9bvCihakRnBTwsi217n2qQs1
+ hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
name: ubuntu
sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwLj/ekRDjp354W8kcGLagjudjTBZO8qBffJ4mNb01EJueUbLvM8EwCv2zu9lFKHD+nGkc1fkB3RyCn5OqzQDTAIpp82nOHXtrbKAZPg2ob8BlfVAz34h5r1bG78lnMH1xk7HKNbf73h9yzUEKiyrd8DlhJcJrsOZTPuTdRrIm7jxScDJpHFjy8tGISNMcnBGrNS9ukaRLK+PiEfDpuRtw/gOEf58NXgu38BcNm4tYfacHYuZFUbNCqj9gKi3btZawgybICcqrNqF36E/XXMfCS1qxZ7j9xfKjxWFgD9gW/HkRtV6K11NZFEvaYBFBA9S/GhLtk9aY+EsztABthE0J root@pod11-node5
workersHostSelector:
matchLabels:
machine: pod11-node2
-
+
# helm -n metal3 get values --all pod11-node2
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.12
+ bmcDisableCertificateVerification: false
bmcPassword: root
bmcUsername: root
machineLabels:
machine: pod11-node2
machineName: pod11-node2
- networks:
- baremetal:
- gateway: 10.10.110.1
- ipAddress: 10.10.110.22/24
- macAddress: 00:1e:67:fe:f4:19
- nameservers:
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ sriov:
+ macAddress: 00:1e:67:f8:6a:41
+ networks:
+ ipv4:
+ baremetal:
+ gateway: 10.10.110.1
+ ipAddress: 10.10.110.22/24
+ sriov:
+ ipAddress: 10.10.113.3/24
+ ipv4DHCP:
+ provisioning: {}
+ services:
+ dns:
- 8.8.8.8
- type: ipv4
- provisioning:
- macAddress: 00:1e:67:fe:f4:1a
- type: ipv4_dhcp
- sriov:
- ipAddress: 10.10.113.3/24
- macAddress: 00:1e:67:f8:6a:41
- type: ipv4
# helm -n metal3 get values --all pod11-node3
COMPUTED VALUES:
bmcAddress: ipmi://10.10.110.13
+ bmcDisableCertificateVerification: false
bmcPassword: root
bmcUsername: root
machineLabels:
machine: pod11-node3
machineName: pod11-node3
- networks:
- baremetal:
- gateway: 10.10.110.1
- ipAddress: 10.10.110.23/24
- macAddress: 00:1e:67:f1:5b:90
- nameservers:
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:f1:5b:90
+ provisioning:
+ macAddress: 00:1e:67:f1:5b:91
+ sriov:
+ macAddress: 00:1e:67:f8:69:81
+ networks:
+ ipv4:
+ baremetal:
+ gateway: 10.10.110.1
+ ipAddress: 10.10.110.23/24
+ sriov:
+ ipAddress: 10.10.113.4/24
+ ipv4DHCP:
+ provisioning: {}
+ services:
+ dns:
- 8.8.8.8
- type: ipv4
- provisioning:
- macAddress: 00:1e:67:f1:5b:91
- type: ipv4_dhcp
- sriov:
- ipAddress: 10.10.113.4/24
- macAddress: 00:1e:67:f8:69:81
- type: ipv4
Once the workload cluster is ready, the deployment resources may be
examined similarly.
emco services-clm-7ff876dfc-vgncs 1/1 Running 3 16h 10.244.65.58 pod11-node2 <none> <none>
...
-
### Verification
Basic self-tests of Kata, EMCO, and the other addons may be performed
with the commands below.
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/kata/kata.sh test
root@pod11-node5:# CLUSTER_NAME=icn ./deploy/addons/addons.sh test
-
### Uninstallation
To destroy the workload cluster and deprovision its machines, it is


+
+## Appendix B: IPv4/IPv6 dual-stack
+
+To enable dual-stack with dynamic IPAM, create an additional IP pool
+of IPv6 addresses and reference it in the `networkData` dictionary.
+Note the use of the `link` value to assign the IPv6 address to the
+correct interface.
+
+ ipPools:
+ baremetal:
+ ...
+ baremetal6:
+ start: 2001:db8:0::10
+ end: 2001:db8:0::20
+ prefix: 64
+ gateway: 2001:db8:0::1
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ interface: ens6
+ provisioning:
+ interface: ens5
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ fromIPPool: baremetal
+ ipv6:
+ baremetal6:
+ link: baremetal
+ fromIPPool: baremetal6
+ services:
+ ...
+
+To enable dual-stack with static IPAM, assign the addresses in the
+`networkData` dictionary. Note the use of the `link` value to assign
+the IPv6 address to the correct interface.
+
+ networkData:
+ links:
+ ethernets:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ networks:
+ ipv4DHCP:
+ provisioning: {}
+ ipv4:
+ baremetal:
+ ipAddress: 10.10.110.21/24
+ gateway: 10.10.110.1
+ ipv6:
+ baremetal6:
+ link: baremetal
+ ipAddress: 2001:db8:0::21/64
+ gateway: 2001:db8:0::1
+ services:
+ ..
+
+The last change needed is in the cluster chart values to configure
+dual-stack support and define the IPv6 CIDR blocks for pods and
+services.
+
+ ipam: dualstack
+ podCidrBlocks:
+ - 10.244.64.0/18
+ - 2001:db8:1::/64
+ serviceCidrBlocks:
+ - 10.244.0.0/18
+ - 2001:db8:2::/64
$ virsh -c qemu:///system net-list
Name State Autostart Persistent
----------------------------------------------------------
- vm-baremetal active yes yes
+ vagrant-libvirt active no yes
+ vm-baremetal active no yes
vm-provisioning active no yes
$ curl --insecure -u admin:password https://192.168.121.1:8000/redfish/v1/Managers
cluster. The jump server will be responsible for creating the
cluster.
-We also created two networks, baremetal and provisioning, and a third
-network overlaid upon the baremetal network using [Virtual Redfish
+We also created two networks, baremetal and provisioning. The [Virtual
+Redfish
BMC](https://docs.openstack.org/sushy-tools/latest/user/dynamic-emulator.html)
-for issuing Redfish requests to the virtual machines.
+used for issuing Redfish requests to the virtual machines is overlaid
+on the vagrant-libvirt network.
It's worth looking at these networks in more detail as they will be
important during configuration of the jump server and cluster.
$ virsh -c qemu:///system net-dumpxml vm-baremetal
- <network connections='3' ipv6='yes'>
+ <network connections='3'>
<name>vm-baremetal</name>
<uuid>216db810-de49-4122-a284-13fd2e44da4b</uuid>
<forward mode='nat'>
<port start='1024' end='65535'/>
</nat>
</forward>
- <bridge name='virbr3' stp='on' delay='0'/>
+ <bridge name='vm0' stp='on' delay='0'/>
<mac address='52:54:00:a3:e7:09'/>
<ip address='192.168.151.1' netmask='255.255.255.0'>
- <dhcp>
- <range start='192.168.151.1' end='192.168.151.254'/>
- </dhcp>
</ip>
</network>
The baremetal network provides outbound network access through the
-host and also assigns DHCP addresses in the range `192.168.151.2` to
-`192.168.151.254` to the virtual machines (the host itself is
-`192.168.151.1`).
+host. No DHCP server is present on this network. Address assignment to
+the virtual machines is done using the [Metal3
+IPAM](https://metal3.io/blog/2020/07/06/IP_address_manager.html), while
+the host itself is `192.168.151.1`.
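
For reference, the `ipPools` values shown earlier are presumably
realized as Metal3 `IPPool` resources on the jump server. A rough
sketch, using the field names of the `ipam.metal3.io/v1alpha1` CRD and
the address range from the chart example (the resource name and
namespace are assumptions):

    apiVersion: ipam.metal3.io/v1alpha1
    kind: IPPool
    metadata:
      name: baremetal
      namespace: metal3
    spec:
      # Prefix used when naming the IPAddress objects allocated from this pool.
      namePrefix: baremetal
      pools:
      - start: 192.168.151.10
        end: 192.168.151.20
        prefix: 24
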
$ virsh -c qemu:///system net-dumpxml vm-provisioning
<network connections='3'>
<name>vm-provisioning</name>
<uuid>d06de3cc-b7ca-4b09-a49d-a1458c45e072</uuid>
- <bridge name='vm0' stp='on' delay='0'/>
+ <bridge name='vm1' stp='on' delay='0'/>
<mac address='52:54:00:3e:38:a5'/>
</network>
$ virsh -c qemu:///system dumpxml vm-jump
...
<interface type='network'>
- <mac address='52:54:00:a8:97:6d'/>
- <source network='vm-baremetal' bridge='virbr3'/>
+ <mac address='52:54:00:fc:a8:01'/>
+ <source network='vagrant-libvirt' bridge='virbr1'/>
<target dev='vnet0'/>
<model type='virtio'/>
<alias name='ua-net-0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='network'>
- <mac address='52:54:00:80:3d:4c'/>
- <source network='vm-provisioning' bridge='vm0'/>
+ <mac address='52:54:00:a8:97:6d'/>
+ <source network='vm-baremetal' bridge='vm0'/>
<target dev='vnet1'/>
<model type='virtio'/>
<alias name='ua-net-1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</interface>
+ <interface type='network'>
+ <mac address='52:54:00:80:3d:4c'/>
+ <source network='vm-provisioning' bridge='vm1'/>
+ <target dev='vnet2'/>
+ <model type='virtio'/>
+ <alias name='ua-net-2'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
+ </interface>
...
-The baremetal network NIC in the jump server is the first NIC present
+The baremetal network NIC in the jump server is the second NIC present
in the machine and depending on the device naming scheme in place will
-be called `ens5` or `eth0`. Similarly, the provisioning network NIC will
-be `ens6` or `eth1`.
+be called `ens6` or `eth1`. Similarly, the provisioning network NIC will
+be `ens7` or `eth2`.
$ virsh -c qemu:///system dumpxml vm-machine-1
...
<interface type='network'>
<mac address='52:54:00:c6:75:40'/>
- <source network='vm-provisioning' bridge='vm0'/>
- <target dev='vnet2'/>
+ <source network='vm-provisioning' bridge='vm1'/>
+ <target dev='vnet3'/>
<model type='virtio'/>
<alias name='ua-net-0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:20:a3:0a'/>
- <source network='vm-baremetal' bridge='virbr3'/>
+ <source network='vm-baremetal' bridge='vm0'/>
<target dev='vnet4'/>
<model type='virtio'/>
<alias name='ua-net-1'/>
physical machine will typically provide this as a configuration option
in the BIOS settings.
-
## Install the jump server components
$ vagrant ssh jump
Before telling ICN to start installing the components, it must first
know which NIC is attached to the provisioning network. Recall that in the jump
-server the provisioning network NIC is `eth1`.
+server the provisioning network NIC is `eth2`.
Edit `user_config.sh` to the below.
#!/usr/bin/env bash
- export IRONIC_INTERFACE="eth1"
+ export IRONIC_INTERFACE="eth2"
Now install the jump server components.
When installation completes, the jump server is itself a
single-node Kubernetes cluster.
root@jump:/icn# kubectl cluster-info
- Kubernetes control plane is running at https://192.168.151.45:6443
-
+ Kubernetes control plane is running at https://192.168.121.126:6443
+
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
-The next is that [Cluster API](https://cluster-api.sigs.k8s.io/) is installed, with the [Metal3](https://github.com/metal3-io/cluster-api-provider-metal3)
+The next is that [Cluster API](https://cluster-api.sigs.k8s.io/) is
+installed, with the
+[Metal3](https://github.com/metal3-io/cluster-api-provider-metal3)
infrastructure provider and Kubeadm bootstrap provider. These
components provide the base for creating clusters with ICN.
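
To see how this relates to the cluster chart values described earlier,
the chart creates (among other resources) a Cluster API `Cluster`
roughly like the sketch below. The apiVersion may differ by CAPI
release, the namespace is an assumption, and the names and CIDR blocks
mirror the pod11 example values:

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: icn
      namespace: metal3
    spec:
      clusterNetwork:
        pods:
          cidrBlocks:
          - 10.244.64.0/18
        services:
          cidrBlocks:
          - 10.244.0.0/18
      # The control plane is managed by the Kubeadm control plane provider.
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: icn
      # The infrastructure is provided by the Metal3 provider.
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: Metal3Cluster
        name: icn
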
Before moving on to the next step, let's take one last look at the
provisioning NIC we set in `user_config.sh`.
- root@jump:/icn# ip link show dev eth1
- 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
+ root@jump:/icn# ip link show dev eth2
+ 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master provisioning state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:80:3d:4c brd ff:ff:ff:ff:ff:ff
The `master provisioning` portion indicates that this interface is now
attached to the `provisioning` bridge, over which the jump server
will communicate with the machines to be provisioned when it is time
to install an operating system.
-
## Create a cluster
root@jump:/icn# make vm_cluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
-
## Next steps
At this point you may proceed with the [Installation
guide](installation-guide.md), in particular
the [Deployment](installation-guide.md#Deployment) sub-section to
examine the cluster creation process in more detail.
-
<a id="org48e2dc9"></a>