+In some deployment models, Net C and Net A can be combined into the
+same network, but the developer must then take care of IP address
+management between Net A and the IPMI addresses of the servers.
+
+Also note that the IPMI NIC may share the same RJ-45 jack with another
+one of the NICs.
+
+# Pre-installation Requirements
+The ICN Infra Local Controller has two main components: the Local
+Controller and the K8s compute cluster.
+
+### Local Controller
+The Local Controller will reside in the jump server to run the Cluster
+API controllers with the Kubeadm bootstrap provider and Metal3
+infrastructure provider.
+
+### K8s Compute Cluster
+The K8s compute cluster will actually run the workloads and is
+installed on bare metal servers.
+
+## Hardware Requirements
+
+### Minimum Hardware Requirement
+All-in-one VM based deployment requires servers with at least 32 GB
+RAM and 32 CPUs.
+
+### Recommended Hardware Requirements
+Recommended hardware requirements are servers with 64 GB memory, 32
+CPUs, and SR-IOV network cards.
+
+## Software Prerequisites
+The jump server is required to be pre-installed with Ubuntu 18.04.
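Before installation, the OS version and the minimum sizes stated above can be sanity-checked with a few standard commands. This is only an illustrative sketch; the thresholds in the comments restate the figures from this document:

``` shell
#!/bin/bash
# Quick sanity check of the jump server before installation.
# Expected per the requirements above: Ubuntu 18.04, >= 32 CPUs,
# >= 32 GB RAM. Adjust if your deployment differs.

# OS name and version, e.g. "Ubuntu 18.04"
. /etc/os-release
echo "OS: ${NAME} ${VERSION_ID}"

# Number of available CPUs
echo "CPUs: $(nproc)"

# Total memory in GB
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
echo "Memory: $((mem_kb / 1024 / 1024)) GB"
```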
+
+## Database Prerequisites
+The ICN blueprint has no database prerequisites.
+
+## Other Installation Requirements
+
+### Jump Server Requirements
+
+#### Jump Server Hardware Requirements
+- Local Controller: at least three network interfaces.
+- Bare metal servers: four network interfaces, including one IPMI interface.
+- Four or more hubs, with cabling, to connect four networks.
+
+(Tested as below)
+Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
+---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
+jump0 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 |
+
+#### Jump Server Software Requirements
+ICN supports Ubuntu 18.04. The ICN blueprint installs all required
+software during `make jump_server`.
+
+### Network Requirements
+Please refer to figure 1 for all the network requirements of the ICN
+blueprint.
+
+Please make sure you have three distinct networks - Net A, Net B and
+Net C - as mentioned in figure 1. The Local Controller uses Net B and
+Net C to provision the operating system on the bare metal servers.
+
+### Bare Metal Server Requirements
+
+### K8s Compute Cluster
+
+#### Compute Server Hardware Requirements
+(Tested as below)
+Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
+---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
+node1 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+node2 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+node3 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+
+#### Compute Server Software Requirements
+The Local Controller will install all of the software on the compute
+servers, from the operating system to the software required to bring
+up the K8s cluster.
+
+### Execution Requirements (Bare Metal Only)
+The ICN blueprint checks all the preconditions and execution
+requirements for bare metal deployment.
+
+# Installation High-Level Overview
+Installation is a two-step process:
+- Installation of the Local Controller.
+- Installation of a compute cluster.
+
+## Bare Metal Deployment Guide
+
+### Install Bare Metal Jump Server
+
+#### Creating the Settings Files
+
+##### Local Controller Network Configuration Reference
+The network configuration file, named `user_config.sh`, can be found
+in the ICN parent directory.
+
+`user_config.sh`
+``` shell
+#!/bin/bash
+
+#Ironic Metal3 settings for provisioning network (Net B)
+export IRONIC_INTERFACE="eno2"
+```
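The value of `IRONIC_INTERFACE` must name the NIC on the jump server that is cabled to the provisioning network (Net B); `eno2` above is only an example. One way to identify the candidate interfaces is to list what the kernel sees:

``` shell
# List all network interfaces the kernel knows about, to pick the one
# connected to the provisioning network (Net B). Interface names such
# as eno2 vary between servers.
ls /sys/class/net

# With iproute2 installed, the link state and MAC address of each
# interface help confirm which port is cabled to Net B.
ip -br link show 2>/dev/null || true
```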
+
+#### Running
+After configuring the network configuration file, please run `make
+jump_server` from the ICN parent directory as shown below:
+
+``` shell
+root@jump0:# git clone "https://gerrit.akraino.org/r/icn"
+Cloning into 'icn'...
+remote: Counting objects: 69, done
+remote: Finding sources: 100% (69/69)
+remote: Total 4248 (delta 13), reused 4221 (delta 13)
+Receiving objects: 100% (4248/4248), 7.74 MiB | 21.84 MiB/s, done.
+Resolving deltas: 100% (1078/1078), done.
+root@jump0:# cd icn/
+root@jump0:# make jump_server
+```
+
+The following steps occur once the `make jump_server` command is
+run:
+1. All the software required to run the bootstrap cluster is
+ downloaded and installed.
+2. A K8s cluster is installed to maintain the bootstrap cluster and
+   all the servers in the edge location.
+3. Metal3-specific network configuration, such as the local DHCP
+   server networking for each edge location and the Ironic networking
+   for the provisioning network and the IPMI LAN network, is
+   identified and created.
+4. The Cluster API controllers, bootstrap provider, and
+   infrastructure provider are configured and installed.
+5. The Flux controllers are installed.
+
+#### Creating a compute cluster
+A compute cluster is composed of installations of two types of Helm
+charts: machine and cluster. The specific installations of these Helm
+charts are defined in HelmRelease resources consumed by the Flux
+controllers in the jump server. The user is required to provide the
+machine and cluster specific values in the HelmRelease resources.
+
+##### Preconfiguration for the compute cluster in Jump Server
+The user is required to provide the IPMI information of the servers,
+and the values for the compute cluster, to the Local Controller.
+
+If the baremetal network provides a DHCP server with gateway and DNS
+server information, and each server has identical hardware, then a
+cluster template can be used. Otherwise these values must also be
+provided in the values for each server. Refer to the machine chart in
+icn/deploy/machine for more details. In the example below, no DHCP
+server is present in the baremetal network.
+
+> *NOTE:* To assist in migrating from the `nodes.json` file and the
+> Provisioning resource used by R5 and earlier releases to a site
+> YAML, a helper script is provided at `tools/migration/to_r6.sh`.
+
+`site.yaml`
+``` yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: metal3
+---
+apiVersion: source.toolkit.fluxcd.io/v1beta1
+kind: GitRepository
+metadata:
+ name: icn
+ namespace: metal3
+spec:
+ gitImplementation: go-git
+ interval: 1m0s
+ ref:
+ branch: master
+ timeout: 20s
+ url: https://gerrit.akraino.org/r/icn
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: machine-node1
+ namespace: metal3
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: deploy/machine
+ sourceRef:
+ kind: GitRepository
+ name: icn
+ interval: 1m
+ values:
+ machineName: node1
+ machineLabels:
+ machine: node1
+ bmcAddress: ipmi://10.10.110.11
+ bmcUsername: admin
+ bmcPassword: password
+ networks:
+ baremetal:
+ macAddress: 00:1e:67:fe:f4:19
+ type: ipv4
+ ipAddress: 10.10.110.21/24
+ gateway: 10.10.110.1
+ nameservers: ["8.8.8.8"]
+ provisioning:
+ macAddress: 00:1e:67:fe:f4:1a
+ type: ipv4_dhcp
+ sriov:
+ macAddress: 00:1e:67:f8:6a:41
+ type: ipv4
+ ipAddress: 10.10.113.3/24
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: machine-node2
+ namespace: metal3
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: deploy/machine
+ sourceRef:
+ kind: GitRepository
+ name: icn
+ interval: 1m
+ values:
+ machineName: node2
+ machineLabels:
+ machine: node2
+ bmcAddress: ipmi://10.10.110.12
+ bmcUsername: admin
+ bmcPassword: password
+ networks:
+ baremetal:
+ macAddress: 00:1e:67:f1:5b:90
+ type: ipv4
+ ipAddress: 10.10.110.22/24
+ gateway: 10.10.110.1
+ nameservers: ["8.8.8.8"]
+ provisioning:
+ macAddress: 00:1e:67:f1:5b:91
+ type: ipv4_dhcp
+ sriov:
+ macAddress: 00:1e:67:f8:69:81
+ type: ipv4
+ ipAddress: 10.10.113.4/24
+---
+apiVersion: helm.toolkit.fluxcd.io/v2beta1
+kind: HelmRelease
+metadata:
+ name: cluster-compute
+ namespace: metal3
+spec:
+ interval: 5m
+ chart:
+ spec:
+ chart: deploy/cluster
+ sourceRef:
+ kind: GitRepository
+ name: icn
+ interval: 1m
+ values:
+ clusterName: compute
+ controlPlaneEndpoint: 10.10.110.21
+ controlPlaneHostSelector:
+ matchLabels:
+ machine: node1
+ workersHostSelector:
+ matchLabels:
+ machine: node2
+ userData:
+ hashedPassword: $6$rounds=10000$PJLOBdyTv23pNp$9RpaAOcibbXUMvgJScKK2JRQioXW4XAVFMRKqgCB5jC4QmtAdbA70DU2jTcpAd6pRdEZIaWFjLCNQMBmiiL40.
+ sshAuthorizedKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCrxu+fSrU51vgAO5zP5xWcTU8uLv4MkUZptE2m1BJE88JdQ80kz9DmUmq2AniMkVTy4pNeUW5PsmGJa+anN3MPM99CR9I37zRqy5i6rUDQgKjz8W12RauyeRMIBrbdy7AX1xasoTRnd6Ta47bP0egiFb+vUGnlTFhgfrbYfjbkJhVfVLCTgRw8Yj0NSK16YEyhYLbLXpix5udRpXSiFYIyAEWRCCsWJWljACr99P7EF82vCGI0UDGCCd/1upbUwZeTouD/FJBw9qppe6/1eaqRp7D36UYe3KzLpfHQNgm9AzwgYYZrD4tNN6QBMq/VUIuam0G1aLgG8IYRLs41HYkJ root@jump0
+ flux:
+ url: https://gerrit.akraino.org/r/icn
+ branch: master
+ path: ./deploy/site/cluster-icn
+```
+
+A brief overview of the values is below. Refer to the machine and
+cluster charts in deploy/machine and deploy/cluster respectively for
+more details.
+
+- *machineName*: This will be the hostname of the machine, once it is
+  provisioned by Metal3.
+- *bmcUsername*: The BMC username, required by Ironic.
+- *bmcPassword*: The BMC password, required by Ironic.
+- *bmcAddress*: The IPMI LAN IP address of the BMC.
+- *networks*: A dictionary of the networks used by ICN. For more
+ information, refer to the *networkData* field of the BareMetalHost
+ resource definition.
+ - *macAddress*: The MAC address of the interface.
+ - *type*: The type of network, either dynamic ("ipv4_dhcp") or
+ static ("ipv4").
+ - *ipAddress*: Only valid for type "ipv4"; the IP address of the
+ interface.
+ - *gateway*: Only valid for type "ipv4"; the gateway of this
+ network.
+ - *nameservers*: Only valid for type "ipv4"; an array of DNS
+ servers.
+- *clusterName*: The name of the cluster.
+- *controlPlaneEndpoint*: The K8s control plane endpoint. This works
+ in cooperation with the *controlPlaneHostSelector* to ensure that it
+ addresses the control plane node.
+- *controlPlaneHostSelector*: A K8s match expression against labels on
+ the *BareMetalHost* machine resource (from the *machineLabels* value
+ of the machine Helm chart). This will be used by Cluster API to
+ select machines for the control plane.
+- *workersHostSelector*: A K8s match expression selecting worker
+ machines.
+- *userData*: User data values to be provisioned into each machine in
+ the cluster.
+ - *hashedPassword*: The hashed password of the default user on each
+ machine.
+ - *sshAuthorizedKey*: An authorized public key of the *root* user on
+ each machine.
+- *flux*: An optional repository to continuously reconcile the created
+ K8s cluster against.
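The *hashedPassword* and *sshAuthorizedKey* values can be generated with standard tools. The commands below are one possible way, not the only one; the password `changeme` and the key file name `site-key` are placeholders:

``` shell
# hashedPassword: produce a crypt(3) SHA-512 hash (the $6$... format
# shown in site.yaml above). Note: openssl's -6 option does not emit
# an explicit rounds= field; tools such as mkpasswd from the whois
# package can, if that is required.
openssl passwd -6 'changeme'

# sshAuthorizedKey: generate a key pair and paste in the public half.
ssh-keygen -q -t rsa -b 4096 -N '' -f ./site-key
cat ./site-key.pub
```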
+
+#### Running
+After configuring the machine and cluster site values, the next steps
+are to encrypt the secrets contained in the file, commit the file to
+source control, and create the Flux resources on the jump server
+pointing to the committed files.
+
+1. Create a key to protect the secrets in the values if one does not
+   already exist. The key created below will be named "site-secrets".
+
+``` shell
+root@jump0:# ./deploy/site/site.sh create-gpg-key site-secrets
+```
+
+2. Encrypt the secrets in the site values.
+
+``` shell
+root@jump0:# ./deploy/site/site.sh sops-encrypt-site site.yaml site-secrets