- Net B (internal network) -- Provisioning network used by Ironic to
  do inspection.
- Net C (internal network) -- IPMI LAN used to run the IPMI protocol for
  OS provisioning. The NICs support IPMI. The IP address should be
  statically assigned via the IPMI tool or other means.
- Net D (internal network) -- Data plane network for the Akraino
  application, using SR-IOV networking and fiber cables (Intel 25GB
  and 40GB FLV NICs).

same networks, but the developer should take care of IP address
management between Net A and the IPMI addresses of the servers.

Also note that the IPMI NIC may share the same RJ-45 jack with another
one of the NICs.

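For example, the static address can be assigned with `ipmitool`. The sketch below only prints the commands so they can be reviewed first; the channel number (1), address, and netmask are example values to adapt to your BMC:

``` shell
#!/bin/bash
# Sketch: emit the ipmitool commands that assign a static IPMI LAN
# address. Channel 1 and the addresses below are example values only.
ipmi_static_cmds() {
    local channel=$1 addr=$2 netmask=$3
    echo "ipmitool lan set ${channel} ipsrc static"
    echo "ipmitool lan set ${channel} ipaddr ${addr}"
    echo "ipmitool lan set ${channel} netmask ${netmask}"
}

# Print the commands; run them on the server (as root) to apply them.
ipmi_static_cmds 1 10.10.10.11 255.255.255.0
```

After applying the settings on the server, `ipmitool lan print 1` shows the resulting LAN configuration.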
# Pre-installation Requirements
There are two main components in the ICN Infra Local Controller: the
Local Controller and the k8s compute cluster.
- Bare metal servers: four network interfaces, including one IPMI interface.
- Four or more hubs, with cabling, to connect four networks.
(Tested as below)
Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
jump0 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 |
#### Jump Server Software Requirements
ICN supports Ubuntu 18.04. The ICN blueprint installs all required
(Tested as below)
Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
node1 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
node2 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
node3 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
#### Compute Server Software Requirements
The Local Controller will install all the software on the compute servers.
The user is required to provide the IPMI information of the servers
they connect to the Local Controller by editing the node JSON sample
file in the directory icn/deploy/metal3/scripts/nodes.json.sample as
below. This example only shows 2 servers, statically configured on the
baremetal network. To add more servers, add another entry to the
array. If the baremetal network provides a DHCP server with gateway
and DNS server information, just change the baremetal network type to
"ipv4_dhcp". ICN provides DHCP servers for the provisioning network.
`nodes.json.sample`
``` json
    "address": "10.10.10.11"
  },
  "os": {
    "image_name": "focal-server-cloudimg-amd64.img",
    "username": "ubuntu",
    "password": "mypasswd"
  },
  "net": {
    "links": [
      {
        "id": "baremetal_nic",
        "ethernet_mac_address": "00:1e:67:fe:f4:19",
        "type": "phy"
      },
      {
        "id": "provisioning_nic",
        "ethernet_mac_address": "00:1e:67:fe:f4:1a",
        "type": "phy"
      },
      {
        "id": "sriov_nic",
        "ethernet_mac_address": "00:1e:67:f8:6a:41",
        "type": "phy"
      }
    ],
    "networks": [
      {
        "id": "baremetal",
        "link": "baremetal_nic",
        "type": "ipv4",
        "ip_address": "10.10.110.21/24",
        "gateway": "10.10.110.1",
        "dns_nameservers": ["8.8.8.8"]
      },
      {
        "id": "provisioning",
        "link": "provisioning_nic",
        "type": "ipv4_dhcp"
      },
      {
        "id": "sriov",
        "link": "sriov_nic",
        "type": "ipv4",
        "ip_address": "10.10.113.2/24"
      }
    ],
    "services": []
  }
},
{
    "address": "10.10.10.12"
  },
  "os": {
    "image_name": "focal-server-cloudimg-amd64.img",
    "username": "ubuntu",
    "password": "mypasswd"
  },
  "net": {
    "links": [
      {
        "id": "baremetal_nic",
        "ethernet_mac_address": "00:1e:67:f1:5b:90",
        "type": "phy"
      },
      {
        "id": "provisioning_nic",
        "ethernet_mac_address": "00:1e:67:f1:5b:91",
        "type": "phy"
      },
      {
        "id": "sriov_nic",
        "ethernet_mac_address": "00:1e:67:f8:69:81",
        "type": "phy"
      }
    ],
    "networks": [
      {
        "id": "baremetal",
        "link": "baremetal_nic",
        "type": "ipv4",
        "ip_address": "10.10.110.22/24",
        "gateway": "10.10.110.1",
        "dns_nameservers": ["8.8.8.8"]
      },
      {
        "id": "provisioning",
        "link": "provisioning_nic",
        "type": "ipv4_dhcp"
      },
      {
        "id": "sriov",
        "link": "sriov_nic",
        "type": "ipv4",
        "ip_address": "10.10.113.3/24"
      }
    ],
    "services": []
  }
}]
}
```
- *image_name*: Image name; the image should be in qcow2 format.
- *username*: Login username for the OS provisioned.
- *password*: Login password for the OS provisioned.
- *net*: Bare metal network information, given as a JSON field. It
  describes the interfaces and networks used by ICN. For more
  information, refer to the *networkData* field of the BareMetalHost
  resource definition.
  - *links*: An array of interfaces.
    - *id*: The ID of the interface. This is used in the network
      definitions to associate the interface with its network
      configuration.
    - *ethernet_mac_address*: The MAC address of the interface.
    - *type*: The type of interface. The only valid value is "phy".
  - *networks*: An array of networks.
    - *id*: The ID of the network.
    - *link*: The ID of the link this network definition applies to.
    - *type*: The type of network, either dynamic ("ipv4_dhcp") or
      static ("ipv4").
    - *ip_address*: Only valid for type "ipv4"; the IP address of the
      interface.
    - *gateway*: Only valid for type "ipv4"; the gateway of this
      network.
    - *dns_nameservers*: Only valid for type "ipv4"; an array of DNS
      servers.
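Before feeding the file to the Local Controller, a quick structural check can catch typos. The following is a sketch, not part of ICN: it assumes `jq` is installed and that the completed file keeps the top-level `nodes` array of the sample:

``` shell
#!/bin/bash
# Sketch: list every network id and type defined in a nodes.json file.
# Assumes the top-level "nodes" array used by nodes.json.sample.
list_networks() {
    jq -r '.nodes[].net.networks[] | "\(.id): \(.type)"' "$1"
}

# Demonstration against a minimal file shaped like the sample:
cat > /tmp/nodes-check.json << 'EOF'
{"nodes": [{"net": {"networks": [
  {"id": "baremetal", "type": "ipv4"},
  {"id": "provisioning", "type": "ipv4_dhcp"}
]}}]}
EOF
list_networks /tmp/nodes-check.json
```

Each node should show a `baremetal` entry (static "ipv4", or "ipv4_dhcp" if the lab DHCP server provides gateway and DNS details) and an "ipv4_dhcp" `provisioning` entry.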
#### Creating the Settings Files
``` shell
#!/bin/bash
#Edge Location Provider Network configuration
#Net A - Provider Network
#If the provider network has specific gateway and DNS server details
#in the edge location, supply those values in nodes.json.

#Ironic Metal3 settings for the provisioning network
#Interface to which the Ironic provisioning network is connected
#Net B - Provisioning Network
export IRONIC_INTERFACE="eno2"

#Ironic Metal3 setting for the IPMI LAN network
#Interface to which the Ironic IPMI LAN should bind
#Net C - IPMI LAN Network
export IRONIC_IPMI_INTERFACE="eno1"
```
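Before running the installer, it can be worth confirming that the interfaces named in the settings file actually exist on the jump server. This is a sketch, not part of ICN; the fallback names are the example values above:

``` shell
#!/bin/bash
# Sketch: report whether each configured interface exists on this host.
check_interface() {
    if ip link show "$1" > /dev/null 2>&1; then
        echo "$1: present"
    else
        echo "$1: missing"
    fi
}

# Use the values exported by the settings file, defaulting to the
# example names shown above.
for intf in "${IRONIC_INTERFACE:-eno2}" "${IRONIC_IPMI_INTERFACE:-eno1}"; do
    check_interface "$intf"
done
```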
#### Running
![Figure 2](figure-2.png)*Figure 2: Virtual Deployment Architecture*
Virtual deployment is used for the development environment using
Vagrant to create VMs with PXE boot. No setting is required from the
user to deploy the virtual deployment.
### Snapshot Deployment Overview
No snapshot is implemented in ICN R2.
#### Install Jump Server
The jump server is required to be installed with Ubuntu 18.04. The
installation will create all the VMs and install the k8s clusters.
#### Verifying the Setup - VMs
To verify the virtual deployment, execute the following commands:
``` shell
$ vagrant up --no-parallel
$ vagrant ssh jump
vagrant@jump:~$ sudo su
root@jump:/home/vagrant# cd /icn
root@jump:/icn# make verifier
```
`vagrant up --no-parallel` creates three VMs: vm-jump, vm-machine-1,
and vm-machine-2, each with 16GB RAM and 8 vCPUs. `make verifier`
installs the ICN BPA operator and the ICN BPA REST API verifier into
vm-jump, and then installs a k8s cluster on the vm-machine VMs using
the ICN BPA operator. The BPA operator installs the multi-cluster KUD
to bring up k8s with all addons and plugins.
# Verifying the Setup
The ICN blueprint checks the entire setup on both bare metal and VM
servers. The OpenStack baremetal node state shows the full state of
each server, from power to storage.
**Why is a provider network (baremetal network configuration) required?**

Generally, provider network DHCP servers in a lab provide the router
and DNS server details. In some labs, there is no DHCP server or the
DHCP server does not provide this information.
# License