X-Git-Url: https://gerrit.akraino.org/r/gitweb?a=blobdiff_plain;f=README.md;h=3f76e11dccc3d385b6253e1faecdf7fe4a5402a4;hb=308b436e60c4f9477641a196fe5a53996fd9bc92;hp=d37b920f160d4f97143cc389a450fc5b14865e99;hpb=8746005ea2de220ec510f146434742cb38a37d98;p=icn.git

diff --git a/README.md b/README.md
index d37b920..3f76e11 100644
--- a/README.md
+++ b/README.md
@@ -28,8 +28,8 @@ bare metal servers connect to the network D, the SRIOV network.
 - Net B (internal network) -- Provisioning network used by Ironic to do
   inspection.
 - Net C (internal network) -- IPMI LAN to do IPMI protocol for the OS
-  provisioning. The NICs support IPMI. Use IPMI tool to set the static
-  IP address.
+  provisioning. The NICs support IPMI. The IP address should be
+  statically assigned via the IPMI tool or other means (see the
+  example after this list).
 - Net D (internal network) -- Data plane network for the Akraino
   application. It uses SR-IOV networking and fiber cables, with Intel
   25Gb and 40Gb FVL NICs.
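+
+For example, a typical ipmitool sequence for assigning the static
+address from the host itself (the LAN channel number and the
+addresses here are illustrative, not ICN defaults):
+
+``` shell
+# Configure IPMI LAN channel 1 with a static address
+ipmitool lan set 1 ipsrc static
+ipmitool lan set 1 ipaddr 10.10.10.11
+ipmitool lan set 1 netmask 255.255.255.0
+# Verify the configuration
+ipmitool lan print 1
+```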
@@ -38,6 +38,9 @@ In some deployment models, you can combine Net C and Net A into the
 same network, but the developer should take care of IP address
 management between Net A and the IPMI addresses of the servers.
 
+Also note that the IPMI NIC may share the same RJ-45 jack with another
+one of the NICs.
+
 # Pre-installation Requirements
 There are two main components in the ICN Infra Local Controller - the
 Local Controller and the k8s compute cluster.
@@ -76,9 +79,10 @@ No prerequisites for ICN blueprint.
 - Bare metal servers: four network interfaces, including one IPMI interface.
 - Four or more hubs, with cabling, to connect four networks.
 
+(Tested with the hardware below.)
 Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
 ---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
-jump0 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110 (DMZ)<br/>eno1: VLAN 111 (Admin) | eno2: VLAN 112 (Private) VLAN 114 (Management)<br/>IF3: VLAN 113 (Storage) VLAN 1115 (Public)
+jump0 | Intel 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 |
 
 #### Jump Server Software Requirements
 ICN supports Ubuntu 18.04. The ICN blueprint installs all required
@@ -100,9 +104,9 @@ Net C to provision the bare metal servers to do the OS provisioning.
 (Tested with the hardware below.)
 Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
 ---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
-node1 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110 (DMZ)<br/>eno1: VLAN 111 (Admin) | eno2: VLAN 112 (Private) VLAN 114 (Management)<br/>IF3: VLAN 113 (Storage) VLAN 1115 (Public)
-node2 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110 (DMZ)<br/>eno1: VLAN 111 (Admin) | eno2: VLAN 112 (Private) VLAN 114 (Management)<br/>IF3: VLAN 113 (Storage) VLAN 1115 (Public)
-node3 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110 (DMZ)<br/>eno1: VLAN 111 (Admin) | eno2: VLAN 112 (Private) VLAN 114 (Management)<br/>IF3: VLAN 113 (Storage) VLAN 1115 (Public)
+node1 | Intel 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+node2 | Intel 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+node3 | Intel 2xE5-2699 | 64GB | 3TB (SATA)<br/>180GB (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
 
 #### Compute Server Software Requirements
 The Local Controller will install all the software in compute servers
@@ -128,8 +132,11 @@ command `make install`.
 The user is required to provide the IPMI information of the servers
 connected to the Local Controller by editing the node JSON sample file
 icn/deploy/metal3/scripts/nodes.json.sample, as shown
-below. This example only shows 2 servers. If you want to increase
-servers, just add another array.
+below. This example only shows 2 servers, statically configured on the
+baremetal network. To add more servers, add more entries to the
+"nodes" array. If the baremetal network provides a DHCP server with
+gateway and DNS server information, change the type of the "baremetal"
+network from "ipv4" to "ipv4_dhcp", as sketched below. ICN provides
+DHCP servers for the provisioning network.
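+
+For example, a DHCP-managed variant of the "baremetal" entry in the
+"networks" array of the sample below would look like this sketch (the
+"link" value must still name the matching entry in "links"):
+
+``` json
+{
+  "id": "baremetal",
+  "link": "baremetal_nic",
+  "type": "ipv4_dhcp"
+}
+```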
 
 `nodes.json.sample`
 ``` json
@@ -142,9 +149,50 @@ servers, just add another array.
       "address": "10.10.10.11"
     },
     "os": {
-      "image_name": "bionic-server-cloudimg-amd64.img",
+      "image_name": "focal-server-cloudimg-amd64.img",
       "username": "ubuntu",
       "password": "mypasswd"
+    },
+    "net": {
+      "links": [
+        {
+          "id": "baremetal_nic",
+          "ethernet_mac_address": "00:1e:67:fe:f4:19",
+          "type": "phy"
+        },
+        {
+          "id": "provisioning_nic",
+          "ethernet_mac_address": "00:1e:67:fe:f4:1a",
+          "type": "phy"
+        },
+        {
+          "id": "sriov_nic",
+          "ethernet_mac_address": "00:1e:67:f8:6a:41",
+          "type": "phy"
+        }
+      ],
+      "networks": [
+        {
+          "id": "baremetal",
+          "link": "baremetal_nic",
+          "type": "ipv4",
+          "ip_address": "10.10.110.21/24",
+          "gateway": "10.10.110.1",
+          "dns_nameservers": ["8.8.8.8"]
+        },
+        {
+          "id": "provisioning",
+          "link": "provisioning_nic",
+          "type": "ipv4_dhcp"
+        },
+        {
+          "id": "sriov",
+          "link": "sriov_nic",
+          "type": "ipv4",
+          "ip_address": "10.10.113.2/24"
+        }
+      ],
+      "services": []
     }
   },
   {
@@ -155,9 +203,50 @@ servers, just add another array.
       "address": "10.10.10.12"
     },
     "os": {
-      "image_name": "bionic-server-cloudimg-amd64.img",
+      "image_name": "focal-server-cloudimg-amd64.img",
       "username": "ubuntu",
       "password": "mypasswd"
+    },
+    "net": {
+      "links": [
+        {
+          "id": "baremetal_nic",
+          "ethernet_mac_address": "00:1e:67:f1:5b:90",
+          "type": "phy"
+        },
+        {
+          "id": "provisioning_nic",
+          "ethernet_mac_address": "00:1e:67:f1:5b:91",
+          "type": "phy"
+        },
+        {
+          "id": "sriov_nic",
+          "ethernet_mac_address": "00:1e:67:f8:69:81",
+          "type": "phy"
+        }
+      ],
+      "networks": [
+        {
+          "id": "baremetal",
+          "link": "baremetal_nic",
+          "type": "ipv4",
+          "ip_address": "10.10.110.22/24",
+          "gateway": "10.10.110.1",
+          "dns_nameservers": ["8.8.8.8"]
+        },
+        {
+          "id": "provisioning",
+          "link": "provisioning_nic",
+          "type": "ipv4_dhcp"
+        },
+        {
+          "id": "sriov",
+          "link": "sriov_nic",
+          "type": "ipv4",
+          "ip_address": "10.10.113.3/24"
+        }
+      ],
+      "services": []
     }
   }]
 }
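+
+Because a malformed nodes file is an easy mistake to make, a quick
+syntax check of the edited file before installing is worthwhile; a
+sketch using the stock jq tool (the path assumes you are at the icn
+repository root):
+
+``` shell
+# Exits non-zero and prints the parse error if the JSON is invalid
+jq empty deploy/metal3/scripts/nodes.json.sample
+```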
@@ -179,6 +268,27 @@ servers, just add another array.
 - *image_name*: Image name; the image should be in qcow2 format.
 - *username*: Login username for the OS provisioned.
 - *password*: Login password for the OS provisioned.
+- *net*: Bare metal network information, expressed as a JSON field. It
+  describes the interfaces and networks used by ICN. For more
+  information, refer to the *networkData* field of the BareMetalHost
+  resource definition (see also the sketch after this list).
+  - *links*: An array of interfaces.
+    - *id*: The ID of the interface. This is used in the network
+      definitions to associate the interface with its network
+      configuration.
+    - *ethernet_mac_address*: The MAC address of the interface.
+    - *type*: The type of interface. The only valid value is "phy".
+  - *networks*: An array of networks.
+    - *id*: The ID of the network.
+    - *link*: The ID of the link this network definition applies to.
+    - *type*: The type of network, either dynamic ("ipv4_dhcp") or
+      static ("ipv4").
+    - *ip_address*: Only valid for type "ipv4"; the IP address of the
+      interface.
+    - *gateway*: Only valid for type "ipv4"; the gateway of this
+      network.
+    - *dns_nameservers*: Only valid for type "ipv4"; an array of DNS
+      servers.
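+
+After installation, one way to see how this data is consumed is to
+inspect the BareMetalHost resources on the jump server; a sketch,
+assuming the resources live in the metal3 namespace and that the
+baremetal operator version in use exposes *networkData* in the spec
+(names and namespaces vary by deployment):
+
+``` shell
+# List the hosts registered with the baremetal operator
+kubectl get baremetalhosts -n metal3
+# Print the reference to the secret holding the rendered network data
+kubectl get baremetalhost <host-name> -n metal3 -o jsonpath='{.spec.networkData}'
+```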
 
 #### Creating the Settings Files
 
@@ -190,39 +300,20 @@ The user will find the network configuration file named as
 
 ``` shell
 #!/bin/bash
-#Local Controller - Bootstrap cluster DHCP connection
-#BS_DHCP_INTERFACE defines the interfaces, to which ICN DHCP deployment will bind
-#e.g. export BS_DHCP_INTERFACE="ens513f0"
-export BS_DHCP_INTERFACE=
-
-#BS_DHCP_INTERFACE_IP defines the IPAM for the ICN DHCP to be managed.
-#e.g. export BS_DHCP_INTERFACE_IP="172.31.1.1/24"
-export BS_DHCP_INTERFACE_IP=
-
 #Edge Location Provider Network configuration
 #Net A - Provider Network
-#If provider having specific Gateway and DNS server details in the edge location
-#export PROVIDER_NETWORK_GATEWAY="10.10.110.1"
-export PROVIDER_NETWORK_GATEWAY=
-#export PROVIDER_NETWORK_DNS="8.8.8.8"
-export PROVIDER_NETWORK_DNS=
+#If the provider network has specific gateway and DNS server details
+#at the edge location, supply those values in nodes.json.
 
 #Ironic Metal3 settings for provisioning network
 #Interface to which the Ironic provisioning network is connected
 #Net B - Provisioning Network
-#e.g. export IRONIC_INTERFACE="eno1"
-export IRONIC_INTERFACE=
+export IRONIC_INTERFACE="eno2"
 
 #Ironic Metal3 setting for the IPMI LAN network
 #Interface to which the Ironic IPMI LAN should bind
 #Net C - IPMI LAN Network
-#e.g. export IRONIC_IPMI_INTERFACE="eno2"
-export IRONIC_IPMI_INTERFACE=
-
-#Interface IP for the IPMI LAN, ICN verfiy the LAN Connection is active or not
-#e.g. export IRONIC_IPMI_INTERFACE_IP="10.10.10.10"
-#Net C - IPMI LAN Network
-export IRONIC_IPMI_INTERFACE_IP=
+export IRONIC_IPMI_INTERFACE="eno1"
 ```
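+
+Before running the installer, it is worth confirming that the
+interface names configured above exist on the jump server; a sketch
+(eno1 and eno2 are the tested names, adjust them to your hardware):
+
+``` shell
+# List interfaces with their state and addresses
+ip -br addr show
+# Then start the bare metal deployment from the icn repository root
+make install
+```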
 
 #### Running
 
@@ -290,9 +381,8 @@ The following steps occur once the `make install` command is given.
 
 ![Figure 2](figure-2.png)*Figure 2: Virtual Deployment Architecture*
 
 Virtual deployment is used for the development environment, using
-Metal3 virtual deployment to create VM with PXE boot. VM Ansible
-scripts the node inventory file in /opt/ironic. No setting is required
-from the user to deploy the virtual deployment.
+Vagrant to create VMs with PXE boot. No settings are required from the
+user to deploy the virtual deployment.
 
 ### Snapshot Deployment Overview
 No snapshot is implemented in ICN R2.
 
@@ -301,15 +391,23 @@ No snapshot is implemented in ICN R2.
 
 #### Install Jump Server
 The jump server is required to be installed with Ubuntu 18.04. This will
-install all the VMs and install the k8s clusters. Same as bare metal
-deployment, use `make vm_install` to install virtual deployment.
+install all the VMs and the k8s clusters.
 
 #### Verifying the Setup - VMs
-`make verify_all` installs two VMs with name master-0 and worker-0
-with 8GB RAM and 8 vCPUs and installs k8s cluster on the VMs using the
-ICN BPA operator and install the ICN BPA REST API verifier. BPA
-operator installs the multi-cluster KUD to bring up k8s with all
-addons and plugins.
+To verify the virtual deployment, execute the following commands:
+``` shell
+$ vagrant up --no-parallel
+$ vagrant ssh jump
+vagrant@jump:~$ sudo su
+root@jump:/home/vagrant# cd /icn
+root@jump:/icn# make verifier
+```
+`vagrant up --no-parallel` creates three VMs: vm-jump, vm-machine-1,
+and vm-machine-2, each with 16GB RAM and 8 vCPUs. `make verifier`
+installs the ICN BPA operator and the ICN BPA REST API verifier into
+vm-jump, and then installs a k8s cluster on the vm-machine VMs using
+the ICN BPA operator. The BPA operator installs the multi-cluster KUD
+to bring up k8s with all addons and plugins.
 
 # Verifying the Setup
 ICN blueprint checks all the setup in both bare metal and VM
@@ -432,11 +530,11 @@ the Ironic logs and baremetal operator to look at the state of
 servers. The `openstack baremetal node` commands report the full state
 of each server, from power to storage.
 
-**Why provide network is required?**
+**Why is the provider network (baremetal network configuration)
+required?**
 
-Generally, provider network DHCP servers in lab provide the router and
-DNS server details. In some lab setup DHCP server don't provide this
-information.
+Generally, provider network DHCP servers in a lab provide the router
+and DNS server details. In some labs, there is no DHCP server, or the
+DHCP server does not provide this information.
 
 # License