X-Git-Url: https://gerrit.akraino.org/r/gitweb?a=blobdiff_plain;f=README.md;h=3f76e11dccc3d385b6253e1faecdf7fe4a5402a4;hb=308b436e60c4f9477641a196fe5a53996fd9bc92;hp=b3b2307f05bd577cd54dfc2065cdb1587b8d0575;hpb=9229632e18132f4e1a7a82ddc38078715be30be7;p=icn.git

diff --git a/README.md b/README.md
index b3b2307..3f76e11 100644
--- a/README.md
+++ b/README.md
@@ -82,7 +82,7 @@ No prerequisites for ICN blueprint.
 (Tested as below)
 Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
 ---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
-jump0 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 112
+jump0 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 |
 
 #### Jump Server Software Requirements
 ICN supports Ubuntu 18.04. The ICN blueprint installs all required
@@ -104,9 +104,9 @@ Net C to provision the bare metal servers to do the OS provisioning.
 (Tested as below)
 Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN, (Connected extreme 480 switch) | 10GbE: NIC# VLAN, Network (Connected with IZ1 switch)
 ---------|-----------|--------|---------|--------------------------------------------------|------------------------------------------------------
-node1 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 112<br/>eno4: VLAN 113
-node2 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 112<br/>eno4: VLAN 113
-node3 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 112<br/>eno4: VLAN 113
+node1 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+node2 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>eno2: VLAN 111 | eno3: VLAN 113
+node3 | Intel 2xE5-2699 | 64GB | 3TB (Sata)<br/>180 (SSD) | eth0: VLAN 110<br/>eno1: VLAN 110<br/>
eno2: VLAN 111 | eno3: VLAN 113 #### Compute Server Software Requirements The Local Controller will install all the software in compute servers @@ -136,7 +136,7 @@ below. This example only shows 2 servers, statically configured on the baremetal network. If you want to increase servers, just add another array. If the baremetal network provides a DHCP server with gateway and DNS server information, just change the baremetal type to "ipv4". -ICN provides DHCP servers for the provisioning and bootstrap networks. +ICN provides DHCP servers for the provisioning network. `node.json.sample` ``` json @@ -149,7 +149,7 @@ ICN provides DHCP servers for the provisioning and bootstrap networks. "address": "10.10.10.11" }, "os": { - "image_name": "bionic-server-cloudimg-amd64.img", + "image_name": "focal-server-cloudimg-amd64.img", "username": "ubuntu", "password": "mypasswd" }, @@ -165,11 +165,6 @@ ICN provides DHCP servers for the provisioning and bootstrap networks. "ethernet_mac_address": "00:1e:67:fe:f4:1a", "type": "phy" }, - { - "id": "bootstrap_nic", - "ethernet_mac_address": "00:1e:67:f8:6a:40", - "type": "phy" - }, { "id": "sriov_nic", "ethernet_mac_address": "00:1e:67:f8:6a:41", @@ -190,11 +185,6 @@ ICN provides DHCP servers for the provisioning and bootstrap networks. "link": "provisioning_nic", "type": "ipv4_dhcp" }, - { - "id": "bootstrap", - "link": "bootstrap_nic", - "type": "ipv4_dhcp" - }, { "id": "sriov", "link": "sriov_nic", @@ -213,7 +203,7 @@ ICN provides DHCP servers for the provisioning and bootstrap networks. "address": "10.10.10.12" }, "os": { - "image_name": "bionic-server-cloudimg-amd64.img", + "image_name": "focal-server-cloudimg-amd64.img", "username": "ubuntu", "password": "mypasswd" }, @@ -224,11 +214,6 @@ ICN provides DHCP servers for the provisioning and bootstrap networks. "ethernet_mac_address": "00:1e:67:f1:5b:90", "type": "phy" }, - { - "id": "bootstrap_nic", - "ethernet_mac_address": "00:1e:67:f8:69:80", - "type": "phy" - }, { "id": "provisioning_nic", "ethernet_mac_address": "00:1e:67:f1:5b:91", @@ -254,11 +239,6 @@ ICN provides DHCP servers for the provisioning and bootstrap networks. "link": "provisioning_nic", "type": "ipv4_dhcp" }, - { - "id": "bootstrap", - "link": "bootstrap_nic", - "type": "ipv4_dhcp" - }, { "id": "sriov", "link": "sriov_nic", @@ -320,13 +300,6 @@ The user will find the network configuration file named as ``` shell #!/bin/bash -#Local Controller - Bootstrap cluster DHCP connection -#BS_DHCP_INTERFACE defines the interfaces, to which ICN DHCP deployment will bind -export BS_DHCP_INTERFACE="eno3" - -#BS_DHCP_INTERFACE_IP defines the IPAM for the ICN DHCP to be managed. -export BS_DHCP_INTERFACE_IP="172.31.1.1/24" - #Edge Location Provider Network configuration #Net A - Provider Network #If provider having specific Gateway and DNS server details in the edge location, @@ -341,10 +314,6 @@ export IRONIC_INTERFACE="eno2" #Interface to which Ironic IPMI LAN should bind #Net C - IPMI LAN Network export IRONIC_IPMI_INTERFACE="eno1" - -#Interface IP for the IPMI LAN, ICN verfiy the LAN Connection is active or not -#Net C - IPMI LAN Network -export IRONIC_IPMI_INTERFACE_IP="10.10.10.10" ``` #### Running @@ -412,9 +381,8 @@ The following steps occurs once the `make install` command is given. ![Figure 2](figure-2.png)*Figure 2: Virtual Deployment Architecture* Virtual deployment is used for the development environment using -Metal3 virtual deployment to create VM with PXE boot. VM Ansible -scripts the node inventory file in /opt/ironic. 
No setting is required -from the user to deploy the virtual deployment. +Vagrant to create VMs with PXE boot. No setting is required from the +user to deploy the virtual deployment. ### Snapshot Deployment Overview No snapshot is implemented in ICN R2. @@ -426,11 +394,20 @@ Jump server is required to be installed with Ubuntu 18.04. This will install all the VMs and install the k8s clusters. #### Verifying the Setup - VMs -`make verify_all` installs two VMs with name master-0 and worker-0 -with 8GB RAM and 8 vCPUs and installs k8s cluster on the VMs using the -ICN BPA operator and install the ICN BPA REST API verifier. BPA -operator installs the multi-cluster KUD to bring up k8s with all -addons and plugins. +To verify the virtual deployment, execute the following commands: +``` shell +$ vagrant up --no-parallel +$ vagrant ssh jump +vagrant@jump:~$ sudo su +root@jump:/home/vagrant# cd /icn +root@jump:/icn# make verifier +``` +`vagrant up --no-parallel` creates three VMs: vm-jump, vm-machine-1, +and vm-machine-2, each with 16GB RAM and 8 vCPUs. `make verifier` +installs the ICN BPA operator and the ICN BPA REST API verifier into +vm-jump, and then installs a k8s cluster on the vm-machine VMs using +the ICN BPA operator. The BPA operator installs the multi-cluster KUD +to bring up k8s with all addons and plugins. # Verifying the Setup ICN blueprint checks all the setup in both bare metal and VM
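
As a reviewer's note on this change: the raw diff above can be fetched and checked locally. The commands below are only a minimal sketch — they assume the gitweb URL from the `X-Git-Url` header is reachable, that you have a clone of `icn.git` checked out at the parent commit of this change, and that `readme-update.patch` is just an illustrative file name.

``` shell
# Save the raw diff from Gerrit gitweb (same URL as the X-Git-Url header above)
curl -o readme-update.patch \
  "https://gerrit.akraino.org/r/gitweb?a=blobdiff_plain;f=README.md;h=3f76e11dccc3d385b6253e1faecdf7fe4a5402a4;hb=308b436e60c4f9477641a196fe5a53996fd9bc92;hp=b3b2307f05bd577cd54dfc2065cdb1587b8d0575;hpb=9229632e18132f4e1a7a82ddc38078715be30be7;p=icn.git"

# Dry-run first to confirm the patch applies cleanly against the checkout
git apply --check readme-update.patch

# Apply the change to README.md in the working tree
git apply readme-update.patch
```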