initial commit of SEBA example files
author davidplunkett <dp7642@att.com>
Fri, 8 Nov 2019 05:14:44 +0000 (05:14 +0000)
committer davidplunkett <dp7642@att.com>
Fri, 8 Nov 2019 05:24:35 +0000 (05:24 +0000)
Signed-off-by: davidplunkett <dp7642@att.com>
Change-Id: Ib05f402f4ffcaecea69def3eb69b08e7c04d910a
LICENSE
README.md [new file with mode: 0644]
REC_blueprint.yaml [new file with mode: 0644]
index.rst [new file with mode: 0644]
objects.yaml [new file with mode: 0644]
user_config.yaml [new file with mode: 0644]
workflows/REC_create.py [new file with mode: 0755]
workflows/gencerts.sh [new file with mode: 0755]
workflows/pod_create.sh [new file with mode: 0644]

diff --git a/LICENSE b/LICENSE
index 261eeb9..d645695 100644 (file)
--- a/LICENSE
+++ b/LICENSE
@@ -1,3 +1,4 @@
+
                                  Apache License
                            Version 2.0, January 2004
                         http://www.apache.org/licenses/
diff --git a/README.md b/README.md
new file mode 100644 (file)
index 0000000..e37cd24
--- /dev/null
+++ b/README.md
@@ -0,0 +1,16 @@
+Radio Edge Cloud
+================
+
+This repository contains the Akraino SEBA blueprint, which is
+intended to be consumed by the Akraino Regional Controller in order to deploy
+the software of the Akraino Telco Appliance blueprint family in a prescribed
+manner onto a tested hardware configuration.
+
+The SEBA blueprint uses the Radio Edge Cloud (REC) build process and ISO files,
+so you will see multiple references to the REC in the included files.
+
+The SEBA blueprint may also be deployed in a semi-manual manner without the aid
+of the Regional Controller, and may be deployed on hardware other than the
+tested configurations, but in that case it will not be the full "appliance" that
+conforms with a hardware+software configuration that was tested by the SEBA
+Continuous Integration / Continuous Deployment (CI/CD) pipeline.
diff --git a/REC_blueprint.yaml b/REC_blueprint.yaml
new file mode 100644 (file)
index 0000000..e4f6a86
--- /dev/null
+++ b/REC_blueprint.yaml
@@ -0,0 +1,46 @@
+#
+# Copyright (c) 2019 AT&T Intellectual Property. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#        https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#  This file defines version 1.0.0 of the REC (Radio Edge Cloud) blueprint,
+#  for use by the Regional Controller.  It should be loaded into the RC
+#  (using the "rc_cli blueprint create" command) before a POD is created.
+#
+---
+blueprint: 1.0.0
+name: Radio Edge Cloud
+version: 1.0.0
+description: This Blueprint defines an instance of the Radio Edge Cloud
+  (from the Telco Appliance family of blueprints).
+yaml:
+  # Required hardware profiles (can match on either UUID or name)
+  # Note: UUIDs would likely require a global registry of HW profiles.
+  hardware_profile:
+    or:
+      - {uuid: 8a17384a-71d4-11e9-9e4c-0017f20fe1b8}
+      - {uuid: 9897a008-71d4-11e9-8bda-0017f20dbff8}
+      - {uuid: a4b4a570-71d4-11e9-adc2-0017f208759e}
+  workflow:
+    # Workflow that is invoked when the POD is created
+    create:
+      url: 'http://www.example.org/blueprints/REC/REC_create.py'
+      components:
+        # This script is used by the REC_create.py workflow to generate
+        # self-signed certs for the remote-installer
+        - 'http://www.example.org/blueprints/REC/gencerts.sh'
+      input_schema:
+        iso_primary: {type: string}
+        iso_secondary: {type: string}
+        input_yaml: {type: string}
+        rc_host: {type: string}
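The `or` clause in the hardware_profile stanza above means a node's hardware profile may match any one of the listed entries, on either UUID or name. A minimal illustrative sketch of that matching rule (not the Regional Controller's actual code):

```python
def profile_matches(node_profile, requirement):
    """Return True if node_profile satisfies the blueprint's hardware_profile
    requirement: any entry under 'or' may match, on either 'uuid' or 'name'."""
    alternatives = requirement.get('or', [requirement])
    return any(
        any(alt.get(key) is not None and alt.get(key) == node_profile.get(key)
            for key in ('uuid', 'name'))
        for alt in alternatives
    )

# The three profiles accepted by the blueprint above
requirement = {'or': [
    {'uuid': '8a17384a-71d4-11e9-9e4c-0017f20fe1b8'},
    {'uuid': '9897a008-71d4-11e9-8bda-0017f20dbff8'},
    {'uuid': 'a4b4a570-71d4-11e9-adc2-0017f208759e'},
]}
```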
diff --git a/index.rst b/index.rst
new file mode 100644 (file)
index 0000000..62536b8
--- /dev/null
+++ b/index.rst
@@ -0,0 +1,165 @@
+..
+      Copyright (c) 2019 AT&T Intellectual Property. All Rights Reserved.
+
+      Licensed under the Apache License, Version 2.0 (the "License");
+      you may not use this file except in compliance with the License.
+      You may obtain a copy of the License at
+
+          http://www.apache.org/licenses/LICENSE-2.0
+
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+      License for the specific language governing permissions and limitations
+      under the License.
+
+Instructions for installing REC using the Regional Controller and the REC Blueprint
+===================================================================================
+
+1. The Regional Controller should already be running somewhere (hopefully on a machine or
+   VM dedicated for this purpose). See here_ for instructions on how to start the regional
+   controller.
+
+   .. _here: https://wiki.akraino.org/display/AK/Starting+the+Regional+Controller
+   
+2. Clone the *seba* repository using
+
+   .. code-block:: bash
+
+     git clone https://gerrit.akraino.org/r/seba.git
+
+   We will use the following files from this repository:
+
+   .. code-block:: bash
+
+     ./REC_blueprint.yaml
+     ./objects.yaml
+     ./workflows/gencerts.sh
+     ./workflows/REC_create.py
+
+   You will need to provide a web server where some of these files may be fetched by the
+   Regional Controller.
+
+3. Edit the file *objects.yaml*.
+
+   - Update the *nodes* stanza to define the nodes in your cluster, including the Out of
+     Band IP address for each node, as well as the name of the hardware type.  Currently REC
+     is defined to run on the three types of hardware listed in the *hardware* stanza.
+   - If you want to give the edgesite a different name, update the 'edgesites' stanza.
+
+4. Edit the file *REC_blueprint.yaml* to update the URLs (the two lines that contain
+   ``www.example.org``) for the create workflow script (*REC_create.py*), and the
+   *gencerts.sh* script.  These URLs should point to the web server and path where you will
+   store these files. The rest of the blueprint should be kept unchanged.
+
+5. Create and edit a copy of *user_config.yaml*.  See these instructions_ on how to create
+   this file.
+
+   .. _instructions: https://wiki.akraino.org/display/AK/REC+Installation+Guide#RECInstallationGuide-Aboutuser_config.yaml
+
+6. Copy the two workflow scripts and the *user_config.yaml* file to your web server.
+   Note: the provided *gencerts.sh* just generates some self-signed certificates for use
+   by the *remote-installer* Docker container, with some pre-defined defaults; if you want
+   to provide your own certificates, you will need to modify or replace this script.
+   Set and export the following variable:
+
+   .. code-block:: bash
+
+     export USER_CONFIG_URL=<URL of user_config.yaml>
+
+7. Clone the *api-server* repository.  This provides the CLI tools used to interact with the
+   Regional Controller.  Add the scripts from this repository to your PATH:
+
+   .. code-block:: bash
+
+     git clone https://gerrit.akraino.org/r/regional_controller/api-server
+     export PATH=$PATH:$PWD/api-server/scripts
+
+8. Define where the Regional Controller is located, as well as the login/password to use
+   (the login/password shown here are the built-in values and do not need to be changed
+   if you have not changed them on the Regional Controller):
+
+   .. code-block:: bash
+
+     export RC_HOST=<IP or DNS name of Regional Controller>
+     export USER=admin
+     export PW=admin123
+
+9. Load the objects defined in *objects.yaml* into the Regional Controller using:
+
+   .. code-block:: bash
+
+     rc_loaddata -H $RC_HOST -u $USER -p $PW -A objects.yaml
+
+10. Load the blueprint into the Regional Controller using:
+
+    .. code-block:: bash
+
+      rc_cli -H $RC_HOST -u $USER -p $PW blueprint create REC_blueprint.yaml
+
+11. Get the UUIDs of the edgesite and the blueprint from the Regional Controller using:
+
+    .. code-block:: bash
+
+      rc_cli -H $RC_HOST -u $USER -p $PW blueprint list
+      rc_cli -H $RC_HOST -u $USER -p $PW edgesite list
+
+    These are needed to create the POD.  You will also see the UUID of the Blueprint displayed
+    when you create the Blueprint in step 10 (it is at the tail end of the URL that is printed).
+    Set and export them as the environment variables ESID and BPID.
+
+    .. code-block:: bash
+
+      export ESID=<UUID of edgesite in the RC>
+      export BPID=<UUID of blueprint in the RC>
+
+12. Figure out which REC ISO images you want to use to build your cluster.  These are
+    located here:
+    https://nexus.akraino.org/content/repositories/images-snapshots/TA/release-1/images/
+    Choose the build you want, and then set and export the following variables:
+
+    .. code-block:: bash
+
+         export BUILD=<buildnumber>
+         export ISO_PRIMARY_URL=https://nexus.akraino.org/content/repositories/images-snapshots/TA/release-1/images/$BUILD/install.iso
+         export ISO_SECONDARY_URL=https://nexus.akraino.org/content/repositories/images-snapshots/TA/release-1/images/$BUILD/bootcd.iso
+
+    Note: the Akraino Release 1 image is build #9.
+
+13. Create the *POD.yaml* file as follows:
+
+    .. code-block:: bash
+
+         cat > POD.yaml <<EOF
+         name: My_Radio_Edge_Cloud_POD
+         description: Put a description of the POD here.
+         blueprint: $BPID
+         edgesite: $ESID
+         yaml:
+           iso_primary: '$ISO_PRIMARY_URL'
+           iso_secondary: '$ISO_SECONDARY_URL'
+           input_yaml: '$USER_CONFIG_URL'
+           rc_host: $RC_HOST
+         EOF
+
+14. Create the POD using:
+
+    .. code-block:: bash
+
+         rc_cli -H $RC_HOST -u $USER -p $PW pod create POD.yaml
+
+    This will cause the POD to be created, and the *REC_create.py* workflow script to be
+    run on the Regional Controller's workflow engine. This in turn will pull in the ISO
+    images, and install REC on your cluster.
+
+15. If you want to monitor ongoing progress of the installation, you can issue periodic calls
+    to monitor the POD with:
+
+    .. code-block:: bash
+
+         rc_cli -H $RC_HOST -u $USER -p $PW pod show $PODID
+
+    where $PODID is the UUID of the POD. This will show all the messages logged by the
+    workflow, as well as the current status of the workflow. The status will be WORKFLOW
+    while the workflow is running, and wil change to ACTIVE if the workflow completes
+    succesfully, or FAILED, if the workflow fails.
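The monitoring loop in step 15 can be scripted. A minimal sketch, assuming `rc_cli` is on the PATH and assuming (hypothetically) that `pod show` prints YAML containing a top-level `status` key; only the WORKFLOW/ACTIVE/FAILED status values are taken from the text above:

```python
import subprocess
import time

import yaml  # PyYAML, as used by the workflow scripts in this repository

TERMINAL_STATES = {'ACTIVE', 'FAILED'}  # per the status values described above

def is_terminal(status):
    """WORKFLOW means the workflow is still running; ACTIVE or FAILED ends the wait."""
    return status in TERMINAL_STATES

def wait_for_pod(rc_host, user, pw, podid, interval=60):
    # Hypothetical polling wrapper around the rc_cli command shown in step 15;
    # assumes the 'pod show' output parses as YAML with a 'status' field.
    while True:
        out = subprocess.run(
            ['rc_cli', '-H', rc_host, '-u', user, '-p', pw, 'pod', 'show', podid],
            capture_output=True, text=True, check=True).stdout
        status = (yaml.safe_load(out) or {}).get('status')
        print('POD status:', status)
        if is_terminal(status):
            return status
        time.sleep(interval)
```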
diff --git a/objects.yaml b/objects.yaml
new file mode 100644 (file)
index 0000000..db7d568
--- /dev/null
+++ b/objects.yaml
@@ -0,0 +1,69 @@
+#
+# Copyright (c) 2019 AT&T Intellectual Property. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#        https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#  This file maps out the laboratory hardware for an example laboratory, in
+#  terms of hardware profiles, regions, edgesites, and nodes.  This can all
+#  be loaded into the RC via the "rc_loaddata" tool.
+#
+#  Changes should be made to this file, which will be run against the RC before
+#  every CD job.
+#
+---
+hardware:
+  Nokia_OE19:
+    uuid: 8a17384a-71d4-11e9-9e4c-0017f20fe1b8
+    description: Nokia OpenEdge hardware for the REC Blueprint
+    yaml:
+      todo: put hardware details here
+      rack_layout:
+        height: 1U
+        chassis:
+          layout: airframe
+          height: 3U
+          units: 5
+  Dell_740xd:
+    uuid: 9897a008-71d4-11e9-8bda-0017f20dbff8
+    description: Dell 740xd hardware for the REC Blueprint
+    yaml:
+      todo: put hardware details here
+      rack_layout:
+        height: 2U
+  HPE_DL380g10:
+    uuid: a4b4a570-71d4-11e9-adc2-0017f208759e
+    description: HPE DL380 Gen 10 hardware for the REC Blueprint
+    yaml:
+      todo: put hardware details here
+      rack_layout:
+        height: 2U
+
+edgesites:
+  SEBA_OpenEdge1:
+    description: The first SEBA cluster
+    nodes: [node1, node2, node3]
+    regions: [00000000-0000-0000-0000-000000000000]
+
+nodes:
+  node1:
+    hardware: Nokia_OE19
+    yaml:
+      oob_ip: 10.65.3.57
+  node2:
+    hardware: Nokia_OE19
+    yaml:
+      oob_ip: 10.65.3.56
+  node3:
+    hardware: Nokia_OE19
+    yaml:
+      oob_ip: 10.65.3.55
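Since the edgesites stanza references nodes by name, a quick consistency check (illustrative only, not part of rc_loaddata) can catch typos before the file is loaded into the RC:

```python
import yaml  # PyYAML

def undefined_edgesite_nodes(objects):
    """Map each edgesite to any node names it lists that are missing
    from the top-level 'nodes' stanza."""
    defined = set(objects.get('nodes', {}))
    return {
        name: [n for n in es.get('nodes', []) if n not in defined]
        for name, es in objects.get('edgesites', {}).items()
        if any(n not in defined for n in es.get('nodes', []))
    }

# A fragment mirroring the stanzas above
doc = yaml.safe_load("""
edgesites:
  SEBA_OpenEdge1:
    description: The first SEBA cluster
    nodes: [node1, node2, node3]
nodes:
  node1: {hardware: Nokia_OE19}
  node2: {hardware: Nokia_OE19}
  node3: {hardware: Nokia_OE19}
""")
```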
diff --git a/user_config.yaml b/user_config.yaml
new file mode 100644 (file)
index 0000000..74b1122
--- /dev/null
+++ b/user_config.yaml
@@ -0,0 +1,127 @@
+---
+version: 2.0.0
+name: seba-foundry
+
+description: SEBA Deployment on OpenEdge
+
+time:
+    ntp_servers: [216.239.35.4, 216.239.35.5]
+    zone: America/New_York
+
+users:
+    admin_user_name: cloudadmin
+    admin_user_password: "$6$XXXXXXXX$C3fvJHW8o1383ZTb.vQ86wfjK7VxI7N7KwE0PxQrPdDRpotJMY8wcB2XHUQCheuHf44KGrg.AMGoI3d37IHua/"
+    initial_user_name: myadmin
+    initial_user_password: XXXXXXXX
+    admin_password: XXXXXXXXX
+
+networking:
+    dns: [ 8.8.8.8, 8.8.4.4 ]
+    mtu: 9000
+    infra_external:
+        mtu: 1500
+        network_domains:
+            rack-1:
+                cidr: 10.65.1.0/24
+                gateway: 10.65.1.1
+                vlan: 751
+                ip_range_start: 10.65.1.50
+                ip_range_end: 10.65.1.60
+    infra_storage_cluster:
+        network_domains:
+            rack-1:
+                cidr: 192.168.11.0/24
+                ip_range_start: 192.168.11.51
+                ip_range_end: 192.168.11.60
+                vlan: 3911
+    infra_internal:
+        network_domains:
+            rack-1:
+                cidr: 192.168.12.0/24
+                ip_range_start: 192.168.12.51
+                ip_range_end: 192.168.12.60
+                vlan: 3912
+
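Each network_domains entry above carries a `cidr` plus an `ip_range_start`/`ip_range_end` pair. A small sanity check with the standard-library `ipaddress` module (illustrative only) confirms the range actually sits inside the subnet:

```python
import ipaddress

def range_in_cidr(domain):
    """True if ip_range_start..ip_range_end lies inside domain['cidr']."""
    net = ipaddress.ip_network(domain['cidr'])
    start = ipaddress.ip_address(domain['ip_range_start'])
    end = ipaddress.ip_address(domain['ip_range_end'])
    return start in net and end in net and start <= end

# The infra_external rack-1 values from above
rack1_external = {'cidr': '10.65.1.0/24',
                  'ip_range_start': '10.65.1.50',
                  'ip_range_end': '10.65.1.60'}
```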
+caas:
+    docker_size_quota: 2G
+    helm_operation_timeout: 900
+    docker0_cidr: 172.17.0.1/16
+    instantiation_timeout: 60
+    helm_parameters: { "registry_url": "registry.kube-system.svc.nokia.net" }
+    encrypted_ca: ["U2FsdGVkX1+iaWyYk3W01IFpfVdughR5aDKo2NpcBw2UStYnepHlr5IJD3euo1lS\n7agR5K2My8zYdWFTYYqZncVfYZt7Tc8zB2yzATEIHEV8PQuZRHqPdR+/OrwqjwA6\ni/R+4Ec3Kko6eS0VWvwDfdhhK/nwczNNVFOtWXCwz/w7AnI9egiXnlHOq2P/tsO6\np3e9J6ly5ZLp9WbDk2ddAXChnJyC6PlF7ou/UpFOvTEXRgWrWZV6SUAgdxg5Evam\ndmmwqjRubAcxSo7Y8djHtspsB2HqYs90BCBtINHrEj5WnRDNMR/kWryw1+S7zL1G\nwrpDykBRbq/5jRQjqO/Ct98yNDdGSWZ+kqMDfLriH4pQoOzMcicT4KRplQNX2q9O\nT/7CXKmmB3uBxM7a9k2LS22Ljszyd2vxth4jA+SLNOB5IT8FmfDY3PvNnvKaDGQ4\nuWPASyjpPjms3LwsKeu+T8RcKcJJPoZMNZGLm/5jVqm3RXbMvtI0oEaHWsVaSuwX\nnMgGQHNHop+LK+5a0InYn4ZJo9sbvrHp9Vz4Vo+AzqTVXwA4NEHfqMvpphG+aRCb\ncPJggJqnF6s5CAPDRvwXzqjjVQy2P1/AhJugW7HZw3dtux4xe3RZ+AMS2YW+fSi1\nIxAGlsLL28KJMc5ACxX5cuSB/nO19afpf6zyOPIk0ZVh8+bxmB4YBRzGLTSnFNr3\ndauT9/gCU85ThE93rIfPW6PRyp9juEBLjgTpqDQPn5APoJIIW1ZQWr6tvSlT04Hc\nw0HZ7EcAC7EmmaQYTyL6iifHiZHop9g2clXA0MU9USQggMOKxFrxEyF4iWdsCCXP\nfTA3bgzvlvqfk9p2Cu9DOmRHGLby2YSj+oghsFDCfhfM1v2Ip2YGPdJM6y7kNX19\nkBpV4Rfcw0NCg2hhXbHZ7LtejlQ1ht8HnmY5/AnJ/HRdnPb+fcdgS9ZFcGsAH2ze\nSe7hb+MNp80JsuX4A+jOjBacjwL+KbX5RDJp//5dEmqJDkbfMctL1KukBaDrbpci\np/TeVmLhwlQogeVuF/Y5vCokq6M5+f28jFJ+R+P2oBY3fAvBhmd+ZmGbUWXxmMF+\nV3mpFkYqXWS+mtVh8Fs0nhrCkqRLTmBj5UNhsMcZ4vGfiu+dPMQi62wa6GoGVjus\nIj/Upal9RYwthSykUKcWu0KEB929/e4Sz0Y6s3Pzy1+xdmKDPtaBUH9UT3LjMVvY\nordeL0UjKYqWcvpb7Vfma3UD0tz6n/CyHNDVhA/FioadEy6iJvL316Kf3to69cN+\nvKWav/IeazxdhBSbatPKN3qwESkzr3el2yrdZL4qehflRMp0rFuzZfRB69UFPbgq\nkTQlJHb0OaJTt6er/XfjtMZoctW7xtYf58CqMJ06QxK5kLKc5Yib73cVyzhmmIz4\nEtUs10QCA5AihHgVES8ZrgZKWDhR+pmFPG3eVitJoUeDNEe9vVEEX8TiWu+H1OHG\n8UyCKFyyPCj5OwVbwGSgQg=="]
+    encrypted_ca_key: ["U2FsdGVkX1+WlNST+WkysFUHYAPfViWe01tCCQsXPsWsUskB4oNNC78bXdEv33+3\ncDlubc9F0ZiHxkng70LKCFV5KQneHfg6c3lPaM4zwaJ34UCf80riIoYVozxqnK/S\nTAs0i0rJmzRz4hkTre4xV0I2ZucW3gquP4/s1yUK3IJF84SDfEi26uPsBOrUpU9Q\nIBxY2rldK+yZUZUFehQb82dvin0CSiXDY63cYLJMYEwWBfJEeY+RGMuZuuGp3qgy\nyVfByZ5/kwF9qa6+ToYw2zXiokGFfBqiAFnXU7Q6Wcu2qndMQoiy3jFU2DjEQi6N\nVgZHzrPUUUrmQGALyA5blVvNHVQyq4rmMmsTEI02xclz8m7Yzd/HEFo/C5z5x+My\n2SOIBIRCy6bTSpzU7iixl5U6r5/XfrfQoJ+OwRq1/P2QmJ2swqzcLOUpDlquDeuP\nd46ceWMO8nlimRps4cX5nQRI1SLaypH1rRiQpnIP7q+jrHEco6wStc458rzX1WxW\nhPMjnnlVhH4sJNqh5c5/1BvzSBdnx0qIBcFA6fR8XfL//DmRFsAfRaxVVWadpusc\nXfh4LNNqR9HmoNH6yfBpd66yBYsjFbWip0WKMwdhNBqN1a94OFvRS4+iUfskjC2w\n4w4YjPluRBxI5t9eT4wX8D328ikgP4ZQrPdUZoDpLThhRZ62pTOknOeVj+C7799O\nEbopqGg+6BIXZHakmzB6I/fyjthoLBbxpyqNvKlGGamMNI3d7wq1vwTHch5QLO+w\n5fuRqoIRUtGscSQXp8EOb4kiaxhXXJLkVJw7auOdqxqxQbIf+dt2ViwdyFNjdHz8\ngPFcAom0GO+T7xHMF1H6xqUXkB4QzTK934pMVoIwu5MezBlz8bxj5+EeF7Ptkdnj\nq4rwihGY7aEhPrXVoq19tsbMYwDGZQvbTKtWDOxrD6ruTDTwZxVZcEOAX5KCF0Oq\nqRcrCBcLNERm4FSAgUK90v71TNQoMpVea3/01Ec8GbHJfozvrmAVqBpbF0ajlM1/\nZvGrnmVrJEk/PelCEu+Ni9zrn7DxGZqJ7lbcDU7Nq/18KNvOQah4Ryh9aDKVSD4r\nvgZKzIHPRgKoHTxTZ2uP1LBgK2Ux1RjhlAcZFAmWYxg/qluxnHKCimZ04rIjI0if\nN0wSI7uh8TsyidZv+iKpG+JqW5oe7R8xLlU3ceFllkghAGVRn/UyirGXYPzxXbfB\naphYFBuj6FbtdisM7euX2A9F2OUM2reditR/z6q1Ety1xX9aNudQJ1YcL6yr7pGI\nIX3NANlp2Ra9Fr95ne9aEnwdMmGsQ5DjxHczEc3EcDEbFuH6C/XDzYqtOGyFe/pI\nZgPSiys157GB/GzSfOsErvA+EVWKmU8PiLl461s/OV25m0thG5+03yXKRsymX371\nXAg+hHqe2x5PRjwuUDmruEM/P3LHQeMb4YdhI3DfFyUExtJ/Q/38GgB1XNAuDu0R\n3EyV01Umm6IrYDQWpngjGGmiimOdpLFHkQbxDNiRr8QX5eshAbVlI19DINCiRl/u\njh4TqRZMl6YI4oQZDYqCrBrqZLljm/DBhgvr2jnq9ed3dIKlHbrkw3sjBuwINZjw\naduL3U+WTUvUCY/VtlxJZdU1kVLwSnkDh+8HK/eZ7AuHWjQjD9JzArCo5CCMMFJL\noY0IKxzhhP+4BmaMabwcuooxMjWR3fu3T0sgcTEZtG61wcSUDW0gw6c5QAxmq7It\nqzP2b1eNPp05oMJ6ALIe+8MQMM94HigbSiLB3/rFS8KkhZcdJliBc+Ig6TBFx9QW\nS0Jh4WgJn0B5laiI7DRp0E9bUUnLLEFTdA9P9T1DcIwngPuv6IYNQdzYluaX6cvy\nNhCH+XdbaFkA9KOsp69uZWqzweoejAo24Cj71J9H4yMzBDWi7/fL4YQqjS6zC9JY\ny3zhk8VGi9SYtMB1bPdmxBlCyLElZ6qf/cyjsWN89oTTITCYbSuIrB4piJH35t17\nd7eFZ7QXMampJzCQyAcKsxTDVdeKhHjVxsnSWuvmlR31Hmrxw3yQQH2pbGLcHBWJ\ngz+/xpgxh5x0dGzqOKqgfGOtBOSpzHFMuuoXToYbcAIwMVRcTPnVR7B1kOm2OiLG\nhuOxX29DypSM9HjsmoeffJaUoZ2wvBK4QZNpe5Jb80An/aO+8/oKmtaZgJqectsM\nfrVSLZtdPnH62lPy1i5CnoFI6JkX7oficJw8YQqswRp2z5HL9cSEAiR3MOr/Yco+\njJu5IidT3u5+hUlIdZtEtA=="]
+
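The `encrypted_ca` and `encrypted_ca_key` values above are OpenSSL `enc` blobs: base64 text whose decoded form begins with the literal `Salted__` magic, followed by the 8-byte salt and the ciphertext (the common `U2FsdGVkX1` prefix is just that magic in base64). A quick sanity check using only the standard library:

```python
import base64

def looks_like_openssl_enc(blob):
    """True if the base64 blob decodes to OpenSSL's salted 'enc' format:
    b'Salted__' + 8 salt bytes + ciphertext."""
    raw = base64.b64decode(''.join(blob.split()))  # tolerate embedded newlines
    return raw.startswith(b'Salted__') and len(raw) > 16
```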
+storage:
+    backends:
+        lvm:
+            enabled: false
+        ceph:
+            osd_pool_default_size: 2
+            enabled: true
+
+network_profiles:
+    controller_network:
+        linux_bonding_options: "mode=lacp"
+        ovs_bonding_options: "mode=lacp"
+
+        bonding_interfaces:
+            bond0: [ens11f0,ens11f1]
+
+        interface_net_mapping:
+            bond0: [infra_internal, infra_external, infra_storage_cluster]
+
+performance_profiles:
+    caas_cpu_profile:
+        caas_cpu_pools:
+            exclusive_pool_percentage: 25
+            shared_pool_percentage: 75
+
+storage_profiles:
+    caas_worker_docker_profile:
+        lvm_instance_storage_partitions: ["1"]
+        mount_dir: /var/lib/docker
+        mount_options: noatime,nodiratime,logbufs=8,pquota
+        backend: bare_lvm
+        lv_name: docker
+
+    ceph_backend_profile:
+        backend: ceph
+        nr_of_ceph_osd_disks: 2
+        ceph_pg_openstack_caas_share_ratio: "0:1"
+
+hosts:
+    controller-1:
+        service_profiles:     [ caas_master, storage ]
+        network_profiles:     [ controller_network ]
+        storage_profiles:     [ ceph_backend_profile ]
+        performance_profiles: [ caas_cpu_profile ]
+        network_domain: rack-1
+        hwmgmt:
+            address:  10.65.3.57
+            user:     admin
+            password: XXXXXXXX
+    controller-2:
+        service_profiles:     [ caas_master, storage ]
+        network_profiles:     [ controller_network ]
+        storage_profiles:     [ ceph_backend_profile ]
+        performance_profiles: [ caas_cpu_profile ]
+        network_domain: rack-1
+        hwmgmt:
+            address:  10.65.3.56
+            user:     admin
+            password: XXXXXXXX
+    controller-3:
+        service_profiles:     [ caas_master, storage ]
+        network_profiles:     [ controller_network ]
+        storage_profiles:     [ ceph_backend_profile ]
+        performance_profiles: [ caas_cpu_profile ]
+        network_domain: rack-1
+        hwmgmt:
+            address:  10.65.3.55
+            user:     admin
+            password: XXXXXXXX
+
+host_os:
+    lockout_time: 300
+    failed_login_attempts: 5
+...
diff --git a/workflows/REC_create.py b/workflows/REC_create.py
new file mode 100755 (executable)
index 0000000..3467b53
--- /dev/null
+++ b/workflows/REC_create.py
@@ -0,0 +1,260 @@
+#!/usr/bin/python3
+#
+#       Copyright (c) 2019 AT&T Intellectual Property. All Rights Reserved.
+#
+#       Licensed under the Apache License, Version 2.0 (the "License");
+#       you may not use this file except in compliance with the License.
+#       You may obtain a copy of the License at
+#
+#           http://www.apache.org/licenses/LICENSE-2.0
+#
+#       Unless required by applicable law or agreed to in writing, software
+#       distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#       WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#       License for the specific language governing permissions and limitations
+#       under the License.
+#
+
+"""
+REC_create.py - This workflow is used to create a REC POD by way of the remote-installer
+container.   The remote-installer is started if it is not running.  Parameters passed to
+this script (via the INPUT.yaml file) are:
+  iso_primary - the main installer.iso file to use
+  iso_secondary - the secondary bootcd.iso file
+  input_yaml - the YAML file passed to remote_installer
+  rc_host - the IP address or DNS name of the RC
+"""
+
+import datetime
+import docker
+import requests, urllib3
+import os, sys, time, yaml
+import POD
+
+WORKDIR      = os.path.abspath(os.path.dirname(__file__))
+RI_NAME      = 'remote-installer'
+RI_IMAGE     = 'nexus3.akraino.org:10003/akraino/remote-installer:latest'
+RI_DIR       = '/workflow/remote-installer'
+CERT_DIR     = RI_DIR + '/certificates'
+EXTERNALROOT = '/data'
+NETWORK      = 'host'
+WAIT_TIME    = 150
+HTTPS_PORT   = 8443
+API_PORT     = 15101
+ADMIN_PASSWD = 'recAdm1n'
+REMOVE_ISO   = False
+HOST_IP      = '127.0.0.1'
+
+def start(ds, **kwargs):
+    # Read the user input from the POST
+    global HOST_IP
+    urllib3.disable_warnings()
+    cfg = read_yaml(WORKDIR + '/INPUT.yaml')  # 'cfg', not 'yaml', to avoid shadowing the yaml module
+    REC_ISO_IMAGE_NAME        = cfg['iso_primary']
+    REC_PROVISIONING_ISO_NAME = cfg['iso_secondary']
+    INPUT_YAML_URL            = cfg['input_yaml']
+    HOST_IP                   = cfg['rc_host']
+    CLOUDNAME                 = 'CL-'+POD.POD
+    ISO                       = '%s/images/install-%s.iso' % (RI_DIR, POD.POD)
+    BOOTISO                   = '%s/images/bootcd-%s.iso'  % (RI_DIR, POD.POD)
+    USERCONF                  = '%s/user-configs/%s/user_config.yaml' % (RI_DIR, CLOUDNAME)
+
+    print('-----------------------------------------------------------------------------------------------')
+    print('                      POD is '+POD.POD)
+    print('                CLOUDNAME is '+CLOUDNAME)
+    print('                  WORKDIR is '+WORKDIR)
+    print('                  HOST_IP is '+HOST_IP)
+    print('             EXTERNALROOT is '+EXTERNALROOT)
+    print('       REC_ISO_IMAGE_NAME is '+REC_ISO_IMAGE_NAME)
+    print('REC_PROVISIONING_ISO_NAME is '+REC_PROVISIONING_ISO_NAME)
+    print('           INPUT_YAML_URL is '+INPUT_YAML_URL)
+    print('                      ISO is '+ISO)
+    print('                  BOOTISO is '+BOOTISO)
+    print('                 USERCONF is '+USERCONF)
+    print('-----------------------------------------------------------------------------------------------')
+
+    # Setup RI_DIR
+    initialize_RI(CLOUDNAME)
+
+    # Fetch the three files into WORKDIR
+    fetchURL(REC_ISO_IMAGE_NAME,        WORKDIR + '/install.iso')
+    fetchURL(REC_PROVISIONING_ISO_NAME, WORKDIR + '/bootcd.iso')
+    fetchURL(INPUT_YAML_URL,            WORKDIR + '/user_config.yaml')
+
+    # Link files to RI_DIR with unique names
+    os.link(WORKDIR + '/install.iso', ISO)
+    os.link(WORKDIR + '/bootcd.iso', BOOTISO)
+    os.link(WORKDIR + '/user_config.yaml', USERCONF)
+    PWFILE = '%s/user-configs/%s/admin_passwd' % (RI_DIR, CLOUDNAME)
+    with open(PWFILE, "w") as f:
+        f.write(ADMIN_PASSWD + '\n')
+
+    # Start the remote_installer
+    client = docker.from_env()
+    namefilt = { 'name': RI_NAME }
+    ri = client.containers.list(filters=namefilt)
+    if len(ri) == 0:
+        print(RI_NAME + ' is not running.')
+        c = start_RI(client)
+
+    else:
+        print(RI_NAME + ' is running.')
+        c = ri[0]
+
+    # Send request to remote_installer
+    id = send_request(HOST_IP, CLOUDNAME, ISO, BOOTISO)
+
+    # Wait up to WAIT_TIME minutes for completion
+    if wait_for_completion(HOST_IP, id, WAIT_TIME):
+        print('Installation failed after %d minutes.' % (WAIT_TIME))
+        sys.exit(1)
+
+    # Remove the ISOs?
+    if REMOVE_ISO:
+        for iso in (WORKDIR + '/install.iso', ISO, WORKDIR + '/bootcd.iso', BOOTISO):
+            os.unlink(iso)
+
+    # Done!
+    print('Installation complete!')
+    # sys.exit(0)  Don't exit as this will cause the task to fail!
+    return 'Complete.'
+
+def read_yaml(input_file):
+    print('Reading '+input_file+' ...')
+    with open(input_file, 'r') as stream:
+        try:
+            return yaml.safe_load(stream)
+        except yaml.YAMLError as exc:
+            print(exc)
+            sys.exit(1)
+
+def send_request(ri_ip, CLOUDNAME, ISO, BOOTISO):
+    URL     = 'https://%s:%d/v1/installations' % (ri_ip, API_PORT)
+    print('Sending request to '+URL+' ...')
+    headers = {'Content-type': 'application/json'}
+    content = {
+        'cloud-name': CLOUDNAME,
+        'iso': os.path.basename(ISO),
+        'provisioning-iso': os.path.basename(BOOTISO)
+    }
+    certs    = (CERT_DIR+'/clientcert.pem', CERT_DIR+'/clientkey.pem')
+    response = requests.post(URL, json=content, headers=headers, cert=certs, verify=False)
+    print(response)
+    return response.json().get('uuid')
+
+def create_podevent(msg='Default msg', level='INFO'):
+    API_HOST = 'http://arc-api:8080'
+    if os.environ.get('LOGGING_USER') and os.environ.get('LOGGING_PASSWORD'):
+        payload  = {'name': os.environ['LOGGING_USER'], 'password': os.environ['LOGGING_PASSWORD']}
+        response = requests.post(API_HOST+'/api/v1/login', json=payload)
+        token    = response.headers['X-ARC-Token']
+        headers  = {'X-ARC-Token': token}
+        payload  = {'uuid': POD.POD, 'level': level, 'message': msg}
+        response = requests.post(API_HOST+'/api/v1/podevent', headers=headers, json=payload)
+
+def wait_for_completion(ri_ip, id, ntimes):
+    """
+    Wait (up to ntimes minutes) for the remote_installer to finish.
+    Any status other than 'completed' is considered a failure.
+    """
+    status = 'ongoing'
+    URL    = 'https://%s:%d/v1/installations/%s/state' % (ri_ip, API_PORT, id)
+    certs  = (CERT_DIR+'/clientcert.pem', CERT_DIR+'/clientkey.pem')
+    lastevent = ''
+    while status == 'ongoing' and ntimes > 0:
+        time.sleep(60)
+        response = requests.get(URL, cert=certs, verify=False)
+        j = response.json()
+        t = (
+            str(j.get('status')),
+            str(j.get('percentage')),
+            str(j.get('description'))
+        )
+        event = 'Status is %s (%s) %s' % t
+        print('%s: %s' % (datetime.datetime.now().strftime('%x %X'), event))
+        if event != lastevent:
+            create_podevent(event)
+        lastevent = event
+        status = j.get('status')
+        ntimes = ntimes - 1
+    return status != 'completed'
+
+def fetchURL(url, dest):
+    print('Fetching '+url+' ...')
+    r = requests.get(url)
+    with open(dest, 'wb') as f1:
+        f1.write(r.content)
+
+def initialize_RI(CLOUDNAME):
+    """ Create the directory structure needed by the remote-installer """
+    dirs = (
+        RI_DIR,
+        RI_DIR+'/certificates',
+        RI_DIR+'/images',
+        RI_DIR+'/installations',
+        RI_DIR+'/user-configs',
+        RI_DIR+'/user-configs/'+CLOUDNAME
+    )
+    for dir in dirs:
+        if not os.path.isdir(dir):
+            print('mkdir '+dir)
+            os.mkdir(dir)
+
+def start_RI(client):
+    """
+    Start the remote-installer container (assumed to already be built somewhere).
+    Before starting, make sure the certificates directory is populated.  If not,
+    generate some self-signed certificates.
+    """
+    # If needed, create certificates (11 files) in RI_DIR/certificates
+    if not os.path.exists(CERT_DIR+'/clientcert.pem') or not os.path.exists(CERT_DIR+'/clientkey.pem'):
+        print('Generating some self-signed certificates.')
+        script = WORKDIR + '/gencerts.sh'
+        cmd = 'bash %s %s' % (script, RI_DIR+'/certificates')
+        print('os.system('+cmd+')')
+        os.system(cmd)
+
+    print('Starting %s.' % RI_NAME)
+    env = {
+        'API_PORT': API_PORT, 'HOST_ADDR': HOST_IP, 'HTTPS_PORT': HTTPS_PORT,
+        'PW': ADMIN_PASSWD, 'SSH_PORT': 22222
+    }
+    vols = {
+        EXTERNALROOT+RI_DIR: {'bind': '/opt/remoteinstaller', 'mode': 'rw'}
+    }
+    try:
+        c = client.containers.run(
+            image=RI_IMAGE,
+            name=RI_NAME,
+            network_mode=NETWORK,
+            environment=env,
+            volumes=vols,
+            detach=True,
+            remove=True,
+            privileged=True
+        )
+
+        # Wait 5 minutes for it to be running
+        n = 0
+        while c.status != 'running' and n < 10:
+            time.sleep(30)
+            c.reload()
+            n = n + 1
+        if c.status != 'running' and n >= 10:
+            print('Container took too long to start!')
+            sys.exit(1)
+        return c
+
+    except docker.errors.ImageNotFound as ex:
+        # If the specified image does not exist.
+        print(ex)
+        sys.exit(1)
+
+    except docker.errors.APIError as ex:
+        # If the server returns an error.
+        print(ex)
+        sys.exit(1)
+
+    except Exception as ex:
+        print('Unexpected error: %s' % ex)
+        sys.exit(1)
diff --git a/workflows/gencerts.sh b/workflows/gencerts.sh
new file mode 100755 (executable)
index 0000000..8fff54a
--- /dev/null
+++ b/workflows/gencerts.sh
@@ -0,0 +1,214 @@
+#!/bin/bash
+#
+#  Script to create self-signed certificates in directory $1.
+#
+
+cd "$1" || exit 1
+
+cat > openssl-ca.cnf << EOF
+HOME            = .
+RANDFILE        = \$ENV::HOME/.rnd
+
+####################################################################
+[ ca ]
+default_ca    = CA_default      # The default ca section
+
+[ CA_default ]
+
+dir               = /root/ca
+default_days     = 1000         # How long to certify for
+default_crl_days = 30           # How long before next CRL
+default_md       = sha256       # Use public key default MD
+preserve         = no           # Keep passed DN ordering
+
+x509_extensions = ca_extensions # The extensions to add to the cert
+
+email_in_dn     = no            # Don't concat the email in the DN
+copy_extensions = copy          # Required to copy SANs from CSR to cert
+
+####################################################################
+[ req ]
+prompt = no
+default_bits       = 4096
+default_keyfile    = cakey.pem
+distinguished_name = ca_distinguished_name
+x509_extensions    = ca_extensions
+string_mask        = utf8only
+
+####################################################################
+[ ca_distinguished_name ]
+countryName           = FI
+organizationName      = Nokia OY
+# commonName          = Nokia
+# commonName_default  = Test Server
+# emailAddress        = test@server.com
+stateOrProvinceName   = Uusimaa
+localityName          = Espoo
+
+####################################################################
+[ ca_extensions ]
+
+subjectKeyIdentifier   = hash
+authorityKeyIdentifier = keyid:always, issuer
+basicConstraints       = critical, CA:true
+keyUsage               = keyCertSign, cRLSign
+EOF
+
+cat > openssl-server.cnf << EOF
+HOME            = .
+RANDFILE        = \$ENV::HOME/.rnd
+
+####################################################################
+[ req ]
+prompt = no
+default_bits       = 2048
+default_keyfile    = serverkey.pem
+distinguished_name = server_distinguished_name
+req_extensions     = server_req_extensions
+string_mask        = utf8only
+
+####################################################################
+[ server_distinguished_name ]
+countryName           = FI
+organizationName      = Nokia NET
+commonName            = Test Server
+# emailAddress        = test@server.com
+stateOrProvinceName   = Uusimaa
+localityName          = Espoo
+
+####################################################################
+[ server_req_extensions ]
+
+subjectKeyIdentifier = hash
+basicConstraints     = CA:FALSE
+keyUsage             = digitalSignature, keyEncipherment
+subjectAltName       = @alternate_names
+nsComment            = "OpenSSL Generated Certificate"
+
+####################################################################
+[ alternate_names ]
+
+DNS.1  = server.com
+EOF
+
+cat > openssl-client.cnf << EOF
+HOME            = .
+RANDFILE        = \$ENV::HOME/.rnd
+
+####################################################################
+[ req ]
+prompt = no
+default_bits       = 2048
+default_keyfile    = clientkey.pem
+distinguished_name = client_distinguished_name
+req_extensions     = client_req_extensions
+string_mask        = utf8only
+
+####################################################################
+[ client_distinguished_name ]
+countryName          = DE
+organizationName     = Customer X
+commonName           = Customer
+emailAddress         = test@client.com
+
+####################################################################
+[ client_req_extensions ]
+
+subjectKeyIdentifier = hash
+basicConstraints     = CA:FALSE
+keyUsage             = digitalSignature, keyEncipherment
+subjectAltName       = @alternate_names
+nsComment            = "OpenSSL Generated Certificate"
+
+####################################################################
+[ alternate_names ]
+
+DNS.1  = ramuller.zoo.dynamic.nsn-net.net
+DNS.2  = www.client.com
+DNS.3  = mail.client.com
+DNS.4  = ftp.client.com
+EOF
+
+cat > openssl-ca-sign.cnf << EOF
+HOME            = .
+RANDFILE        = \$ENV::HOME/.rnd
+
+####################################################################
+[ ca ]
+default_ca    = CA_default      # The default ca section
+
+[ CA_default ]
+
+default_days     = 1000         # How long to certify for
+default_crl_days = 30           # How long before next CRL
+default_md       = sha256       # Use public key default MD
+preserve         = no           # Keep passed DN ordering
+
+x509_extensions = ca_extensions # The extensions to add to the cert
+
+email_in_dn     = no            # Don't concat the email in the DN
+copy_extensions = copy          # Required to copy SANs from CSR to cert
+base_dir      = .
+certificate   = \$base_dir/cacert.pem   # The CA certificate
+private_key   = \$base_dir/cakey.pem    # The CA private key
+new_certs_dir = \$base_dir              # Location for new certs after signing
+database      = \$base_dir/index.txt    # Database index file
+serial        = \$base_dir/serial.txt   # The current serial number
+
+unique_subject = no  # Set to 'no' to allow creation of
+                     # several certificates with same subject.
+
+####################################################################
+[ req ]
+prompt = no
+default_bits       = 4096
+default_keyfile    = cakey.pem
+distinguished_name = ca_distinguished_name
+x509_extensions    = ca_extensions
+string_mask        = utf8only
+
+####################################################################
+[ ca_distinguished_name ]
+countryName           = FI
+organizationName      = Nokia OY
+# commonName          = Nokia
+# commonName_default  = Test Server
+# emailAddress        = test@server.com
+stateOrProvinceName   = Uusimaa
+localityName          = Espoo
+
+####################################################################
+[ ca_extensions ]
+
+subjectKeyIdentifier   = hash
+authorityKeyIdentifier = keyid:always, issuer
+basicConstraints       = critical, CA:true
+keyUsage               = keyCertSign, cRLSign
+
+####################################################################
+[ signing_policy ]
+countryName            = optional
+stateOrProvinceName    = optional
+localityName           = optional
+organizationName       = optional
+organizationalUnitName = optional
+commonName             = supplied
+emailAddress           = optional
+
+####################################################################
+[ signing_req ]
+subjectKeyIdentifier   = hash
+authorityKeyIdentifier = keyid,issuer
+basicConstraints       = CA:FALSE
+keyUsage               = digitalSignature, keyEncipherment
+EOF
+
+openssl req -config openssl-ca.cnf -x509 -newkey rsa:2048 -sha256 -nodes -out cacert.pem     -outform PEM
+openssl req -config openssl-server.cnf   -newkey rsa:2048 -sha256 -nodes -out servercert.csr -outform PEM 
+openssl req -config openssl-client.cnf   -newkey rsa:2048 -sha256 -nodes -out clientcert.csr -outform PEM
+echo -n   > index.txt
+echo '01' > serial.txt
+echo -n   > index-ri.txt
+echo '01' > serial-ri.txt
+echo -e "y\ny\n" | openssl ca -config openssl-ca-sign.cnf -policy signing_policy -extensions signing_req -out servercert.pem -infiles servercert.csr
+echo -e "y\ny\n" | openssl ca -config openssl-ca-sign.cnf -policy signing_policy -extensions signing_req -out clientcert.pem -infiles clientcert.csr
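The CA → CSR → signed-certificate flow that this script drives can be exercised in isolation with a throwaway CA. This is a minimal sketch, not part of the repo: all subject names are made up, short-lived `openssl req`/`x509`/`verify` one-liners stand in for the config-file-driven `openssl ca` calls above, and it assumes `openssl` is on the PATH:

```shell
# Throwaway CA, one signed server cert, then verify the chain.
# Names are illustrative; gencerts.sh itself is not needed here.
set -e
dir=$(mktemp -d)
cd "$dir"
# Self-signed CA (mirrors the openssl-ca.cnf step)
openssl req -x509 -newkey rsa:2048 -nodes -keyout cakey.pem -out cacert.pem \
    -subj "/C=FI/O=Test CA/CN=testca" -days 1
# Server key + CSR (mirrors the openssl-server.cnf step)
openssl req -newkey rsa:2048 -nodes -keyout serverkey.pem -out server.csr \
    -subj "/C=FI/O=Test/CN=server.com"
# Sign the CSR with the CA (mirrors the openssl-ca-sign.cnf step)
openssl x509 -req -in server.csr -CA cacert.pem -CAkey cakey.pem \
    -CAcreateserial -out servercert.pem -days 1
# On success this prints: servercert.pem: OK
openssl verify -CAfile cacert.pem servercert.pem
```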
diff --git a/workflows/pod_create.sh b/workflows/pod_create.sh
new file mode 100644 (file)
index 0000000..1116811
--- /dev/null
@@ -0,0 +1,227 @@
+#!/bin/bash
+# Copyright 2019 AT&T
+
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Workflow:
+#
+#0.  Get values for the environment variables
+
+# The following must be provided.
+   HOST_IP=
+   CLOUDNAME=
+   ADMIN_PASSWD=
+
+# The next set may be modified if necessary but are best left as-is
+   HTTPS_PORT=8443
+   API_PORT=15101
+   # Max time (in minutes) to wait for the remote-installer to report completion
+   # Currently 2.5 hours
+   MAX_TIME=150
+
+
+   # The rest should probably not be changed
+   WORKDIR=$(dirname $0)
+   BASEDIR=$WORKDIR
+   EXTERNALROOT=/data
+   NETWORK=host
+
+   # these will come from the Blueprint file and are available in "INPUT.yaml"
+   tr , '\012' < $WORKDIR/INPUT.yaml |tr -d '{}'|sed -e 's/^  *//' -e 's/: /=/' >/tmp/env
+   . /tmp/env
+   REC_ISO_IMAGE_NAME=$iso_primary
+   REC_PROVISIONING_ISO_NAME=$iso_secondary
+   INPUT_YAML_URL=$input_yaml
+   cat <<EOF
+   --------------------------------------------
+   WORKDIR is $WORKDIR
+   HOST_IP is $HOST_IP
+   EXTERNALROOT is $EXTERNALROOT
+   REC_ISO_IMAGE_NAME is $REC_ISO_IMAGE_NAME
+   REC_PROVISIONING_ISO_NAME is $REC_PROVISIONING_ISO_NAME
+   INPUT_YAML_URL is $INPUT_YAML_URL
+   --------------------------------------------
+EOF
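The `tr`/`sed` pipeline in step 0 flattens the one-line `{key: value, ...}` map in INPUT.yaml into shell variable assignments. A self-contained illustration (the keys and URLs here are invented, not real blueprint values):

```shell
# Same pipeline as in step 0, fed a sample one-line map.
echo '{iso_primary: http://x/rec.iso, iso_secondary: http://x/boot.iso}' |
    tr , '\012' | tr -d '{}' | sed -e 's/^  *//' -e 's/: /=/'
# prints:
#   iso_primary=http://x/rec.iso
#   iso_secondary=http://x/boot.iso
```

Only the first `": "` on each line is rewritten to `=`, so the `://` in URL values passes through untouched.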
+
+#1. Create a new directory to be used for holding the installation artifacts.
+
+   #create the base directory
+   mkdir -p $BASEDIR
+
+   #images sub-directory
+   mkdir -p $BASEDIR/images
+
+   #certificates sub-directory
+   mkdir -p $BASEDIR/certificates
+
+   #user configuration and cloud admin information
+   mkdir -p $BASEDIR/user-configs
+
+   #installation logs directory
+   mkdir -p $BASEDIR/installations
+
+#2. Get REC golden image from REC Nexus artifacts and copy it to the images sub-directory under the directory created in (1).
+
+   cd $BASEDIR/images/
+   FILENAME="${REC_ISO_IMAGE_NAME##*/}"
+   curl "$REC_ISO_IMAGE_NAME" > "$FILENAME"
+
+#3. Get REC booting image from REC Nexus artifacts and copy it to the images sub-directory under the directory created in (1).
+
+   cd $BASEDIR/images/
+   FILENAME="${REC_PROVISIONING_ISO_NAME##*/}"
+   curl "$REC_PROVISIONING_ISO_NAME" > "$FILENAME"
+
+#4. Get the user-config.yaml file and admin_password file for the CD environment from the
+#   cd-environments repo and copy it to the user-configs sub-directory under the directory
+#   created in (1). Copy the files to a cloud-specific directory identified by the cloudname.
+
+   cd $BASEDIR/user-configs/
+   mkdir $CLOUDNAME
+   cd $CLOUDNAME
+   curl $INPUT_YAML_URL > user_config.yaml
+   ln user_config.yaml user_config.yml
+   echo $ADMIN_PASSWD > admin_passwd
+
+#5. Checkout the remote-installer repo from LF
+
+   mkdir $BASEDIR/git
+   cd $BASEDIR/git
+   git clone https://gerrit.akraino.org/r/ta/remote-installer
+
+#6. Copy the server certificates, the client certificates, and the CA certificate to
+#   the certificates sub-directory under the directory created in (1).
+#   The following certificates are expected to be available in the directory:
+#
+#   cacert.pem: The CA certificate
+#   servercert.pem: The server certificate signed by the CA
+#   serverkey.pem: The server key
+#   clientcert.pem: The client certificate signed by the CA
+#   clientkey.pem: The client key
+#
+
+       cd $BASEDIR/git/remote-installer/test/certificates
+       ./create.sh
+       cp *.pem $BASEDIR/certificates
+
+#7. Build the remote installer docker-image.
+    cd $BASEDIR/git/remote-installer/scripts/
+    echo $0: ./build.sh "$HTTPS_PORT" "$API_PORT"
+    ./build.sh "$HTTPS_PORT" "$API_PORT"
+
+#8. Start the remote installer
+
+   cd $BASEDIR/git/remote-installer/scripts/
+   echo $0: ./start.sh -b "$EXTERNALROOT$BASEDIR" -e "$HOST_IP" -s "$HTTPS_PORT" -a "$API_PORT" -p "$ADMIN_PASSWD"
+   if ! ./start.sh -b "$EXTERNALROOT$BASEDIR" -e "$HOST_IP" -s "$HTTPS_PORT" -a "$API_PORT" -p "$ADMIN_PASSWD"
+   then
+       echo "Failed to start the remote installer"
+       exit 1
+   fi
+
+#9. Wait for the remote installer to become running.
+#   check every 30 seconds until it has a status of "running"
+
+    DOCKER_STATUS=""
+
+    while [ ${#DOCKER_STATUS} -eq 0 ]; do
+        sleep 30
+
+        DOCKER_ID=$(docker ps | grep remote-installer | awk ' {print $1}')
+        DOCKER_STATUS=$(docker ps -f status=running | grep $DOCKER_ID)
+    done
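The loop above polls with no upper bound. The same pattern with a bounded retry count can be factored into a small helper; this is a sketch, and the function name, retry counts, and one-second sleep are illustrative, not part of the scripts in this repo:

```shell
# Poll a command until it produces non-empty output, up to N tries.
# Returns success if output appeared, failure if the bound was hit.
wait_for_output() {
    local tries=$1; shift
    local out=""
    while [ -z "$out" ] && [ "$tries" -gt 0 ]; do
        out=$("$@")
        tries=$((tries - 1))
        [ -z "$out" ] && sleep 1
    done
    [ -n "$out" ]
}

# e.g. (hypothetical usage mirroring the loop above):
#   wait_for_output 20 sh -c 'docker ps -f status=running | grep remote-installer'
```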
+
+#10. Start the installation by sending the following http request to the installer API
+
+#    POST url: https://localhost:$API_PORT/v1/installations
+#    REQ body json- encoded
+#    {
+#        'cloudname': $CLOUDNAME,
+#        'iso': $REC_ISO_IMAGE_NAME,
+#        'provisioning-iso': $REC_PROVISIONING_ISO_NAME
+#    }
+#    REP body json-encoded
+#    {
+#        'uuid': $INSTALLATION_UUID
+#    }
+
+rec=$(basename $REC_ISO_IMAGE_NAME)
+boot=$(basename $REC_PROVISIONING_ISO_NAME)
+cat >/tmp/data <<EOF
+{
+       "cloud-name": "$CLOUDNAME",
+       "iso": "$rec",
+       "provisioning-iso": "$boot"
+}
+EOF
+
+       # Get the IP address of the remote installer container
+       # RI_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' remote-installer)
+       RI_IP=$HOST_IP
+
+       echo "$0: Posting to https://$RI_IP:$API_PORT/v1/installations"
+    RESPONSE=$(
+       curl -k \
+               --header "Content-Type: application/json" \
+               -d @/tmp/data \
+               --cert $BASEDIR/certificates/clientcert.pem \
+                       --key  $BASEDIR/certificates/clientkey.pem \
+               https://$RI_IP:$API_PORT/v1/installations)
+       echo "$0: RESPONSE IS $RESPONSE"
+
+    INSTALLATION_UUID=$(echo $RESPONSE | jq -r ".uuid")
+
+#11. Follow the progress of the installation by sending the following http request to the installer API
+
+#    GET url: https://localhost:$API_PORT/v1/installations/$INSTALLATION_UUID
+#
+#    REP body json-encoded
+#    {
+#        'status': <ongoing|completed|failed>,
+#        'description': <description>,
+#        'percentage': <the progress percentage>
+#    }
+#
+#
+
+# check the status every minute until it has become "completed"
+# (for a maximum of MAX_TIME minutes)
+
+    STATUS="ongoing"
+       NTIMES=$MAX_TIME
+    while [ "$STATUS" == "ongoing" -a $NTIMES -gt 0 ]; do
+        sleep 60
+        NTIMES=$((NTIMES - 1))
+        RESPONSE=$(curl -k --silent \
+               --cert $BASEDIR/certificates/clientcert.pem \
+                       --key  $BASEDIR/certificates/clientkey.pem \
+               https://$RI_IP:$API_PORT/v1/installations/$INSTALLATION_UUID/state)
+        STATUS=$(echo $RESPONSE | jq -r ".status")
+        PCT=$(   echo $RESPONSE | jq -r ".percentage")
+        DESCR=$( echo $RESPONSE | jq -r ".description")
+        echo "$(date): Status is $STATUS ($PCT) $DESCR"
+    done
+       if [ "$STATUS" == "ongoing" -a $NTIMES -eq 0 ]
+       then
+               echo "Installation did not complete within $MAX_TIME minutes."
+               exit 1
+       fi
+       echo "Installation complete!"
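The status loop above leans on `jq -r` to pull fields out of the JSON reply. A self-contained illustration with a made-up response body (the field names match the shape described in step 11; the values are invented):

```shell
# Parse a sample installer reply the same way the loop above does.
RESPONSE='{"status": "ongoing", "description": "provisioning", "percentage": 42}'
STATUS=$(echo "$RESPONSE" | jq -r ".status")
PCT=$(echo "$RESPONSE" | jq -r ".percentage")
DESCR=$(echo "$RESPONSE" | jq -r ".description")
echo "Status is $STATUS ($PCT) $DESCR"
# prints: Status is ongoing (42) provisioning
```

`jq -r` emits raw strings rather than JSON-quoted ones, which is why the values can be compared directly against `"ongoing"` in the loop's test.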
+
+#12. When installation is completed stop the remote installer.
+
+    cd $BASEDIR/git/remote-installer/scripts/
+    ./stop.sh
+
+       exit 0