Fixed typo and converted tabs.
Signed-off-by: Eby, Robert (re2429) <re2429@att.com>
Change-Id: I4ac9a2b1b6b0f1bf9c1f5cee97b7e058f56caba4
Telco Appliance family of blueprints).
yaml:
# Required hardware profiles (can match on either UUID or name)
- # Note: UUIDs will likely require a global registry of HW profiles.
+ # Note: UUIDs would likely require a global registry of HW profiles.
hardware_profile:
or:
- { uuid: 8a17384a-71d4-11e9-9e4c-0017f20fe1b8 }
workflow:
# Workflow that is invoked when the POD is created
create:
- # This URL is a direct link to the REC pod_create workflow on Gerrit.
- # It lacks several required input variables, so should be copied to a
- # local webserver and customized with input variables before deployment.
- # Change this URL to the new location of the workflow script.
- url: https://gerrit.akraino.org/r/gitweb?p=rec.git;a=blob_plain;f=workflows/pod_create.sh;hb=HEAD
+ url: 'http://www.example.org/blueprints/REC/REC_create.py'
+ components:
+ # This script is used to generate self-signed certs for the remote-installer
+ - 'http://www.example.org/blueprints/REC/gencerts.sh'
input_schema:
iso_primary: { type: string }
iso_secondary: { type: string }
input_yaml: { type: string }
-
-
- # Workflow that is invoked when the POD is deleted
-# delete:
-# The delete workflow has not been written yet. This is a placeholder.
- #url: https://gerrit.akraino.org/r/gitweb?p=rec.git;a=blob_plain;f=workflows/pod_delete.sh;hb=HEAD
+ rc_host: { type: string }
--- /dev/null
+..
+ Copyright (c) 2019 AT&T Intellectual Property. All Rights Reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.
+
+Instructions for installing REC using the Regional Controller and the REC Blueprint
+===================================================================================
+
+1. The Regional Controller should already be running somewhere (ideally on a machine or
+   VM dedicated to this purpose). See here_ for instructions on how to start the Regional
+   Controller.
+
+ .. _here: https://wiki.akraino.org/display/AK/Starting+the+Regional+Controller
+
+2. Clone the *rec* repository using
+
+ .. code-block:: bash
+
+ git clone https://gerrit.akraino.org/r/rec.git
+
+ We will use the following files from this repository:
+
+ .. code-block:: bash
+
+ ./REC_blueprint.yaml
+ ./objects.yaml
+ ./workflows/gencerts.sh
+ ./workflows/REC_create.py
+
+ You will need to provide a web server where some of these files may be fetched by the
+ Regional Controller.
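If you do not already have a web server available, one quick option (a sketch, not part of the official instructions) is Python's built-in HTTP server, run from the cloned *rec* directory; the port number here is arbitrary and must be reachable from the Regional Controller:

```shell
# Serve the current directory over HTTP (run from the cloned rec repository).
# Assumption: port 8137 is free and reachable from the Regional Controller.
python3 -m http.server 8137 &
HTTPD_PID=$!
```

The repository files are then reachable as ``http://<this-host>:8137/workflows/REC_create.py`` and so on; stop the server with ``kill $HTTPD_PID`` when the installation is complete.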
+
+3. Edit the file *objects.yaml*.
+
+ - Update the *nodes* stanza to define the nodes in your cluster, including the Out of
+ Band IP address for each node, as well as the name of the hardware type. Currently REC
+ is defined to run on the three types of hardware listed in the *hardware* stanza.
+ - If you want to give the edgesite a different name, update the 'edgesites' stanza.
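For reference, each entry in the *nodes* stanza pairs a hardware type with that node's Out of Band address; the shape (with illustrative values taken from the sample file) is:

```yaml
nodes:
  node1:
    hardware: Nokia_OE19     # must match a name defined in the hardware stanza
    yaml:
      oob_ip: 172.1.1.201    # Out of Band (management) IP address of this node
```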
+
+4. Edit the file *REC_blueprint.yaml* to update the URLs (the two lines that contain
+   ``www.example.org``) for the create workflow script (*REC_create.py*) and the
+ *gencerts.sh* script. These URLs should point to the web server and path where you will
+ store these files. The rest of the blueprint should be kept unchanged.
+
+5. Create and edit a copy of *user_config.yaml*. See these instructions_ on how to create
+ this file.
+
+ .. _instructions: https://wiki.akraino.org/display/AK/REC+Installation+Guide#RECInstallationGuide-Aboutuser_config.yaml
+
+6. Copy the two workflow scripts and the *user_config.yaml* file to your web server.
+ Note: the provided *gencerts.sh* just generates some self-signed certificates for use
+ by the *remote-installer* Docker container, with some pre-defined defaults; if you want
+ to provide your own certificates, you will need to modify or replace this script.
+ Set and export the following variable:
+
+ .. code-block:: bash
+
+ export USER_CONFIG_URL=<URL of user_config.yaml>
+
+7. Clone the *api-server* repository. This provides the CLI tools used to interact with the
+ Regional Controller. Add the scripts from this repository to your PATH:
+
+ .. code-block:: bash
+
+ git clone https://gerrit.akraino.org/r/regional_controller/api-server
+ export PATH=$PATH:$PWD/api-server/scripts
+
+8. Define where the Regional Controller is located, as well as the login/password to use
+ (the login/password shown here are the built-in values and do not need to be changed
+ if you have not changed them on the Regional Controller):
+
+ .. code-block:: bash
+
+ export RC_HOST=<IP or DNS name of Regional Controller>
+ export USER=admin
+ export PW=admin123
+
+9. Load the objects defined in *objects.yaml* into the Regional Controller using:
+
+ .. code-block:: bash
+
+ rc_loaddata -H $RC_HOST -u $USER -p $PW -A objects.yaml
+
+10. Load the blueprint into the Regional Controller using:
+
+ .. code-block:: bash
+
+ rc_cli -H $RC_HOST -u $USER -p $PW blueprint create REC_blueprint.yaml
+
+11. Get the UUIDs of the edgesite and the blueprint from the Regional Controller using:
+
+ .. code-block:: bash
+
+ rc_cli -H $RC_HOST -u $USER -p $PW blueprint list
+ rc_cli -H $RC_HOST -u $USER -p $PW edgesite list
+
+ These are needed to create the POD. You will also see the UUID of the Blueprint displayed
+ when you create the Blueprint in step 10 (it is at the tail end of the URL that is printed).
+ Set and export them as the environment variables ESID and BPID.
+
+ .. code-block:: bash
+
+ export ESID=<UUID of edgesite in the RC>
+ export BPID=<UUID of blueprint in the RC>
+
+12. Figure out which REC ISO images you want to use to build your cluster. These are
+ located here:
+ https://nexus.akraino.org/content/repositories/images-snapshots/TA/release-1/images/
+    Choose the build you want, and then set and export the following variables:
+
+ .. code-block:: bash
+
+ export BUILD=<buildnumber>
+ export ISO_PRIMARY_URL=https://nexus.akraino.org/content/repositories/images-snapshots/TA/release-1/images/$BUILD/install.iso
+ export ISO_SECONDARY_URL=https://nexus.akraino.org/content/repositories/images-snapshots/TA/release-1/images/$BUILD/bootcd.iso
+
+ Note: the Akraino Release 1 image is build #9.
+
+13. Create the *POD.yaml* file as follows:
+
+ .. code-block:: bash
+
+ cat > POD.yaml <<EOF
+ name: My_Radio_Edge_Cloud_POD
+ description: Put a description of the POD here.
+ blueprint: $BPID
+ edgesite: $ESID
+ yaml:
+ iso_primary: '$ISO_PRIMARY_URL'
+ iso_secondary: '$ISO_SECONDARY_URL'
+ input_yaml: '$USER_CONFIG_URL'
+ rc_host: $RC_HOST
+ EOF
+
+14. Create the POD using:
+
+ .. code-block:: bash
+
+ rc_cli -H $RC_HOST -u $USER -p $PW pod create POD.yaml
+
+ This will cause the POD to be created, and the *REC_create.py* workflow script to be
+ run on the Regional Controller's workflow engine. This in turn will pull in the ISO
+ images, and install REC on your cluster.
+
+15. If you want to monitor ongoing progress of the installation, you can issue periodic calls
+    to monitor the POD with:
+
+ .. code-block:: bash
+
+ rc_cli -H $RC_HOST -u $USER -p $PW pod show $PODID
+
+    where $PODID is the UUID of the POD. This will show all the messages logged by the
+    workflow, as well as the current status of the workflow. The status will be WORKFLOW
+    while the workflow is running, and will change to ACTIVE if the workflow completes
+    successfully, or to FAILED if the workflow fails.
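Rather than polling by hand, a small loop (a sketch only; the exact ``pod show`` output format may differ) can wait until the status leaves WORKFLOW and then print the final state:

```shell
# Sketch: poll every 2 minutes while the POD status is still WORKFLOW.
# Assumes rc_cli is on PATH and RC_HOST, USER, PW, PODID are set as in the steps above.
watch_pod() {
    while rc_cli -H "$RC_HOST" -u "$USER" -p "$PW" pod show "$PODID" | grep -q WORKFLOW; do
        sleep 120
    done
    # Show the final state (ACTIVE or FAILED)
    rc_cli -H "$RC_HOST" -u "$USER" -p "$PW" pod show "$PODID"
}
```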
--- /dev/null
+#
+# Copyright (c) 2019 AT&T Intellectual Property. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# This file maps out the MT REC CD laboratory hardware, in terms of hardware profiles,
+# regions, edgesites, and nodes. This can all be loaded into the RC via the
+# ``rc_loaddata`` tool.
+#
+# Changes should be made to this file, which will be run against the RC before every CD job.
+#
+# Source:
+# https://tspace.web.att.com/files/app/file/e654e429-dd1e-4775-8449-3ccd4496a13f
+# https://wiki.web.att.com/display/CloudArch/Row+4+Rack+6
+#
+hardware:
+ Nokia_OE19:
+ uuid: 8a17384a-71d4-11e9-9e4c-0017f20fe1b8
+ description: Nokia OpenEdge hardware for the REC Blueprint
+ yaml:
+ todo: put hardware details here
+ rack_layout:
+ height: 1U
+ chassis:
+ layout: airframe
+ height: 3U
+ units: 5
+ Dell_740xd:
+ uuid: 9897a008-71d4-11e9-8bda-0017f20dbff8
+ description: Dell 740xd hardware for the REC Blueprint
+ yaml:
+ todo: put hardware details here
+ rack_layout:
+ height: 2U
+ HPE_DL380g10:
+ uuid: a4b4a570-71d4-11e9-adc2-0017f208759e
+ description: HPE DL380 Gen 10 hardware for the REC Blueprint
+ yaml:
+ todo: put hardware details here
+ rack_layout:
+ height: 2U
+
+edgesites:
+ REC_Edgesite:
+ description: The first REC cluster
+ nodes: [ node1, node2, node3, node4, node5 ]
+ regions: [ 00000000-0000-0000-0000-000000000000 ]
+
+nodes:
+ node1:
+ hardware: Nokia_OE19
+ yaml:
+ oob_ip: 172.1.1.201
+ node2:
+ hardware: Nokia_OE19
+ yaml:
+ oob_ip: 172.1.1.202
+ node3:
+ hardware: Nokia_OE19
+ yaml:
+ oob_ip: 172.1.1.203
+ node4:
+ hardware: Nokia_OE19
+ yaml:
+ oob_ip: 172.1.1.204
+ node5:
+ hardware: Nokia_OE19
+ yaml:
+ oob_ip: 172.1.1.205
--- /dev/null
+#!/usr/bin/python3
+#
+# Copyright (c) 2019 AT&T Intellectual Property. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+"""
+REC_create.py - This workflow is used to create a REC POD by way of the remote-installer
+container. The remote-installer is started if it is not running. Parameters passed to
+this script (via the INPUT.yaml file) are:
+ iso_primary - the main installer.iso file to use
+ iso_secondary - the secondary bootcd.iso file
+ input_yaml - the YAML file passed to remote_installer
+ rc_host - the IP address or DNS name of the RC
+"""
+
+import datetime
+import docker
+import requests
+import os, sys, time, yaml
+import POD
+
+WORKDIR = os.path.abspath(os.path.dirname(__file__))
+RI_NAME = 'remote-installer'
+RI_IMAGE = 'nexus3.akraino.org:10003/akraino/remote-installer:latest'
+RI_DIR = '/workflow/remote-installer'
+CERT_DIR = RI_DIR + '/certificates'
+EXTERNALROOT = '/data'
+NETWORK = 'host'
+WAIT_TIME = 150
+HTTPS_PORT = 8443
+API_PORT = 15101
+ADMIN_PASSWD = 'recAdm1n'
+REMOVE_ISO = False
+
+def start(ds, **kwargs):
+    # Read the user input from the POST (use a name that doesn't shadow the yaml module)
+    params = read_yaml(WORKDIR + '/INPUT.yaml')
+    REC_ISO_IMAGE_NAME = params['iso_primary']
+    REC_PROVISIONING_ISO_NAME = params['iso_secondary']
+    INPUT_YAML_URL = params['input_yaml']
+    HOST_IP = params['rc_host']
+ CLOUDNAME = 'CL-'+POD.POD
+ ISO = '%s/images/install-%s.iso' % (RI_DIR, POD.POD)
+ BOOTISO = '%s/images/bootcd-%s.iso' % (RI_DIR, POD.POD)
+ USERCONF = '%s/user-configs/%s/user_config.yaml' % (RI_DIR, CLOUDNAME)
+
+ print('-----------------------------------------------------------------------------------------------')
+ print(' POD is '+POD.POD)
+ print(' CLOUDNAME is '+CLOUDNAME)
+ print(' WORKDIR is '+WORKDIR)
+ print(' HOST_IP is '+HOST_IP)
+ print(' EXTERNALROOT is '+EXTERNALROOT)
+ print(' REC_ISO_IMAGE_NAME is '+REC_ISO_IMAGE_NAME)
+ print('REC_PROVISIONING_ISO_NAME is '+REC_PROVISIONING_ISO_NAME)
+ print(' INPUT_YAML_URL is '+INPUT_YAML_URL)
+ print(' ISO is '+ISO)
+ print(' BOOTISO is '+BOOTISO)
+ print(' USERCONF is '+USERCONF)
+ print('-----------------------------------------------------------------------------------------------')
+
+ # Setup RI_DIR
+ initialize_RI(CLOUDNAME)
+
+ # Fetch the three files into WORKDIR
+    fetchURL(REC_ISO_IMAGE_NAME, WORKDIR + '/install.iso')
+    fetchURL(REC_PROVISIONING_ISO_NAME, WORKDIR + '/bootcd.iso')
+    fetchURL(INPUT_YAML_URL, WORKDIR + '/user_config.yaml')
+
+ # Link files to RI_DIR with unique names
+ os.link(WORKDIR + '/install.iso', ISO)
+ os.link(WORKDIR + '/bootcd.iso', BOOTISO)
+ os.link(WORKDIR + '/user_config.yaml', USERCONF)
+ PWFILE = '%s/user-configs/%s/admin_passwd' % (RI_DIR, CLOUDNAME)
+ with open(PWFILE, "w") as f:
+ f.write(ADMIN_PASSWD + '\n')
+
+ # Start the remote_installer
+ client = docker.from_env()
+ namefilt = { 'name': RI_NAME }
+ ri = client.containers.list(filters=namefilt)
+    if len(ri) == 0:
+        print(RI_NAME + ' is not running.')
+        c = start_RI(client)
+    else:
+        print(RI_NAME + ' is running.')
+        c = ri[0]
+
+ # Send request to remote_installer
+ id = send_request(HOST_IP, CLOUDNAME, ISO, BOOTISO)
+
+ # Wait up to WAIT_TIME minutes for completion
+ if wait_for_completion(HOST_IP, id, WAIT_TIME):
+ print('Installation failed after %d minutes.' % (WAIT_TIME))
+ sys.exit(1)
+
+ # Remove the ISOs?
+ if REMOVE_ISO:
+ for iso in (WORKDIR + '/install.iso', ISO, WORKDIR + '/bootcd.iso', BOOTISO):
+ os.unlink(iso)
+
+ # Done!
+ print('Installation complete!')
+ # sys.exit(0) Don't exit as this will cause the task to fail!
+ return 'Complete.'
+
+def read_yaml(input_file):
+ print('Reading '+input_file+' ...')
+ with open(input_file, 'r') as stream:
+ try:
+ return yaml.safe_load(stream)
+ except yaml.YAMLError as exc:
+ print(exc)
+ sys.exit(1)
+
+def send_request(ri_ip, CLOUDNAME, ISO, BOOTISO):
+ URL = 'https://%s:%d/v1/installations' % (ri_ip, API_PORT)
+ print('Sending request to '+URL+' ...')
+ headers = {'Content-type': 'application/json'}
+ content = {
+ 'cloud-name': CLOUDNAME,
+ 'iso': os.path.basename(ISO),
+ 'provisioning-iso': os.path.basename(BOOTISO)
+ }
+ certs = (CERT_DIR+'/clientcert.pem', CERT_DIR+'/clientkey.pem')
+ response = requests.post(URL, json=content, headers=headers, cert=certs, verify=False)
+ print(response)
+ return response.json().get('uuid')
+
+def wait_for_completion(ri_ip, id, ntimes):
+ """
+ Wait (up to ntimes minutes) for the remote_installer to finish.
+ Any status other than 'completed' is considered a failure.
+ """
+ status = 'ongoing'
+ URL = 'https://%s:%d/v1/installations/%s/state' % (ri_ip, API_PORT, id)
+ certs = (CERT_DIR+'/clientcert.pem', CERT_DIR+'/clientkey.pem')
+ while status == 'ongoing' and ntimes > 0:
+ time.sleep(60)
+ response = requests.get(URL, cert=certs, verify=False)
+ j = response.json()
+ t = (
+ datetime.datetime.now().strftime('%x %X'),
+ str(j.get('status')),
+ str(j.get('percentage')),
+ str(j.get('description'))
+ )
+ print('%s: Status is %s (%s) %s' % t)
+ status = j.get('status')
+ ntimes = ntimes - 1
+ return status != 'completed'
+
+def fetchURL(url, dest):
+    print('Fetching '+url+' ...')
+    r = requests.get(url)
+    r.raise_for_status()  # fail loudly rather than writing an error page to the file
+    with open(dest, 'wb') as f1:
+        f1.write(r.content)
+
+def initialize_RI(CLOUDNAME):
+ """ Create the directory structure needed by the remote-installer """
+ dirs = (
+ RI_DIR,
+ RI_DIR+'/certificates',
+ RI_DIR+'/images',
+ RI_DIR+'/installations',
+ RI_DIR+'/user-configs',
+ RI_DIR+'/user-configs/'+CLOUDNAME
+ )
+ for dir in dirs:
+ if not os.path.isdir(dir):
+ print('mkdir '+dir)
+ os.mkdir(dir)
+
+def start_RI(client):
+ """
+ Start the remote-installer container (assumed to already be built somewhere).
+ Before starting, make sure the certificates directory is populated. If not,
+ generate some self-signed certificates.
+ """
+ # If needed, create certificates (11 files) in RI_DIR/certificates
+ if not os.path.exists(CERT_DIR+'/clientcert.pem') or not os.path.exists(CERT_DIR+'/clientkey.pem'):
+ print('Generating some self-signed certificates.')
+ script = WORKDIR + '/gencerts.sh'
+ cmd = 'bash %s %s' % (script, RI_DIR+'/certificates')
+ print('os.system('+cmd+')')
+ os.system(cmd)
+
+ print('Starting %s.' % RI_NAME)
+    env = {
+        # HOST_ADDR: re-read rc_host from INPUT.yaml (HOST_IP is local to start())
+        'API_PORT': API_PORT, 'HOST_ADDR': read_yaml(WORKDIR + '/INPUT.yaml')['rc_host'],
+        'HTTPS_PORT': HTTPS_PORT, 'PW': ADMIN_PASSWD, 'SSH_PORT': 22222
+    }
+ vols = {
+ EXTERNALROOT+RI_DIR: {'bind': '/opt/remoteinstaller', 'mode': 'rw'}
+ }
+ try:
+ c = client.containers.run(
+ image=RI_IMAGE,
+ name=RI_NAME,
+ network_mode=NETWORK,
+ environment=env,
+ volumes=vols,
+ detach=True,
+ remove=True,
+ privileged=True
+ )
+
+ # Wait 5 minutes for it to be running
+ n = 0
+ while c.status != 'running' and n < 10:
+ time.sleep(30)
+ c.reload()
+ n = n + 1
+ if c.status != 'running' and n >= 10:
+            print('Container took too long to start!')
+ sys.exit(1)
+ return c
+
+ except docker.errors.ImageNotFound as ex:
+ # If the specified image does not exist.
+ print(ex)
+ sys.exit(1)
+
+ except docker.errors.APIError as ex:
+ # If the server returns an error.
+ print(ex)
+ sys.exit(1)
+
+    except Exception as ex:
+        # Any other unexpected error.
+        print(ex)
+        sys.exit(1)
--- /dev/null
+#!/bin/bash
+#
+# Script to create self-signed certificates in directory $1.
+#
+
+cd "$1" || exit 1
+
+cat > openssl-ca.cnf << EOF
+HOME = .
+RANDFILE = \$ENV::HOME/.rnd
+
+####################################################################
+[ ca ]
+default_ca = CA_default # The default ca section
+
+[ CA_default ]
+
+dir = /root/ca
+default_days = 1000 # How long to certify for
+default_crl_days = 30 # How long before next CRL
+default_md = sha256 # Use public key default MD
+preserve = no # Keep passed DN ordering
+
+x509_extensions = ca_extensions # The extensions to add to the cert
+
+email_in_dn = no # Don't concat the email in the DN
+copy_extensions = copy # Required to copy SANs from CSR to cert
+
+####################################################################
+[ req ]
+prompt = no
+default_bits = 4096
+default_keyfile = cakey.pem
+distinguished_name = ca_distinguished_name
+x509_extensions = ca_extensions
+string_mask = utf8only
+
+####################################################################
+[ ca_distinguished_name ]
+countryName = FI
+organizationName = Nokia OY
+# commonName = Nokia
+# commonName_default = Test Server
+# emailAddress = test@server.com
+stateOrProvinceName = Uusimaa
+localityName = Espoo
+
+####################################################################
+[ ca_extensions ]
+
+subjectKeyIdentifier = hash
+authorityKeyIdentifier = keyid:always, issuer
+basicConstraints = critical, CA:true
+keyUsage = keyCertSign, cRLSign
+EOF
+
+cat > openssl-server.cnf << EOF
+HOME = .
+RANDFILE = \$ENV::HOME/.rnd
+
+####################################################################
+[ req ]
+prompt = no
+default_bits = 2048
+default_keyfile = serverkey.pem
+distinguished_name = server_distinguished_name
+req_extensions = server_req_extensions
+string_mask = utf8only
+
+####################################################################
+[ server_distinguished_name ]
+countryName = FI
+organizationName = Nokia NET
+commonName = Test Server
+# emailAddress = test@server.com
+stateOrProvinceName = Uusimaa
+localityName = Espoo
+
+####################################################################
+[ server_req_extensions ]
+
+subjectKeyIdentifier = hash
+basicConstraints = CA:FALSE
+keyUsage = digitalSignature, keyEncipherment
+subjectAltName = @alternate_names
+nsComment = "OpenSSL Generated Certificate"
+
+####################################################################
+[ alternate_names ]
+
+DNS.1 = server.com
+EOF
+
+cat > openssl-client.cnf << EOF
+HOME = .
+RANDFILE = \$ENV::HOME/.rnd
+
+####################################################################
+[ req ]
+prompt = no
+default_bits = 2048
+default_keyfile = clientkey.pem
+distinguished_name = client_distinguished_name
+req_extensions = client_req_extensions
+string_mask = utf8only
+
+####################################################################
+[ client_distinguished_name ]
+countryName = DE
+organizationName = Customer X
+commonName = Customer
+emailAddress = test@client.com
+
+####################################################################
+[ client_req_extensions ]
+
+subjectKeyIdentifier = hash
+basicConstraints = CA:FALSE
+keyUsage = digitalSignature, keyEncipherment
+subjectAltName = @alternate_names
+nsComment = "OpenSSL Generated Certificate"
+
+####################################################################
+[ alternate_names ]
+
+DNS.1 = ramuller.zoo.dynamic.nsn-net.net
+DNS.2 = www.client.com
+DNS.3 = mail.client.com
+DNS.4 = ftp.client.com
+EOF
+
+cat > openssl-ca-sign.cnf << EOF
+HOME = .
+RANDFILE = \$ENV::HOME/.rnd
+
+####################################################################
+[ ca ]
+default_ca = CA_default # The default ca section
+
+[ CA_default ]
+
+default_days = 1000 # How long to certify for
+default_crl_days = 30 # How long before next CRL
+default_md = sha256 # Use public key default MD
+preserve = no # Keep passed DN ordering
+
+x509_extensions = ca_extensions # The extensions to add to the cert
+
+email_in_dn = no # Don't concat the email in the DN
+copy_extensions = copy # Required to copy SANs from CSR to cert
+base_dir = .
+certificate = \$base_dir/cacert.pem # The CA certificate
+private_key = \$base_dir/cakey.pem # The CA private key
+new_certs_dir = \$base_dir # Location for new certs after signing
+database = \$base_dir/index.txt # Database index file
+serial = \$base_dir/serial.txt # The current serial number
+
+unique_subject = no # Set to 'no' to allow creation of
+ # several certificates with same subject.
+
+####################################################################
+[ req ]
+prompt = no
+default_bits = 4096
+default_keyfile = cakey.pem
+distinguished_name = ca_distinguished_name
+x509_extensions = ca_extensions
+string_mask = utf8only
+
+####################################################################
+[ ca_distinguished_name ]
+countryName = FI
+organizationName = Nokia OY
+# commonName = Nokia
+# commonName_default = Test Server
+# emailAddress = test@server.com
+stateOrProvinceName = Uusimaa
+localityName = Espoo
+
+####################################################################
+[ ca_extensions ]
+
+subjectKeyIdentifier = hash
+authorityKeyIdentifier = keyid:always, issuer
+basicConstraints = critical, CA:true
+keyUsage = keyCertSign, cRLSign
+
+####################################################################
+[ signing_policy ]
+countryName = optional
+stateOrProvinceName = optional
+localityName = optional
+organizationName = optional
+organizationalUnitName = optional
+commonName = supplied
+emailAddress = optional
+
+####################################################################
+[ signing_req ]
+subjectKeyIdentifier = hash
+authorityKeyIdentifier = keyid,issuer
+basicConstraints = CA:FALSE
+keyUsage = digitalSignature, keyEncipherment
+EOF
+
+openssl req -config openssl-ca.cnf -x509 -newkey rsa:2048 -sha256 -nodes -out cacert.pem -outform PEM
+openssl req -config openssl-server.cnf -newkey rsa:2048 -sha256 -nodes -out servercert.csr -outform PEM
+openssl req -config openssl-client.cnf -newkey rsa:2048 -sha256 -nodes -out clientcert.csr -outform PEM
+echo -n > index.txt
+echo '01' > serial.txt
+echo -n > index-ri.txt
+echo '01' > serial-ri.txt
+echo -e "y\ny\n" | openssl ca -config openssl-ca-sign.cnf -policy signing_policy -extensions signing_req -out servercert.pem -infiles servercert.csr
+echo -e "y\ny\n" | openssl ca -config openssl-ca-sign.cnf -policy signing_policy -extensions signing_req -out clientcert.pem -infiles clientcert.csr