.. ############################################################################
.. Copyright (c) 2019 AT&T, ENEA AB, Nokia and others                         #
..                                                                            #
.. Licensed under the Apache License, Version 2.0 (the "License");            #
.. you may not use this file except in compliance with the License.           #
.. You may obtain a copy of the License at                                    #
..                                                                            #
.. http://www.apache.org/licenses/LICENSE-2.0                                 #
..                                                                            #
.. Unless required by applicable law or agreed to in writing, software        #
.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT  #
.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.           #
.. See the License for the specific language governing permissions and        #
.. limitations under the License.                                             #
.. ############################################################################
The Makefile in this directory is used to build and push all
the validation containers. The default registry is **akraino** on
Docker Hub, but only CI Jenkins slaves are authorized to push
images to that registry. If you want to push to your own test registry,
set the REGISTRY variable as in the commands below.
To build and push the images:

.. code-block:: console

    make all [ REGISTRY=<dockerhub_registry> ]

To just build the containers, use the command:

.. code-block:: console

    make build-all [ REGISTRY=<dockerhub_registry> ]

The k8s container
=================
Building and pushing the container
----------------------------------

To build just the k8s container, use the command:

.. code-block:: console

    make k8s-build [ REGISTRY=<dockerhub_registry> ]

To both build and push the container, use the command:

.. code-block:: console

    make k8s [ REGISTRY=<dockerhub_registry> ]
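
For reference, the image reference these variables produce can be sketched as
below (a sketch only: ``mytestuser`` is a hypothetical Docker Hub account, and
the tag structure is inferred from the ``akraino/validation:k8s-latest``
default used later in this document):

```shell
# Sketch: how REGISTRY and the image tag compose the pushed reference.
# "mytestuser" is a hypothetical Docker Hub account; the other defaults
# match the akraino/validation:k8s-latest image referenced below.
REGISTRY=mytestuser
NAME=validation
TAG_PRE=k8s
TAG_VER=latest
echo "${REGISTRY}/${NAME}:${TAG_PRE}-${TAG_VER}"
# → mytestuser/validation:k8s-latest
```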
The k8s image is meant to be run from a server that has access to the
Kubernetes cluster (a Jenkins slave, a jumpserver, etc.).

Before running the image, copy the ~/.kube folder from your Kubernetes
master node to a local folder (e.g. /home/jenkins/k8s_access).

The container must be started with the Kubernetes access folder mounted.
Optionally, the results folder can be mounted as well; this way the logs are
stored on the local server.
.. code-block:: console

    docker run -ti -v /home/jenkins/k8s_access:/root/.kube/ \
    -v /home/jenkins/k8s_results:/opt/akraino/validation/results/ \
    akraino/validation:k8s-latest
By default, the container runs the k8s conformance test. If you want to
enter the container instead, add */bin/sh* at the end of the command above.

The mariadb container
=====================
Building and pushing the container
----------------------------------

To build just the mariadb container, use the command:

.. code-block:: console

    make mariadb-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make mariadb [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
To make the container easy to create, the deploy.sh script is provided. This script accepts the following input parameters:

- CONTAINER_NAME, the name of the container; the default value is akraino-validation-mariadb
- MARIADB_ROOT_PASSWORD, the desired mariadb root user password; this variable is required
- UI_ADMIN_PASSWORD, the desired Blueprint Validation UI password for the admin user; this variable is required
- UI_AKRAINO_PASSWORD, the desired Blueprint Validation UI password for the akraino user; this variable is required
- REGISTRY, the registry of the mariadb image; the default value is akraino
- NAME, the name of the mariadb image; the default value is validation
- TAG_PRE, the first part of the image version; the default value is mariadb
- TAG_VER, the last part of the image version; the default value is latest
- MARIADB_HOST_PORT, the port on which mariadb is exposed on the host; the default value is 3307
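
KEY=VALUE arguments of this style are commonly parsed as in the sketch below.
This is an assumption about how such a script typically works, not the actual
deploy.sh code; only two of the defaults are shown:

```shell
#!/bin/sh
# Sketch (assumption): typical parsing of KEY=VALUE arguments like the
# deploy.sh parameters listed above; the real script may differ.
CONTAINER_NAME=akraino-validation-mariadb   # default value
MARIADB_HOST_PORT=3307                      # default value

for arg in "$@"; do
    key=${arg%%=*}                          # text before the first '='
    val=${arg#*=}                           # text after the first '='
    eval "$key=\$val"                       # e.g. sets MARIADB_ROOT_PASSWORD
done

echo "container: $CONTAINER_NAME, port: $MARIADB_HOST_PORT"
```

Invoking it as ``./deploy.sh MARIADB_HOST_PORT=3308`` would then override the
default port.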
If you want to deploy the container, run this script with the appropriate parameters.

Example (assuming you have used the default variables for building the image with the make command):

.. code-block:: console

    ./deploy.sh MARIADB_ROOT_PASSWORD=password UI_ADMIN_PASSWORD=admin UI_AKRAINO_PASSWORD=akraino

The ui container
================
Building and pushing the container
----------------------------------

To build just the UI container, use the command:

.. code-block:: console

    make ui-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make ui [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
To make the container easy to create, the deploy.sh script is provided. This script accepts the following input parameters:

- CONTAINER_NAME, the name of the container; the default value is akraino-validation-ui
- DB_CONNECTION_URL, the connection URL for the akraino database of the mariadb instance; this variable is required
- MARIADB_ROOT_PASSWORD, the mariadb root user password; this variable is required
- REGISTRY, the registry of the UI image; the default value is akraino
- NAME, the name of the UI image; the default value is validation
- TAG_PRE, the first part of the image version; the default value is ui
- TAG_VER, the last part of the image version; the default value is latest
- JENKINS_URL, the URL of the Jenkins instance; this variable is required
- JENKINS_USERNAME, the Jenkins user name; this variable is required
- JENKINS_USER_PASSWORD, the Jenkins user password; this variable is required
- JENKINS_JOB_NAME, the name of the Jenkins job capable of executing the blueprint validation tests; this variable is required
- NEXUS_PROXY, the proxy needed for the Nexus server to be reachable; the default value is none
- JENKINS_PROXY, the proxy needed for the Jenkins server to be reachable; the default value is none
Note that, for a functional UI, the following prerequisites are needed:

- A mariadb container in an up and running state
- A Jenkins instance capable of running the blueprint validation tests
- A Nexus repository in which all the test results are stored

See the UI README file for more information.
If you want to deploy the container, run the aforementioned script with the appropriate parameters.

Example (assuming you have used the default variables for building the image with the make command):

.. code-block:: console

    ./deploy.sh DB_CONNECTION_URL=172.17.0.3:3306/akraino MARIADB_ROOT_PASSWORD=password \
    JENKINS_URL=http://192.168.2.2:8080 JENKINS_USERNAME=name \
    JENKINS_USER_PASSWORD=jenkins_pwd JENKINS_JOB_NAME=job1
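
The DB_CONNECTION_URL in the example is simply the mariadb container address,
the internal port 3306, and the akraino database name. A sketch of composing
it (172.17.0.3 is the example container IP from above; in practice you would
look it up, e.g. with ``docker inspect``):

```shell
# Sketch: composing DB_CONNECTION_URL from the mariadb container address.
# 172.17.0.3 is the example container IP used above; to look it up:
#   docker inspect -f '{{.NetworkSettings.IPAddress}}' akraino-validation-mariadb
MARIADB_IP=172.17.0.3
DB_NAME=akraino                   # the akraino database of the mariadb instance
DB_CONNECTION_URL="${MARIADB_IP}:3306/${DB_NAME}"
echo "$DB_CONNECTION_URL"
# → 172.17.0.3:3306/akraino
```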
The kube-conformance container
==============================

Building and pushing the container
----------------------------------

To build just the kube-conformance container, use the command:

.. code-block:: console

    make kube-conformance-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make kube-conformance [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
This is a standalone container able to launch Kubernetes end-to-end tests
for the purposes of conformance testing.

It is a thin wrapper around the ``e2e.test`` binary in the upstream Kubernetes
distribution, which drops results in a predetermined location for use as a
`Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_ plugin.

To learn more about conformance testing and its Sonobuoy integration, read the
`conformance guide <https://github.com/heptio/sonobuoy/blob/master/docs/conformance-testing.md>`_.
.. code-block:: console

    docker run -ti akraino/validation:kube-conformance-v1.11

By default, the container runs the ``run_e2e.sh`` script. If you want to
enter the container instead, add */bin/sh* at the end of the command above.
Normally, this container is not used directly, but instead leveraged via Sonobuoy.
The sonobuoy-plugin-systemd-logs container
==========================================

Building and pushing the container
----------------------------------

To build just the sonobuoy-plugin-systemd-logs container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]

To both build and push the container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
This is a simple standalone container that gathers log information from
systemd by chrooting into the node's filesystem and running ``journalctl``.

This container is used by `Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_
for gathering host logs in a Kubernetes cluster.
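
The technique can be sketched as follows. This is an illustration of the
chroot-plus-journalctl approach, not the actual ``get_systemd_logs.sh`` script;
``/node`` and ``/tmp/results`` are hypothetical mount points:

```shell
# Sketch (assumption): gather host systemd logs by chrooting into the
# node's filesystem, as this plugin does; both paths are hypothetical.
NODE_ROOT=/node                    # host filesystem mounted into the container
RESULTS_DIR=/tmp/results           # where the plugin would drop its output
CMD="chroot $NODE_ROOT journalctl --no-pager"
echo "would run: $CMD > $RESULTS_DIR/systemd_logs"
# → would run: chroot /node journalctl --no-pager > /tmp/results/systemd_logs
```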
.. code-block:: console

    docker run -ti akraino/validation:sonobuoy-plugin-systemd-logs-latest

By default, the container runs the ``get_systemd_logs.sh`` script. If you
want to enter the container instead, add */bin/sh* at the end of the command above.
Normally, this container is not used directly, but instead leveraged via Sonobuoy.