.. ############################################################################
.. Copyright (c) 2019 AT&T, ENEA AB, Nokia and others                        #
..                                                                           #
.. Licensed under the Apache License, Version 2.0 (the "License");           #
.. you may not use this file except in compliance with the License.          #
.. You may obtain a copy of the License at                                   #
..                                                                           #
.. http://www.apache.org/licenses/LICENSE-2.0                                #
..                                                                           #
.. Unless required by applicable law or agreed to in writing, software       #
.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT #
.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.          #
.. See the License for the specific language governing permissions and       #
.. limitations under the License.                                            #
.. ############################################################################
The Makefile in this directory is used to build and push all
the validation containers. The default registry is **akraino** on
Docker Hub, but only CI Jenkins slaves are authorized to push
images to that registry. If you want to push to your own test registry, set
the REGISTRY variable as in the commands below.
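For illustration, the optional variables compose the image name in the usual
``<registry>/<name>:<tag>`` form. This is a sketch of that convention only; the
exact variable names the Makefile uses for the tag are an assumption here:

.. code-block:: console

    # Example values; substitute your own registry and image name.
    REGISTRY=myrepo
    NAME=validation
    TAG=k8s-latest
    echo "${REGISTRY}/${NAME}:${TAG}"
    # prints: myrepo/validation:k8s-latest
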
To build and push the images:

.. code-block:: console

    make all [ REGISTRY=<dockerhub_registry> ]

To just build the containers, use the command:

.. code-block:: console

    make build-all [ REGISTRY=<dockerhub_registry> ]
The k8s container
=================

Building and pushing the container
----------------------------------

To build just the k8s container, use the command:

.. code-block:: console

    make k8s-build [ REGISTRY=<dockerhub_registry> ]
To both build and push the container, use the command:

.. code-block:: console

    make k8s [ REGISTRY=<dockerhub_registry> ]
Using the container
-------------------

The k8s image is meant to be run from a server that has access to the
Kubernetes cluster (a Jenkins slave, jumpserver, etc.).
Before running the image, copy the ~/.kube folder from your Kubernetes
master node to a local folder (e.g. /home/jenkins/k8s_access).
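One way to copy the access folder, assuming SSH access to the master node (the
hostname and destination path below are placeholders, not values from this
repository):

.. code-block:: console

    scp -r root@<k8s_master_ip>:~/.kube /home/jenkins/k8s_access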
The container must be started with the Kubernetes access folder mounted.
Optionally, the results folder can be mounted as well; this way the logs are
stored on the local server.
.. code-block:: console

    docker run -ti -v /home/jenkins/k8s_access:/root/.kube/ \
    -v /home/jenkins/k8s_results:/opt/akraino/validation/results/ \
    akraino/validation:k8s-latest
By default, the container will run the k8s conformance test. If you want to
enter the container instead, add */bin/sh* at the end of the command above.
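For example, to get an interactive shell inside the container with the same
mounts as above:

.. code-block:: console

    docker run -ti -v /home/jenkins/k8s_access:/root/.kube/ \
    -v /home/jenkins/k8s_results:/opt/akraino/validation/results/ \
    akraino/validation:k8s-latest /bin/sh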
The postgresql container
========================

Building and pushing the container
----------------------------------
To build just the postgresql container, use the command:

.. code-block:: console

    make postgresql-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
To both build and push the container, use the command:

.. code-block:: console

    make postgresql [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Deploying the container
-----------------------

If you want to deploy the container, run the corresponding deploy.sh script
with the appropriate parameters:

.. code-block:: console

    ./deploy.sh POSTGRES_PASSWORD=password
The ui container
================

Building and pushing the container
----------------------------------

To build just the ui container, you must first compile the ui project.
Then use the command:

.. code-block:: console

    make ui-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
To both build and push the container, use the command:

.. code-block:: console

    make ui [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Deploying the container
-----------------------

If you want to deploy the container, run the corresponding deploy.sh script
with the appropriate parameters. Note that you must also build and run the
postgresql container for a functional UI.

.. code-block:: console

    ./deploy.sh postgres_db_user_pwd=password \
    jenkins_url=http://192.168.2.2:8080 \
    jenkins_user_name=name \
    jenkins_user_pwd=jenkins_pwd \
    jenkins_job_name=job1 \
    nexus_results_url=https://nexus.akraino.org/content/sites/logs \
    proxy_ip=172.28.40.9 \
    proxy_port=3128
The kube-conformance container
==============================

Building and pushing the container
----------------------------------

To build just the kube-conformance container, use the command:

.. code-block:: console

    make kube-conformance-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
To both build and push the container, use the command:

.. code-block:: console

    make kube-conformance [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

This is a standalone container able to launch Kubernetes end-to-end tests,
for the purposes of conformance testing.

It is a thin wrapper around the ``e2e.test`` binary in the upstream Kubernetes
distribution, which drops results in a predetermined location for use as a
`Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_ plugin.

To learn more about conformance testing and its Sonobuoy integration, read the
`conformance guide <https://github.com/heptio/sonobuoy/blob/master/docs/conformance-testing.md>`_.
.. code-block:: console

    docker run -ti akraino/validation:kube-conformance-v1.11
By default, the container will run the ``run_e2e.sh`` script. If you want to
enter the container, add */bin/sh* at the end of the command above.
Normally, this container is not used directly, but instead leveraged via
Sonobuoy.
The sonobuoy-plugin-systemd-logs container
==========================================

Building and pushing the container
----------------------------------

To build just the sonobuoy-plugin-systemd-logs container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs-build [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
To both build and push the container, use the command:

.. code-block:: console

    make sonobuoy-plugin-systemd-logs [ REGISTRY=<dockerhub_registry> NAME=<image_name> ]
Using the container
-------------------

This is a simple standalone container that gathers log information from
systemd, by chrooting into the node's filesystem and running ``journalctl``.

This container is used by `Heptio Sonobuoy <https://github.com/heptio/sonobuoy>`_
for gathering host logs in a Kubernetes cluster.
.. code-block:: console

    docker run -ti akraino/validation:sonobuoy-plugin-systemd-logs-latest
By default, the container will run the ``get_systemd_logs.sh`` script. If you
want to enter the container, add */bin/sh* at the end of the command above.

Normally, this container is not used directly, but instead leveraged via
Sonobuoy.