From: xinhuili Date: Wed, 14 Aug 2019 06:25:38 +0000 (+0800) Subject: Fix seba charts X-Git-Url: https://gerrit.akraino.org/r/gitweb?a=commitdiff_plain;h=refs%2Fchanges%2F02%2F1402%2F1;p=iec%2Fxconnect.git Fix seba charts This patch is to fix seba charts. Signed-off-by: XINHUI LI Change-Id: I60e5bf5cd7c64207c2b09b1db4791e8bfa7ef3b5 --- diff --git a/src/seba_charts/.gitignore b/src/seba_charts/.gitignore new file mode 100644 index 0000000..610b0d5 --- /dev/null +++ b/src/seba_charts/.gitignore @@ -0,0 +1,2 @@ +#IDE files +.remote-sync.json diff --git a/src/seba_charts/LICENSE b/src/seba_charts/LICENSE new file mode 100644 index 0000000..261eeb9 --- /dev/null +++ b/src/seba_charts/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/src/seba_charts/README.md b/src/seba_charts/README.md new file mode 100644 index 0000000..bc39185 --- /dev/null +++ b/src/seba_charts/README.md @@ -0,0 +1,2 @@ +# seba_charts +Helm charts for bringing up SEBA diff --git a/src/seba_charts/att-workflow/Chart.yaml b/src/seba_charts/att-workflow/Chart.yaml new file mode 100755 index 0000000..6c2b4f4 --- /dev/null +++ b/src/seba_charts/att-workflow/Chart.yaml @@ -0,0 +1,6 @@ +apiVersion: v1 +appVersion: 1.1.5 +description: A Helm chart for XOS's "att-workflow" +icon: https://guide.opencord.org/logos/cord.svg +name: att-workflow +version: 1.0.2 diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/.helmignore b/src/seba_charts/att-workflow/charts/att-workflow-driver/.helmignore new file mode 100755 index 0000000..f0c1319 --- /dev/null +++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/Chart.yaml b/src/seba_charts/att-workflow/charts/att-workflow-driver/Chart.yaml
new file mode 100755
index 0000000..9ade1fb
--- /dev/null
+++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/Chart.yaml
@@ -0,0 +1,5 @@
+appVersion: 1.0.12
+description: A Helm chart for XOS's "att-workflow-driver" service
+icon: https://guide.opencord.org/logos/cord.svg
+name: att-workflow-driver
+version: 1.0.12
diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/_helpers.tpl b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/_helpers.tpl
new file mode 100755
index 0000000..86daf56
--- /dev/null
+++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/_helpers.tpl
@@ -0,0 +1,80 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Copyright 2018-present Open Networking Foundation
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "att-workflow-driver.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name. +*/}} +{{- define "att-workflow-driver.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "att-workflow-driver.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- define "att-workflow-driver.serviceConfig" -}} +name: att-workflow-driver +accessor: + username: {{ .Values.xosAdminUser | quote }} + password: {{ .Values.xosAdminPassword | quote }} + endpoint: {{ .Values.xosCoreService | quote }} +event_bus: + endpoint: {{ .Values.kafkaService | quote }} + kind: kafka +logging: + version: 1 + handlers: + console: + class: logging.StreamHandler + file: + class: logging.handlers.RotatingFileHandler + filename: /var/log/xos.log + maxBytes: 10485760 + backupCount: 5 + kafka: + class: kafkaloghandler.KafkaLogHandler + bootstrap_servers: + - "{{ .Values.kafkaService }}:9092" + topic: xos.log.att-workflow-driver + loggers: + '': + handlers: + - console + - file + - kafka + level: DEBUG +{{- end -}} diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/_tosca.tpl b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/_tosca.tpl new file mode 100755 index 0000000..1e92a0c --- /dev/null +++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/_tosca.tpl @@ -0,0 +1,30 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Copyright 2018-present Open Networking Foundation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file 
except in compliance with the License. +You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} +{{- define "att-workflow-driver.serviceTosca" -}} +tosca_definitions_version: tosca_simple_yaml_1_0 +description: Set up att-workflow-driver service +imports: + - custom_types/attworkflowdriverservice.yaml + +topology_template: + node_templates: + service#att-workflow-driver: + type: tosca.nodes.AttWorkflowDriverService + properties: + name: att-workflow-driver + kind: oss +{{- end -}} diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/configmap.yaml b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/configmap.yaml new file mode 100755 index 0000000..ce09cb5 --- /dev/null +++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/configmap.yaml @@ -0,0 +1,22 @@ +--- +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +apiVersion: v1 +kind: ConfigMap +metadata: + name: att-workflow-driver +data: + serviceConfig: | +{{ include "att-workflow-driver.serviceConfig" . 
| indent 4 }} diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/deployment.yaml b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/deployment.yaml new file mode 100755 index 0000000..2b7b439 --- /dev/null +++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/templates/deployment.yaml @@ -0,0 +1,77 @@ +--- + +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: {{ template "att-workflow-driver.fullname" . }} + labels: + app: {{ template "att-workflow-driver.name" . }} + chart: {{ template "att-workflow-driver.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + replicas: {{ .Values.replicaCount }} + selector: + matchLabels: + app: {{ template "att-workflow-driver.name" . }} + release: {{ .Release.Name }} + template: + metadata: + labels: + app: {{ template "att-workflow-driver.name" . }} + release: {{ .Release.Name }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + spec: + containers: + - name: {{ .Chart.Name }} + image: {{ .Values.global.registry }}{{ .Values.image.repository }}:{{ tpl .Values.image.tag . 
}} + imagePullPolicy: {{ .Values.image.pullPolicy }} + resources: +{{ toYaml .Values.resources | indent 12 }} + volumeMounts: + - name: att-workflow-driver-config + mountPath: /opt/xos/synchronizers/att-workflow-driver/mounted_config.yaml + subPath: mounted_config.yaml + - name: certchain-volume + mountPath: /usr/local/share/ca-certificates/local_certs.crt + subPath: config/ca_cert_chain.pem + volumes: + - name: att-workflow-driver-config + configMap: + name: att-workflow-driver + items: + - key: serviceConfig + path: mounted_config.yaml + - name: certchain-volume + configMap: + name: ca-certificates + items: + - key: chain + path: config/ca_cert_chain.pem + {{- with .Values.nodeSelector }} + nodeSelector: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.affinity }} + affinity: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: +{{ toYaml . | indent 8 }} + {{- end }} diff --git a/src/seba_charts/att-workflow/charts/att-workflow-driver/values.yaml b/src/seba_charts/att-workflow/charts/att-workflow-driver/values.yaml new file mode 100755 index 0000000..9d8273f --- /dev/null +++ b/src/seba_charts/att-workflow/charts/att-workflow-driver/values.yaml @@ -0,0 +1,45 @@ +--- +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Default values for vOLT +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. 
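The deployment above stamps its pod template with a `checksum/config` annotation, the `sha256sum` of the rendered configmap, so any configuration change alters the pod template and triggers a rollout. A minimal Python sketch of that mechanism (standing in for Helm's `sha256sum` pipeline function; the sample config strings are hypothetical):

```python
import hashlib

def config_checksum(rendered_configmap: str) -> str:
    # Equivalent of piping a rendered template through Helm's `sha256sum`.
    return hashlib.sha256(rendered_configmap.encode("utf-8")).hexdigest()

# Hypothetical before/after renderings of the att-workflow-driver configmap.
old = "serviceConfig: |\n  kafkaService: cord-kafka\n"
new = "serviceConfig: |\n  kafkaService: cord-platform-kafka\n"

# A different rendered config yields a different annotation value, so the
# Deployment's pod template changes and Kubernetes rolls the pods.
assert config_checksum(old) != config_checksum(new)
assert len(config_checksum(old)) == 64  # hex-encoded sha256 digest
```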
+
+replicaCount: 1
+
+nameOverride: ""
+fullnameOverride: ""
+
+image:
+  repository: 'akrainoenea/att-workflow-driver-synchronizer'
+  tag: '{{ .Chart.AppVersion }}'
+  pullPolicy: 'Always'
+
+global:
+  registry: ''
+
+xosAdminUser: "admin@opencord.org"
+xosAdminPassword: "letmein"
+xosCoreService: "xos-core:50051"
+
+kafkaService: "cord-kafka"
+
+resources: {}
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
diff --git a/src/seba_charts/att-workflow/requirements.lock b/src/seba_charts/att-workflow/requirements.lock
new file mode 100755
index 0000000..aab6c39
--- /dev/null
+++ b/src/seba_charts/att-workflow/requirements.lock
@@ -0,0 +1,6 @@
+dependencies:
+- name: att-workflow-driver
+  repository: file://../../xos-services/att-workflow-driver
+  version: 1.0.12
+digest: sha256:f1b42952bde477f7eec3072d853fb9d98aa791ccba85def30b0392d8f24a02fe
+generated: 2018-12-17T10:31:05.340526782-07:00
diff --git a/src/seba_charts/att-workflow/requirements.yaml b/src/seba_charts/att-workflow/requirements.yaml
new file mode 100755
index 0000000..f871494
--- /dev/null
+++ b/src/seba_charts/att-workflow/requirements.yaml
@@ -0,0 +1,19 @@
+---
+# Copyright 2018-present Open Networking Foundation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
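Note that values.yaml above stores the image tag as a template string (`'{{ .Chart.AppVersion }}'`), and the deployment renders it through Helm's `tpl` function, so the image tag tracks the chart's `appVersion` (1.0.12 here) instead of being hard-coded. A toy Python stand-in for that one substitution (not Helm's real Go-template engine, which handles arbitrary expressions):

```python
# Simplified stand-in for Helm's `tpl`: render a value that is itself a
# template string against the chart context. Only the single placeholder
# used by these charts is handled here.
def tpl(value: str, context: dict) -> str:
    return value.replace("{{ .Chart.AppVersion }}", context["Chart"]["AppVersion"])

context = {"Chart": {"AppVersion": "1.0.12"}}
tag = tpl("{{ .Chart.AppVersion }}", context)
image = f"akrainoenea/att-workflow-driver-synchronizer:{tag}"
assert image.endswith(":1.0.12")
```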
+ +dependencies: +- name: att-workflow-driver + version: 1.0.12 + repository: file://../../xos-services/att-workflow-driver diff --git a/src/seba_charts/att-workflow/templates/_helpers.tpl b/src/seba_charts/att-workflow/templates/_helpers.tpl new file mode 100755 index 0000000..6f83543 --- /dev/null +++ b/src/seba_charts/att-workflow/templates/_helpers.tpl @@ -0,0 +1,47 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Copyright 2018-present Open Networking Foundation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/}} +{{/* +Expand the name of the chart. +*/}} +{{- define "att-workflow.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "att-workflow.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "att-workflow.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} diff --git a/src/seba_charts/att-workflow/templates/_tosca.tpl b/src/seba_charts/att-workflow/templates/_tosca.tpl new file mode 100755 index 0000000..b919834 --- /dev/null +++ b/src/seba_charts/att-workflow/templates/_tosca.tpl @@ -0,0 +1,53 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Copyright 2018-present Open Networking Foundation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/}} + +{{- define "att-workflow.serviceGraphTosca" -}} +tosca_definitions_version: tosca_simple_yaml_1_0 +imports: + - custom_types/attworkflowdriverservice.yaml + - custom_types/voltservice.yaml + - custom_types/servicedependency.yaml +description: att-workflow-driver service graph +topology_template: + node_templates: + +# These services must be defined before loading the graph + + service#volt: + type: tosca.nodes.VOLTService + properties: + name: volt + must-exist: true + + service#att-workflow-driver: + type: tosca.nodes.AttWorkflowDriverService + properties: + name: att-workflow-driver + must-exist: true + + service_dependency#workflow_volt: + type: tosca.nodes.ServiceDependency + properties: + connect_method: none + requirements: + - subscriber_service: + node: service#att-workflow-driver + relationship: tosca.relationships.BelongsToOne + - provider_service: + node: service#volt + relationship: tosca.relationships.BelongsToOne +{{- end -}} diff --git a/src/seba_charts/att-workflow/templates/tosca-configmap.yaml b/src/seba_charts/att-workflow/templates/tosca-configmap.yaml new file mode 100755 index 0000000..9fc6add --- /dev/null +++ b/src/seba_charts/att-workflow/templates/tosca-configmap.yaml @@ -0,0 +1,25 @@ +--- + +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +apiVersion: v1 +kind: ConfigMap +metadata: + name: att-workflow-tosca +data: + 010-fixtures.yaml: | +{{ include "att-workflow-driver.serviceTosca" (index .Values "att-workflow-driver") | indent 4 }} + 300-service-graph.yaml: | +{{ include "att-workflow.serviceGraphTosca" . | indent 4 }} diff --git a/src/seba_charts/att-workflow/templates/tosca-job.yaml b/src/seba_charts/att-workflow/templates/tosca-job.yaml new file mode 100755 index 0000000..a51aaf7 --- /dev/null +++ b/src/seba_charts/att-workflow/templates/tosca-job.yaml @@ -0,0 +1,55 @@ +--- + +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "att-workflow.fullname" . }}-tosca-loader + labels: + app: {{ template "att-workflow.name" . }} + chart: {{ template "att-workflow.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + backoffLimit: 12 + template: + metadata: + labels: + app: {{ template "att-workflow.name" . }} + release: {{ .Release.Name }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/tosca-configmap.yaml") . | sha256sum }} + spec: + restartPolicy: OnFailure + containers: + - name: {{ .Chart.Name }}-tosca-loader + image: {{ .Values.global.registry }}{{ .Values.images.tosca_loader.repository }}:{{ tpl .Values.images.tosca_loader.tag . 
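The tosca-configmap above nests each rendered TOSCA document under a YAML block scalar (`010-fixtures.yaml: |`) by piping the `include` output through `indent 4`. Sprig's `indent` simply prefixes every line with N spaces so the multi-line payload stays inside the data key; a small Python sketch of that behavior:

```python
def indent(width: int, text: str) -> str:
    """Mimic Helm/Sprig `indent`: prefix every line with `width` spaces."""
    prefix = " " * width
    return "\n".join(prefix + line for line in text.splitlines())

tosca = "tosca_definitions_version: tosca_simple_yaml_1_0\ndescription: fixtures"
nested = indent(4, tosca)
# Every line now sits four columns deeper, valid under a `|` block scalar.
assert nested == ("    tosca_definitions_version: tosca_simple_yaml_1_0\n"
                  "    description: fixtures")
```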
}} + imagePullPolicy: {{ .Values.images.tosca_loader.pullPolicy }} + env: + - name: XOS_USER + value: {{ .Values.xosAdminUser }} + - name: XOS_PASSWD + valueFrom: + secretKeyRef: + name: xos-admin-passwd-secret + key: password + volumeMounts: + - name: att-workflow-tosca + mountPath: /opt/tosca + volumes: + - name: att-workflow-tosca + configMap: + name: att-workflow-tosca diff --git a/src/seba_charts/att-workflow/values.yaml b/src/seba_charts/att-workflow/values.yaml new file mode 100755 index 0000000..55bee5e --- /dev/null +++ b/src/seba_charts/att-workflow/values.yaml @@ -0,0 +1,37 @@ +--- +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# Default values for the att-workflow profile. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. 
+ + +nameOverride: "" +fullnameOverride: "" + +images: + tosca_loader: + repository: 'cachengo/tosca-loader' + tag: '{{ .Chart.AppVersion }}' + pullPolicy: 'Always' + +global: + registry: "" + +att-workflow-driver: + kafkaService: "cord-platform-kafka" + +xosAdminUser: "admin@opencord.org" +xosAdminPassword: "letmein" diff --git a/src/seba_charts/bbsim/Chart.yaml b/src/seba_charts/bbsim/Chart.yaml new file mode 100644 index 0000000..1361845 --- /dev/null +++ b/src/seba_charts/bbsim/Chart.yaml @@ -0,0 +1,5 @@ +appVersion: 1.0.0 +description: Broadband Simulator +icon: https://guide.opencord.org/logos/cord.svg +name: bbsim +version: 1.0.0 diff --git a/src/seba_charts/bbsim/templates/NOTES.txt b/src/seba_charts/bbsim/templates/NOTES.txt new file mode 100644 index 0000000..1359652 --- /dev/null +++ b/src/seba_charts/bbsim/templates/NOTES.txt @@ -0,0 +1,5 @@ +BBSim deployed with release name: {{ .Release.Name }} + +OLT ID: {{ .Values.olt_id }}, on TCP port: {{ .Values.olt_tcp_port }} +# of PON Ports: {{ .Values.pon_ports }} +ONUs per PON Port: {{ .Values.onus_per_pon_port }} (total: {{ mul .Values.pon_ports .Values.onus_per_pon_port}}) diff --git a/src/seba_charts/bbsim/templates/_helpers.tpl b/src/seba_charts/bbsim/templates/_helpers.tpl new file mode 100644 index 0000000..af6ac67 --- /dev/null +++ b/src/seba_charts/bbsim/templates/_helpers.tpl @@ -0,0 +1,48 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Copyright 2018-present Open Networking Foundation + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + +http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+See the License for the specific language governing permissions and +limitations under the License. +*/}} +{{/* +Expand the name of the chart. +*/}} +{{- define "bbsim.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "bbsim.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "bbsim.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + diff --git a/src/seba_charts/bbsim/templates/deployment.yaml b/src/seba_charts/bbsim/templates/deployment.yaml new file mode 100644 index 0000000..8156ed2 --- /dev/null +++ b/src/seba_charts/bbsim/templates/deployment.yaml @@ -0,0 +1,84 @@ +--- +# Copyright 2017-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: {{ template "bbsim.fullname" . }}
+  namespace: {{ .Values.namespace }}
+  labels:
+    app: {{ template "bbsim.name" . }}
+    chart: {{ template "bbsim.chart" . }}
+    release: {{ .Release.Name }}
+    heritage: {{ .Release.Service }}
+spec:
+  replicas: {{ .Values.replicaCount }}
+  selector:
+    matchLabels:
+      app: {{ template "bbsim.name" . }}
+      release: {{ .Release.Name }}
+  template:
+    metadata:
+      labels:
+        app: {{ template "bbsim.name" . }}
+        release: {{ .Release.Name }}
+    spec:
+      serviceAccount: {{ .Values.serviceAccountName }}
+      serviceAccountName: {{ .Values.serviceAccountName }}
+      containers:
+        - name: {{ .Chart.Name }}
+          image: {{ .Values.global.registry }}{{ .Values.images.bbsim.repository }}:{{ tpl .Values.images.bbsim.tag . }}
+          imagePullPolicy: {{ .Values.images.bbsim.pullPolicy }}
+          securityContext:
+            privileged: true
+          command: [
+            "/app/bbsim",
+            "-n", "{{ .Values.onus_per_pon_port }}",
+            "-m", "{{ .Values.emulation_mode }}",
+            "-H", ":{{ .Values.olt_tcp_port }}",
+            "-id", "{{ .Values.olt_id }}",
+            "-i", "{{ .Values.pon_ports }}",
+            "-aw", "{{ .Values.wpa_wait }}",
+            "-dw", "{{ .Values.dhcp_wait }}",
+            "-k", "{{ .Values.kafka_broker }}",
+          ]
+          ports:
+            - name: "bbsim-olt-id-{{ .Values.olt_id }}"
+              containerPort: {{ .Values.olt_tcp_port }}
+              protocol: TCP
+          env:
+            - name: POD_IP
+              valueFrom:
+                fieldRef:
+                  fieldPath: status.podIP
+            - name: NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+          resources:
+{{ toYaml .Values.resources | indent 12 }}
+      {{- with .Values.nodeSelector }}
+      nodeSelector:
+{{ toYaml . | indent 8 }}
+      {{- end }}
+      {{- with .Values.affinity }}
+      affinity:
+{{ toYaml . | indent 8 }}
+      {{- end }}
+      {{- with .Values.tolerations }}
+      tolerations:
+{{ toYaml .
| indent 8 }} + {{- end }} diff --git a/src/seba_charts/bbsim/templates/service.yaml b/src/seba_charts/bbsim/templates/service.yaml new file mode 100644 index 0000000..d63c129 --- /dev/null +++ b/src/seba_charts/bbsim/templates/service.yaml @@ -0,0 +1,34 @@ +--- +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +apiVersion: v1 +kind: Service +metadata: + name: {{ template "bbsim.fullname" . }} + namespace: {{ .Values.namespace }} + labels: + app: {{ template "bbsim.name" . }} + chart: {{ template "bbsim.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + type: ClusterIP + ports: + - name: "bbsim-olt-id-{{ .Values.olt_id }}" + port: {{ .Values.olt_tcp_port }} + protocol: TCP + selector: + app: {{ template "bbsim.name" . }} + release: {{ .Release.Name }} diff --git a/src/seba_charts/bbsim/values.yaml b/src/seba_charts/bbsim/values.yaml new file mode 100644 index 0000000..513d14d --- /dev/null +++ b/src/seba_charts/bbsim/values.yaml @@ -0,0 +1,66 @@ +# Copyright 2018-present Open Networking Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# bbsim values + +# CLI switches passed to bbsim + +# -id option +olt_id: 0 + +# -H option, port number portion +olt_tcp_port: 50060 + +# -i option +pon_ports: 1 + +# -n option +onus_per_pon_port: 16 + +# -m option +emulation_mode: 'both' + +# -a option +wpa_wait: 60 + +# -d option +dhcp_wait: 120 + +# -k option +kafka_broker: '' + +images: + bbsim: + repository: 'akrainoenea/voltha-bbsim' + tag: '{{ .Chart.AppVersion }}' + pullPolicy: 'Always' + +global: + registry: '' + +namespace: voltha +serviceAccountName: default + +nameOverride: "" +fullnameOverride: "" + +replicaCount: 1 + +resources: {} + +nodeSelector: {} + +tolerations: [] + +affinity: {} diff --git a/src/seba_charts/cord-platform/Chart.yaml b/src/seba_charts/cord-platform/Chart.yaml new file mode 100644 index 0000000..f08d7b1 --- /dev/null +++ b/src/seba_charts/cord-platform/Chart.yaml @@ -0,0 +1,5 @@ +appVersion: 6.1.0 +description: A Helm chart to install the CORD platform +icon: https://guide.opencord.org/logos/cord.svg +name: cord-platform +version: 6.1.0 diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/.helmignore b/src/seba_charts/cord-platform/charts/etcd-operator/.helmignore new file mode 100644 index 0000000..f0c1319 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/Chart.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/Chart.yaml
new file mode 100644
index 0000000..36f8924
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/etcd-operator/Chart.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+appVersion: 0.9.2
+description: CoreOS etcd-operator Helm chart for Kubernetes
+home: https://github.com/coreos/etcd-operator
+icon: https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-horizontal-color.png
+maintainers:
+- email: chance.zibolski@coreos.com
+  name: chancez
+- email: lachlan@deis.com
+  name: lachie83
+- email: jaescobar.cell@gmail.com
+  name: alejandroEsc
+name: etcd-operator
+sources:
+- https://github.com/coreos/etcd-operator
+version: 0.8.0
diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/OWNERS b/src/seba_charts/cord-platform/charts/etcd-operator/OWNERS
new file mode 100644
index 0000000..e7cf870
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/etcd-operator/OWNERS
@@ -0,0 +1,8 @@
+approvers:
+- lachie83
+- chancez
+- alejandroEsc
+reviewers:
+- lachie83
+- chancez
+- alejandroEsc
diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/README.md b/src/seba_charts/cord-platform/charts/etcd-operator/README.md
new file mode 100644
index 0000000..746d73d
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/etcd-operator/README.md
@@ -0,0 +1,158 @@
+# CoreOS etcd-operator
+
+[etcd-operator](https://coreos.com/blog/introducing-the-etcd-operator.html) simplifies etcd cluster
+configuration and management.
+
+__DISCLAIMER:__ While this chart has been well-tested, the etcd-operator is still in beta.
+Current project status is available [here](https://github.com/coreos/etcd-operator).
+
+## Introduction
+
+This chart bootstraps an etcd-operator and allows the deployment of etcd-cluster(s).
+
+## Official Documentation
+
+Official project documentation can be found [here](https://github.com/coreos/etcd-operator).
+
+## Prerequisites
+
+- Kubernetes 1.4+ with Beta APIs enabled
+- __Suggested:__ PV provisioner support in the underlying infrastructure to support backups
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+$ helm install stable/etcd-operator --name my-release
+```
+
+__Note__: If you set `cluster.enabled` on install, it will have no effect.
+Before you create an etcd cluster, the TPR must be installed by the operator, so this option is ignored during helm installs, but can be used in upgrades.
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components EXCEPT the persistent volume.
+
+## Updating
+
+Updating the TPR resource will not result in the cluster being updated until `kubectl apply` for
+TPRs is fixed; see [kubernetes/issues/29542](https://github.com/kubernetes/kubernetes/issues/29542).
+Workaround options are documented [here](https://github.com/coreos/etcd-operator#resize-an-etcd-cluster).
+
+## Configuration
+
+The following table lists the configurable parameters of the etcd-operator chart and their default values.
+
+| Parameter | Description | Default |
+| --------- | ----------- | ------- |
+| `rbac.create` | Install required RBAC service account, roles, and rolebindings | `true` |
+| `rbac.apiVersion` | RBAC API version (`v1alpha1` or `v1beta1`) | `v1beta1` |
+| `rbac.etcdOperatorServiceAccountName` | Name of the service account resource when RBAC is enabled | `etcd-operator-sa` |
+| `rbac.backupOperatorServiceAccountName` | Name of the service account resource when RBAC is enabled | `etcd-backup-operator-sa` |
+| `rbac.restoreOperatorServiceAccountName` | Name of the service account resource when RBAC is enabled | `etcd-restore-operator-sa` |
+| `deployments.etcdOperator` | Deploy the etcd cluster operator | `true` |
+| `deployments.backupOperator` | Deploy the etcd backup operator | `true` |
+| `deployments.restoreOperator` | Deploy the etcd restore operator | `true` |
+| `customResources.createEtcdClusterCRD` | Create a custom resource: EtcdCluster | `false` |
+| `customResources.createBackupCRD` | Create a custom resource: EtcdBackup | `false` |
+| `customResources.createRestoreCRD` | Create a custom resource: EtcdRestore | `false` |
+| `etcdOperator.name` | Etcd Operator name | `etcd-operator` |
+| `etcdOperator.replicaCount` | Number of operator replicas to create (only 1 is supported) | `1` |
+| `etcdOperator.image.repository` | etcd-operator container image | `quay.io/coreos/etcd-operator` |
+| `etcdOperator.image.tag` | etcd-operator container image tag | `v0.7.0` |
+| `etcdOperator.image.pullpolicy` | etcd-operator container image pull policy | `Always` |
+| `etcdOperator.resources.cpu` | CPU limit per etcd-operator pod | `100m` |
+| `etcdOperator.resources.memory` | Memory limit per etcd-operator pod | `128Mi` |
+| `etcdOperator.nodeSelector` | Node labels for etcd operator pod assignment | `{}` |
+| `etcdOperator.commandArgs` | Additional command arguments | `{}` |
+| `backupOperator.name` | Backup operator name | `etcd-backup-operator` |
+| `backupOperator.replicaCount` | Number of operator replicas to create (only 1 is supported) | `1` |
+| `backupOperator.image.repository` | Operator container image | `quay.io/coreos/etcd-operator` |
+| `backupOperator.image.tag` | Operator container image tag | `v0.7.0` |
+| `backupOperator.image.pullpolicy` | Operator container image pull policy | `Always` |
+| `backupOperator.resources.cpu` | CPU limit per etcd-operator pod | `100m` |
+| `backupOperator.resources.memory` | Memory limit per etcd-operator pod | `128Mi` |
+| `backupOperator.spec.storageType` | Storage to use for backup file, currently only S3 supported | `S3` |
+| `backupOperator.spec.s3.s3Bucket` | Bucket in S3 to store backup file | |
+| `backupOperator.spec.s3.awsSecret` | Name of Kubernetes secret containing AWS credentials | |
+| `backupOperator.nodeSelector` | Node labels for etcd operator pod assignment | `{}` |
+| `backupOperator.commandArgs` | Additional command arguments | `{}` |
+| `restoreOperator.name` | Restore operator name | `etcd-backup-operator` |
+| `restoreOperator.replicaCount` | Number of operator replicas to create (only 1 is supported) | `1` |
+| `restoreOperator.image.repository` | Operator container image | `quay.io/coreos/etcd-operator` |
+| `restoreOperator.image.tag` | Operator container image tag | `v0.7.0` |
+| `restoreOperator.image.pullpolicy` | Operator container image pull policy | `Always` |
+| `restoreOperator.resources.cpu` | CPU limit per etcd-operator pod | `100m` |
+| `restoreOperator.resources.memory` | Memory limit per etcd-operator pod | `128Mi` |
+| `restoreOperator.spec.s3.path` | Path in S3 bucket containing the backup file | |
+| `restoreOperator.spec.s3.awsSecret` | Name of Kubernetes secret containing AWS credentials | |
+| `restoreOperator.nodeSelector` | Node labels for etcd operator pod assignment | `{}` |
+| `restoreOperator.commandArgs` | Additional command arguments | `{}` |
+| `etcdCluster.name` | etcd cluster name | `etcd-cluster` |
+| `etcdCluster.size` | etcd cluster size | `3` |
+| `etcdCluster.version` | etcd cluster version | `3.2.10` |
+| `etcdCluster.image.repository` | etcd container image | `quay.io/coreos/etcd-operator` |
+| `etcdCluster.image.tag` | etcd container image tag | `v3.2.10` |
+| `etcdCluster.image.pullPolicy` | etcd container image pull policy | `Always` |
+| `etcdCluster.enableTLS` | Enable use of TLS | `false` |
+| `etcdCluster.tls.static.member.peerSecret` | Kubernetes secret containing TLS peer certs | `etcd-peer-tls` |
+| `etcdCluster.tls.static.member.serverSecret` | Kubernetes secret containing TLS server certs | `etcd-server-tls` |
+| `etcdCluster.tls.static.operatorSecret` | Kubernetes secret containing TLS client certs | `etcd-client-tls` |
+| `etcdCluster.pod.antiAffinity` | Whether etcd cluster pods should have an antiAffinity | `false` |
+| `etcdCluster.pod.resources.limits.cpu` | CPU limit per etcd cluster pod | `100m` |
+| `etcdCluster.pod.resources.limits.memory` | Memory limit per etcd cluster pod | `128Mi` |
+| `etcdCluster.pod.resources.requests.cpu` | CPU request per etcd cluster pod | `100m` |
+| `etcdCluster.pod.resources.requests.memory` | Memory request per etcd cluster pod | `128Mi` |
+| `etcdCluster.pod.nodeSelector` | Node labels for etcd cluster pod assignment | `{}` |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
+
+```bash
+$ helm install --name my-release --set image.tag=v0.2.1 stable/etcd-operator
+```
+
+Alternatively, a YAML file that specifies the values for the parameters can be provided while
+installing the chart. For example:
+
+```bash
+$ helm install --name my-release --values values.yaml stable/etcd-operator
+```
+
+## RBAC
+
+By default, the chart will install the recommended RBAC roles and rolebindings.
+
+To determine if your cluster supports this, run the following:
+
+```console
+$ kubectl api-versions | grep rbac
+```
+
+You also need to have the following parameter on the API server. See the following document for how to enable [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/).
+
+```
+--authorization-mode=RBAC
+```
+
+If the output contains "beta", or both "alpha" and "beta", you may install RBAC by default; if not, you may turn RBAC off as described below.
+
+### RBAC role/rolebinding creation
+
+RBAC resources are enabled by default. To disable RBAC, do the following:
+
+```console
+$ helm install --name my-release stable/etcd-operator --set rbac.create=false
+```
+
+### Changing RBAC manifest apiVersion
+
+By default, the RBAC resources are generated with the "v1beta1" apiVersion. To use "v1alpha1", do the following:
+
+```console
+$ helm install --name my-release stable/etcd-operator --set rbac.install=true,rbac.apiVersion=v1alpha1
+```
diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/NOTES.txt b/src/seba_charts/cord-platform/charts/etcd-operator/templates/NOTES.txt
new file mode 100644
index 0000000..c33ee01
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/NOTES.txt
@@ -0,0 +1,33 @@
+{{- $clusterEnabled := (and (not .Release.IsInstall) .Values.customResources.createEtcdClusterCRD) -}}
+{{- if and .Release.IsInstall .Values.customResources.createEtcdClusterCRD -}}
+Not enabling cluster, the ThirdPartyResource must be installed before you can create a Cluster. Continuing with the rest of the normal deployment.
+
+{{ end -}}
+
+{{- if $clusterEnabled -}}
+1. Watch etcd cluster start
+   kubectl get pods -l etcd_cluster={{ .Values.etcdCluster.name }} --namespace {{ .Release.Namespace }} -w
+
+2.
Confirm etcd cluster is healthy + $ kubectl run --rm -i --tty --env="ETCDCTL_API=3" --env="ETCDCTL_ENDPOINTS=http://{{ .Values.etcdCluster.name }}-client:2379" --namespace {{ .Release.Namespace }} etcd-test --image quay.io/coreos/etcd --restart=Never -- /bin/sh -c 'watch -n1 "etcdctl member list"' + +3. Interact with the cluster! + $ kubectl run --rm -i --tty --env ETCDCTL_API=3 --namespace {{ .Release.Namespace }} etcd-test --image quay.io/coreos/etcd --restart=Never -- /bin/sh + / # etcdctl --endpoints http://{{ .Values.etcdCluster.name }}-client:2379 put foo bar + / # etcdctl --endpoints http://{{ .Values.etcdCluster.name }}-client:2379 get foo + OK + (ctrl-D to exit) + +4. Optional + Check the etcd-operator logs + export POD=$(kubectl get pods -l app={{ template "etcd-operator.fullname" . }} --namespace {{ .Release.Namespace }} --output name) + kubectl logs $POD --namespace={{ .Release.Namespace }} + +{{- else -}} +1. etcd-operator deployed. + If you would like to deploy an etcd-cluster set cluster.enabled to true in values.yaml + Check the etcd-operator logs + export POD=$(kubectl get pods -l app={{ template "etcd-operator.fullname" . }} --namespace {{ .Release.Namespace }} --output name) + kubectl logs $POD --namespace={{ .Release.Namespace }} + +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/_helpers.tpl b/src/seba_charts/cord-platform/charts/etcd-operator/templates/_helpers.tpl new file mode 100644 index 0000000..03f9a26 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/_helpers.tpl @@ -0,0 +1,75 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "etcd-operator.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+*/}} +{{- define "etcd-operator.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s-%s" .Release.Name $name .Values.etcdOperator.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- define "etcd-backup-operator.name" -}} +{{- default .Chart.Name .Values.backupOperator.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "etcd-backup-operator.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s-%s" .Release.Name $name .Values.backupOperator.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- define "etcd-restore-operator.name" -}} +{{- default .Chart.Name .Values.restoreOperator.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "etcd-restore-operator.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s-%s" .Release.Name $name .Values.restoreOperator.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create the name of the etcd-operator service account to use +*/}} +{{- define "etcd-operator.serviceAccountName" -}} +{{- if .Values.serviceAccount.etcdOperatorServiceAccount.create -}} + {{ default (include "etcd-operator.fullname" .) .Values.serviceAccount.etcdOperatorServiceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.etcdOperatorServiceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the backup-operator service account to use +*/}} +{{- define "etcd-backup-operator.serviceAccountName" -}} +{{- if .Values.serviceAccount.backupOperatorServiceAccount.create -}} + {{ default (include "etcd-backup-operator.fullname" .) 
.Values.serviceAccount.backupOperatorServiceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.backupOperatorServiceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the restore-operator service account to use +*/}} +{{- define "etcd-restore-operator.serviceAccountName" -}} +{{- if .Values.serviceAccount.restoreOperatorServiceAccount.create -}} + {{ default (include "etcd-restore-operator.fullname" .) .Values.serviceAccount.restoreOperatorServiceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.restoreOperatorServiceAccount.name }} +{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-etcd-crd.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-etcd-crd.yaml new file mode 100644 index 0000000..5528f76 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-etcd-crd.yaml @@ -0,0 +1,18 @@ +{{- if .Values.customResources.createBackupCRD }} +--- +apiVersion: "etcd.database.coreos.com/v1beta2" +kind: "EtcdBackup" +metadata: + name: {{ template "etcd-backup-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-backup-operator.name" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + annotations: + "helm.sh/hook": "post-install" + "helm.sh/hook-delete-policy": "before-hook-creation" +spec: + clusterName: {{ .Values.etcdCluster.name }} +{{ toYaml .Values.backupOperator.spec | indent 2 }} +{{- end}} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-clusterrole-binding.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-clusterrole-binding.yaml new file mode 100644 index 0000000..526b245 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-clusterrole-binding.yaml @@ -0,0 +1,20 @@ +{{- if and .Values.rbac.create .Values.deployments.backupOperator }} +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/{{ .Values.rbac.apiVersion }} +metadata: + name: {{ template "etcd-backup-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +subjects: +- kind: ServiceAccount + name: {{ template "etcd-backup-operator.serviceAccountName" . }} + namespace: {{ .Release.Namespace }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "etcd-operator.fullname" . }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-deployment.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-deployment.yaml new file mode 100644 index 0000000..d5c421c --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-deployment.yaml @@ -0,0 +1,59 @@ +{{- if .Values.deployments.backupOperator }} +--- +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: {{ template "etcd-backup-operator.fullname" . 
}} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-backup-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +spec: + selector: + matchLabels: + app: {{ template "etcd-backup-operator.fullname" . }} + release: {{ .Release.Name }} + replicas: {{ .Values.backupOperator.replicaCount }} + template: + metadata: + name: {{ template "etcd-backup-operator.fullname" . }} + labels: + app: {{ template "etcd-backup-operator.fullname" . }} + release: {{ .Release.Name }} + spec: + serviceAccountName: {{ template "etcd-backup-operator.serviceAccountName" . }} + containers: + - name: {{ .Values.backupOperator.name }} + image: "{{ .Values.backupOperator.image.repository }}:{{ .Values.backupOperator.image.tag }}" + imagePullPolicy: {{ .Values.backupOperator.image.pullPolicy }} + command: + - etcd-backup-operator +{{- range $key, $value := .Values.backupOperator.commandArgs }} + - "--{{ $key }}={{ $value }}" +{{- end }} + env: + - name: MY_POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: MY_POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + resources: + limits: + cpu: {{ .Values.backupOperator.resources.cpu }} + memory: {{ .Values.backupOperator.resources.memory }} + requests: + cpu: {{ .Values.backupOperator.resources.cpu }} + memory: {{ .Values.backupOperator.resources.memory }} + {{- if .Values.backupOperator.nodeSelector }} + nodeSelector: +{{ toYaml .Values.backupOperator.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.backupOperator.tolerations }} + tolerations: +{{ toYaml .Values.backupOperator.tolerations | indent 8 }} + {{- end }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-service-account.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-service-account.yaml new file mode 100644 index 0000000..06aec3d --- /dev/null +++ 
b/src/seba_charts/cord-platform/charts/etcd-operator/templates/backup-operator-service-account.yaml @@ -0,0 +1,12 @@ +{{- if and .Values.serviceAccount.backupOperatorServiceAccount.create .Values.deployments.backupOperator }} +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "etcd-backup-operator.serviceAccountName" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-backup-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- end }} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/etcd-cluster-crd.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/etcd-cluster-crd.yaml new file mode 100644 index 0000000..0d385d8 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/etcd-cluster-crd.yaml @@ -0,0 +1,25 @@ +{{- if .Values.customResources.createEtcdClusterCRD }} +--- +apiVersion: "etcd.database.coreos.com/v1beta2" +kind: "EtcdCluster" +metadata: + name: {{ .Values.etcdCluster.name }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-operator.name" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + annotations: + "helm.sh/hook": "post-install" + "helm.sh/hook-delete-policy": "before-hook-creation" +spec: + size: {{ .Values.etcdCluster.size }} + version: "{{ .Values.etcdCluster.version }}" + pod: +{{ toYaml .Values.etcdCluster.pod | indent 4 }} + {{- if .Values.etcdCluster.enableTLS }} + TLS: +{{ toYaml .Values.etcdCluster.tls | indent 4 }} + {{- end }} +{{- end }} + diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-cluster-role.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-cluster-role.yaml new file mode 100644 index 0000000..6208597 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-cluster-role.yaml @@ -0,0 +1,49 @@ +{{- if .Values.rbac.create }} +--- +apiVersion: rbac.authorization.k8s.io/{{ .Values.rbac.apiVersion }} +kind: ClusterRole +metadata: + name: {{ template "etcd-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-operator.name" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +rules: +- apiGroups: + - etcd.database.coreos.com + resources: + - etcdclusters + - etcdbackups + - etcdrestores + verbs: + - "*" +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: + - "*" +- apiGroups: + - "" + resources: + - pods + - services + - endpoints + - persistentvolumeclaims + - events + verbs: + - "*" +- apiGroups: + - apps + resources: + - deployments + verbs: + - "*" +- apiGroups: + - "" + resources: + - secrets + verbs: + - get +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-clusterrole-binding.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-clusterrole-binding.yaml new file mode 100644 index 0000000..09594cc --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-clusterrole-binding.yaml @@ -0,0 +1,20 @@ +{{- if and .Values.rbac.create .Values.deployments.etcdOperator }} +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/{{ required "A valid .Values.rbac.apiVersion entry required!" .Values.rbac.apiVersion }} +metadata: + name: {{ template "etcd-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +subjects: +- kind: ServiceAccount + name: {{ template "etcd-operator.serviceAccountName" . }} + namespace: {{ .Release.Namespace }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "etcd-operator.fullname" . 
}} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-deployment.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-deployment.yaml new file mode 100644 index 0000000..bb6b1a7 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-deployment.yaml @@ -0,0 +1,81 @@ +{{- if .Values.deployments.etcdOperator }} +--- +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: {{ template "etcd-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +spec: + selector: + matchLabels: + app: {{ template "etcd-operator.fullname" . }} + release: {{ .Release.Name }} + replicas: {{ .Values.etcdOperator.replicaCount }} + template: + metadata: + name: {{ template "etcd-operator.fullname" . }} + labels: + app: {{ template "etcd-operator.fullname" . }} + release: {{ .Release.Name }} + spec: + serviceAccountName: {{ template "etcd-operator.serviceAccountName" . }} + containers: + - name: {{ template "etcd-operator.fullname" . 
}} + image: "{{ .Values.etcdOperator.image.repository }}:{{ .Values.etcdOperator.image.tag }}" + imagePullPolicy: {{ .Values.etcdOperator.image.pullPolicy }} + command: + - etcd-operator +{{- range $key, $value := .Values.etcdOperator.commandArgs }} + - "--{{ $key }}={{ $value }}" +{{- end }} + env: + - name: MY_POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: MY_POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + resources: + limits: + cpu: {{ .Values.etcdOperator.resources.cpu }} + memory: {{ .Values.etcdOperator.resources.memory }} + requests: + cpu: {{ .Values.etcdOperator.resources.cpu }} + memory: {{ .Values.etcdOperator.resources.memory }} + {{- if .Values.etcdOperator.livenessProbe.enabled }} + livenessProbe: + httpGet: + path: /readyz + port: 8080 + initialDelaySeconds: {{ .Values.etcdOperator.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.etcdOperator.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.etcdOperator.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.etcdOperator.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.etcdOperator.livenessProbe.failureThreshold }} + {{- end}} + {{- if .Values.etcdOperator.readinessProbe.enabled }} + readinessProbe: + httpGet: + path: /readyz + port: 8080 + initialDelaySeconds: {{ .Values.etcdOperator.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.etcdOperator.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.etcdOperator.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.etcdOperator.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.etcdOperator.readinessProbe.failureThreshold }} + {{- end }} + {{- if .Values.etcdOperator.nodeSelector }} + nodeSelector: +{{ toYaml .Values.etcdOperator.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.etcdOperator.tolerations }} + tolerations: +{{ toYaml .Values.etcdOperator.tolerations | indent 8 }} + {{- end }} +{{- end 
}} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-service-account.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-service-account.yaml new file mode 100644 index 0000000..2faba8a --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/operator-service-account.yaml @@ -0,0 +1,12 @@ +{{- if and .Values.serviceAccount.etcdOperatorServiceAccount.create .Values.deployments.etcdOperator }} +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "etcd-operator.serviceAccountName" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- end }} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-etcd-crd.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-etcd-crd.yaml new file mode 100644 index 0000000..73faaab --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-etcd-crd.yaml @@ -0,0 +1,28 @@ +{{- if .Values.customResources.createRestoreCRD }} +--- +apiVersion: "etcd.database.coreos.com/v1beta2" +kind: "EtcdRestore" +metadata: + # An EtcdCluster with the same name will be created + name: {{ .Values.etcdCluster.name }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-restore-operator.name" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + annotations: + "helm.sh/hook": "post-install" + "helm.sh/hook-delete-policy": "before-hook-creation" +spec: + clusterSpec: + size: {{ .Values.etcdCluster.size }} + baseImage: "{{ .Values.etcdCluster.image.repository }}" + version: {{ .Values.etcdCluster.image.tag }} + pod: +{{ toYaml .Values.etcdCluster.pod | indent 6 }} + {{- if .Values.etcdCluster.enableTLS }} + TLS: +{{ toYaml .Values.etcdCluster.tls | indent 6 }} + {{- end }} +{{ toYaml .Values.restoreOperator.spec | indent 2 }} +{{- end}} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-clusterrole-binding.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-clusterrole-binding.yaml new file mode 100644 index 0000000..9a6696e --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-clusterrole-binding.yaml @@ -0,0 +1,20 @@ +{{- if and .Values.rbac.create .Values.deployments.restoreOperator }} +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/{{ .Values.rbac.apiVersion }} +metadata: + name: {{ template "etcd-restore-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-restore-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +subjects: +- kind: ServiceAccount + name: {{ template "etcd-restore-operator.serviceAccountName" . }} + namespace: {{ .Release.Namespace }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "etcd-operator.fullname" . 
}} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-deployment.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-deployment.yaml new file mode 100644 index 0000000..5c4784d --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-deployment.yaml @@ -0,0 +1,63 @@ +{{- if .Values.deployments.restoreOperator }} +--- +apiVersion: apps/v1beta2 +kind: Deployment +metadata: + name: {{ template "etcd-restore-operator.fullname" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-restore-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +spec: + selector: + matchLabels: + app: {{ template "etcd-restore-operator.name" . }} + release: {{ .Release.Name }} + replicas: {{ .Values.restoreOperator.replicaCount }} + template: + metadata: + name: {{ template "etcd-restore-operator.fullname" . }} + labels: + app: {{ template "etcd-restore-operator.name" . }} + release: {{ .Release.Name }} + spec: + serviceAccountName: {{ template "etcd-restore-operator.serviceAccountName" . 
}} + containers: + - name: {{ .Values.restoreOperator.name }} + image: "{{ .Values.restoreOperator.image.repository }}:{{ .Values.restoreOperator.image.tag }}" + imagePullPolicy: {{ .Values.restoreOperator.image.pullPolicy }} + ports: + - containerPort: {{ .Values.restoreOperator.port }} + command: + - etcd-restore-operator +{{- range $key, $value := .Values.restoreOperator.commandArgs }} + - "--{{ $key }}={{ $value }}" +{{- end }} + env: + - name: MY_POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: MY_POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: SERVICE_ADDR + value: "{{ .Values.restoreOperator.name }}:{{ .Values.restoreOperator.port }}" + resources: + limits: + cpu: {{ .Values.restoreOperator.resources.cpu }} + memory: {{ .Values.restoreOperator.resources.memory }} + requests: + cpu: {{ .Values.restoreOperator.resources.cpu }} + memory: {{ .Values.restoreOperator.resources.memory }} + {{- if .Values.restoreOperator.nodeSelector }} + nodeSelector: +{{ toYaml .Values.restoreOperator.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.restoreOperator.tolerations }} + tolerations: +{{ toYaml .Values.restoreOperator.tolerations | indent 8 }} + {{- end }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-service-account.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-service-account.yaml new file mode 100644 index 0000000..595cee9 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-service-account.yaml @@ -0,0 +1,12 @@ +{{- if and .Values.serviceAccount.restoreOperatorServiceAccount.create .Values.deployments.restoreOperator }} +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "etcd-restore-operator.serviceAccountName" . }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-restore-operator.name" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- end }} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-service.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-service.yaml new file mode 100644 index 0000000..052be36 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/templates/restore-operator-service.yaml @@ -0,0 +1,20 @@ +{{- if .Values.deployments.restoreOperator }} +--- +apiVersion: v1 +kind: Service +metadata: + name: {{ .Values.restoreOperator.name }} + labels: + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + app: {{ template "etcd-restore-operator.name" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +spec: + ports: + - protocol: TCP + name: http-etcd-restore-port + port: {{ .Values.restoreOperator.port }} + selector: + app: {{ template "etcd-restore-operator.name" . }} + release: {{ .Release.Name }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/etcd-operator/values.yaml b/src/seba_charts/cord-platform/charts/etcd-operator/values.yaml new file mode 100644 index 0000000..2eaac85 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/etcd-operator/values.yaml @@ -0,0 +1,152 @@ +# Default values for etcd-operator. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. 
+ +## Install Default RBAC roles and bindings +rbac: + create: true + apiVersion: v1beta1 + +## Service account names and whether to create them +serviceAccount: + etcdOperatorServiceAccount: + create: true + name: + backupOperatorServiceAccount: + create: true + name: + restoreOperatorServiceAccount: + create: true + name: + +# Select what to deploy +deployments: + etcdOperator: true + # one time deployment, delete once completed, + # Ref: https://github.com/coreos/etcd-operator/blob/master/doc/user/walkthrough/backup-operator.md + backupOperator: true + # one time deployment, delete once completed + # Ref: https://github.com/coreos/etcd-operator/blob/master/doc/user/walkthrough/restore-operator.md + restoreOperator: true + +# creates custom resources, not all required, +# you could use `helm template --values --name release_name ... ` +# and create the resources yourself to deploy on your cluster later +customResources: + createEtcdClusterCRD: false + createBackupCRD: false + createRestoreCRD: false + +# etcdOperator +etcdOperator: + name: etcd-operator + replicaCount: 1 + image: + repository: cachengo/etcd-operator + tag: v0.9.2 + pullPolicy: Always + resources: + cpu: 100m + memory: 128Mi + ## Node labels for etcd-operator pod assignment + ## Ref: https://kubernetes.io/docs/user-guide/node-selection/ + nodeSelector: {} + ## additional command arguments go here; will be translated to `--key=value` form + ## e.g., analytics: true + commandArgs: {} + ## Configurable health checks against the /readyz endpoint that etcd-operator exposes + readinessProbe: + enabled: false + initialDelaySeconds: 0 + periodSeconds: 10 + timeoutSeconds: 1 + successThreshold: 1 + failureThreshold: 3 + livenessProbe: + enabled: false + initialDelaySeconds: 0 + periodSeconds: 10 + timeoutSeconds: 1 + successThreshold: 1 + failureThreshold: 3 +# backup spec +backupOperator: + name: etcd-backup-operator + replicaCount: 1 + image: + repository: cachengo/etcd-operator + tag: v0.9.2 + 
pullPolicy: Always
+  resources:
+    cpu: 100m
+    memory: 128Mi
+  spec:
+    storageType: S3
+    s3:
+      s3Bucket:
+      awsSecret:
+  ## Node labels for etcd pod assignment
+  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+  nodeSelector: {}
+  ## additional command arguments go here; will be translated to `--key=value` form
+  ## e.g., analytics: true
+  commandArgs: {}
+
+# restore spec
+restoreOperator:
+  name: etcd-restore-operator
+  replicaCount: 1
+  image:
+    repository: cachengo/etcd-operator
+    tag: v0.9.2
+    pullPolicy: Always
+  port: 19999
+  resources:
+    cpu: 100m
+    memory: 128Mi
+  spec:
+    s3:
+      # The format of "path" must be: "<s3-bucket>/<path-to-backup-file>"
+      # e.g.: "etcd-snapshot-bucket/v1/default/example-etcd-cluster/3.2.10_0000000000000001_etcd.backup"
+      path:
+      awsSecret:
+  ## Node labels for etcd pod assignment
+  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+  nodeSelector: {}
+  ## additional command arguments go here; will be translated to `--key=value` form
+  ## e.g., analytics: true
+  commandArgs: {}
+
+## etcd-cluster specific values
+etcdCluster:
+  name: etcd-cluster
+  size: 3
+  version: 3.2.13
+  image:
+    repository: cachengo/etcd
+    tag: v3.2.13
+    pullPolicy: Always
+  enableTLS: false
+  # TLS configs
+  tls:
+    static:
+      member:
+        peerSecret: etcd-peer-tls
+        serverSecret: etcd-server-tls
+      operatorSecret: etcd-client-tls
+  ## etcd cluster pod specific values
+  ## Ref: https://github.com/coreos/etcd-operator/blob/master/doc/user/spec_examples.md#three-members-cluster-with-resource-requirement
+  pod:
+    ## Antiaffinity for etcd pod assignment
+    ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+    antiAffinity: false
+    resources:
+      limits:
+        cpu: 100m
+        memory: 128Mi
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    ## Node labels for etcd pod assignment
+    ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+    nodeSelector: {} diff --git a/src/seba_charts/cord-platform/charts/kafka/.helmignore
b/src/seba_charts/cord-platform/charts/kafka/.helmignore new file mode 100644 index 0000000..f0c1319 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/src/seba_charts/cord-platform/charts/kafka/Chart.yaml b/src/seba_charts/cord-platform/charts/kafka/Chart.yaml new file mode 100644 index 0000000..a34098b --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v1 +appVersion: 4.1.2 +description: Apache Kafka is publish-subscribe messaging rethought as a distributed + commit log. +home: https://kafka.apache.org/ +icon: https://kafka.apache.org/images/logo.png +keywords: +- kafka +- zookeeper +- kafka statefulset +maintainers: +- email: faraaz@rationalizeit.us + name: faraazkhan +- email: marc.villacorta@gmail.com + name: h0tbird +- email: ben@spothero.com + name: benjigoldberg +name: kafka +sources: +- https://github.com/kubernetes/charts/tree/master/incubator/zookeeper +- https://github.com/Yolean/kubernetes-kafka +- https://github.com/confluentinc/cp-docker-images +- https://github.com/apache/kafka +version: 0.8.8 diff --git a/src/seba_charts/cord-platform/charts/kafka/OWNERS b/src/seba_charts/cord-platform/charts/kafka/OWNERS new file mode 100644 index 0000000..0ed92ba --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/OWNERS @@ -0,0 +1,4 @@ +approvers: +- benjigoldberg +reviewers: +- benjigoldberg diff --git a/src/seba_charts/cord-platform/charts/kafka/README.md b/src/seba_charts/cord-platform/charts/kafka/README.md new file mode 100644 index 0000000..0c31807 --- /dev/null +++ 
b/src/seba_charts/cord-platform/charts/kafka/README.md @@ -0,0 +1,235 @@
+# Apache Kafka Helm Chart
+
+This is an implementation of the Kafka StatefulSet found here:
+
+ * https://github.com/Yolean/kubernetes-kafka
+
+## Prerequisites
+
+* Kubernetes 1.3 with alpha APIs enabled and support for storage classes
+
+* PV support on underlying infrastructure
+
+* Requires at least `v2.0.0-beta.1` version of helm to support
+ dependency management with requirements.yaml
+
+## StatefulSet Details
+
+* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
+
+## StatefulSet Caveats
+
+* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
+
+## Chart Details
+
+This chart will do the following:
+
+* Implement a dynamically scalable Kafka cluster using Kubernetes StatefulSets
+
+* Implement a dynamically scalable ZooKeeper cluster as another Kubernetes StatefulSet, required by the Kafka cluster above
+
+* Expose Kafka protocol endpoints via NodePort services (optional)
+
+### Installing the Chart
+
+To install the chart with the release name `my-kafka` in the default
+namespace:
+
+```
+$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
+$ helm install --name my-kafka incubator/kafka
+```
+
+If using a dedicated namespace (recommended), make sure the namespace
+exists with:
+
+```
+$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
+$ kubectl create ns kafka
+$ helm install --name my-kafka --namespace kafka incubator/kafka
+```
+
+This chart includes a ZooKeeper chart as a dependency of the Kafka
+cluster in its `requirements.yaml` by default.
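If you already operate a ZooKeeper ensemble outside this chart, the `zookeeper.enabled`, `zookeeper.url`, and `zookeeper.port` keys documented in the parameter table can point the brokers at it instead. A minimal values override might look like the following sketch (the hostname `my-zookeeper` is a placeholder):

```yaml
# values-external-zk.yaml — sketch: use an existing ZooKeeper cluster
zookeeper:
  enabled: false        # skip installing the dependent ZooKeeper chart
  url: "my-zookeeper"   # placeholder: address of the existing ensemble
  port: 2181
```

Pass it at install time, e.g. `helm install --name my-kafka -f values-external-zk.yaml incubator/kafka`.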
The chart can be customized using the
+following configurable parameters:
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `image` | Kafka Container image name | `confluentinc/cp-kafka` |
+| `imageTag` | Kafka Container image tag | `4.1.2-2` |
+| `imagePullPolicy` | Kafka Container pull policy | `IfNotPresent` |
+| `replicas` | Kafka Brokers | `3` |
+| `component` | Kafka k8s selector key | `kafka` |
+| `resources` | Kafka resource requests and limits | `{}` |
+| `kafkaHeapOptions` | Kafka broker JVM heap options | `-Xmx1G -Xms1G` |
+| `logSubPath` | Subpath under `persistence.mountPath` where kafka logs will be placed. | `logs` |
+| `schedulerName` | Name of Kubernetes scheduler (other than the default) | `nil` |
+| `affinity` | Defines affinities and anti-affinities for pods, as described in https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity | `{}` |
+| `tolerations` | List of node tolerations for the pods. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `[]` |
+| `headless.annotations` | List of annotations for the headless service. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services | `[]` |
+| `headless.targetPort` | Target port to be used for the headless service. This is not a required value. | `nil` |
+| `headless.port` | Port to be used for the headless service.
| `9092` |
+| `external.enabled` | If True, exposes Kafka brokers via NodePort (PLAINTEXT by default) | `false` |
+| `external.servicePort` | TCP port configured at external services (one per pod) to relay from NodePort to the external listener port. | `19092` |
+| `external.firstListenerPort` | TCP port to which the pod index number is added to arrive at the port used for NodePort and external listener port. | `31090` |
+| `external.domain` | Domain in which to advertise Kafka external listeners. | `cluster.local` |
+| `external.init` | External init container settings. | (see `values.yaml`) |
+| `external.type` | Service Type. | `NodePort` |
+| `external.distinct` | Distinct DNS entries for each created A record. | `false` |
+| `external.annotations` | Additional annotations for the external service. | `{}` |
+| `rbac.enabled` | Enable a service account and role for the init container to use in an RBAC-enabled cluster | `false` |
+| `configurationOverrides` | Kafka [configuration setting][brokerconfigs] overrides in the dictionary format | `{ offsets.topic.replication.factor: 3 }` |
+| `additionalPorts` | Additional ports to expose on brokers. Useful when the image exposes metrics (like prometheus, etc.) through a javaagent instead of a sidecar | `{}` |
+| `readinessProbe.initialDelaySeconds` | Number of seconds before probe is initiated. | `30` |
+| `readinessProbe.periodSeconds` | How often (in seconds) to perform the probe. | `10` |
+| `readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out. | `5` |
+| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
+| `readinessProbe.failureThreshold` | After the probe fails this many times, pod will be marked Unready.
| `3` |
+| `terminationGracePeriodSeconds` | Wait up to this many seconds for a broker to shut down gracefully, after which it is killed | `60` |
+| `updateStrategy` | StatefulSet update strategy to use. | `{ type: "OnDelete" }` |
+| `podManagementPolicy` | Start and stop pods in Parallel or OrderedReady (one-by-one). Cannot be changed after the first release. | `OrderedReady` |
+| `persistence.enabled` | Use a PVC to persist data | `true` |
+| `persistence.size` | Size of data volume | `1Gi` |
+| `persistence.mountPath` | Mount path of data volume | `/opt/kafka/data` |
+| `persistence.storageClass` | Storage class of backing PVC | `nil` |
+| `jmx.configMap.enabled` | Enable the default ConfigMap for JMX | `true` |
+| `jmx.configMap.overrideConfig` | Allows config file to be generated by passing values to ConfigMap | `{}` |
+| `jmx.configMap.overrideName` | Allows setting the name of the ConfigMap to be used | `""` |
+| `jmx.port` | The port on which JMX-style metrics are exposed (note: these are not scrapeable by Prometheus) | `5555` |
+| `jmx.whitelistObjectNames` | Allows setting which JMX objects to expose via the JMX Exporter | (see `values.yaml`) |
+| `prometheus.jmx.resources` | Allows setting resource limits for jmx sidecar container | `{}` |
+| `prometheus.jmx.enabled` | Whether or not to expose JMX metrics to Prometheus | `false` |
+| `prometheus.jmx.image` | JMX Exporter container image | `solsson/kafka-prometheus-jmx-exporter@sha256` |
+| `prometheus.jmx.imageTag` | JMX Exporter container image tag | `a23062396cd5af1acdf76512632c20ea6be76885dfc20cd9ff40fb23846557e8` |
+| `prometheus.jmx.interval` | Interval that Prometheus scrapes JMX metrics when using Prometheus Operator | `10s` |
+| `prometheus.jmx.port` | JMX Exporter Port which exposes metrics in Prometheus format for scraping | `5556` |
+| `prometheus.kafka.enabled` | Whether or not to create a separate Kafka exporter | `false` |
+| `prometheus.kafka.image` | Kafka Exporter
container image | `danielqsj/kafka-exporter` | +| `prometheus.kafka.imageTag` | Kafka Exporter container image tag | `v1.2.0` | +| `prometheus.kafka.interval` | Interval that Prometheus scrapes Kafka metrics when using Prometheus Operator | `10s` | +| `prometheus.kafka.port` | Kafka Exporter Port which exposes metrics in Prometheus format for scraping | `9308` | +| `prometheus.kafka.resources` | Allows setting resource limits for kafka-exporter pod | `{}` | +| `prometheus.operator.enabled` | True if using the Prometheus Operator, False if not | `false` | +| `prometheus.operator.serviceMonitor.namespace` | Namespace which Prometheus is running in. Default to kube-prometheus install. | `monitoring` | +| `prometheus.operator.serviceMonitor.selector` | Default to kube-prometheus install (CoreOS recommended), but should be set according to Prometheus install | `{ prometheus: kube-prometheus }` | +| `topics` | List of topics to create & configure. Can specify name, partitions, replicationFactor, config. 
See values.yaml | `[]` (Empty list) |
+| `zookeeper.enabled` | If True, installs Zookeeper Chart | `true` |
+| `zookeeper.resources` | Zookeeper resource requests and limits | `{}` |
+| `zookeeper.env` | Environment variables provided to Zookeeper | `{ZK_HEAP_SIZE: "1G"}` |
+| `zookeeper.storage` | Zookeeper Persistent volume size | `2Gi` |
+| `zookeeper.image.PullPolicy` | Zookeeper Container pull policy | `IfNotPresent` |
+| `zookeeper.url` | URL of Zookeeper Cluster (unneeded if installing Zookeeper Chart) | `""` |
+| `zookeeper.port` | Port of Zookeeper Cluster | `2181` |
+| `zookeeper.affinity` | Defines affinities and anti-affinities for pods, as described in https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity | `{}` |
+
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
+
+Alternatively, a YAML file that specifies the values for the parameters can be provided:
+
+```bash
+$ helm install --name my-kafka -f values.yaml incubator/kafka
+```
+
+### Connecting to Kafka from inside Kubernetes
+
+You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: testclient
+  namespace: kafka
+spec:
+  containers:
+  - name: kafka
+    image: solsson/kafka:0.11.0.0
+    command:
+    - sh
+    - -c
+    - "exec tail -f /dev/null"
+```
+
+Once you have the testclient pod above running, you can list all kafka
+topics with:
+
+`kubectl -n kafka exec -ti testclient -- ./bin/kafka-topics.sh --zookeeper
+my-release-zookeeper:2181 --list`
+
+Where `my-release` is the name of your helm release.
+
+## Extensions
+
+Kafka has a rich ecosystem, with lots of tools. This section is intended to compile all of those tools for which a corresponding Helm chart has already been created.
+
+- [Schema-registry](https://github.com/kubernetes/charts/tree/master/incubator/schema-registry) - A Confluent project that provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving Avro schemas.
+
+### Connecting to Kafka from outside Kubernetes
+
+#### Node Port External Service Type
+
+Review and optionally override the example external-access settings in `values.yaml` to enable this feature.
+
+Once configured, you should be able to reach Kafka via NodePorts, one per replica. In kops, where private
+topology is enabled, this feature publishes an internal round-robin DNS record using the following naming
+scheme. The external access feature of this chart was tested with kops on AWS using flannel networking.
+If you wish to enable external access to Kafka running in kops, your security groups will likely need to
+be adjusted to allow non-Kubernetes nodes (e.g. bastion) to access the Kafka external listener port range.
+
+```
+{{ .Release.Name }}.{{ .Values.external.domain }}
+```
+
+If `external.distinct` is set, these entries will include the replica number or broker id.
+
+```
+{{ .Release.Name }}-<broker-id>.{{ .Values.external.domain }}
+```
+
+Port numbers for external access used at container and NodePort are unique to each container in the StatefulSet.
+Using the default `external.firstListenerPort` number with a `replicas` value of `3`, the following
+container and NodePorts will be opened for external access: `31090`, `31091`, `31092`. All of these ports should
+be reachable from any host on which NodePorts are exposed, because Kubernetes routes each NodePort from the entry node
+to the pod/container listening on the same port (e.g. `31091`).
+
+The `external.servicePort` at each external access service (one such service per pod) is a relay toward
+a `containerPort` with a number matching its respective `NodePort`. The range of NodePorts is set, but
+not actually listened on, across all Kafka pods in the StatefulSet.
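The NodePort numbering described above reduces to a small calculation; as a sketch, assuming the chart defaults `external.firstListenerPort=31090` and `replicas=3` from the parameter table:

```shell
# Sketch: broker i (0..replicas-1) gets NodePort/external listener port
# external.firstListenerPort + i, so three replicas open 31090-31092.
FIRST_LISTENER_PORT=31090
REPLICAS=3
EXTERNAL_PORTS=""
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  EXTERNAL_PORTS="$EXTERNAL_PORTS$((FIRST_LISTENER_PORT + i)) "
  i=$((i + 1))
done
echo "external ports: $EXTERNAL_PORTS"
```

Broker `i` is then reachable at NodePort `31090 + i` on any node that exposes NodePorts.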
As any given pod will listen on only one
+such port at a time, setting the range at every Kafka pod is a reasonably safe configuration.
+
+#### Load Balancer External Service Type
+
+The load balancer external service type differs from the node port type by routing to the `port` specified in the service for each statefulset container. Because of this, `external.servicePort` is unused and will be set to the sum of `external.firstListenerPort` and the replica number. It is important to note that `external.firstListenerPort` does not have to be within the configured node port range for the cluster; however, a node port will still be allocated.
+
+## Known Limitations
+
+* Only supports storage options that have backends for persistent volume claims (tested mostly on AWS)
+* KAFKA_PORT will be created as an envvar and brokers will fail to start when there is a service named `kafka` in the same namespace. We work around this by unsetting that envvar (`unset KAFKA_PORT`).
+
+[brokerconfigs]: https://kafka.apache.org/documentation/#brokerconfigs
+
+## Prometheus Stats
+
+### Prometheus vs Prometheus Operator
+
+Standard Prometheus is the default monitoring option for this chart. This chart also supports the CoreOS Prometheus Operator,
+which can provide additional functionality like automatically updating Prometheus and Alert Manager configuration. If you are
+interested in installing the Prometheus Operator, please see the [CoreOS repository](https://github.com/coreos/prometheus-operator/tree/master/helm) for more information, or
+read through the [CoreOS blog post introducing the Prometheus Operator](https://coreos.com/blog/the-prometheus-operator.html).
+
+### JMX Exporter
+
+The majority of Kafka statistics are provided via JMX and are exposed via the [Prometheus JMX Exporter](https://github.com/prometheus/jmx_exporter).
+
+The JMX Exporter is a general-purpose Prometheus exporter intended for use with any Java application.
Because of this, it produces a number of statistics which
+may not be of interest. To help reduce these statistics to their relevant components, we have created a curated whitelist, `whitelistObjectNames`, for the JMX exporter.
+This whitelist may be modified or removed via the values configuration.
+
+To accommodate compatibility with the Prometheus metrics, this chart performs transformations of raw JMX metrics. For example, broker names and topic names are incorporated
+into the metric name instead of becoming a label. If you are curious to learn more about any default transformations to the chart metrics, please refer to the [configmap template](https://github.com/kubernetes/charts/blob/master/incubator/kafka/templates/jmx-configmap.yaml).
+
+### Kafka Exporter
+
+The [Kafka Exporter](https://github.com/danielqsj/kafka_exporter) is a complementary metrics exporter to the JMX Exporter. The Kafka Exporter provides additional statistics on Kafka Consumer Groups. diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/.helmignore b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/.helmignore new file mode 100644 index 0000000..f0c1319 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line.
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/Chart.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/Chart.yaml new file mode 100644 index 0000000..b7a0222 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/Chart.yaml @@ -0,0 +1,15 @@ +appVersion: 3.4.10 +description: Centralized service for maintaining configuration information, naming, + providing distributed synchronization, and providing group services. +home: https://zookeeper.apache.org/ +icon: https://zookeeper.apache.org/images/zookeeper_small.gif +maintainers: +- email: lachlan.evenson@microsoft.com + name: lachie83 +- email: owensk@google.com + name: kow3ns +name: zookeeper +sources: +- https://github.com/apache/zookeeper +- https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper +version: 1.0.2 diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/OWNERS b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/OWNERS new file mode 100644 index 0000000..dd9facd --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/OWNERS @@ -0,0 +1,6 @@ +approvers: +- lachie83 +- kow3ns +reviewers: +- lachie83 +- kow3ns diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/README.md b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/README.md new file mode 100644 index 0000000..22bbac4 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/README.md @@ -0,0 +1,140 @@ +# incubator/zookeeper + +This helm chart provides an implementation of the ZooKeeper [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) found in Kubernetes Contrib [Zookeeper 
StatefulSet](https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper). + +## Prerequisites +* Kubernetes 1.6+ +* PersistentVolume support on the underlying infrastructure +* A dynamic provisioner for the PersistentVolumes +* A familiarity with [Apache ZooKeeper 3.4.x](https://zookeeper.apache.org/doc/current/) + +## Chart Components +This chart will do the following: + +* Create a fixed size ZooKeeper ensemble using a [StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/). +* Create a [PodDisruptionBudget](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-disruption-budget/) so kubectl drain will respect the Quorum size of the ensemble. +* Create a [Headless Service](https://kubernetes.io/docs/concepts/services-networking/service/) to control the domain of the ZooKeeper ensemble. +* Create a Service configured to connect to the available ZooKeeper instance on the configured client port. +* Optionally apply a [Pod Anti-Affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature) to spread the ZooKeeper ensemble across nodes. +* Optionally start JMX Exporter and Zookeeper Exporter containers inside Zookeeper pods. +* Optionally create a job which creates Zookeeper chroots (e.g. `/kafka1`). + +## Installing the Chart +You can install the chart with the release name `zookeeper` as below. + +```console +$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator +$ helm install --name zookeeper incubator/zookeeper +``` + +If you do not specify a name, helm will select a name for you. + +### Installed Components +You can use `kubectl get` to view all of the installed components. 
+ +```console{%raw} +$ kubectl get all -l app=zookeeper +NAME: zookeeper +LAST DEPLOYED: Wed Apr 11 17:09:48 2018 +NAMESPACE: default +STATUS: DEPLOYED + +RESOURCES: +==> v1beta1/PodDisruptionBudget +NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE +zookeeper N/A 1 1 2m + +==> v1/Service +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +zookeeper-headless ClusterIP None 2181/TCP,3888/TCP,2888/TCP 2m +zookeeper ClusterIP 10.98.179.165 2181/TCP 2m + +==> v1beta1/StatefulSet +NAME DESIRED CURRENT AGE +zookeeper 3 3 2m +``` + +1. `statefulsets/zookeeper` is the StatefulSet created by the chart. +1. `po/zookeeper-<0|1|2>` are the Pods created by the StatefulSet. Each Pod has a single container running a ZooKeeper server. +1. `svc/zookeeper-headless` is the Headless Service used to control the network domain of the ZooKeeper ensemble. +1. `svc/zookeeper` is a Service that can be used by clients to connect to an available ZooKeeper server. + +## Configuration +You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. + +Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example, + +```console +$ helm install --name my-release -f values.yaml incubator/zookeeper +``` + +## Default Values + +- You can find all user-configurable settings, their defaults and commentary about them in [values.yaml](values.yaml). + +## Deep Dive + +## Image Details +The image used for this chart is based on Ubuntu 16.04 LTS. This image is larger than Alpine or BusyBox, but it provides glibc, rather than ulibc or mucl, and a JVM release that is built against it. You can easily convert this chart to run against a smaller image with a JVM that is built against that image's libc. However, as far as we know, no Hadoop vendor supports, or has verified, ZooKeeper running on such a JVM. + +## JVM Details +The Java Virtual Machine used for this chart is the OpenJDK JVM 8u111 JRE (headless). 
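As a minimal sketch of the values-file alternative described in the Configuration section above, the following creates an override file equivalent to `--set replicaCount=5,persistence.size=10Gi`. Both keys appear in this chart's values.yaml; the filename `my-values.yaml` and the chosen values are hypothetical.

```shell
# Write a minimal override file; the keys mirror this chart's values.yaml.
# (Hypothetical values, shown only to illustrate the -f workflow.)
cat > my-values.yaml <<'EOF'
replicaCount: 5
persistence:
  size: 10Gi
EOF

# It would then be passed to helm as:
#   helm install --name my-release -f my-values.yaml incubator/zookeeper
cat my-values.yaml
```

Flags passed via `--set` take precedence over the values file, so the two mechanisms can be combined.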
+
+## ZooKeeper Details
+The ZooKeeper version is the latest stable version (3.4.10). The distribution is installed into /opt/zookeeper-3.4.10. This directory is symbolically linked to /opt/zookeeper. Symlinks are created to simulate an RPM installation into /usr.
+
+## Failover
+You can test failover by killing the leader. Insert a key:
+```console
+$ kubectl exec zookeeper-0 -- /opt/zookeeper/bin/zkCli.sh create /foo bar;
+$ kubectl exec zookeeper-2 -- /opt/zookeeper/bin/zkCli.sh get /foo;
+```
+
+Watch existing members:
+```console
+$ kubectl run --attach bbox --image=busybox --restart=Never -- sh -c 'while true; do for i in 0 1 2; do echo zk-${i} $(echo stats | nc -${i}.:2181 | grep Mode); sleep 1; done; done';
+
+zk-2 Mode: follower
+zk-0 Mode: follower
+zk-1 Mode: leader
+zk-2 Mode: follower
+```
+
+Delete Pods and wait for the StatefulSet controller to bring them back up:
+```console
+$ kubectl delete po -l app=zookeeper
+$ kubectl get po --watch-only
+NAME          READY     STATUS              RESTARTS   AGE
+zookeeper-0   0/1       Running             0          35s
+zookeeper-0   1/1       Running             0          50s
+zookeeper-1   0/1       Pending             0          0s
+zookeeper-1   0/1       Pending             0          0s
+zookeeper-1   0/1       ContainerCreating   0          0s
+zookeeper-1   0/1       Running             0          19s
+zookeeper-1   1/1       Running             0          40s
+zookeeper-2   0/1       Pending             0          0s
+zookeeper-2   0/1       Pending             0          0s
+zookeeper-2   0/1       ContainerCreating   0          0s
+zookeeper-2   0/1       Running             0          19s
+zookeeper-2   1/1       Running             0          41s
+```
+
+Check the previously inserted key:
+```console
+$ kubectl exec zookeeper-1 -- /opt/zookeeper/bin/zkCli.sh get /foo
+sessionid = 0x354887858e80035, negotiated timeout = 30000
+
+WATCHER::
+
+WatchedEvent state:SyncConnected type:None path:null
+bar
+```
+
+## Scaling
+ZooKeeper cannot be safely scaled in versions prior to 3.5.x. This chart currently uses 3.4.x.
There are manual procedures for scaling a 3.4.x ensemble, but as noted in the [ZooKeeper 3.5.2 documentation](https://zookeeper.apache.org/doc/r3.5.2-alpha/zookeeperReconfig.html) these procedures require a rolling restart, are known to be error-prone, and often result in data loss.
+
+While ZooKeeper 3.5.x does allow dynamic ensemble reconfiguration (including scaling membership), the release is still in alpha status, and 3.5.x is therefore not recommended for production use.
+
+## Limitations
+* StatefulSet and PodDisruptionBudget are beta resources.
+* Only supports storage options that have backends for persistent volume claims.
diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/NOTES.txt b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/NOTES.txt
new file mode 100644
index 0000000..6c5da85
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/NOTES.txt
@@ -0,0 +1,7 @@
+Thank you for installing ZooKeeper on your Kubernetes cluster. More information
+about ZooKeeper can be found at https://zookeeper.apache.org/doc/current/
+
+Your connection string should look like:
+  {{ template "zookeeper.fullname" . }}-0.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},{{ template "zookeeper.fullname" . }}-1.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},...
+
+You can also use the client service {{ template "zookeeper.fullname" . }}:{{ .Values.service.ports.client.port }} to connect to an available ZooKeeper server.
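The per-pod connection string format printed by the NOTES.txt above can be sketched in plain shell. The values below (release fullname `zookeeper`, 3 replicas, client port 2181) are assumptions for illustration; the real values come from the release name and values.yaml.

```shell
# Build the comma-separated client connection string the chart's NOTES describe:
#   <fullname>-0.<fullname>-headless:<port>,<fullname>-1.<fullname>-headless:<port>,...
FULLNAME=zookeeper   # assumed release fullname
REPLICAS=3           # assumed ensemble size
PORT=2181            # default client port from values.yaml
CONN=""
i=0
while [ "$i" -lt "$REPLICAS" ]; do
  # Append the next pod's stable DNS name behind the headless service.
  CONN="${CONN:+$CONN,}${FULLNAME}-${i}.${FULLNAME}-headless:${PORT}"
  i=$((i + 1))
done
echo "$CONN"
```

Clients that do not need to pin a specific server can instead use the plain client service name, as the NOTES point out.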
diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/_helpers.tpl b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/_helpers.tpl new file mode 100644 index 0000000..ae36115 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/_helpers.tpl @@ -0,0 +1,32 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "zookeeper.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "zookeeper.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "zookeeper.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/config-jmx-exporter.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/config-jmx-exporter.yaml new file mode 100644 index 0000000..79905e5 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/config-jmx-exporter.yaml @@ -0,0 +1,19 @@ +{{- if .Values.exporters.jmx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ .Release.Name }}-jmx-exporter + labels: + app: {{ template "zookeeper.name" . 
}} + chart: {{ template "zookeeper.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +data: + config.yml: |- + hostPort: 127.0.0.1:{{ .Values.env.JMXPORT }} + lowercaseOutputName: {{ .Values.exporters.jmx.config.lowercaseOutputName }} + rules: +{{ .Values.exporters.jmx.config.rules | toYaml | indent 6 }} + ssl: false + startDelaySeconds: {{ .Values.exporters.jmx.config.startDelaySeconds }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/job-chroots.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/job-chroots.yaml new file mode 100644 index 0000000..6663ddb --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/job-chroots.yaml @@ -0,0 +1,62 @@ +{{- if .Values.jobs.chroots.enabled }} +{{- $root := . }} +{{- $job := .Values.jobs.chroots }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "zookeeper.fullname" . }}-chroots + annotations: + "helm.sh/hook": post-install,post-upgrade + "helm.sh/hook-weight": "-5" + "helm.sh/hook-delete-policy": hook-succeeded + labels: + app: {{ template "zookeeper.name" . }} + chart: {{ template "zookeeper.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} + component: jobs + job: chroots +spec: + activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }} + backoffLimit: {{ $job.backoffLimit }} + completions: {{ $job.completions }} + parallelism: {{ $job.parallelism }} + template: + metadata: + labels: + app: {{ template "zookeeper.name" . 
}} + release: {{ .Release.Name }} + component: jobs + job: chroots + spec: + restartPolicy: {{ $job.restartPolicy }} + containers: + - name: main + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + command: + - /bin/bash + - -o + - pipefail + - -euc + {{- $port := .Values.service.ports.client.port }} + - > + sleep 15; + export SERVER={{ template "zookeeper.fullname" $root }}:{{ $port }}; + {{- range $job.config.create }} + echo '==> {{ . }}'; + echo '====> Create chroot if does not exist.'; + zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid' + || zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} create {{ . }} ""; + echo '====> Confirm chroot exists.'; + zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid'; + echo '====> Chroot exists.'; + {{- end }} + env: + {{- range $key, $value := $job.env }} + - name: {{ $key | upper | replace "." "_" }} + value: {{ $value | quote }} + {{- end }} + resources: +{{ toYaml $job.resources | indent 12 }} +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/poddisruptionbudget.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/poddisruptionbudget.yaml new file mode 100644 index 0000000..15ee008 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/poddisruptionbudget.yaml @@ -0,0 +1,17 @@ +apiVersion: policy/v1beta1 +kind: PodDisruptionBudget +metadata: + name: {{ template "zookeeper.fullname" . }} + labels: + app: {{ template "zookeeper.name" . }} + chart: {{ template "zookeeper.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} + component: server +spec: + selector: + matchLabels: + app: {{ template "zookeeper.name" . 
}} + release: {{ .Release.Name }} + component: server +{{ toYaml .Values.podDisruptionBudget | indent 2 }} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/service-headless.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/service-headless.yaml new file mode 100644 index 0000000..8822867 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/service-headless.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "zookeeper.fullname" . }}-headless + labels: + app: {{ template "zookeeper.name" . }} + chart: {{ template "zookeeper.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + clusterIP: None + ports: +{{- range $key, $port := .Values.ports }} + - name: {{ $key }} + port: {{ $port.containerPort }} + targetPort: {{ $port.name }} + protocol: {{ $port.protocol }} +{{- end }} + selector: + app: {{ template "zookeeper.name" . }} + release: {{ .Release.Name }} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/service.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/service.yaml new file mode 100644 index 0000000..5f10861 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/service.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "zookeeper.fullname" . }} + labels: + app: {{ template "zookeeper.name" . }} + chart: {{ template "zookeeper.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} + annotations: +{{- with .Values.service.annotations }} +{{ toYaml . | indent 4 }} +{{- end }} +spec: + type: {{ .Values.service.type }} + ports: + {{- range $key, $value := .Values.service.ports }} + - name: {{ $key }} +{{ toYaml $value | indent 6 }} + {{- end }} + selector: + app: {{ template "zookeeper.name" . 
}} + release: {{ .Release.Name }} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/statefulset.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/statefulset.yaml new file mode 100644 index 0000000..bc2d160 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/templates/statefulset.yaml @@ -0,0 +1,177 @@ +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + name: {{ template "zookeeper.fullname" . }} + labels: + app: {{ template "zookeeper.name" . }} + chart: {{ template "zookeeper.chart" . }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} + component: server +spec: + serviceName: {{ template "zookeeper.fullname" . }}-headless + replicas: {{ .Values.replicaCount }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + selector: + matchLabels: + app: {{ template "zookeeper.name" . }} + release: {{ .Release.Name }} + component: server + updateStrategy: +{{ toYaml .Values.updateStrategy | indent 4 }} + template: + metadata: + labels: + app: {{ template "zookeeper.name" . 
}} + release: {{ .Release.Name }} + component: server + {{- if .Values.podLabels }} + ## Custom pod labels + {{- range $key, $value := .Values.podLabels }} + {{ $key }}: {{ $value | quote }} + {{- end }} + {{- end }} + annotations: + {{- if .Values.podAnnotations }} + ## Custom pod annotations + {{- range $key, $value := .Values.podAnnotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + {{- end }} + spec: +{{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" +{{- end }} + securityContext: +{{ toYaml .Values.securityContext | indent 8 }} + containers: + + - name: zookeeper + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + command: + - /bin/bash + - -xec + - zkGenConfig.sh && exec zkServer.sh start-foreground + ports: +{{- range $key, $port := .Values.ports }} + - name: {{ $key }} +{{ toYaml $port | indent 14 }} +{{- end }} + livenessProbe: +{{ toYaml .Values.livenessProbe | indent 12 }} + readinessProbe: +{{ toYaml .Values.readinessProbe | indent 12 }} + env: + - name: ZK_REPLICAS + value: {{ .Values.replicaCount | quote }} + {{- range $key, $value := .Values.env }} + - name: {{ $key | upper | replace "." 
"_" }} + value: {{ $value | quote }} + {{- end }} + resources: +{{ toYaml .Values.resources | indent 12 }} + volumeMounts: + - name: data + mountPath: /var/lib/zookeeper + +{{- if .Values.exporters.jmx.enabled }} + - name: jmx-exporter + image: "{{ .Values.exporters.jmx.image.repository }}:{{ .Values.exporters.jmx.image.tag }}" + imagePullPolicy: {{ .Values.exporters.jmx.image.pullPolicy }} + ports: + {{- range $key, $port := .Values.exporters.jmx.ports }} + - name: {{ $key }} +{{ toYaml $port | indent 14 }} + {{- end }} + livenessProbe: +{{ toYaml .Values.exporters.jmx.livenessProbe | indent 12 }} + readinessProbe: +{{ toYaml .Values.exporters.jmx.readinessProbe | indent 12 }} + env: + - name: SERVICE_PORT + value: {{ .Values.exporters.jmx.ports.jmxxp.containerPort | quote }} + {{- with .Values.exporters.jmx.env }} + {{- range $key, $value := . }} + - name: {{ $key | upper | replace "." "_" }} + value: {{ $value | quote }} + {{- end }} + {{- end }} + resources: +{{ toYaml .Values.exporters.jmx.resources | indent 12 }} + volumeMounts: + - name: config-jmx-exporter + mountPath: /opt/jmx_exporter/config.yml + subPath: config.yml +{{- end }} + +{{- if .Values.exporters.zookeeper.enabled }} + - name: zookeeper-exporter + image: "{{ .Values.exporters.zookeeper.image.repository }}:{{ .Values.exporters.zookeeper.image.tag }}" + imagePullPolicy: {{ .Values.exporters.zookeeper.image.pullPolicy }} + args: + - -bind-addr=:{{ .Values.exporters.zookeeper.ports.zookeeperxp.containerPort }} + - -metrics-path={{ .Values.exporters.zookeeper.path }} + - -zookeeper=localhost:{{ .Values.ports.client.containerPort }} + - -log-level={{ .Values.exporters.zookeeper.config.logLevel }} + - -reset-on-scrape={{ .Values.exporters.zookeeper.config.resetOnScrape }} + ports: + {{- range $key, $port := .Values.exporters.zookeeper.ports }} + - name: {{ $key }} +{{ toYaml $port | indent 14 }} + {{- end }} + livenessProbe: +{{ toYaml .Values.exporters.zookeeper.livenessProbe | indent 12 }} + 
readinessProbe: +{{ toYaml .Values.exporters.zookeeper.readinessProbe | indent 12 }} + env: + {{- range $key, $value := .Values.exporters.zookeeper.env }} + - name: {{ $key | upper | replace "." "_" }} + value: {{ $value | quote }} + {{- end }} + resources: +{{ toYaml .Values.exporters.zookeeper.resources | indent 12 }} +{{- end }} + + {{- with .Values.nodeSelector }} + nodeSelector: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.affinity }} + affinity: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: +{{ toYaml . | indent 8 }} + {{- end }} + {{- if (or .Values.exporters.jmx.enabled (not .Values.persistence.enabled)) }} + volumes: + {{- if .Values.exporters.jmx.enabled }} + - name: config-jmx-exporter + configMap: + name: {{ .Release.Name }}-jmx-exporter + {{- end }} + {{- if not .Values.persistence.enabled }} + - name: data + emptyDir: {} + {{- end }} + {{- end }} + {{- if .Values.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: data + spec: + accessModes: + - {{ .Values.persistence.accessMode | quote }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{- if .Values.persistence.storageClass }} + {{- if (eq "-" .Values.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.persistence.storageClass }}" + {{- end }} + {{- end }} + {{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/values.yaml b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/values.yaml new file mode 100644 index 0000000..f92a12b --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/charts/zookeeper/values.yaml @@ -0,0 +1,294 @@ +## As weighted quorums are not supported, it is imperative that an odd number of replicas +## be chosen. Moreover, the number of replicas should be either 1, 3, 5, or 7. 
+##
+## ref: https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper#stateful-set
+replicaCount: 3 # Desired quantity of ZooKeeper pods. This should always be 1, 3, 5, or 7.
+
+podDisruptionBudget:
+  maxUnavailable: 1 # Limits how many ZooKeeper pods may be unavailable due to voluntary disruptions.
+
+terminationGracePeriodSeconds: 1800 # Duration in seconds a ZooKeeper pod needs to terminate gracefully.
+
+## OnDelete requires you to manually delete each pod when making updates.
+## This approach is at the moment safer than RollingUpdate because replication
+## may be incomplete when the replication source pod is killed.
+##
+## ref: http://blog.kubernetes.io/2017/09/kubernetes-statefulsets-daemonsets.html
+updateStrategy:
+  type: OnDelete # Pods will only be created when you manually delete old pods.
+
+## refs:
+## - https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
+## - https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/Makefile#L1
+image:
+  repository: iecedge/k8szk_arm64 # Container image repository for zookeeper container.
+  tag: v3 # Container image tag for zookeeper container.
+  pullPolicy: IfNotPresent # Image pull criteria for zookeeper container.
+
+service:
+  type: ClusterIP # Exposes zookeeper on a cluster-internal IP.
+  annotations: {} # Arbitrary non-identifying metadata for zookeeper service.
+  ## AWS example for use with LoadBalancer service type.
+  # external-dns.alpha.kubernetes.io/hostname: zookeeper.cluster.local
+  # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
+  # service.beta.kubernetes.io/aws-load-balancer-internal: "true"
+  ports:
+    client:
+      port: 2181 # Service port number for client port.
+      targetPort: client # Service target port for client port.
+      protocol: TCP # Service port protocol for client port.
+
+
+ports:
+  client:
+    containerPort: 2181 # Port number for zookeeper container client port.
+ protocol: TCP # Protocol for zookeeper container client port. + election: + containerPort: 3888 # Port number for zookeeper container election port. + protocol: TCP # Protocol for zookeeper container election port. + server: + containerPort: 2888 # Port number for zookeeper container server port. + protocol: TCP # Protocol for zookeeper container server port. + +resources: {} # Optionally specify how much CPU and memory (RAM) each zookeeper container needs. + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + +nodeSelector: {} # Node label-values required to run zookeeper pods. + +tolerations: [] # Node taint overrides for zookeeper pods. + +affinity: {} # Criteria by which pod label-values influence scheduling for zookeeper pods. + # podAntiAffinity: + # requiredDuringSchedulingIgnoredDuringExecution: + # - topologyKey: "kubernetes.io/hostname" + # labelSelector: + # matchLabels: + # release: zookeeper + +podAnnotations: {} # Arbitrary non-identifying metadata for zookeeper pods. + # prometheus.io/scrape: "true" + # prometheus.io/path: "/metrics" + # prometheus.io/port: "9141" + +podLabels: {} # Key/value pairs that are attached to zookeeper pods. 
+ # team: "developers" + # service: "zookeeper" + +livenessProbe: + exec: + command: + - zkOk.sh + initialDelaySeconds: 20 + # periodSeconds: 30 + # timeoutSeconds: 30 + # failureThreshold: 6 + # successThreshold: 1 + +readinessProbe: + exec: + command: + - zkOk.sh + initialDelaySeconds: 20 + # periodSeconds: 30 + # timeoutSeconds: 30 + # failureThreshold: 6 + # successThreshold: 1 + +securityContext: + fsGroup: 1000 + runAsUser: 1000 + +persistence: + enabled: true + ## zookeeper data Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClass: "-" + accessMode: ReadWriteOnce + size: 5Gi + +## Exporters query apps for metrics and make those metrics available for +## Prometheus to scrape. +exporters: + + jmx: + enabled: false + image: + repository: cachengo/jmx-prometheus-exporter + tag: 0.3.0 + pullPolicy: IfNotPresent + config: + lowercaseOutputName: false + ## ref: https://github.com/prometheus/jmx_exporter/blob/master/example_configs/zookeeper.yaml + rules: + - pattern: "org.apache.ZooKeeperService<>(\\w+)" + name: "zookeeper_$2" + - pattern: "org.apache.ZooKeeperService<>(\\w+)" + name: "zookeeper_$3" + labels: + replicaId: "$2" + - pattern: "org.apache.ZooKeeperService<>(\\w+)" + name: "zookeeper_$4" + labels: + replicaId: "$2" + memberType: "$3" + - pattern: "org.apache.ZooKeeperService<>(\\w+)" + name: "zookeeper_$4_$5" + labels: + replicaId: "$2" + memberType: "$3" + startDelaySeconds: 30 + env: {} + resources: {} + path: /metrics + ports: + jmxxp: + containerPort: 9404 + protocol: TCP + livenessProbe: + httpGet: + path: /metrics + port: jmxxp + initialDelaySeconds: 30 + periodSeconds: 15 + timeoutSeconds: 60 + failureThreshold: 8 + successThreshold: 1 + readinessProbe: + httpGet: + 
        path: /metrics
+        port: jmxxp
+      initialDelaySeconds: 30
+      periodSeconds: 15
+      timeoutSeconds: 60
+      failureThreshold: 8
+      successThreshold: 1
+
+  zookeeper:
+    ## refs:
+    ## - https://github.com/carlpett/zookeeper_exporter
+    ## - https://hub.docker.com/r/akrainoenea/zookeeper_exporter/
+    ## - https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/#zookeeper-metrics
+    enabled: false
+    image:
+      repository: akrainoenea/zookeeper_exporter
+      tag: v1.1.2
+      pullPolicy: IfNotPresent
+    config:
+      logLevel: info
+      resetOnScrape: "true"
+    env: {}
+    resources: {}
+    path: /metrics
+    ports:
+      zookeeperxp:
+        containerPort: 9141
+        protocol: TCP
+    livenessProbe:
+      httpGet:
+        path: /metrics
+        port: zookeeperxp
+      initialDelaySeconds: 30
+      periodSeconds: 15
+      timeoutSeconds: 60
+      failureThreshold: 8
+      successThreshold: 1
+    readinessProbe:
+      httpGet:
+        path: /metrics
+        port: zookeeperxp
+      initialDelaySeconds: 30
+      periodSeconds: 15
+      timeoutSeconds: 60
+      failureThreshold: 8
+      successThreshold: 1
+
+## Use an alternate scheduler, e.g. "stork".
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
+##
+# schedulerName:
+
+## ref: https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
+env:
+
+  ## Options related to the JMX exporter.
+  ## ref: https://github.com/apache/zookeeper/blob/master/bin/zkServer.sh#L36
+  JMXAUTH: "false"
+  JMXDISABLE: "false"
+  JMXPORT: 1099
+  JMXSSL: "false"
+
+  ## The port on which the server will accept client requests.
+  ZK_CLIENT_PORT: 2181
+
+  ## The port on which the ensemble performs leader election.
+  ZK_ELECTION_PORT: 3888
+
+  ## The JVM heap size.
+  ZK_HEAP_SIZE: 2G
+
+  ## The number of Ticks that an ensemble member is allowed to perform leader
+  ## election.
+  ZK_INIT_LIMIT: 5
+
+  ## The log level for the ZooKeeper process's logger.
+  ## Choices are `TRACE,DEBUG,INFO,WARN,ERROR,FATAL`.
+  ZK_LOG_LEVEL: INFO
+
+  ## The maximum number of concurrent client connections that
+  ## a server in the ensemble will accept.
+  ZK_MAX_CLIENT_CNXNS: 60
+
+  ## The maximum session timeout that the ensemble will allow a client to request.
+  ## Upstream default is `20 * ZK_TICK_TIME`.
+  ZK_MAX_SESSION_TIMEOUT: 40000
+
+  ## The minimum session timeout that the ensemble will allow a client to request.
+  ## Upstream default is `2 * ZK_TICK_TIME`.
+  ZK_MIN_SESSION_TIMEOUT: 4000
+
+  ## The delay, in hours, between ZooKeeper log and snapshot cleanups.
+  ZK_PURGE_INTERVAL: 0
+
+  ## The port on which the leader will send events to followers.
+  ZK_SERVER_PORT: 2888
+
+  ## The number of snapshots that the ZooKeeper process will retain if
+  ## `ZK_PURGE_INTERVAL` is set to a value greater than `0`.
+  ZK_SNAP_RETAIN_COUNT: 3
+
+  ## The number of Ticks by which a follower may lag behind the ensemble's leader.
+  ZK_SYNC_LIMIT: 10
+
+  ## The number of wall clock ms that corresponds to a Tick for the ensemble's
+  ## internal time.
+  ZK_TICK_TIME: 2000
+
+jobs:
+  ## ref: http://zookeeper.apache.org/doc/r3.4.10/zookeeperProgrammers.html#ch_zkSessions
+  chroots:
+    enabled: false
+    activeDeadlineSeconds: 300
+    backoffLimit: 5
+    completions: 1
+    config:
+      create: []
+      # - /kafka
+      # - /ureplicator
+    env: []
+    parallelism: 1
+    resources: {}
+    restartPolicy: Never
diff --git a/src/seba_charts/cord-platform/charts/kafka/requirements.lock b/src/seba_charts/cord-platform/charts/kafka/requirements.lock
new file mode 100644
index 0000000..802e6a9
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/kafka/requirements.lock
@@ -0,0 +1,6 @@
+dependencies:
+- name: zookeeper
+  repository: https://kubernetes-charts-incubator.storage.googleapis.com/
+  version: 1.0.2
+digest: sha256:0ea890c77e32aee10c564b732c9fa27b17fa5c398bc50a6bf342ecbb79094cdc
+generated: 2018-07-09T20:04:07.73379146+03:00
diff --git a/src/seba_charts/cord-platform/charts/kafka/requirements.yaml b/src/seba_charts/cord-platform/charts/kafka/requirements.yaml
new file mode 100644
index 0000000..3468ece
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/kafka/requirements.yaml
@@ -0,0 +1,6 @@
+dependencies:
+- name: zookeeper
+  version: 1.0.2
+  repository: https://kubernetes-charts-incubator.storage.googleapis.com/
+  condition: zookeeper.enabled
+
diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/NOTES.txt b/src/seba_charts/cord-platform/charts/kafka/templates/NOTES.txt
new file mode 100644
index 0000000..11eade7
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/kafka/templates/NOTES.txt
@@ -0,0 +1,73 @@
+### Connecting to Kafka from inside Kubernetes
+
+You can connect to Kafka by running a simple pod in the K8s cluster with a configuration like this:
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: testclient
+    namespace: {{ .Release.Namespace }}
+  spec:
+    containers:
+    - name: kafka
+      image: {{ .Values.image }}:{{ .Values.imageTag }}
+      command:
+      - sh
+      - -c
+      - "exec tail -f /dev/null"
+
+Once
you have the testclient pod above running, you can list all Kafka
+topics with:
+
+  kubectl -n {{ .Release.Namespace }} exec testclient -- /usr/bin/kafka-topics --zookeeper {{ .Release.Name }}-zookeeper:2181 --list
+
+To create a new topic:
+
+  kubectl -n {{ .Release.Namespace }} exec testclient -- /usr/bin/kafka-topics --zookeeper {{ .Release.Name }}-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
+
+To listen for messages on a topic:
+
+  kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /usr/bin/kafka-console-consumer --bootstrap-server {{ .Release.Name }}-kafka:9092 --topic test1 --from-beginning
+
+To stop the listener session above, press: Ctrl+C
+
+To start an interactive message producer session:
+  kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /usr/bin/kafka-console-producer --broker-list {{ .Release.Name }}-kafka-headless:9092 --topic test1
+
+To create a message in the above session, simply type the message and press "enter".
+To end the producer session, press: Ctrl+C
+{{ if .Values.external.enabled }}
+### Connecting to Kafka from outside Kubernetes
+
+You have enabled the external access feature of this chart.
+
+**WARNING:** By default this feature allows Kafka clients outside Kubernetes to
+connect to Kafka via NodePort(s) in `PLAINTEXT`.
+
+Please see this chart's README.md for more details and guidance.
+
+If you wish to connect to Kafka from outside, please configure your external Kafka
+clients to point at the following brokers. Please allow a few minutes for all
+associated resources to become healthy.
+  {{ $fullName := include "kafka.fullname" . }}
+  {{- $replicas := .Values.replicas | int }}
+  {{- $servicePort := .Values.external.servicePort }}
+  {{- $root := .
}} + {{- range $i, $e := until $replicas }} + {{- $externalListenerPort := add $root.Values.external.firstListenerPort $i }} + {{- if $root.Values.external.distinct }} +{{ printf "%s-%d.%s:%d" $root.Release.Name $i $root.Values.external.domain $externalListenerPort | indent 2 }} + {{- else }} +{{ printf "%s.%s:%d" $root.Release.Name $root.Values.external.domain $externalListenerPort | indent 2 }} + {{- end }} + {{- end }} +{{- end }} + +{{ if .Values.prometheus.jmx.enabled }} +To view JMX configuration (pull request/updates to improve defaults are encouraged): + {{ if .Values.jmx.configMap.overrideName }} + kubectl -n {{ .Release.Namespace }} describe configmap {{ .Values.jmx.configMap.overrideName }} + {{ else }} + kubectl -n {{ .Release.Namespace }} describe configmap {{ include "kafka.fullname" . }}-metrics + {{- end }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/_helpers.tpl b/src/seba_charts/cord-platform/charts/kafka/templates/_helpers.tpl new file mode 100644 index 0000000..cb0d300 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/_helpers.tpl @@ -0,0 +1,56 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "kafka.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "kafka.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified zookeeper name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "kafka.zookeeper.fullname" -}} +{{- $name := default "zookeeper" .Values.zookeeper.nameOverride -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Form the Zookeeper URL. If zookeeper is installed as part of this chart, use k8s service discovery, +else use user-provided URL +*/}} +{{- define "zookeeper.url" }} +{{- $port := .Values.zookeeper.port | toString }} +{{- if .Values.zookeeper.enabled -}} +{{- printf "%s:%s" (include "kafka.zookeeper.fullname" .) $port }} +{{- else -}} +{{- $zookeeperConnect := printf "%s:%s" .Values.zookeeper.url $port }} +{{- $zookeeperConnectOverride := index .Values "configurationOverrides" "zookeeper.connect" }} +{{- default $zookeeperConnect $zookeeperConnectOverride }} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "kafka.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/configmap-config.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/configmap-config.yaml new file mode 100644 index 0000000..454faf0 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/configmap-config.yaml @@ -0,0 +1,37 @@ +{{- if .Values.topics -}} +{{- $zk := include "zookeeper.url" . -}} +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + app: {{ template "kafka.fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + heritage: "{{ .Release.Service }}" + release: "{{ .Release.Name }}" + name: {{ template "kafka.fullname" . }}-config +data: + runtimeConfig.sh: | + #!/bin/sh + set -e + cd /usr/bin + until kafka-configs --zookeeper {{ $zk }} --entity-type topics --describe || (( count++ >= 6 )) + do + echo "Waiting for Zookeeper..." 
+ sleep 20 + done + echo "Applying runtime configuration using {{ .Values.image }}:{{ .Values.imageTag }}" + {{- range $n, $topic := .Values.topics }} + {{- if and $topic.partitions $topic.replicationFactor }} + kafka-topics --zookeeper {{ $zk }} --create --if-not-exists --force --topic {{ $topic.name }} --partitions {{ $topic.partitions }} --replication-factor {{ $topic.replicationFactor }} + {{- else if $topic.partitions }} + kafka-topics --zookeeper {{ $zk }} --alter --force --topic {{ $topic.name }} --partitions {{ $topic.partitions }} || true + {{- end }} + {{- if $topic.defaultConfig }} + kafka-configs --zookeeper {{ $zk }} --entity-type topics --entity-name {{ $topic.name }} --alter --force --delete-config {{ nospace $topic.defaultConfig }} || true + {{- end }} + {{- if $topic.config }} + kafka-configs --zookeeper {{ $zk }} --entity-type topics --entity-name {{ $topic.name }} --alter --force --add-config {{ nospace $topic.config }} + {{- end }} + kafka-configs --zookeeper {{ $zk }} --entity-type topics --entity-name {{ $topic.name }} --describe + {{- end }} +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/configmap-jmx.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/configmap-jmx.yaml new file mode 100644 index 0000000..24a25c7 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/configmap-jmx.yaml @@ -0,0 +1,67 @@ +{{- if and .Values.prometheus.jmx.enabled .Values.jmx.configMap.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "kafka.fullname" . }}-metrics + labels: + app: {{ include "kafka.name" . 
}} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +data: + jmx-kafka-prometheus.yml: |+ +{{- if .Values.jmx.configMap.overrideConfig }} +{{ toYaml .Values.jmx.configMap.overrideConfig | indent 4 }} +{{- else }} + jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:{{ .Values.jmx.port }}/jmxrmi + lowercaseOutputName: true + lowercaseOutputLabelNames: true + ssl: false + {{ if .Values.jmx.whitelistObjectNames }} + whitelistObjectNames: ["{{ join "\",\"" .Values.jmx.whitelistObjectNames }}"] + {{ end }} + rules: + - pattern: kafka.controller<>(Value) + name: kafka_controller_$1_$2_$4 + labels: + broker_id: "$3" + - pattern: kafka.controller<>(Value) + name: kafka_controller_$1_$2_$3 + - pattern: kafka.controller<>(Value) + name: kafka_controller_$1_$2_$3 + - pattern: kafka.controller<>(Count) + name: kafka_controller_$1_$2_$3 + - pattern: kafka.server<>(Value) + name: kafka_server_$1_$2_$4 + labels: + client_id: "$3" + - pattern : kafka.network<>(Value) + name: kafka_network_$1_$2_$4 + labels: + network_processor: $3 + - pattern : kafka.network<>(Count) + name: kafka_network_$1_$2_$4 + labels: + request: $3 + - pattern: kafka.server<>(Count|OneMinuteRate) + name: kafka_server_$1_$2_$4 + labels: + topic: $3 + - pattern: kafka.server<>(Value) + name: kafka_server_$1_$2_$3_$4 + - pattern: kafka.server<>(Count|Value|OneMinuteRate) + name: kafka_server_$1_total_$2_$3 + - pattern: kafka.server<>(queue-size) + name: kafka_server_$1_$2 + - pattern: java.lang<(.+)>(\w+) + name: java_lang_$1_$4_$3_$2 + - pattern: java.lang<>(\w+) + name: java_lang_$1_$3_$2 + - pattern : java.lang + - pattern: kafka.log<>Value + name: kafka_log_$1_$2 + labels: + topic: $3 + partition: $4 +{{- end }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/deployment-kafka-exporter.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/deployment-kafka-exporter.yaml new file mode 100644 index 0000000..d43aab1 
--- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/deployment-kafka-exporter.yaml @@ -0,0 +1,38 @@ +{{- if .Values.prometheus.kafka.enabled }} +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: {{ template "kafka.fullname" . }}-exporter + labels: + app: "{{ template "kafka.name" . }}" + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}" +spec: + replicas: 1 + selector: + matchLabels: + app: {{ template "kafka.name" . }}-exporter + release: {{ .Release.Name }} + template: + metadata: + annotations: +{{- if and .Values.prometheus.kafka.enabled (not .Values.prometheus.operator.enabled) }} + prometheus.io/scrape: "true" + prometheus.io/port: {{ .Values.prometheus.kafka.port | quote }} +{{- end }} + labels: + app: {{ template "kafka.name" . }}-exporter + release: {{ .Release.Name }} + spec: + containers: + - image: "{{ .Values.prometheus.kafka.image }}:{{ .Values.prometheus.kafka.imageTag }}" + name: kafka-exporter + args: + - --kafka.server={{ template "kafka.fullname" . }}:9092 + - --web.listen-address=:{{ .Values.prometheus.kafka.port }} + ports: + - containerPort: {{ .Values.prometheus.kafka.port }} + resources: +{{ toYaml .Values.prometheus.kafka.resources | indent 10 }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/job-config.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/job-config.yaml new file mode 100644 index 0000000..1bd747f --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/job-config.yaml @@ -0,0 +1,32 @@ +{{- if .Values.topics -}} +{{- $scriptHash := include (print $.Template.BasePath "/configmap-config.yaml") . | sha256sum | trunc 8 -}} +apiVersion: batch/v1 +kind: Job +metadata: + name: "{{ template "kafka.fullname" . }}-config-{{ $scriptHash }}" + labels: + app: {{ template "kafka.fullname" . 
}} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + heritage: "{{ .Release.Service }}" + release: "{{ .Release.Name }}" +spec: + template: + metadata: + labels: + app: {{ template "kafka.fullname" . }} + release: "{{ .Release.Name }}" + spec: + restartPolicy: Never + volumes: + - name: config-volume + configMap: + name: {{ template "kafka.fullname" . }}-config + defaultMode: 0744 + containers: + - name: {{ template "kafka.fullname" . }}-config + image: "{{ .Values.image }}:{{ .Values.imageTag }}" + command: ["/usr/local/script/runtimeConfig.sh"] + volumeMounts: + - name: config-volume + mountPath: "/usr/local/script" +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/rbac.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/rbac.yaml new file mode 100644 index 0000000..0173ab6 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/rbac.yaml @@ -0,0 +1,36 @@ +{{- if .Values.rbac.enabled }} +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ .Release.Name }} + namespace: {{ .Release.Namespace }} +--- +apiVersion: rbac.authorization.k8s.io/v1beta1 +kind: Role +metadata: + name: {{ .Release.Name }} + namespace: {{ .Release.Namespace }} +rules: +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - list + - patch +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1beta1 +metadata: + name: {{ .Release.Name }} +roleRef: + kind: Role + name: {{ .Release.Name }} + apiGroup: rbac.authorization.k8s.io +subjects: +- kind: ServiceAccount + name: {{ .Release.Name }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/service-brokers-external.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/service-brokers-external.yaml new file mode 100644 index 0000000..e8084f8 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/service-brokers-external.yaml @@ -0,0 +1,52 @@ +{{- if .Values.external.enabled }} 
+ {{- $fullName := include "kafka.fullname" . }} + {{- $replicas := .Values.replicas | int }} + {{- $servicePort := .Values.external.servicePort }} + {{- $dnsPrefix := printf "%s" .Release.Name }} + {{- $root := . }} + {{- range $i, $e := until $replicas }} + {{- $externalListenerPort := add $root.Values.external.firstListenerPort $i }} + {{- $responsiblePod := printf "%s-%d" (printf "%s" $fullName) $i }} + {{- $distinctPrefix := printf "%s-%d" $dnsPrefix $i }} +--- +apiVersion: v1 +kind: Service +metadata: + annotations: + {{- if $root.Values.external.distinct }} + dns.alpha.kubernetes.io/internal: "{{ $distinctPrefix }}.{{ $root.Values.external.domain }}" + external-dns.alpha.kubernetes.io/hostname: "{{ $distinctPrefix }}.{{ $root.Values.external.domain }}" + {{- else }} + dns.alpha.kubernetes.io/internal: "{{ $dnsPrefix }}.{{ $root.Values.external.domain }}" + external-dns.alpha.kubernetes.io/hostname: "{{ $dnsPrefix }}.{{ $root.Values.external.domain }}" + {{- end }} + {{- if $root.Values.external.annotations }} +{{ toYaml $root.Values.external.annotations | indent 4 }} + {{- end }} + name: {{ $root.Release.Name }}-{{ $i }}-external + labels: + app: {{ include "kafka.name" $root }} + chart: {{ $root.Chart.Name }}-{{ $root.Chart.Version }} + release: {{ $root.Release.Name }} + heritage: {{ $root.Release.Service }} + pod: {{ $responsiblePod | quote }} +spec: + type: {{ $root.Values.external.type }} + ports: + - name: external-broker + {{- if eq $root.Values.external.type "LoadBalancer" }} + port: {{ $externalListenerPort }} + {{- else }} + port: {{ $servicePort }} + {{- end }} + targetPort: {{ $externalListenerPort }} + {{- if eq $root.Values.external.type "NodePort" }} + nodePort: {{ $externalListenerPort }} + {{- end }} + protocol: TCP + selector: + app: {{ include "kafka.name" $root }} + release: {{ $root.Release.Name }} + pod: {{ $responsiblePod | quote }} + {{- end }} +{{- end }} diff --git 
a/src/seba_charts/cord-platform/charts/kafka/templates/service-brokers.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/service-brokers.yaml new file mode 100644 index 0000000..6748b45 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/service-brokers.yaml @@ -0,0 +1,44 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kafka.fullname" . }} + labels: + app: {{ include "kafka.name" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + ports: + - name: broker + port: 9092 + targetPort: kafka +{{- if and .Values.prometheus.jmx.enabled .Values.prometheus.operator.enabled }} + - name: jmx-exporter + protocol: TCP + port: {{ .Values.jmx.port }} + targetPort: prometheus +{{- end }} + selector: + app: {{ include "kafka.name" . }} + release: {{ .Release.Name }} +--- +{{- if and .Values.prometheus.kafka.enabled .Values.prometheus.operator.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kafka.fullname" . }}-exporter + labels: + app: {{ include "kafka.name" . }}-exporter + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + ports: + - name: kafka-exporter + protocol: TCP + port: {{ .Values.prometheus.kafka.port }} + targetPort: {{ .Values.prometheus.kafka.port }} + selector: + app: {{ include "kafka.name" . }}-exporter + release: {{ .Release.Name }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/service-headless.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/service-headless.yaml new file mode 100644 index 0000000..483f5b0 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/service-headless.yaml @@ -0,0 +1,25 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ include "kafka.fullname" . }}-headless + labels: + app: {{ include "kafka.name" . 
}} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} + annotations: + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" +{{- if .Values.headless.annotations }} +{{ .Values.headless.annotations | toYaml | trimSuffix "\n" | indent 4 }} +{{- end }} +spec: + ports: + - name: broker + port: {{ .Values.headless.port }} +{{- if .Values.headless.targetPort }} + targetPort: {{ .Values.headless.targetPort }} +{{- end }} + clusterIP: None + selector: + app: {{ include "kafka.name" . }} + release: {{ .Release.Name }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/servicemonitors.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/servicemonitors.yaml new file mode 100644 index 0000000..92eb125 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/servicemonitors.yaml @@ -0,0 +1,39 @@ +{{ if and .Values.prometheus.jmx.enabled .Values.prometheus.operator.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "kafka.fullname" . }} + namespace: {{ .Values.prometheus.operator.serviceMonitor.namespace }} + labels: +{{ toYaml .Values.prometheus.operator.serviceMonitor.selector | indent 4 }} +spec: + selector: + matchLabels: + app: {{ include "kafka.name" . }} + release: {{ .Release.Name }} + endpoints: + - port: jmx-exporter + interval: {{ .Values.prometheus.jmx.interval }} + namespaceSelector: + any: true +{{ end }} +--- +{{ if and .Values.prometheus.kafka.enabled .Values.prometheus.operator.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "kafka.fullname" . }}-exporter + namespace: {{ .Values.prometheus.operator.serviceMonitor.namespace }} + labels: +{{ toYaml .Values.prometheus.operator.serviceMonitor.selector | indent 4 }} +spec: + selector: + matchLabels: + app: {{ include "kafka.name" . 
}}-exporter + release: {{ .Release.Name }} + endpoints: + - port: kafka-exporter + interval: {{ .Values.prometheus.kafka.interval }} + namespaceSelector: + any: true +{{ end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/statefulset.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/statefulset.yaml new file mode 100644 index 0000000..e8c988d --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/statefulset.yaml @@ -0,0 +1,222 @@ +{{- $advertisedListenersOverride := first (pluck "advertised.listeners" .Values.configurationOverrides) }} +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + name: {{ include "kafka.fullname" . }} + labels: + app: {{ include "kafka.name" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + release: {{ .Release.Name }} + heritage: {{ .Release.Service }} +spec: + serviceName: {{ include "kafka.fullname" . }}-headless + podManagementPolicy: {{ .Values.podManagementPolicy }} + updateStrategy: +{{ toYaml .Values.updateStrategy | indent 4 }} + replicas: {{ default 3 .Values.replicas }} + template: + metadata: +{{- if and .Values.prometheus.jmx.enabled (not .Values.prometheus.operator.enabled) }} + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: {{ .Values.prometheus.jmx.port | quote }} +{{- end }} + labels: + app: {{ include "kafka.name" . 
}} + release: {{ .Release.Name }} + spec: +{{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" +{{- end }} +{{- if .Values.rbac.enabled }} + serviceAccountName: {{ .Release.Name }} +{{- end }} + {{- if .Values.external.enabled }} + ## ref: https://github.com/Yolean/kubernetes-kafka/blob/master/kafka/50kafka.yml + initContainers: + - name: init-ext + image: "{{ .Values.external.init.image }}:{{ .Values.external.init.imageTag }}" + imagePullPolicy: "{{ .Values.external.init.imagePullPolicy }}" + command: + - sh + - -euxc + - "kubectl label pods ${POD_NAME} --namespace ${POD_NAMESPACE} pod=${POD_NAME} --overwrite" + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + {{- end }} +{{- if .Values.tolerations }} + tolerations: +{{ toYaml .Values.tolerations | indent 8 }} +{{- end }} +{{- if .Values.affinity }} + affinity: +{{ toYaml .Values.affinity | indent 8 }} +{{- end }} +{{- if .Values.nodeSelector }} + nodeSelector: +{{ toYaml .Values.nodeSelector | indent 8 }} +{{- end }} + containers: + {{- if .Values.prometheus.jmx.enabled }} + - name: metrics + image: "{{ .Values.prometheus.jmx.image }}:{{ .Values.prometheus.jmx.imageTag }}" + command: + - sh + - -exc + - | + trap "exit 0" TERM; \ + while :; do \ + java \ + -XX:+UnlockExperimentalVMOptions \ + -XX:+UseCGroupMemoryLimitForHeap \ + -XX:MaxRAMFraction=1 \ + -XshowSettings:vm \ + -jar \ + jmx_prometheus_httpserver.jar \ + {{ .Values.prometheus.jmx.port | quote }} \ + /etc/jmx-kafka/jmx-kafka-prometheus.yml & \ + wait $! || sleep 3; \ + done + ports: + - containerPort: {{ .Values.prometheus.jmx.port }} + name: prometheus + resources: +{{ toYaml .Values.prometheus.jmx.resources | indent 10 }} + volumeMounts: + - name: jmx-config + mountPath: /etc/jmx-kafka + {{- end }} + - name: {{ include "kafka.name" . 
}}-broker + image: "{{ .Values.image }}:{{ .Values.imageTag }}" + imagePullPolicy: "{{ .Values.imagePullPolicy }}" + livenessProbe: + exec: + command: + - sh + - -ec + - /usr/bin/jps | /bin/grep -q SupportedKafka + {{- if not .Values.livenessProbe }} + initialDelaySeconds: 30 + timeoutSeconds: 5 + {{- else }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds | default 30}} + {{- if .Values.livenessProbe.periodSeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + {{- end }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds | default 5}} + {{- if .Values.livenessProbe.successThreshold }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + {{- end }} + {{- if .Values.livenessProbe.failureThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- end }} + {{- end }} + readinessProbe: + tcpSocket: + port: kafka + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + ports: + - containerPort: 9092 + name: kafka + {{- if .Values.external.enabled }} + {{- $replicas := .Values.replicas | int }} + {{- $root := . 
}} + {{- range $i, $e := until $replicas }} + - containerPort: {{ add $root.Values.external.firstListenerPort $i }} + name: external-{{ $i }} + {{- end }} + {{- end }} + {{- if .Values.prometheus.jmx.enabled }} + - containerPort: {{ .Values.jmx.port }} + name: jmx + {{- end }} + {{- if .Values.additionalPorts }} +{{ toYaml .Values.additionalPorts | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.resources | indent 10 }} + env: + {{- if .Values.prometheus.jmx.enabled }} + - name: JMX_PORT + value: "{{ .Values.jmx.port }}" + {{- end }} + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + - name: KAFKA_HEAP_OPTS + value: {{ .Values.kafkaHeapOptions }} + {{- if not (hasKey .Values.configurationOverrides "zookeeper.connect") }} + - name: KAFKA_ZOOKEEPER_CONNECT + value: {{ include "zookeeper.url" . | quote }} + {{- end }} + {{- if not (hasKey .Values.configurationOverrides "log.dirs") }} + - name: KAFKA_LOG_DIRS + value: {{ printf "%s/%s" .Values.persistence.mountPath .Values.logSubPath | quote }} + {{- end }} + {{- range $key, $value := .Values.configurationOverrides }} + - name: {{ printf "KAFKA_%s" $key | replace "." "_" | upper | quote }} + value: {{ $value | quote }} + {{- end }} + {{- if .Values.jmx.port }} + - name: KAFKA_JMX_PORT + value: "{{ .Values.jmx.port }}" + {{- end }} + # This is required because the Downward API does not yet support identification of + # pod numbering in statefulsets. Thus, we are required to specify a command which + # allows us to extract the pod ID for usage as the Kafka Broker ID. 
+ # See: https://github.com/kubernetes/kubernetes/issues/31218 + command: + - sh + - -exc + - | + unset KAFKA_PORT && \ + export KAFKA_BROKER_ID=${HOSTNAME##*-} && \ + export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092{{ if kindIs "string" $advertisedListenersOverride }}{{ printf ",%s" $advertisedListenersOverride }}{{ end }} && \ + exec /etc/confluent/docker/run + volumeMounts: + - name: datadir + mountPath: {{ .Values.persistence.mountPath | quote }} + volumes: + {{- if not .Values.persistence.enabled }} + - name: datadir + emptyDir: {} + {{- end }} + {{- if .Values.prometheus.jmx.enabled }} + - name: jmx-config + configMap: + {{- if .Values.jmx.configMap.overrideName }} + name: {{ .Values.jmx.configMap.overrideName }} + {{- else }} + name: {{ include "kafka.fullname" . }}-metrics + {{- end }} + {{- end }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- if .Values.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: datadir + spec: + accessModes: [ "ReadWriteOnce" ] + resources: + requests: + storage: {{ .Values.persistence.size }} + {{- if .Values.persistence.storageClass }} + {{- if (eq "-" .Values.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.persistence.storageClass }}" + {{- end }} + {{- end }} + {{- end }} diff --git a/src/seba_charts/cord-platform/charts/kafka/templates/tests/test_topic_create_consume_produce.yaml b/src/seba_charts/cord-platform/charts/kafka/templates/tests/test_topic_create_consume_produce.yaml new file mode 100644 index 0000000..5e7a7ea --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/templates/tests/test_topic_create_consume_produce.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: Pod +metadata: + name: "{{ .Release.Name }}-test-topic-create-consume-produce" + annotations: + "helm.sh/hook": test-success +spec: + containers: + - name: {{ .Release.Name }}-test-consume + image: {{ .Values.image }}:{{ .Values.imageTag 
}} + command: + - sh + - -c + - | + # Create the topic + kafka-topics --zookeeper {{ include "zookeeper.url" . }} --topic helm-test-topic-create-consume-produce --create --partitions 1 --replication-factor 1 --if-not-exists && \ + # Create a message + MESSAGE="`date -u`" && \ + # Produce a test message to the topic + echo "$MESSAGE" | kafka-console-producer --broker-list {{ include "kafka.fullname" . }}:9092 --topic helm-test-topic-create-consume-produce && \ + # Consume a test message from the topic + kafka-console-consumer --bootstrap-server {{ include "kafka.fullname" . }}-headless:9092 --topic helm-test-topic-create-consume-produce --from-beginning --timeout-ms 2000 --max-messages 1 | grep "$MESSAGE" + restartPolicy: Never diff --git a/src/seba_charts/cord-platform/charts/kafka/values.yaml b/src/seba_charts/cord-platform/charts/kafka/values.yaml new file mode 100644 index 0000000..75e6541 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/kafka/values.yaml @@ -0,0 +1,355 @@ +# ------------------------------------------------------------------------------ +# Kafka: +# ------------------------------------------------------------------------------ + +## The StatefulSet installs 3 pods by default +replicas: 3 + +## The kafka image repository +image: "akrainoenea/cp-kafka" + +## The kafka image tag +imageTag: "4.1.2-2" + +## Specify a imagePullPolicy +## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images +imagePullPolicy: "IfNotPresent" + +## Configure resource requests and limits +## ref: http://kubernetes.io/docs/user-guide/compute-resources/ +resources: {} + # limits: + # cpu: 200m + # memory: 1536Mi + # requests: + # cpu: 100m + # memory: 1024Mi +kafkaHeapOptions: "-Xmx1G -Xms1G" + +## The StatefulSet Update Strategy which Kafka will use when changes are applied. 
+## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies +updateStrategy: + type: "OnDelete" + +## Start and stop pods in Parallel or OrderedReady (one-by-one). Note - cannot be changed after the first release. +## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy +podManagementPolicy: OrderedReady + +## If RBAC is enabled on the cluster, the Kafka init container needs a service account +with permissions sufficient to apply pod labels +rbac: + enabled: false + +## The name of the storage class which the cluster should use. +# storageClass: default + +## The subpath within the Kafka container's PV where logs will be stored. +## This is combined with `persistence.mountPath`, to create, by default: /opt/kafka/data/logs +logSubPath: "logs" + +## Use an alternate scheduler, e.g. "stork". +## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +# schedulerName: + +## Pod scheduling preferences (by default keep pods within a release on separate nodes). 
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity +## By default we don't set affinity +affinity: {} +## Alternatively, this typical example defines: +## antiAffinity (to keep Kafka pods on separate nodes) +## and affinity (to encourage Kafka pods to be co-located with Zookeeper pods) +# affinity: +# podAntiAffinity: +# requiredDuringSchedulingIgnoredDuringExecution: +# - labelSelector: +# matchExpressions: +# - key: app +# operator: In +# values: +# - kafka +# topologyKey: "kubernetes.io/hostname" +# podAffinity: +# preferredDuringSchedulingIgnoredDuringExecution: +# - weight: 50 +# podAffinityTerm: +# labelSelector: +# matchExpressions: +# - key: app +# operator: In +# values: +# - zookeeper +# topologyKey: "kubernetes.io/hostname" + +## Node labels for pod assignment +## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector +nodeSelector: {} + +## Readiness probe config. +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ +## +readinessProbe: + initialDelaySeconds: 30 + periodSeconds: 10 + timeoutSeconds: 5 + successThreshold: 1 + failureThreshold: 3 + +## Period to wait for broker graceful shutdown (sigterm) before pod is killed (sigkill) +## ref: https://kubernetes-v1-4.github.io/docs/user-guide/production-pods/#lifecycle-hooks-and-termination-notice +## ref: https://kafka.apache.org/10/documentation.html#brokerconfigs controlled.shutdown.* +terminationGracePeriodSeconds: 60 + +# Tolerations for nodes that have taints on them. +# Useful if you want to dedicate nodes to just run Kafka +# https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +tolerations: [] +# tolerations: +# - key: "key" +# operator: "Equal" +# value: "value" +# effect: "NoSchedule" + +## Headless service. +## +headless: + # annotations: + # targetPort: + port: 9092 + +## External access. 
+## +external: + type: NodePort + # annotations: + # service.beta.kubernetes.io/openstack-internal-load-balancer: "true" + + # create an A record for each statefulset pod + distinct: false + enabled: false + servicePort: 19092 + firstListenerPort: 31090 + domain: cluster.local + init: + image: "iecedge/kubectl_deployer_arm64" + imageTag: "0.4" + imagePullPolicy: "IfNotPresent" + +## Configuration Overrides. Specify any Kafka settings you would like set on the StatefulSet +## here in map format, as defined in the official docs. +## ref: https://kafka.apache.org/documentation/#brokerconfigs +## +configurationOverrides: + "offsets.topic.replication.factor": 3 + # "auto.leader.rebalance.enable": true + # "auto.create.topics.enable": true + # "controlled.shutdown.enable": true + # "controlled.shutdown.max.retries": 100 + + ## Options required for external access via NodePort + ## ref: + ## - http://kafka.apache.org/documentation/#security_configbroker + ## - https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic + ## + ## Setting "advertised.listeners" here appends to "PLAINTEXT://${POD_IP}:9092," + # "advertised.listeners": |- + # EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID})) + # "listener.security.protocol.map": |- + # PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT + +## A collection of additional ports to expose on brokers (formatted as normal containerPort yaml) +# Useful when the image exposes metrics (like prometheus, etc.) through a javaagent instead of a sidecar +additionalPorts: {} + +## Persistence configuration. Specify if and how to persist data to a persistent volume. +## +persistence: + enabled: true + + ## The size of the PersistentVolume to allocate to each Kafka Pod in the StatefulSet. For + ## production servers this number should likely be much larger. + ## + size: "1Gi" + + ## The location within the Kafka container where the PV will mount its storage and Kafka will + ## store its logs. 
+  ##
+  mountPath: "/opt/kafka/data"
+
+  ## Kafka data Persistent Volume Storage Class
+  ## If defined, storageClassName: <storageClass>
+  ## If set to "-", storageClassName: "", which disables dynamic provisioning
+  ## If undefined (the default) or set to null, no storageClassName spec is
+  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
+  ##   GKE, AWS & OpenStack)
+  ##
+  # storageClass:
+
+jmx:
+  ## Rules to apply to the Prometheus JMX Exporter. Note that while many stats have been cleaned up and
+  ## exposed, there are still more to clean up and expose, and others will never be exposed. Kafka keeps
+  ## many duplicates that can be derived easily. The ConfigMap in this chart cleans up the metrics it exposes
+  ## to be in a Prometheus format, e.g. topic and broker are labels and not part of the metric name.
+  ## Improvements are gladly accepted and encouraged.
+  configMap:
+
+    ## Allows disabling the default ConfigMap; note that a ConfigMap is needed
+    enabled: true
+
+    ## Allows setting values to generate the ConfigMap
+    ## To allow all metrics through (warning: this is extremely verbose), comment out `overrideConfig` below and set
+    ## `whitelistObjectNames: []`
+    overrideConfig: {}
+    # jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
+    # lowercaseOutputName: true
+    # lowercaseOutputLabelNames: true
+    # ssl: false
+    # rules:
+    # - pattern: ".*"
+
+    ## If you would like to supply your own ConfigMap for JMX metrics, supply the name of that
+    ## ConfigMap as an `overrideName` here.
+    overrideName: ""
+
+  ## Port the JMX metrics are exposed on, in native JMX format (not in Prometheus format)
+  port: 5555
+
+  ## JMX Whitelist Objects, can be set to control which JMX metrics are exposed. Only whitelisted
+  ## values will be exposed via the JMX Exporter. They must also be exposed via Rules. To expose all metrics
+  ## (warning: this is extremely verbose and they aren't formatted in a Prometheus style), (1) set `whitelistObjectNames: []` and
+  ## (2) comment out `overrideConfig` above.
+  whitelistObjectNames: # []
+  - kafka.controller:*
+  - kafka.server:*
+  - java.lang:*
+  - kafka.network:*
+  - kafka.log:*
+
+## Prometheus Exporters / Metrics
+##
+prometheus:
+  ## Prometheus JMX Exporter: exposes the majority of Kafka's metrics
+  jmx:
+    enabled: false
+
+    ## The image to use for the metrics collector
+    image: iecedge/kafka-prometheus-jmx-exporter_arm64
+
+    ## The image tag to use for the metrics collector
+    imageTag: misc-dockerfiles
+
+    ## Interval at which Prometheus scrapes metrics; note: only used by Prometheus Operator
+    interval: 10s
+
+    ## Port jmx-exporter exposes Prometheus format metrics to scrape
+    port: 5556
+
+    resources: {}
+    # limits:
+    #   cpu: 200m
+    #   memory: 1Gi
+    # requests:
+    #   cpu: 100m
+    #   memory: 100Mi
+
+  ## Prometheus Kafka Exporter: exposes complementary metrics to the JMX Exporter
+  kafka:
+    enabled: false
+
+    ## The image to use for the metrics collector
+    image: iecedge/kafka-exporter_arm64
+
+    ## The image tag to use for the metrics collector
+    imageTag: v1.2.0
+
+    ## Interval at which Prometheus scrapes metrics; note: only used by Prometheus Operator
+    interval: 10s
+
+    ## Port kafka-exporter exposes for Prometheus to scrape metrics
+    port: 9308
+
+    ## Resource limits
+    resources: {}
+# limits:
+#   cpu: 200m
+#   memory: 1Gi
+# requests:
+#   cpu: 100m
+#   memory: 100Mi
+
+  operator:
+    ## Are you using Prometheus Operator?
+    enabled: false
+
+    serviceMonitor:
+      # Namespace Prometheus is installed in
+      namespace: monitoring
+
+      ## Defaults to what's used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
+      ## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
+      ## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
+      selector:
+        prometheus: kube-prometheus
+
+## Topic creation and configuration.
+## The job will be run on a deployment only when the config has been changed. +## - If 'partitions' and 'replicationFactor' are specified we create the topic (with --if-not-exists.) +## - If 'partitions' is specified we 'alter' the number of partitions. This will +## silently and safely fail if the new setting isn’t strictly larger than the old (i.e. a NOOP.) Do be aware of the +## implications for keyed topics (ref: https://docs.confluent.io/current/kafka/post-deployment.html#admin-operations) +## - If 'defaultConfig' is specified it's deleted from the topic configuration. If it isn't present, +## it will silently and safely fail. +## - If 'config' is specified it's added to the topic configuration. +## +topics: [] + # - name: myExistingTopicConfig + # config: "cleanup.policy=compact,delete.retention.ms=604800000" + # - name: myExistingTopicPartitions + # partitions: 8 + # - name: myNewTopicWithConfig + # partitions: 8 + # replicationFactor: 3 + # defaultConfig: "segment.bytes,segment.ms" + # config: "cleanup.policy=compact,delete.retention.ms=604800000" + +# ------------------------------------------------------------------------------ +# Zookeeper: +# ------------------------------------------------------------------------------ + +zookeeper: + ## If true, install the Zookeeper chart alongside Kafka + ## ref: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper + enabled: true + + ## Configure Zookeeper resource requests and limits + ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ + resources: ~ + + ## Environmental variables to set in Zookeeper + env: + ## The JVM heap size to allocate to Zookeeper + ZK_HEAP_SIZE: "1G" + + persistence: + enabled: false + ## The amount of PV storage allocated to each Zookeeper pod in the statefulset + # size: "2Gi" + + ## Specify a Zookeeper imagePullPolicy + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + image: + PullPolicy: "IfNotPresent" + + ## If the Zookeeper Chart 
is disabled, a URL and port are required to connect
+  url: ""
+  port: 2181
+
+  ## Pod scheduling preferences (by default keep pods within a release on separate nodes).
+  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+  ## By default we don't set affinity:
+  affinity: {}  # Criteria by which pod label-values influence scheduling for zookeeper pods.
+  # podAntiAffinity:
+  #   requiredDuringSchedulingIgnoredDuringExecution:
+  #   - topologyKey: "kubernetes.io/hostname"
+  #     labelSelector:
+  #       matchLabels:
+  #         release: zookeeper
diff --git a/src/seba_charts/cord-platform/charts/logging/.helmignore b/src/seba_charts/cord-platform/charts/logging/.helmignore
new file mode 100644
index 0000000..f0c1319
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/logging/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/src/seba_charts/cord-platform/charts/logging/Chart.yaml b/src/seba_charts/cord-platform/charts/logging/Chart.yaml
new file mode 100644
index 0000000..1320da8
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/logging/Chart.yaml
@@ -0,0 +1,4 @@
+description: Sets up log aggregation infrastructure in Kubernetes, with elasticstack
+  and kibana
+name: logging
+version: 1.0.0
diff --git a/src/seba_charts/cord-platform/charts/logging/README.md b/src/seba_charts/cord-platform/charts/logging/README.md
new file mode 100644
index 0000000..4cdd5e6
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/logging/README.md
@@ -0,0 +1,22 @@
+# Logging
+
+This chart implements a log aggregation framework built on elasticsearch within
+kubernetes.
+
+It requires persistent storage, and currently has default values for the
+`local-provisioner` with storage on each k8s node.
+
+Once these prereqs are satisfied, it can be run with:
+
+    helm install -n logging logging
+
+(NOTE: the name must be `logging` currently, or name lookups within the pod are broken)
+
+## Current log sources
+
+- Container logs from k8s with [fluentd-elasticsearch](https://github.com/helm/charts/tree/master/stable/fluentd-elasticsearch)
+
+## Using Kibana
+
+Visit: http://<node-ip>:30601
+
diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/.helmignore b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/.helmignore
new file mode 100644
index 0000000..f225651
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/.helmignore
@@ -0,0 +1,3 @@
+.git
+# OWNERS file for Kubernetes
+OWNERS
\ No newline at end of file
diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/Chart.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/Chart.yaml
new file mode 100644
index 0000000..764953b
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/Chart.yaml
@@ -0,0 +1,21 @@
+appVersion: 6.4.2
+description: Flexible and powerful open source, distributed real-time search and analytics
+  engine.
+home: https://www.elastic.co/products/elasticsearch
+icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
+maintainers:
+- email: christian@jetstack.io
+  name: simonswine
+- email: michael.haselton@gmail.com
+  name: icereval
+- email: pete.brown@powerhrg.com
+  name: rendhalver
+name: elasticsearch
+sources:
+- https://www.elastic.co/products/elasticsearch
+- https://github.com/jetstack/elasticsearch-pet
+- https://github.com/giantswarm/kubernetes-elastic-stack
+- https://github.com/GoogleCloudPlatform/elasticsearch-docker
+- https://github.com/clockworksoul/helm-elasticsearch
+- https://github.com/pires/kubernetes-elasticsearch-cluster
+version: 1.11.0
diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/README.md b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/README.md
new file mode 100644
index 0000000..7d7345f
--- /dev/null
+++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/README.md
@@ -0,0 +1,220 @@
+# Elasticsearch Helm Chart

+This chart uses a standard Docker image of Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch-oss) and uses a service pointing to the master's transport port for service discovery.
+Elasticsearch does not communicate with the Kubernetes API, hence no need for RBAC permissions.
+
+## Warning for previous users
+If you are currently using an earlier version of this Chart, you will need to redeploy your Elasticsearch clusters. The discovery method used here is incompatible with using RBAC.
+If you are upgrading to Elasticsearch 6 from the 5.5 version previously used in this chart, please note that your cluster needs a full cluster restart.
+The simplest way to do that is to delete the installation (keep the PVs) and install this chart again with the new version.
+If you want to avoid that, upgrade to Elasticsearch 5.6 first before moving on to Elasticsearch 6.0.
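
The "delete and reinstall, keeping the PVs" procedure can be sketched with the chart's own install/delete flow. This is a minimal, untested sketch assuming a Helm 2 client and an example release name `my-release`; adjust names and versions to your deployment:

```bash
# Delete the release; PVC deletion does not cascade, so the data
# PersistentVolumeClaims (and their PVs) survive. --purge frees the
# release name so it can be reused on reinstall.
$ helm delete --purge my-release

# Reinstall at the new Elasticsearch version; the recreated StatefulSets
# re-bind the existing PVCs by name.
$ helm install --name my-release stable/elasticsearch \
    --set appVersion=6.4.2,image.tag=6.4.2
```

`appVersion` and `image.tag` here are the chart parameters listed in the Configuration table of this README.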
+
+## Prerequisites Details
+
+* Kubernetes 1.6+
+* PV dynamic provisioning support on the underlying infrastructure
+
+## StatefulSets Details
+* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
+
+## StatefulSets Caveats
+* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
+
+## Todo
+
+* Implement TLS/Auth/Security
+* Smarter upscaling/downscaling
+* Solution for memory locking
+
+## Chart Details
+This chart will do the following:
+
+* Implement a dynamically scalable Elasticsearch cluster using Kubernetes StatefulSets/Deployments
+* Multi-role deployment: master, client (coordinating) and data nodes
+* StatefulSets support scaling down without degrading the cluster
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+$ helm install --name my-release stable/elasticsearch
+```
+
+## Deleting the Charts
+
+Delete the Helm deployment as normal
+
+```
+$ helm delete my-release
+```
+
+Deletion of the StatefulSet doesn't cascade to deleting associated PVCs. To delete them:
+
+```
+$ kubectl delete pvc -l release=my-release,component=data
+```
+
+## Configuration
+
+The following table lists the configurable parameters of the elasticsearch chart and their default values.
+ +| Parameter | Description | Default | +| ------------------------------------ | ------------------------------------------------------------------- | --------------------------------------------------- | +| `appVersion` | Application Version (Elasticsearch) | `6.4.2` | +| `image.repository` | Container image name | `docker.elastic.co/elasticsearch/elasticsearch-oss` | +| `image.tag` | Container image tag | `6.4.2` | +| `image.pullPolicy` | Container pull policy | `IfNotPresent` | +| `initImage.repository` | Init container image name | `busybox` | +| `initImage.tag` | Init container image tag | `latest` | +| `initImage.pullPolicy` | Init container pull policy | `Always` | +| `cluster.name` | Cluster name | `elasticsearch` | +| `cluster.xpackEnable` | Writes the X-Pack configuration options to the configuration file | `false` | +| `cluster.config` | Additional cluster config appended | `{}` | +| `cluster.keystoreSecret` | Name of secret holding secure config options in an es keystore | `nil` | +| `cluster.env` | Cluster environment variables | `{MINIMUM_MASTER_NODES: "2"}` | +| `cluster.additionalJavaOpts` | Cluster parameters to be added to `ES_JAVA_OPTS` environment variable | `""` | +| `client.name` | Client component name | `client` | +| `client.replicas` | Client node replicas (deployment) | `2` | +| `client.resources` | Client node resources requests & limits | `{} - cpu limit must be an integer` | +| `client.priorityClassName` | Client priorityClass | `nil` | +| `client.heapSize` | Client node heap size | `512m` | +| `client.podAnnotations` | Client Deployment annotations | `{}` | +| `client.nodeSelector` | Node labels for client pod assignment | `{}` | +| `client.tolerations` | Client tolerations | `[]` | +| `client.serviceAnnotations` | Client Service annotations | `{}` | +| `client.serviceType` | Client service type | `ClusterIP` | +| `client.loadBalancerIP` | Client loadBalancerIP | `{}` | +| `client.loadBalancerSourceRanges` | Client 
loadBalancerSourceRanges | `{}` | +| `client.antiAffinity` | Client anti-affinity policy | `soft` | +| `client.nodeAffinity` | Client node affinity policy | `{}` | +| `master.exposeHttp` | Expose http port 9200 on master Pods for monitoring, etc | `false` | +| `master.name` | Master component name | `master` | +| `master.replicas` | Master node replicas (deployment) | `2` | +| `master.resources` | Master node resources requests & limits | `{} - cpu limit must be an integer` | +| `master.priorityClassName` | Master priorityClass | `nil` | +| `master.podAnnotations` | Master Deployment annotations | `{}` | +| `master.nodeSelector` | Node labels for master pod assignment | `{}` | +| `master.tolerations` | Master tolerations | `[]` | +| `master.heapSize` | Master node heap size | `512m` | +| `master.name` | Master component name | `master` | +| `master.persistence.enabled` | Master persistent enabled/disabled | `true` | +| `master.persistence.name` | Master statefulset PVC template name | `data` | +| `master.persistence.size` | Master persistent volume size | `4Gi` | +| `master.persistence.storageClass` | Master persistent volume Class | `nil` | +| `master.persistence.accessMode` | Master persistent Access Mode | `ReadWriteOnce` | +| `master.antiAffinity` | Master anti-affinity policy | `soft` | +| `master.nodeAffinity` | Master node affinity policy | `{}` | +| `data.exposeHttp` | Expose http port 9200 on data Pods for monitoring, etc | `false` | +| `data.replicas` | Data node replicas (statefulset) | `2` | +| `data.resources` | Data node resources requests & limits | `{} - cpu limit must be an integer` | +| `data.priorityClassName` | Data priorityClass | `nil` | +| `data.heapSize` | Data node heap size | `1536m` | +| `data.persistence.enabled` | Data persistent enabled/disabled | `true` | +| `data.persistence.name` | Data statefulset PVC template name | `data` | +| `data.persistence.size` | Data persistent volume size | `30Gi` | +| `data.persistence.storageClass` | 
Data persistent volume Class | `nil` |
+| `data.persistence.accessMode` | Data persistent Access Mode | `ReadWriteOnce` |
+| `data.podAnnotations` | Data StatefulSet annotations | `{}` |
+| `data.nodeSelector` | Node labels for data pod assignment | `{}` |
+| `data.tolerations` | Data tolerations | `[]` |
+| `data.terminationGracePeriodSeconds` | Data termination grace period (seconds) | `3600` |
+| `data.antiAffinity` | Data anti-affinity policy | `soft` |
+| `data.nodeAffinity` | Data node affinity policy | `{}` |
+| `extraInitContainers` | Additional init containers passed through the tpl | `` |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
+
+In terms of memory resources, you should make sure that you follow this inequality:
+
+- `${role}HeapSize < ${role}MemoryRequests < ${role}MemoryLimits`
+
+The YAML value of cluster.config is appended to the elasticsearch.yml file for additional customization (for example, "script.inline: on" to allow inline scripting)
+
+# Deep dive
+
+## Application Version
+
+This chart aims to support Elasticsearch v2 to v6 deployments by specifying the `values.yaml` parameter `appVersion`.
+
+### Version Specific Features
+
+* Memory Locking *(variable renamed)*
+* Ingest Node *(v5)*
+* X-Pack Plugin *(v5)*
+
+Upgrade paths & more info: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html
+
+## Mlocking
+
+This is currently a limitation in Kubernetes: there is no way to raise the
+limit on lockable memory so that these memory areas won't be swapped out,
+which would degrade performance heavily. The issue is tracked in
+[kubernetes/#3595](https://github.com/kubernetes/kubernetes/issues/3595).
+
+```
+[WARN ][bootstrap] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
+[WARN ][bootstrap] This can result in part of the JVM being swapped out.
+[WARN ][bootstrap] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
+```
+
+## Minimum Master Nodes
+> The minimum_master_nodes setting is extremely important to the stability of your cluster. This setting helps prevent split brains, the existence of two masters in a single cluster.
+
+> When you have a split brain, your cluster is in danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.
+
+> This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.
+
+> This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1
+
+More info: https://www.elastic.co/guide/en/elasticsearch/guide/1.x/_important_configuration_changes.html#_minimum_master_nodes
+
+# Client and Coordinating Nodes
+
+Elasticsearch v5 terminology has been updated, and now refers to a `Client Node` as a `Coordinating Node`.
+
+More info: https://www.elastic.co/guide/en/elasticsearch/reference/5.5/modules-node.html#coordinating-node
+
+## Enabling Elasticsearch internal monitoring
+Requires version 6.3+ and the standard (non-`oss`) image repository. Starting with 6.3, X-Pack is partially free and enabled by default. You need to set a new config to enable the collection of these internal metrics.
(https://www.elastic.co/guide/en/elasticsearch/reference/6.3/monitoring-settings.html)
+
+To do this through this Helm chart, override the following three values:
+```
+image.repository: docker.elastic.co/elasticsearch/elasticsearch
+cluster.xpackEnable: true
+cluster.env.XPACK_MONITORING_ENABLED: true
+```
+
+Note: to see these changes, you will also need to update your Kibana repo to `image.repository: docker.elastic.co/kibana/kibana` instead of the `oss` version
+
+
+## Select the right storage class for SSD volumes
+
+### GCE + Kubernetes 1.5
+
+Create a StorageClass for SSD-PD
+
+```
+$ kubectl create -f - < >(tee -a "/var/log/elasticsearch-hooks.log")
+    NODE_NAME=${HOSTNAME}
+    echo "Prepare to migrate data of the node ${NODE_NAME}"
+    echo "Move all data from node ${NODE_NAME}"
+    curl -s -XPUT -H 'Content-Type: application/json' '{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings' -d "{
+      \"transient\" :{
+          \"cluster.routing.allocation.exclude._name\" : \"${NODE_NAME}\"
+      }
+    }"
+    echo ""
+
+    while true ; do
+      echo -e "Wait for node ${NODE_NAME} to become empty"
+      SHARDS_ALLOCATION=$(curl -s -XGET 'http://{{ template "elasticsearch.client.fullname" . }}:9200/_cat/shards')
+      if ! echo "${SHARDS_ALLOCATION}" | grep -E "${NODE_NAME}"; then
+          break
+      fi
+      sleep 1
+    done
+    echo "Node ${NODE_NAME} is ready to shutdown"
+  post-start-hook.sh: |-
+    #!/bin/bash
+    exec &> >(tee -a "/var/log/elasticsearch-hooks.log")
+    NODE_NAME=${HOSTNAME}
+    CLUSTER_SETTINGS=$(curl -s -XGET "http://{{ template "elasticsearch.client.fullname" . }}:9200/_cluster/settings")
+    if echo "${CLUSTER_SETTINGS}" | grep -E "${NODE_NAME}"; then
+      echo "Activate node ${NODE_NAME}"
+      curl -s -XPUT -H 'Content-Type: application/json' "http://{{ template "elasticsearch.client.fullname" .
}}:9200/_cluster/settings" -d "{ + \"transient\" :{ + \"cluster.routing.allocation.exclude._name\" : null + } + }" + fi + echo "Node ${NODE_NAME} is ready to be used" diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/data-pdb.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/data-pdb.yaml new file mode 100644 index 0000000..54e91c7 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/data-pdb.yaml @@ -0,0 +1,24 @@ +{{- if .Values.data.podDisruptionBudget.enabled }} +apiVersion: policy/v1beta1 +kind: PodDisruptionBudget +metadata: + labels: + app: {{ template "elasticsearch.name" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + component: "{{ .Values.data.name }}" + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "elasticsearch.data.fullname" . }} +spec: +{{- if .Values.data.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.data.podDisruptionBudget.minAvailable }} +{{- end }} +{{- if .Values.data.podDisruptionBudget.maxUnavailable }} + maxUnavailable: {{ .Values.data.podDisruptionBudget.maxUnavailable }} +{{- end }} + selector: + matchLabels: + app: {{ template "elasticsearch.name" . }} + component: "{{ .Values.data.name }}" + release: {{ .Release.Name }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/data-statefulset.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/data-statefulset.yaml new file mode 100644 index 0000000..d7ae76d --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/data-statefulset.yaml @@ -0,0 +1,198 @@ +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + labels: + app: {{ template "elasticsearch.name" . 
}} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + component: "{{ .Values.data.name }}" + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "elasticsearch.data.fullname" . }} +spec: + serviceName: {{ template "elasticsearch.data.fullname" . }} + replicas: {{ .Values.data.replicas }} + template: + metadata: + labels: + app: {{ template "elasticsearch.name" . }} + component: "{{ .Values.data.name }}" + release: {{ .Release.Name }} + {{- if .Values.data.podAnnotations }} + annotations: +{{ toYaml .Values.data.podAnnotations | indent 8 }} + {{- end }} + spec: +{{- if .Values.data.priorityClassName }} + priorityClassName: "{{ .Values.data.priorityClassName }}" +{{- end }} + securityContext: + fsGroup: 1000 + {{- if or .Values.data.antiAffinity .Values.data.nodeAffinity }} + affinity: + {{- end }} + {{- if eq .Values.data.antiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: "kubernetes.io/hostname" + labelSelector: + matchLabels: + app: "{{ template "elasticsearch.name" . }}" + release: "{{ .Release.Name }}" + component: "{{ .Values.data.name }}" + {{- else if eq .Values.data.antiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + app: "{{ template "elasticsearch.name" . }}" + release: "{{ .Release.Name }}" + component: "{{ .Values.data.name }}" + {{- end }} + {{- with .Values.data.nodeAffinity }} + nodeAffinity: +{{ toYaml . 
| indent 10 }} + {{- end }} +{{- if .Values.data.nodeSelector }} + nodeSelector: +{{ toYaml .Values.data.nodeSelector | indent 8 }} +{{- end }} +{{- if .Values.data.tolerations }} + tolerations: +{{ toYaml .Values.data.tolerations | indent 8 }} +{{- end }} + initContainers: + # see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html + # and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall + - name: "sysctl" + image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}" + imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }} + command: ["sysctl", "-w", "vm.max_map_count=262144"] + securityContext: + privileged: true + - name: "chown" + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + command: + - /bin/bash + - -c + - chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && + chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/logs + securityContext: + runAsUser: 0 + volumeMounts: + - mountPath: /usr/share/elasticsearch/data + name: data +{{- if .Values.extraInitContainers }} +{{ tpl .Values.extraInitContainers . | indent 6 }} +{{- end }} + containers: + - name: elasticsearch + env: + - name: DISCOVERY_SERVICE + value: {{ template "elasticsearch.fullname" . 
}}-discovery + - name: NODE_MASTER + value: "false" + - name: PROCESSORS + valueFrom: + resourceFieldRef: + resource: limits.cpu + - name: ES_JAVA_OPTS + value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.data.heapSize }} -Xmx{{ .Values.data.heapSize }} {{ .Values.cluster.additionalJavaOpts }}" + {{- range $key, $value := .Values.cluster.env }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + ports: + - containerPort: 9300 + name: transport +{{ if .Values.data.exposeHttp }} + - containerPort: 9200 + name: http +{{ end }} + resources: +{{ toYaml .Values.data.resources | indent 12 }} + readinessProbe: + httpGet: + path: /_cluster/health?local=true + port: 9200 + initialDelaySeconds: 5 + volumeMounts: + - mountPath: /usr/share/elasticsearch/data + name: data + - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml + name: config + subPath: elasticsearch.yml +{{- if hasPrefix "2." .Values.image.tag }} + - mountPath: /usr/share/elasticsearch/config/logging.yml + name: config + subPath: logging.yml +{{- end }} +{{- if hasPrefix "5." 
.Values.image.tag }} + - mountPath: /usr/share/elasticsearch/config/log4j2.properties + name: config + subPath: log4j2.properties +{{- end }} + - name: config + mountPath: /pre-stop-hook.sh + subPath: pre-stop-hook.sh + - name: config + mountPath: /post-start-hook.sh + subPath: post-start-hook.sh +{{- if .Values.cluster.keystoreSecret }} + - name: keystore + mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore" + subPath: elasticsearch.keystore + readOnly: true +{{- end }} + lifecycle: + preStop: + exec: + command: ["/bin/bash","/pre-stop-hook.sh"] + postStart: + exec: + command: ["/bin/bash","/post-start-hook.sh"] + terminationGracePeriodSeconds: {{ .Values.data.terminationGracePeriodSeconds }} +{{- if .Values.image.pullSecrets }} + imagePullSecrets: + {{- range $pullSecret := .Values.image.pullSecrets }} + - name: {{ $pullSecret }} + {{- end }} +{{- end }} + volumes: + - name: config + configMap: + name: {{ template "elasticsearch.fullname" . }} +{{- if .Values.cluster.keystoreSecret }} + - name: keystore + secret: + secretName: {{ .Values.cluster.keystoreSecret }} +{{- end }} + {{- if not .Values.data.persistence.enabled }} + - name: data + emptyDir: {} + {{- end }} + updateStrategy: + type: {{ .Values.data.updateStrategy.type }} + {{- if .Values.data.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: {{ .Values.data.persistence.name }} + spec: + accessModes: + - {{ .Values.data.persistence.accessMode | quote }} + {{- if .Values.data.persistence.storageClass }} + {{- if (eq "-" .Values.data.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.data.persistence.storageClass }}" + {{- end }} + {{- end }} + resources: + requests: + storage: "{{ .Values.data.persistence.size }}" + {{- end }} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-pdb.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-pdb.yaml new 
file mode 100644 index 0000000..c3efe83 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-pdb.yaml @@ -0,0 +1,24 @@ +{{- if .Values.master.podDisruptionBudget.enabled }} +apiVersion: policy/v1beta1 +kind: PodDisruptionBudget +metadata: + labels: + app: {{ template "elasticsearch.name" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + component: "{{ .Values.master.name }}" + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "elasticsearch.master.fullname" . }} +spec: +{{- if .Values.master.podDisruptionBudget.minAvailable }} + minAvailable: {{ .Values.master.podDisruptionBudget.minAvailable }} +{{- end }} +{{- if .Values.master.podDisruptionBudget.maxUnavailable }} + maxUnavailable: {{ .Values.master.podDisruptionBudget.maxUnavailable }} +{{- end }} + selector: + matchLabels: + app: {{ template "elasticsearch.name" . }} + component: "{{ .Values.master.name }}" + release: {{ .Release.Name }} +{{- end }} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-statefulset.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-statefulset.yaml new file mode 100644 index 0000000..6530b00 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-statefulset.yaml @@ -0,0 +1,188 @@ +apiVersion: apps/v1beta1 +kind: StatefulSet +metadata: + labels: + app: {{ template "elasticsearch.name" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + component: "{{ .Values.master.name }}" + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "elasticsearch.master.fullname" . }} +spec: + serviceName: {{ template "elasticsearch.master.fullname" . }} + replicas: {{ .Values.master.replicas }} + template: + metadata: + labels: + app: {{ template "elasticsearch.name" . 
}} + component: "{{ .Values.master.name }}" + release: {{ .Release.Name }} + {{- if .Values.master.podAnnotations }} + annotations: +{{ toYaml .Values.master.podAnnotations | indent 8 }} + {{- end }} + spec: +{{- if .Values.master.priorityClassName }} + priorityClassName: "{{ .Values.master.priorityClassName }}" +{{- end }} + securityContext: + fsGroup: 1000 + {{- if or .Values.master.antiAffinity .Values.master.nodeAffinity }} + affinity: + {{- end }} + {{- if eq .Values.master.antiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: "kubernetes.io/hostname" + labelSelector: + matchLabels: + app: "{{ template "elasticsearch.name" . }}" + release: "{{ .Release.Name }}" + component: "{{ .Values.master.name }}" + {{- else if eq .Values.master.antiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + app: "{{ template "elasticsearch.name" . }}" + release: "{{ .Release.Name }}" + component: "{{ .Values.master.name }}" + {{- end }} + {{- with .Values.master.nodeAffinity }} + nodeAffinity: +{{ toYaml . 
| indent 10 }} + {{- end }} +{{- if .Values.master.nodeSelector }} + nodeSelector: +{{ toYaml .Values.master.nodeSelector | indent 8 }} +{{- end }} +{{- if .Values.master.tolerations }} + tolerations: +{{ toYaml .Values.master.tolerations | indent 8 }} +{{- end }} + initContainers: + # see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html + # and https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html#mlockall + - name: "sysctl" + image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}" + imagePullPolicy: {{ .Values.initImage.pullPolicy | quote }} + command: ["sysctl", "-w", "vm.max_map_count=262144"] + securityContext: + privileged: true + - name: "chown" + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + command: + - /bin/bash + - -c + - chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data && + chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/logs + securityContext: + runAsUser: 0 + volumeMounts: + - mountPath: /usr/share/elasticsearch/data + name: data +{{- if .Values.extraInitContainers }} +{{ tpl .Values.extraInitContainers . | indent 6 }} +{{- end }} + containers: + - name: elasticsearch + env: + - name: NODE_DATA + value: "false" +{{- if hasPrefix "5." .Values.appVersion }} + - name: NODE_INGEST + value: "false" +{{- end }} + - name: DISCOVERY_SERVICE + value: {{ template "elasticsearch.fullname" . 
}}-discovery + - name: PROCESSORS + valueFrom: + resourceFieldRef: + resource: limits.cpu + - name: ES_JAVA_OPTS + value: "-Djava.net.preferIPv4Stack=true -Xms{{ .Values.master.heapSize }} -Xmx{{ .Values.master.heapSize }} {{ .Values.cluster.additionalJavaOpts }}" + {{- range $key, $value := .Values.cluster.env }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + resources: +{{ toYaml .Values.master.resources | indent 12 }} + readinessProbe: + httpGet: + path: /_cluster/health?local=true + port: 9200 + initialDelaySeconds: 5 + image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" + imagePullPolicy: {{ .Values.image.pullPolicy | quote }} + ports: + - containerPort: 9300 + name: transport +{{ if .Values.master.exposeHttp }} + - containerPort: 9200 + name: http +{{ end }} + volumeMounts: + - mountPath: /usr/share/elasticsearch/data + name: data + - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml + name: config + subPath: elasticsearch.yml +{{- if hasPrefix "2." .Values.image.tag }} + - mountPath: /usr/share/elasticsearch/config/logging.yml + name: config + subPath: logging.yml +{{- end }} +{{- if hasPrefix "5." .Values.image.tag }} + - mountPath: /usr/share/elasticsearch/config/log4j2.properties + name: config + subPath: log4j2.properties +{{- end }} +{{- if .Values.cluster.keystoreSecret }} + - name: keystore + mountPath: "/usr/share/elasticsearch/config/elasticsearch.keystore" + subPath: elasticsearch.keystore + readOnly: true +{{- end }} +{{- if .Values.image.pullSecrets }} + imagePullSecrets: + {{- range $pullSecret := .Values.image.pullSecrets }} + - name: {{ $pullSecret }} + {{- end }} +{{- end }} + volumes: + - name: config + configMap: + name: {{ template "elasticsearch.fullname" . 
}} +{{- if .Values.cluster.keystoreSecret }} + - name: keystore + secret: + secretName: {{ .Values.cluster.keystoreSecret }} +{{- end }} + {{- if not .Values.master.persistence.enabled }} + - name: data + emptyDir: {} + {{- end }} + updateStrategy: + type: {{ .Values.master.updateStrategy.type }} + {{- if .Values.master.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: {{ .Values.master.persistence.name }} + spec: + accessModes: + - {{ .Values.master.persistence.accessMode | quote }} + {{- if .Values.master.persistence.storageClass }} + {{- if (eq "-" .Values.master.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.master.persistence.storageClass }}" + {{- end }} + {{- end }} + resources: + requests: + storage: "{{ .Values.master.persistence.size }}" + {{ end }} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-svc.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-svc.yaml new file mode 100644 index 0000000..5db28b7 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/templates/master-svc.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Service +metadata: + labels: + app: {{ template "elasticsearch.name" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + component: "{{ .Values.master.name }}" + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "elasticsearch.fullname" . }}-discovery +spec: + clusterIP: None + ports: + - port: 9300 + targetPort: transport + selector: + app: {{ template "elasticsearch.name" . 
}} + component: "{{ .Values.master.name }}" + release: {{ .Release.Name }} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/values.yaml b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/values.yaml new file mode 100644 index 0000000..1cafe19 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/elasticsearch/values.yaml @@ -0,0 +1,134 @@ +# Default values for elasticsearch. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. +appVersion: "6.4.2" + +image: + repository: "akrainoenea/elasticsearch-oss" + tag: "6.4.2" + pullPolicy: "IfNotPresent" + # If specified, use these secrets to access the image + # pullSecrets: + # - registry-secret + +initImage: + repository: "busybox" + tag: "latest" + pullPolicy: "Always" + +cluster: + name: "elasticsearch" + # If you want X-Pack installed, switch to an image that includes it, enable this option and toggle the features you want + # enabled in the environment variables outlined in the README + xpackEnable: false + # Some settings must be placed in a keystore, so they need to be mounted in from a secret. + # Use this setting to specify the name of the secret + # keystoreSecret: eskeystore + config: {} + # Custom parameters, as string, to be added to ES_JAVA_OPTS environment variable + additionalJavaOpts: "" + env: + # IMPORTANT: https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html#minimum_master_nodes + # To prevent data loss, it is vital to configure the discovery.zen.minimum_master_nodes setting so that each master-eligible + # node knows the minimum number of master-eligible nodes that must be visible in order to form a cluster. 
+ MINIMUM_MASTER_NODES: "2" + +client: + name: client + replicas: 2 + serviceType: ClusterIP + loadBalancerIP: {} + loadBalancerSourceRanges: {} +## (dict) If specified, apply these annotations to the client service +# serviceAnnotations: +# example: client-svc-foo + heapSize: "512m" + antiAffinity: "soft" + nodeAffinity: {} + nodeSelector: {} + tolerations: [] + resources: + limits: + cpu: "1" + # memory: "1024Mi" + requests: + cpu: "25m" + memory: "512Mi" + priorityClassName: "" + ## (dict) If specified, apply these annotations to each client Pod + # podAnnotations: + # example: client-foo + podDisruptionBudget: + enabled: false + minAvailable: 1 + # maxUnavailable: 1 + +master: + name: master + exposeHttp: false + replicas: 3 + heapSize: "512m" + persistence: + enabled: true + accessMode: ReadWriteOnce + name: data + size: "4Gi" + # storageClass: "ssd" + antiAffinity: "soft" + nodeAffinity: {} + nodeSelector: {} + tolerations: [] + resources: + limits: + cpu: "1" + # memory: "1024Mi" + requests: + cpu: "25m" + memory: "512Mi" + priorityClassName: "" + ## (dict) If specified, apply these annotations to each master Pod + # podAnnotations: + # example: master-foo + podDisruptionBudget: + enabled: false + minAvailable: 2 # Same as `cluster.env.MINIMUM_MASTER_NODES` + # maxUnavailable: 1 + updateStrategy: + type: OnDelete + +data: + name: data + exposeHttp: false + replicas: 2 + heapSize: "1536m" + persistence: + enabled: true + accessMode: ReadWriteOnce + name: data + size: "30Gi" + # storageClass: "ssd" + terminationGracePeriodSeconds: 3600 + antiAffinity: "soft" + nodeAffinity: {} + nodeSelector: {} + tolerations: [] + resources: + limits: + cpu: "1" + # memory: "2048Mi" + requests: + cpu: "25m" + memory: "1536Mi" + priorityClassName: "" + ## (dict) If specified, apply these annotations to each data Pod + # podAnnotations: + # example: data-foo + podDisruptionBudget: + enabled: false + # minAvailable: 1 + maxUnavailable: 1 + updateStrategy: + type: OnDelete + +## 
Additional init containers +extraInitContainers: | diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/Chart.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/Chart.yaml new file mode 100644 index 0000000..a0c8388 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/Chart.yaml @@ -0,0 +1,21 @@ +appVersion: 2.3.1 +description: A Fluentd Helm chart for Kubernetes with Elasticsearch output +engine: gotpl +home: https://www.fluentd.org/ +icon: https://raw.githubusercontent.com/fluent/fluentd-docs/master/public/logo/Fluentd_square.png +keywords: +- fluentd +- elasticsearch +- multiline +- detect-exceptions +- logging +maintainers: +- email: monotek23@gmail.com + name: monotek +name: fluentd-elasticsearch +sources: +- https://github.com/kubernetes/charts/stable/fluentd-elasticsearch +- https://github.com/fluent/fluentd-kubernetes-daemonset +- https://github.com/GoogleCloudPlatform/fluent-plugin-detect-exceptions +- https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image +version: 1.0.3 diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/OWNERS b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/OWNERS new file mode 100644 index 0000000..9375c95 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/OWNERS @@ -0,0 +1,4 @@ +approvers: +- monotek +reviewers: +- monotek diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/README.md b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/README.md new file mode 100644 index 0000000..a3676b4 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/README.md @@ -0,0 +1,82 @@ +# Fluentd Elasticsearch + +* Installs [Fluentd](https://www.fluentd.org/) log forwarder. 
+
## TL;DR;

```console
$ helm install stable/fluentd-elasticsearch
```

## Introduction

This chart bootstraps a [Fluentd](https://www.fluentd.org/) daemonset on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
It is meant as a drop-in replacement for fluentd-gcp on GKE, which sends logs to Google's Stackdriver service, but it can also be used anywhere else that logging to Elasticsearch is required.
The Docker image also bundles Google's detect-exceptions plugin (for Java multiline stacktraces), a Prometheus exporter, the Kubernetes metadata filter, and systemd plugins.

## Prerequisites

- Kubernetes 1.8+ with Beta APIs enabled

## Installing the Chart

To install the chart with the release name `my-release`:

```console
$ helm install --name my-release stable/fluentd-elasticsearch
```

The command deploys fluentd-elasticsearch on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```console
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

The following table lists the configurable parameters of the Fluentd elasticsearch chart and their default values.
+ +
| Parameter | Description | Default |
| ---------------------------------- | ------------------------------------------ | ---------------------------------------------------------- |
| `annotations` | Optional daemonset annotations | `NULL` |
| `configMaps` | Fluentd configmaps | `default conf files` |
| `elasticsearch.host` | Elasticsearch Host | `elasticsearch-client` |
| `elasticsearch.port` | Elasticsearch Port | `9200` |
| `elasticsearch.buffer_chunk_limit` | Elasticsearch buffer chunk limit | `2M` |
| `elasticsearch.buffer_queue_limit` | Elasticsearch buffer queue limit | `8` |
| `extraVolumeMounts` | Mount an extra volume, required to mount SSL certificates when Elasticsearch has TLS enabled | |
| `extraVolume` | Extra volume | |
| `image.repository` | Image | `gcr.io/google-containers/fluentd-elasticsearch` |
| `image.tag` | Image tag | `v2.3.1` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `rbac.create` | RBAC | `true` |
| `resources.limits.cpu` | CPU limit | `100m` |
| `resources.limits.memory` | Memory limit | `500Mi` |
| `resources.requests.cpu` | CPU request | `100m` |
| `resources.requests.memory` | Memory request | `200Mi` |
| `service` | Service definition | `{}` |
| `serviceAccount.create` | Specifies whether a service account should be created | `true` |
| `serviceAccount.name` | Name of the service account | |
| `livenessProbe.enabled` | Whether to enable the livenessProbe | `true` |
| `tolerations` | Optional daemonset tolerations | `NULL` |


Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
$ helm install --name my-release \
  --set elasticsearch.host=elasticsearch-client \
  stable/fluentd-elasticsearch
```

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart.
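A minimal override file of that sort might look like the following sketch. The filename `my-values.yaml` and the specific values are illustrative only; the keys are taken from the parameter table above, and any key left out keeps its chart default.

```yaml
# my-values.yaml (illustrative) -- override a few documented parameters;
# everything not listed here falls back to the chart's values.yaml.
elasticsearch:
  host: 'elasticsearch-client'
  port: 9200
resources:
  limits:
    cpu: 100m
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 200Mi
livenessProbe:
  enabled: true
```

Passing such a file with `-f` merges it over the chart's default `values.yaml` at install time.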
For example,

```console
$ helm install --name my-release -f values.yaml stable/fluentd-elasticsearch
```
diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/NOTES.txt b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/NOTES.txt new file mode 100644 index 0000000..d0cf765 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/NOTES.txt @@ -0,0 +1,6 @@ +To verify that Fluentd has started, run:

 kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "fluentd-elasticsearch.name" . }},release={{ .Release.Name }}"

THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO Elasticsearch. Anything that might be identifying,
including things like IP addresses, container images, and object names, will NOT be anonymized.
diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/_helpers.tpl b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/_helpers.tpl new file mode 100644 index 0000000..46b56b9 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/_helpers.tpl @@ -0,0 +1,27 @@ +{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "fluentd-elasticsearch.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}} +{{- define "fluentd-elasticsearch.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "fluentd-elasticsearch.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} + {{ default (include "fluentd-elasticsearch.fullname" .) .Values.serviceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/clusterrole.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/clusterrole.yaml new file mode 100644 index 0000000..10eaa8d --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/clusterrole.yaml @@ -0,0 +1,23 @@ +{{- if .Values.rbac.create -}} +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "fluentd-elasticsearch.fullname" . }} + labels: + app: {{ template "fluentd-elasticsearch.name" . 
}} + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +rules: +- apiGroups: + - "" + resources: + - "namespaces" + - "pods" + verbs: + - "get" + - "watch" + - "list" +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/clusterrolebinding.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/clusterrolebinding.yaml new file mode 100644 index 0000000..ac5ba23 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/clusterrolebinding.yaml @@ -0,0 +1,21 @@ +{{- if .Values.rbac.create -}} +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ template "fluentd-elasticsearch.fullname" . }} + labels: + app: {{ template "fluentd-elasticsearch.name" . }} + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +subjects: +- kind: ServiceAccount + name: {{ template "fluentd-elasticsearch.fullname" . }} + namespace: {{ .Release.Namespace }} +roleRef: + kind: ClusterRole + name: {{ template "fluentd-elasticsearch.fullname" . }} + apiGroup: rbac.authorization.k8s.io +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/configmap.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/configmap.yaml new file mode 100644 index 0000000..6fc1a19 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/configmap.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "fluentd-elasticsearch.fullname" . 
}}
+ labels:
+ app: {{ template "fluentd-elasticsearch.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: {{ .Release.Service | quote }}
+ release: {{ .Release.Name | quote }}
+ addonmanager.kubernetes.io/mode: Reconcile
+data:
+{{- range $key, $value := .Values.configMaps }}
+ {{ $key }}: |-
+{{ $value | indent 4 }}
+{{- end }}
diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/daemonset.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/daemonset.yaml new file mode 100644 index 0000000..62f02fe --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/daemonset.yaml @@ -0,0 +1,133 @@ +apiVersion: apps/v1beta2
+kind: DaemonSet
+metadata:
+ name: {{ template "fluentd-elasticsearch.fullname" . }}
+ labels:
+ app: {{ template "fluentd-elasticsearch.fullname" . }}
+ version: {{ .Values.image.tag }}
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "fluentd-elasticsearch.fullname" . }}
+ release: "{{ .Release.Name }}"
+ template:
+ metadata:
+ labels:
+ app: {{ template "fluentd-elasticsearch.fullname" . }}
+ version: {{ .Values.image.tag }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ kubernetes.io/cluster-service: "true"
+ release: "{{ .Release.Name }}"
+ # This annotation ensures that fluentd does not get evicted if the node
+ # supports critical pod annotation based priority scheme.
+ # Note that this does not guarantee admission on the nodes (#40573).
+ annotations:
+ scheduler.alpha.kubernetes.io/critical-pod: ''
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") .
| sha256sum }}
+{{- if .Values.annotations }}
+{{ toYaml .Values.annotations | indent 8 }}
+{{- end }}
+ spec:
+ serviceAccountName: {{ template "fluentd-elasticsearch.fullname" . }}
+ containers:
+ - name: {{ template "fluentd-elasticsearch.fullname" . }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
+ env:
+ - name: FLUENTD_ARGS
+ value: --no-supervisor -q
+ - name: OUTPUT_HOST
+ value: {{ .Values.elasticsearch.host | quote }}
+ - name: OUTPUT_PORT
+ value: {{ .Values.elasticsearch.port | quote }}
+ - name: OUTPUT_BUFFER_CHUNK_LIMIT
+ value: {{ .Values.elasticsearch.buffer_chunk_limit | quote }}
+ - name: OUTPUT_BUFFER_QUEUE_LIMIT
+ value: {{ .Values.elasticsearch.buffer_queue_limit | quote }}
+ - name: K8S_NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ resources:
+{{ toYaml .Values.resources | indent 10 }}
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: varlibdockercontainers
+ mountPath: /var/lib/docker/containers
+ readOnly: true
+ - name: libsystemddir
+ mountPath: /host/lib
+ readOnly: true
+ - name: config-volume-{{ template "fluentd-elasticsearch.fullname" . }}
+ mountPath: /etc/fluent/config.d
+{{- if .Values.extraVolumeMounts }}
+{{ toYaml .Values.extraVolumeMounts | indent 8 }}
+{{- end }}
+ ports:
+{{- range $port := .Values.service.ports }}
+ - name: {{ $port.name }}
+ containerPort: {{ $port.port }}
+{{- end }}
+{{- if .Values.livenessProbe.enabled }}
+ # The liveness probe is aimed to help in situations where fluentd
+ # silently hangs for no apparent reason until manually restarted.
+ # The idea of this probe is that if fluentd is not queueing or
+ # flushing chunks for 5 minutes, something is not right. If
+ # you want to change the fluentd configuration, reducing the amount of
+ # logs fluentd collects, consider changing the threshold or turning
+ # the liveness probe off completely.
+ livenessProbe:
+ initialDelaySeconds: 600
+ periodSeconds: 60
+ exec:
+ command:
+ - '/bin/sh'
+ - '-c'
+ - >
+ LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300};
+ STUCK_THRESHOLD_SECONDS=${STUCK_THRESHOLD_SECONDS:-900};
+ if [ ! -e /var/log/fluentd-buffers ];
+ then
+ exit 1;
+ fi;
+ touch -d "${STUCK_THRESHOLD_SECONDS} seconds ago" /tmp/marker-stuck;
+ if [[ -z "$(find /var/log/fluentd-buffers -type f -newer /tmp/marker-stuck -print -quit)" ]];
+ then
+ rm -rf /var/log/fluentd-buffers;
+ exit 1;
+ fi;
+ touch -d "${LIVENESS_THRESHOLD_SECONDS} seconds ago" /tmp/marker-liveness;
+ if [[ -z "$(find /var/log/fluentd-buffers -type f -newer /tmp/marker-liveness -print -quit)" ]];
+ then
+ exit 1;
+ fi;
+{{- end }}
+ terminationGracePeriodSeconds: 30
+ volumes:
+ - name: varlog
+ hostPath:
+ path: /var/log
+ - name: varlibdockercontainers
+ hostPath:
+ path: /var/lib/docker/containers
+ # Needed to copy the systemd library in order to decompress journals
+ - name: libsystemddir
+ hostPath:
+ path: /usr/lib64
+ - name: config-volume-{{ template "fluentd-elasticsearch.fullname" . }}
+ configMap:
+ name: {{ template "fluentd-elasticsearch.fullname" . }}
+{{- if .Values.extraVolumes }}
+{{ toYaml .Values.extraVolumes | indent 6 }}
+{{- end }}
+{{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 6 }}
+{{- end }} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/service-account.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/service-account.yaml new file mode 100644 index 0000000..9bbc28f --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/service-account.yaml @@ -0,0 +1,13 @@ +{{- if .Values.serviceAccount.create -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ template "fluentd-elasticsearch.fullname" . }}
+ labels:
+ app: {{ template "fluentd-elasticsearch.name" .
}} + kubernetes.io/cluster-service: "true" + addonmanager.kubernetes.io/mode: Reconcile + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- end -}} diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/service.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/service.yaml new file mode 100644 index 0000000..9425497 --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/templates/service.yaml @@ -0,0 +1,22 @@ +{{- if .Values.service }} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "fluentd-elasticsearch.fullname" . }} + labels: + app: {{ template "fluentd-elasticsearch.fullname" . }} + chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" + heritage: "{{ .Release.Service }}" + release: "{{ .Release.Name }}" +spec: + type: {{ .Values.service.type }} + ports: + {{- range $port := .Values.service.ports }} + - name: {{ $port.name }} + port: {{ $port.port }} + targetPort: {{ $port.port }} + {{- end }} + selector: + app: {{ template "fluentd-elasticsearch.fullname" . 
}}
+ release: {{ .Release.Name }}
+{{- end }} \ No newline at end of file diff --git a/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/values.yaml b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/values.yaml new file mode 100644 index 0000000..105c33d --- /dev/null +++ b/src/seba_charts/cord-platform/charts/logging/charts/fluentd-elasticsearch/values.yaml @@ -0,0 +1,448 @@ +image:
+ repository: akrainoenea/fluentd-elasticsearch
+## Specify an imagePullPolicy (Required)
+## It's recommended to change this to 'Always' if the image tag is 'latest'
+## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
+ tag: v2.3.1
+ pullPolicy: IfNotPresent
+
+## Configure resource requests and limits
+## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+##
+resources: {}
+ # limits:
+ # cpu: 100m
+ # memory: 500Mi
+ # requests:
+ # cpu: 100m
+ # memory: 200Mi
+
+elasticsearch:
+ host: 'elasticsearch-client'
+ port: 9200
+ buffer_chunk_limit: 2M
+ buffer_queue_limit: 8
+
+rbac:
+ create: true
+
+serviceAccount:
+ # Specifies whether a ServiceAccount should be created
+ create: true
+ # The name of the ServiceAccount to use.
+ # If not set and create is true, a name is generated using the fullname template
+ name:
+
+livenessProbe:
+ enabled: true
+
+annotations: {}
+ # prometheus.io/scrape: "true"
+ # prometheus.io/port: "24231"
+
+tolerations: {}
+ # - key: node-role.kubernetes.io/master
+ # operator: Exists
+ # effect: NoSchedule
+
+service: {}
+ # type: ClusterIP
+ # ports:
+ # - name: "monitor-agent"
+ # port: 24231
+
+configMaps:
+ system.conf: |-
+ <system>
+ root_dir /tmp/fluentd-buffers/
+ </system>
+
+ containers.input.conf: |-
+ # This configuration file for Fluentd / td-agent is used
+ # to watch changes to Docker log files. The kubelet creates symlinks that
+ # capture the pod name, namespace, container name & Docker container ID
+ # to the docker logs for pods in the /var/log/containers directory on the host.
+ # If running this fluentd configuration in a Docker container, the /var/log + # directory should be mounted in the container. + # + # These logs are then submitted to Elasticsearch which assumes the + # installation of the fluent-plugin-elasticsearch & the + # fluent-plugin-kubernetes_metadata_filter plugins. + # See https://github.com/uken/fluent-plugin-elasticsearch & + # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for + # more information about the plugins. + # + # Example + # ======= + # A line in the Docker log file might look like this JSON: + # + # {"log":"2014/09/25 21:15:03 Got request with path wombat\n", + # "stream":"stderr", + # "time":"2014-09-25T21:15:03.499185026Z"} + # + # The time_format specification below makes sure we properly + # parse the time format produced by Docker. This will be + # submitted to Elasticsearch and should appear like: + # $ curl 'http://elasticsearch-logging:9200/_search?pretty' + # ... + # { + # "_index" : "logstash-2014.09.25", + # "_type" : "fluentd", + # "_id" : "VBrbor2QTuGpsQyTCdfzqA", + # "_score" : 1.0, + # "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n", + # "stream":"stderr","tag":"docker.container.all", + # "@timestamp":"2014-09-25T22:45:50+00:00"} + # }, + # ... + # + # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log + # record & add labels to the log record if properly configured. This enables users + # to filter & search logs on any metadata. + # For example a Docker container's logs might be in the directory: + # + # /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b + # + # and in the file: + # + # 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log + # + # where 997599971ee6... is the Docker ID of the running container. 
+ # The Kubernetes kubelet makes a symbolic link to this file on the host machine + # in the /var/log/containers directory which includes the pod name and the Kubernetes + # container name: + # + # synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log + # -> + # /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log + # + # The /var/log directory on the host is mapped to the /var/log directory in the container + # running this instance of Fluentd and we end up collecting the file: + # + # /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log + # + # This results in the tag: + # + # var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log + # + # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name + # which are added to the log message as a kubernetes field object & the Docker container ID + # is also added under the docker field object. 
+ # The final tag is:
+ #
+ # kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
+ #
+ # And the final log record looks like:
+ #
+ # {
+ # "log":"2014/09/25 21:15:03 Got request with path wombat\n",
+ # "stream":"stderr",
+ # "time":"2014-09-25T21:15:03.499185026Z",
+ # "kubernetes": {
+ # "namespace": "default",
+ # "pod_name": "synthetic-logger-0.25lps-pod",
+ # "container_name": "synth-lgr"
+ # },
+ # "docker": {
+ # "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
+ # }
+ # }
+ #
+ # This makes it easier for users to search for logs by pod name or by
+ # the name of the Kubernetes container regardless of how many times the
+ # Kubernetes pod has been restarted (resulting in several Docker container IDs).
+ # Json Log Example:
+ # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
+ # CRI Log Example:
+ # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
+ <source>
+ @id fluentd-containers.log
+ @type tail
+ path /var/log/containers/*.log
+ pos_file /var/log/fluentd-containers.log.pos
+ time_format %Y-%m-%dT%H:%M:%S.%NZ
+ tag raw.kubernetes.*
+ format json
+ read_from_head true
+ </source>
+
+ # Detect exceptions in the log output and forward them as one log entry.
+ <match raw.kubernetes.**>
+ @id raw.kubernetes
+ @type detect_exceptions
+ remove_tag_prefix raw
+ message log
+ stream stream
+ multiline_flush_interval 5
+ max_bytes 500000
+ max_lines 1000
+ </match>
+
+ system.input.conf: |-
+ # Example:
+ # 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
+ <source>
+ @id minion
+ @type tail
+ format /^(?