--- /dev/null
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+Name: activators
+Version: %{_version}
+Release: 1%{?dist}
+Summary: Basic configuration activators
+License: %{_platform_licence}
+Source0: %{name}-%{version}.tar.gz
+Vendor: %{_platform_vendor}
+
+BuildArch: noarch
+
+%define PKG_BASE_DIR /opt/cmframework/activators
+
+%description
+Configuration activators
+
+
+%prep
+%autosetup
+
+%build
+
+%install
+mkdir -p %{buildroot}/%{PKG_BASE_DIR}/
+find activators -name '*.py' -exec cp {} %{buildroot}/%{PKG_BASE_DIR}/ \;
+
+%files
+%defattr(0755,root,root,0755)
+%{PKG_BASE_DIR}/*.py*
+
+%preun
+
+
+%postun
+
+%clean
+rm -rf %{buildroot}
+
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmerror
+from cmframework.apis import cmactivator
+from cmdatahandlers.api import configmanager
+from cmdatahandlers.api import configerror
+import os
+import subprocess
+import json
+import pwd
+import logging
+
+class installationactivator(cmactivator.CMGlobalActivator):
+ inventory_cli = '/opt/cmframework/scripts/inventory.sh'
+ playbooks_generate_cli = '/usr/local/bin/cmcli ansible-playbooks-generate'
+ playbooks_path = '/opt/openstack-ansible/playbooks/'
+ setup_playbook = 'setup-playbook.yml'
+ presetup_playbook = 'presetup-playbook.yml'
+ bootstrapping_playbook = 'bootstrapping-playbook.yml'
+ provisioning_playbook = 'provisioning-playbook.yml'
+ postconfig_playbook = 'postconfig-playbook.yml'
+ state_file = '/etc/installation_state'
+
+ def __init__(self):
+ self.plugin_client = None
+
+ def get_subscription_info(self):
+ return '.*'
+
+ def activate_set(self, props):
+ self.activate_full()
+
+ def activate_delete(self, props):
+ self.activate_full()
+
+ def activate_full(self, target=None):
+ try:
+ properties = self.get_plugin_client().get_properties('.*')
+ if not properties:
+ return
+ propsjson = {}
+ for name, value in properties.iteritems():
+ try:
+ propsjson[name] = json.loads(value)
+ except Exception:
+ continue
+ configman = configmanager.ConfigManager(propsjson)
+
+ hostsconfig = configman.get_hosts_config_handler()
+ installation_host = hostsconfig.get_installation_host()
+
+ installed = False
+ try:
+ configman.get_cloud_installation_date()
+ installed = True
+ except configerror.ConfigError:
+ pass
+
+ if installed:
+ return
+
+ usersconf = configman.get_users_config_handler()
+ admin = usersconf.get_admin_user()
+
+ # Generate the high-level playbooks
+ if self._run_cmd(self.playbooks_generate_cli, '/etc', 'root', os.environ.copy()):
+ raise cmerror.CMError('Failed to run %s' % self.playbooks_generate_cli)
+
+ caas_data = configman.get_caas_config_handler()
+ phase = self._get_installation_phase()
+ # First we run the setup phase
+ if not phase:
+ self._set_installation_phase('setup-started')
+ phase = 'setup-started'
+ env = os.environ.copy()
+ if phase == 'setup-started':
+ env['VNF_EMBEDDED_DEPLOYMENT'] = 'false'
+ env['CONFIG_PHASE'] = 'setup'
+ env['BOOTSTRAP_OPTS'] = 'installation_controller=%s' % installation_host
+ self._run_setup_playbook(self.presetup_playbook, env)
+ env['BOOTSTRAP_OPTS'] = ''
+ if caas_data.get_vnf_flag():
+ env['VNF_EMBEDDED_DEPLOYMENT'] = 'true'
+ self._run_setup_playbook(self.setup_playbook, env)
+ self._set_installation_phase('setup-ended')
+ phase = 'setup-ended'
+
+ # Second we run the bootstrapping phase
+ if phase == 'setup-ended':
+ self._set_installation_phase('bootstrapping-started')
+ phase = 'bootstrapping-started'
+ if phase == 'bootstrapping-started':
+ env['CONFIG_PHASE'] = 'bootstrapping'
+ self._run_playbook(self.bootstrapping_playbook, admin, env)
+ self._set_installation_phase('bootstrapping-ended')
+ phase = 'bootstrapping-ended'
+
+ # Third we run the provisioning phase
+ if phase == 'bootstrapping-ended':
+ self._set_installation_phase('provisioning-started')
+ phase = 'provisioning-started'
+ if phase == 'provisioning-started':
+ env['CONFIG_PHASE'] = 'provisioning'
+ self._run_playbook(self.provisioning_playbook, admin, env)
+ self._set_installation_phase('provisioning-ended')
+ phase = 'provisioning-ended'
+
+ # Fourth we run the postconfig phase
+ if phase == 'provisioning-ended':
+ self._set_installation_phase('postconfig-started')
+ phase = 'postconfig-started'
+ if phase == 'postconfig-started':
+ env['CONFIG_PHASE'] = 'postconfig'
+ env['CAAS_ONLY_DEPLOYMENT'] = 'false'
+ if caas_data.get_caas_only():
+ env['CAAS_ONLY_DEPLOYMENT'] = 'true'
+ self._run_playbook(self.postconfig_playbook, admin, env)
+ self._set_installation_phase('postconfig-ended')
+ phase = 'postconfig-ended'
+
+ self._set_installation_date()
+
+ self._set_state('success')
+
+ except Exception as exp:
+ self._set_state('failure')
+ raise cmerror.CMError(str(exp))
+
+ def _set_installation_phase(self, phase):
+ self.get_plugin_client().set_property('cloud.installation_phase', json.dumps(phase))
+
+ def _get_installation_phase(self):
+ phase = None
+ try:
+ phase = json.loads(self.get_plugin_client().get_property('cloud.installation_phase'))
+ logging.debug('Current installation phase cloud.installation_phase="%s"' % phase)
+ except Exception:
+ pass
+ return phase
+
+ def _set_installation_date(self):
+ from time import gmtime, strftime
+ # Use ISO 8601 date format
+ times = strftime('%Y-%m-%dT%H:%M:%SZ', gmtime())
+ self.get_plugin_client().set_property('cloud.installation_date', json.dumps(times))
+
+ def _run_playbook(self, playbook, user, env):
+ cmd = '/usr/local/bin/openstack-ansible -b -u ' + user + ' ' + playbook
+ result = self._run_cmd(cmd, self.playbooks_path, user, env)
+ if result != 0:
+ raise cmerror.CMError('Playbook %s failed' % playbook)
+
+ def _run_setup_playbook(self, playbook, env):
+ cmd = '/usr/local/bin/setup-controller.sh ' + playbook
+ result = self._run_cmd(cmd, self.playbooks_path, 'root', env)
+ if result != 0:
+ raise cmerror.CMError('Playbook %s failed' % playbook)
+
+ def _run_cmd(self, cmd, cwd, user, env):
+ args = cmd.split()
+ pw_record = pwd.getpwnam(user)
+ user_name = pw_record.pw_name
+ user_home_dir = pw_record.pw_dir
+ user_uid = pw_record.pw_uid
+ user_gid = pw_record.pw_gid
+ env['HOME'] = user_home_dir
+ env['LOGNAME'] = user_name
+ env['PWD'] = cwd
+ env['USER'] = user_name
+ process = subprocess.Popen(args, preexec_fn=self._demote(user_uid, user_gid), cwd=cwd, env=env)
+ result = process.wait()
+ return result
+
+
+ def _demote(self, user_uid, user_gid):
+ def result():
+ os.setgid(user_gid)
+ os.setuid(user_uid)
+ return result
+
+ def _set_state(self, state):
+ with open(self.state_file, 'w') as f:
+ f.write(state)
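The installation activator above persists a phase marker (`cloud.installation_phase`) after every step, so a re-run of `activate_full` skips completed phases and resumes where the previous attempt stopped. A minimal stdlib-only sketch of that resume logic (function and constant names here are illustrative, not part of the cmframework API):

```python
# Illustrative sketch of the resumable phase sequencing used by
# installationactivator.activate_full. Each phase records a
# '<phase>-started' marker before running and '<phase>-ended' after,
# so a crashed run resumes at the first incomplete phase.
PHASES = ['setup', 'bootstrapping', 'provisioning', 'postconfig']

def remaining_phases(stored_marker):
    """Return the phases still to run, given a persisted marker such as
    'bootstrapping-ended', or None for a fresh installation."""
    if stored_marker is None:
        return list(PHASES)
    name, _, state = stored_marker.rpartition('-')
    idx = PHASES.index(name)
    if state == 'started':
        # A phase that started but never ended is re-run from its start.
        return PHASES[idx:]
    return PHASES[idx + 1:]
```

This mirrors the chained `if phase == ...` blocks in `activate_full`: because each block re-assigns `phase` to the next marker, falling through the chain executes exactly the phases this function would return.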
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmactivator
+
+class managelinuxuseractivator(cmactivator.CMGlobalActivator):
+ manage_user_playbook = "/opt/openstack-ansible/playbooks/manage_linux_user.yml"
+
+ def __init__(self):
+ super(managelinuxuseractivator, self).__init__()
+
+ def get_subscription_info(self):
+ return 'cloud.linuxuser'
+
+ def activate_set(self, props):
+ self._activate()
+
+ def activate_delete(self, props):
+ self._activate()
+
+ def activate_full(self, target):
+ self._activate(target=target)
+
+ def _activate(self, target=None):
+ self.run_playbook(self.manage_user_playbook, target)
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmactivator
+
+class managepolicycreatoractivator(cmactivator.CMGlobalActivator):
+ aaa_policy_playbook = '/opt/openstack-ansible/playbooks/setup_aaa.yml --tags aaa_policy'
+
+ def __init__(self):
+ super(managepolicycreatoractivator, self).__init__()
+
+ def get_subscription_info(self):
+ return 'cloud.policy_counter'
+
+ def activate_set(self, props):
+ self._activate()
+
+ def activate_delete(self, props):
+ self._activate()
+
+ def activate_full(self, target):
+ self._activate(target=target)
+
+ def _activate(self, target=None):
+ self.run_playbook(self.aaa_policy_playbook, target)
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmactivator
+
+class manageuseractivator(cmactivator.CMGlobalActivator):
+ manage_user_playbook = "/opt/openstack-ansible/playbooks/manage_chroot_user.yml"
+
+ def __init__(self):
+ super(manageuseractivator, self).__init__()
+
+ def get_subscription_info(self):
+ return 'cloud.chroot'
+
+ def activate_set(self, props):
+ self._activate()
+
+ def activate_delete(self, props):
+ self._activate()
+
+ def activate_full(self, target):
+ self._activate(target=target)
+
+ def _activate(self, target=None):
+ self.run_playbook(self.manage_user_playbook, target)
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmactivator
+
+class motdactivator(cmactivator.CMGlobalActivator):
+ playbook = '/opt/openstack-ansible/playbooks/motd.yml'
+
+ def __init__(self):
+ super(motdactivator, self).__init__()
+
+ def get_subscription_info(self):
+ return 'cloud.motd'
+
+ def activate_set(self, props):
+ self._activate()
+
+ def activate_delete(self, props):
+ self._activate()
+
+ def activate_full(self, target):
+ self._activate(target=target)
+
+ def _activate(self, target=None):
+ self.run_playbook(self.playbook, target)
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis.cmactivator import CMGlobalActivator
+
+
+class ovsconfigactivator(CMGlobalActivator):
+ """ OVS config activator plugin class. """
+
+
+ CLOUD_HOSTS = 'cloud.networking'
+ PLAYBOOK = '/opt/openstack-ansible/playbooks/ovs_config.yaml'
+
+ def __init__(self):
+ super(ovsconfigactivator, self).__init__()
+
+ def get_subscription_info(self):
+ return self.CLOUD_HOSTS
+
+ def activate_set(self, props):
+ self._activate()
+
+ def activate_delete(self, props):
+ self._activate()
+
+ def activate_full(self, target):
+ self._activate(target=target)
+
+ def _activate(self, target=None):
+ self.run_playbook(self.PLAYBOOK, target)
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmactivator
+
+class timeactivator(cmactivator.CMGlobalActivator):
+ playbook = '/opt/openstack-ansible/playbooks/ntp-config.yml'
+
+ def __init__(self):
+ super(timeactivator, self).__init__()
+
+ def get_subscription_info(self):
+ return 'cloud.time'
+
+ def activate_set(self, props):
+ self._activate()
+
+ def activate_delete(self, props):
+ self._activate()
+
+ def activate_full(self, target):
+ self._activate(target=target)
+
+ def _activate(self, target=None):
+ self.run_playbook(self.playbook, target)
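The simple activators above all follow the same pattern: subscribe to one configuration key and run one playbook when that key changes. A minimal stdlib sketch of that dispatch contract (the class and function names here are illustrative stand-ins; the real dispatch, including regex subscription matching, lives inside cmframework):

```python
# Sketch of the activator dispatch pattern shared by motdactivator,
# timeactivator, etc.: each activator publishes a subscription key, and
# the framework calls activate_set on every activator whose key matches
# the changed property.
class FakeActivator(object):
    def __init__(self, key, playbook):
        self.key = key
        self.playbook = playbook
        self.ran = []            # playbooks triggered so far

    def get_subscription_info(self):
        return self.key

    def activate_set(self, props):
        # The real activators call self.run_playbook(...) here.
        self.ran.append(self.playbook)

def dispatch(activators, changed_key, props=None):
    """Invoke activate_set on every activator subscribed to changed_key."""
    for act in activators:
        if act.get_subscription_info() == changed_key:
            act.activate_set(props)

motd = FakeActivator('cloud.motd', '/opt/openstack-ansible/playbooks/motd.yml')
time_ = FakeActivator('cloud.time', '/opt/openstack-ansible/playbooks/ntp-config.yml')
dispatch([motd, time_], 'cloud.motd')
```

Note the simplification: real subscriptions are patterns (the installation activator subscribes to `'.*'`), so the framework matches with a regex rather than string equality.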
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+Name: inventoryhandlers
+Version: %{_version}
+Release: 1%{?dist}
+Summary: Inventory handlers
+License: %{_platform_licence}
+Source0: %{name}-%{version}.tar.gz
+Vendor: %{_platform_vendor}
+
+BuildArch: noarch
+
+%define PKG_BASE_DIR /opt/cmframework/inventoryhandlers
+
+%description
+Inventory handlers
+
+
+%prep
+%autosetup
+
+%build
+
+%install
+mkdir -p %{buildroot}/%{PKG_BASE_DIR}/
+find inventoryhandlers -name '*.py' -exec cp {} %{buildroot}/%{PKG_BASE_DIR}/ \;
+
+%files
+%defattr(0755,root,root,0755)
+%{PKG_BASE_DIR}/*.py*
+
+%preun
+
+
+%postun
+
+%clean
+rm -rf %{buildroot}
+
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#pylint: disable=missing-docstring,invalid-name,too-few-public-methods
+import json
+from jinja2 import Environment
+from cmframework.apis import cmansibleinventoryconfig
+from cmframework.apis import cmerror
+import hw_detector.hw_detect_lib as hw
+
+JSON_HW_HOST_VAR = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "vendor": "{{ host.vendor }}",
+ "product_family": "{{ host.product_family }}",
+ "mgmt_mac": "{{ host.mgmt_mac }}"
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+class Host(object):
+ def __init__(self, name):
+ self.name = name
+ self.vendor = None
+ self.product_family = None
+ self.mgmt_mac = None
+
+class hwinventory(cmansibleinventoryconfig.CMAnsibleInventoryConfigPlugin):
+ def __init__(self, confman, inventory, ownhost):
+ super(hwinventory, self).__init__(confman, inventory, ownhost)
+ self.host_objects = []
+ self._hosts_config_handler = self.confman.get_hosts_config_handler()
+
+ def handle_bootstrapping(self):
+ self.handle()
+
+ def handle_provisioning(self):
+ self.handle()
+
+ def handle_setup(self):
+ pass
+
+ def handle_postconfig(self):
+ self.handle()
+
+ def handle(self):
+ self._set_hw_types()
+ self._add_hw_config()
+
+
+ def _add_hw_config(self):
+ try:
+ text = Environment().from_string(JSON_HW_HOST_VAR).render(
+ hosts=self.host_objects)
+ inventory = json.loads(text)
+ self.add_global_var("hw_inventory_details", inventory)
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ def _get_hw_type_of_host(self, name):
+ hwmgmt_addr = self._hosts_config_handler.get_hwmgmt_ip(name)
+ hwmgmt_user = self._hosts_config_handler.get_hwmgmt_user(name)
+ hwmgmt_pass = self._hosts_config_handler.get_hwmgmt_password(name)
+ return hw.get_hw_data(hwmgmt_addr, hwmgmt_user, hwmgmt_pass)
+
+ def _set_hw_types(self):
+ hosts = self._hosts_config_handler.get_hosts()
+ for host in hosts:
+ host_object = Host(host)
+ hw_details = self._get_hw_type_of_host(host)
+ host_object.vendor = hw_details.get("vendor", "Unknown")
+ host_object.product_family = hw_details.get("product_family", "Unknown")
+ host_object.mgmt_mac = hw_details.get('info', {}).get("MAC Address", "00:00:00:00:00:00")
+ self.host_objects.append(host_object)
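The `JSON_HW_HOST_VAR` Jinja2 template above renders a mapping from each host name to its detected hardware facts, which `_add_hw_config` then parses back with `json.loads` and publishes via `add_global_var`. A stdlib-only illustration of the equivalent structure (the `_FakeHost` class and sample values are hypothetical; the plugin itself uses Jinja2 and real `hw_detector` data):

```python
import json

# Build the same host-name -> hardware-facts mapping that the
# JSON_HW_HOST_VAR template renders; the fields mirror those set on
# Host objects in _set_hw_types.
def render_hw_inventory(hosts):
    """hosts: iterable of objects with name/vendor/product_family/mgmt_mac."""
    return {
        host.name: {
            'vendor': host.vendor,
            'product_family': host.product_family,
            'mgmt_mac': host.mgmt_mac,
        }
        for host in hosts
    }

class _FakeHost(object):
    def __init__(self, name, vendor, product_family, mgmt_mac):
        self.name = name
        self.vendor = vendor
        self.product_family = product_family
        self.mgmt_mac = mgmt_mac

inventory = render_hw_inventory(
    [_FakeHost('controller-1', 'HPE', 'ProLiant', '52:54:00:00:00:01')])
# The plugin attaches this structure to the inventory as
# "hw_inventory_details"; serializing round-trips cleanly.
as_json = json.dumps(inventory)
```

Going through a JSON template, as the plugin does, has one practical consequence worth noting: any non-serializable value from `hw_detector` would surface as a `CMError` in `_add_hw_config` rather than later in Ansible.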
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import socket
+from jinja2 import Environment
+from cmframework.apis import cmansibleinventoryconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import utils
+from cmdatahandlers.api import configerror
+from serviceprofiles import profiles
+
+json_text_setup = """
+{
+ "_meta": {
+ "hostvars": {
+ "{{ installation_controller }}": {
+ "ansible_connection": "local",
+ "aio_hostname": "{{ installation_controller }}",
+ "bootstrap_host_loopback_cinder": "no",
+ "bootstrap_host_loopback_swift": "no",
+ "bootstrap_host_loopback_nova": "no",
+ "bootstrap_host_data_disk_min_size": 30,
+ "bootstrap_env_file": "{{ '{{' }} bootstrap_host_aio_config_path {{ '}}' }}/env.d/baremetal.yml",
+ "user_secrets_overrides": {
+ "keystone_auth_admin_password": "{{ general.openstack_password }}"
+ },
+ "sudo_user": "{{ general.admin }}",
+ "sudo_user_password": "{{ general.password }}"
+ }
+ }
+ }
+}
+"""
+json_text = """
+{
+ "_meta": {
+ "hostvars": {
+ {% set tenant_network = networkingconf.get_cloud_tenant_network_name() %}
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "hostname": "{{ host.name }}",
+ "management_bridge": "{{ hostsconf.get_host_network_ip_holding_interface(host.name, "infra_internal") }}",
+ "is_metal": true,
+ "container_address": "{{ host.get_network_ip("infra_internal") }}",
+ "container_name": "{{ host.name }}",
+ "container_networks": {
+ "management_address": {
+ "address": "{{ host.get_network_ip("infra_internal") }}",
+ "bridge": "{{ host.get_network_ip_holding_interface("infra_internal") }}",
+ "netmask": null,
+ "type": "veth"
+ },
+ {% if tenant_network in hostsconf.get_host_networks(host.name) %}
+ "tunnel_address": {
+ "address": "{{ host.get_network_ip(tenant_network) }}",
+ "bridge": "{{ host.get_network_ip_holding_interface(tenant_network) }}",
+ "netmask": null,
+ "type": "veth"
+ },
+ {% endif %}
+ "storage_address": {
+ "address": "{{ host.get_network_ip('infra_internal') }}",
+ "bridge": "{{ host.get_network_ip_holding_interface('infra_internal') }}",
+ "netmask": null,
+ "type": "veth"
+ }
+ },
+ {% if host.is_performance %}
+ "heat_api_threads_max" : {{ host.os_max_threads }},
+ "nova_api_threads_max" : {{ host.os_max_threads }},
+ "cinder_osapi_volume_workers_max" : {{ host.os_max_threads }},
+ "glance_api_threads_max" : {{ host.os_max_threads }},
+ "neutron_api_threads_max" : {{ host.os_max_threads }},
+ {% endif %}
+ "physical_host": "{{ host.name }}",
+ {% if host.is_controller %}
+ "physical_host_group": "orchestration_hosts"
+ {% else %}
+ "physical_host_group": "compute_hosts"
+ {% endif %}
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+ }
+ },
+ "all": {
+ "vars": {
+ "installation_controller": "{{ installation_controller }}",
+ "is_metal": true,
+ "haproxy_glance_api_nodes": ["glance-api"],
+ "nova_vncserver_listen": "0.0.0.0",
+ "nova_novncproxy_base_url": "{% raw %}{{ nova_novncproxy_base_uri }}/vnc_auto.html{% endraw %}",
+ "properties": {
+ "is_metal": true
+ },
+ {% if not virtual_environment %}
+ "virtual_env": false,
+ {% else %}
+ "virtual_env": true,
+ {% endif %}
+ "container_cidr": "{{ infra_mgmt.cidr }}",
+ "haproxy_whitelist_networks": [ {% for cidr in infra_mgmt.cidrs %}"{{ cidr }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ {% if config_phase == 'postconfig' %}
+ "external_lb_vip_address": "{{ has.haproxy.external_vip }}",
+ "internal_lb_vip_address": "{{ has.haproxy.internal_vip }}",
+ "haproxy_keepalived_external_vip_cidr": "{{ has.haproxy.external_vip }}/32",
+ "haproxy_keepalived_internal_vip_cidr": "{{ has.haproxy.internal_vip }}/32",
+ {% else %}
+ "external_lb_vip_address": "{{ infra_external.ip }}",
+ "internal_lb_vip_address": "{{ infra_mgmt.ip }}",
+ "haproxy_keepalived_external_vip_cidr": "{{ infra_external.ip }}/32",
+ "haproxy_keepalived_internal_vip_cidr": "{{ infra_mgmt.ip }}/32",
+ {% endif %}
+ {% if config_phase == 'postconfig' %}
+ "ironic_standalone_auth_strategy": "keystone",
+ "galera_ignore_cluster_state": false,
+ {% else %}
+ "galera_ignore_cluster_state": true,
+ {% endif %}
+ "keepalived_ping_address": "{{ infra_external.gateway }}",
+ "haproxy_keepalived_external_interface": "{{ infra_external.interface }}",
+ "haproxy_keepalived_internal_interface": "{{ infra_mgmt.interface }}",
+ "management_bridge": "{{ infra_mgmt.interface }}",
+ "ntp_servers": [ {% for server in general.ntp_servers %}"{{ server }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "openrc_file_dest": "/home/{{ general.admin }}/openrc",
+ "openrc_file_owner": "{{ general.admin }}",
+ "openrc_file_group": "{{ general.admin }}",
+ "openrc_openstack_client_config_dir_dest": "/home/{{ general.admin }}/.config/openstack",
+ "openrc_openstack_client_config_dir_owner": "{{ general.admin }}",
+ "openrc_openstack_client_config_dir_group": "{{ general.admin }}",
+ "openrc_clouds_yml_file_dest": "/home/{{ general.admin }}/.config/openstack/clouds.yaml",
+ "openrc_clouds_yml_file_owner": "{{ general.admin }}",
+ "openrc_clouds_yml_file_group": "{{ general.admin }}",
+ "horizon_images_upload_mode": "legacy",
+ "horizon_time_zone": "{{ general.zone }}",
+ "horizon_disable_password_reveal": true,
+ "nova_cpu_allocation_ratio": "1.0",
+ "nova_resume_guests_state_on_host_boot": "True",
+ "nova_scheduler_default_filters": "RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateCoreFilter,AggregateDiskFilter,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter,PciPassthroughFilter",
+ "cinder_volume_clear": "none",
+ "haproxy_ssl_pem": "/etc/ssl/private/certificate.pem",
+ "ironic_default_network_interface": "noop",
+ "restful_service_port": "61200",
+ "auth_server_service_address": "localhost",
+ "auth_server_service_port": "62200",
+ "aaa_galera_address": "{{ has.haproxy.internal_vip }}",
+ {% if not virtual_environment %}
+ "nova_cpu_mode": "host-passthrough",
+ {% else %}
+ "nova_cpu_mode": "host-model",
+ {% endif %}
+ {% if computes|length == 1 %}
+ "single_compute" : true,
+ {% else %}
+ "single_compute" : false,
+ {% endif %}
+ {% if management_nodes|length == 1 %}
+ "single_management" : true
+ {% else %}
+ "single_management" : false
+ {% endif %}
+ }
+ },
+ "all_containers": {
+ "children": [
+ "unbound_containers",
+ "ceph-osd_containers",
+ "orchestration_containers",
+ "operator_containers",
+ "memcaching_containers",
+ "metering-infra_containers",
+ "ironic-infra_containers",
+ "ceph-mon_containers",
+ "storage_containers",
+ "ironic-server_containers",
+ "mq_containers",
+ "shared-infra_containers",
+ "compute_containers",
+ "storage-infra_containers",
+ "haproxy_containers",
+ "key-manager_containers",
+ "metering-alarm_containers",
+ "network_containers",
+ "os-infra_containers",
+ "image_containers",
+ "compute-infra_containers",
+ "log_containers",
+ "ironic-compute_containers",
+ "metering-compute_containers",
+ "identity_containers",
+ "dashboard_containers",
+ "dnsaas_containers",
+ "database_containers",
+ "metrics_containers",
+ "repo-infra_containers"
+ ],
+ "hosts": []
+ },
+ "aodh_alarm_evaluator": {
+ "children": [],
+ "hosts": [{% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}]
+ },
+ "aodh_alarm_notifier": {
+ "children": [],
+ "hosts": [{% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}]
+ },
+ "aodh_all": {
+ "children": [
+ "aodh_alarm_notifier",
+ "aodh_api",
+ "aodh_alarm_evaluator",
+ "aodh_listener"
+ ],
+ "hosts": []
+ },
+ "aodh_api": {
+ "children": [],
+ "hosts": [{% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}]
+ },
+ "aodh_container": {
+ "hosts": []
+ },
+ "aodh_listener": {
+ "children": [],
+ "hosts": [{% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}]
+ },
+ "barbican_all": {
+ "children": [
+ "barbican_api"
+ ],
+ "hosts": []
+ },
+ "barbican_api": {
+ "children": [],
+ "hosts": []
+ },
+ "barbican_container": {
+ "hosts": []
+ },
+ "openstack_nodes": {
+ "children": [ "controller", "compute", "storage" ]
+ },
+ "caas_nodes": {
+ "children": [ "caas_master", "caas_worker" ]
+ },
+ "baremetal-infra_hosts": {
+ "hosts": [ {% if not vnf_embedded_deployment %} "{{ installation_controller }}" {% endif %}]
+ },
+ "baremetal-nodes": {
+ "hosts": [ {% if not vnf_embedded_deployment %}{% for host in hosts %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ]
+ },
+ "baremetal_management_nodes": {
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "ceilometer_agent_central": {
+ "children": [],
+ "hosts": []
+ },
+ "ceilometer_agent_compute": {
+ "children": [],
+ "hosts": []
+ },
+ "ceilometer_agent_notification": {
+ "children": [],
+ "hosts": []
+ },
+ "ceilometer_all": {
+ "children": [
+ "ceilometer_agent_central",
+ "ceilometer_agent_notification",
+ "ceilometer_api",
+ "ceilometer_collector",
+ "ceilometer_agent_compute"
+ ],
+ "hosts": []
+ },
+ "ceilometer_api": {
+ "children": [],
+ "hosts": []
+ },
+ "ceilometer_api_container": {
+ "hosts": []
+ },
+ "ceilometer_collector": {
+ "children": [],
+ "hosts": []
+ },
+ "ceilometer_collector_container": {
+ "hosts": []
+ },
+ {% if storagebackend != 'ceph' %}
+ "ceph-mon": {
+ "children": [],
+ "hosts": []
+ },
+ "ceph-mon_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "ceph-osd": {
+ "children": [],
+ "hosts": []
+ },
+ "ceph-osd_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "ceph-mgr": {
+ "children": [],
+ "hosts": []
+ },
+ {% endif %}
+ "ceph-mon_container": {
+ "hosts": []
+ },
+ "ceph-mon_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "ceph-osd_container": {
+ "hosts": []
+ },
+ "ceph-osd_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "ceph_all": {
+ "children": [
+ "ceph-mon",
+ "ceph-osd",
+ "ceph-mgr"
+ ],
+ "hosts": []
+ },
+ "cinder_all": {
+ "children": [
+ "cinder_api",
+ "cinder_backup",
+ "cinder_volume",
+ "cinder_scheduler"
+ ],
+ "hosts": []
+ },
+ "cinder_api": {
+ "children": [],
+ {% if storagebackend == 'ceph' %}
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ {% else %}
+ "hosts": [ {% if not caas_only_deployment %}"{{ installation_controller }}"{% endif %} ]
+ {% endif %}
+ },
+ "cinder_api_container": {
+ "hosts": []
+ },
+ "cinder_backup": {
+ "children": [],
+ {% if storagebackend == 'ceph' %}
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ {% else %}
+ "hosts": [ {% if not caas_only_deployment %}"{{ installation_controller }}"{% endif %} ]
+ {% endif %}
+ },
+ "cinder_scheduler": {
+ "children": [],
+ {% if storagebackend == 'ceph' %}
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ {% else %}
+ "hosts": [ {% if not caas_only_deployment %}"{{ installation_controller }}"{% endif %} ]
+ {% endif %}
+ },
+ "cinder_scheduler_container": {
+ "hosts": []
+ },
+ "cinder_volume": {
+ "children": [],
+ {% if storagebackend == 'ceph' %}
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ {% else %}
+ "hosts": [ {% if not caas_only_deployment %}"{{ installation_controller }}"{% endif %} ]
+ {% endif %}
+ },
+ "cinder_volumes_container": {
+ "hosts": []
+ },
+ "compute-infra_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "compute-infra_containers": {
+ "children": [ {% for host in containers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "compute-infra_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "compute_all": {
+ "hosts": [ {% for host in computes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "compute_containers": {
+ "children": [ {% for host in computes %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "compute_hosts": {
+ "hosts": [ {% for host in computes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "dashboard_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "dashboard_containers": {
+ "children": [ {% for host in controllers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "dashboard_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "database_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "database_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "designate_all": {
+ "children": [
+ "designate_producer",
+ "designate_mdns",
+ "designate_api",
+ "designate_worker",
+ "designate_central",
+ "designate_sink"
+ ],
+ "hosts": []
+ },
+ "designate_api": {
+ "children": [],
+ "hosts": []
+ },
+ "designate_central": {
+ "children": [],
+ "hosts": []
+ },
+ "designate_container": {
+ "hosts": []
+ },
+ "designate_mdns": {
+ "children": [],
+ "hosts": []
+ },
+ "designate_producer": {
+ "children": [],
+ "hosts": []
+ },
+ "designate_sink": {
+ "children": [],
+ "hosts": []
+ },
+ "designate_worker": {
+ "children": [],
+ "hosts": []
+ },
+ "dnsaas_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "dnsaas_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "galera": {
+ "children": [],
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "galera_all": {
+ "children": [
+ "galera"
+ ],
+ "hosts": []
+ },
+ "galera_container": {
+ "hosts": []
+ },
+ "glance_all": {
+ "children": [
+ "glance_registry",
+ "glance_api"
+ ],
+ "hosts": []
+ },
+ "glance_api": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "glance_container": {
+ "hosts": []
+ },
+ "glance_registry": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "gnocchi_all": {
+ "children": [
+ "gnocchi_api",
+ "gnocchi_metricd"
+ ],
+ "hosts": []
+ },
+ "gnocchi_api": {
+ "children": [],
+ "hosts": []
+ },
+ "gnocchi_container": {
+ "hosts": []
+ },
+ "gnocchi_metricd": {
+ "children": [],
+ "hosts": []
+ },
+ {% if config_phase != 'bootstrapping' %}
+ "haproxy": {
+ "children": [],
+ "hosts": [ {% if not vnf_embedded_deployment %}{% for host in management_nodes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ]
+ },
+ "haproxy_all": {
+ "children": [
+ "haproxy"
+ ],
+ "hosts": [ {% if not vnf_embedded_deployment %}{% for host in management_nodes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ]
+ },
+ "haproxy_container": {
+ "hosts": []
+ },
+ "haproxy_containers": {
+ "children": [ {% if not vnf_embedded_deployment %}{% for host in management_nodes %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ],
+ "hosts": []
+ },
+ "haproxy_hosts": {
+ "hosts": [ {% if not vnf_embedded_deployment %}{% for host in management_nodes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ]
+ },
+ {% endif %}
+ "heat_all": {
+ "children": [
+ "heat_api",
+ "heat_engine",
+ "heat_api_cloudwatch",
+ "heat_api_cfn"
+ ],
+ "hosts": []
+ },
+ "heat_api": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "heat_api_cfn": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "heat_api_cloudwatch": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "heat_apis_container": {
+ "hosts": []
+ },
+ "heat_engine": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "heat_engine_container": {
+ "hosts": []
+ },
+ "horizon": {
+ "children": [],
+ "hosts": [ {% if not vnf_embedded_deployment %}{% for host in management_nodes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ]
+ },
+ "horizon_all": {
+ "children": [
+ "horizon"
+ ],
+ "hosts": []
+ },
+ "horizon_container": {
+ "hosts": []
+ },
+ "hosts": {
+ "children": [
+ "memcaching_hosts",
+ "metering-compute_hosts",
+ "image_hosts",
+ "shared-infra_hosts",
+ "storage_hosts",
+ "metering-infra_hosts",
+ "os-infra_hosts",
+ "ironic-server_hosts",
+ "key-manager_hosts",
+ "ceph-osd_hosts",
+ "dnsaas_hosts",
+ "network_hosts",
+ "haproxy_hosts",
+ "mq_hosts",
+ "database_hosts",
+ "ironic-compute_hosts",
+ "metering-alarm_hosts",
+ "log_hosts",
+ "ceph-mon_hosts",
+ "compute_hosts",
+ "orchestration_hosts",
+ "compute-infra_hosts",
+ "identity_hosts",
+ "unbound_hosts",
+ "ironic-infra_hosts",
+ "metrics_hosts",
+ "dashboard_hosts",
+ "storage-infra_hosts",
+ "operator_hosts",
+ "repo-infra_hosts"
+ ],
+ "hosts": []
+ },
+ "identity_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "identity_containers": {
+ "children": [ {% for host in controllers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "identity_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "image_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "image_containers": {
+ "children": [ {% for host in controllers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "image_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "installation_controller": {
+ "hosts": [ "{{ installation_controller }}" ]
+ },
+ "ironic-compute_all": {
+ "hosts": []
+ },
+ "ironic-compute_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "ironic-compute_hosts": {
+ "hosts": []
+ },
+ "ironic-infra_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "ironic-infra_containers": {
+ "children": [ {% for host in controllers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "ironic-infra_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "ironic-server_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "ironic-server_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "ironic_all": {
+ "children": [
+ "ironic_conductor",
+ "ironic_api"
+ ],
+ "hosts": []
+ },
+ "ironic_api": {
+ "children": [],
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "ironic_api_container": {
+ "hosts": []
+ },
+ "ironic_compute": {
+ "children": [],
+ "hosts": []
+ },
+ "ironic_compute_container": {
+ "hosts": []
+ },
+ "ironic_conductor": {
+ "children": [],
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "ironic_conductor_container": {
+ "hosts": []
+ },
+ "ironic_server": {
+ "children": [],
+ "hosts": []
+ },
+ "ironic_server_container": {
+ "hosts": []
+ },
+ "ironic_servers": {
+ "children": [
+ "ironic_server"
+ ],
+ "hosts": []
+ },
+ "key-manager_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "key-manager_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "keystone": {
+ "children": [],
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "keystone_all": {
+ "children": [
+ "keystone"
+ ],
+ "hosts": []
+ },
+ "keystone_container": {
+ "hosts": []
+ },
+ "log_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "log_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "lxc_hosts": {
+ "hosts": [ {% for host in hosts %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "memcached": {
+ "children": [],
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "memcached_all": {
+ "children": [
+ "memcached"
+ ],
+ "hosts": []
+ },
+ "memcached_container": {
+ "hosts": []
+ },
+ "memcaching_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "memcaching_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "metering-alarm_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "metering-alarm_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "metering-compute_container": {
+ "hosts": []
+ },
+ "metering-compute_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "metering-compute_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "metering-infra_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "metering-infra_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "metrics_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "metrics_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "mq_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "mq_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "network_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "network_containers": {
+ "children": [ {% for host in controllers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "network_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_agent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_agents_container": {
+ "hosts": []
+ },
+ "neutron_all": {
+ "children": [
+ "neutron_agent",
+ "neutron_metadata_agent",
+ "neutron_linuxbridge_agent",
+ "neutron_bgp_dragent",
+ "neutron_dhcp_agent",
+ "neutron_lbaas_agent",
+ "neutron_l3_agent",
+ "neutron_metering_agent",
+ "neutron_server",
+ "neutron_sriov_nic_agent",
+ "neutron_openvswitch_agent"
+ ],
+ "hosts": []
+ },
+ "neutron_bgp_dragent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_dhcp_agent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_l3_agent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_lbaas_agent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_linuxbridge_agent": {
+ "children": [],
+ "hosts": [ {% for host in neutron_agent_hosts %}{% if not caas_only_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "neutron_metadata_agent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_metering_agent": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_openvswitch_agent": {
+ "children": [],
+ "hosts": [ {% for host in neutron_agent_hosts %}{% if not caas_only_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "neutron_server": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "neutron_server_container": {
+ "hosts": []
+ },
+ "neutron_sriov_nic_agent": {
+ "children": [],
+ "hosts": [ {% for host in computes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_all": {
+ "children": [
+ "nova_console",
+ "nova_scheduler",
+ "ironic_compute",
+ "nova_api_placement",
+ "nova_api_metadata",
+ "nova_api_os_compute",
+ "nova_conductor",
+ "nova_compute"
+ ],
+ "hosts": []
+ },
+ "nova_api_metadata": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_api_metadata_container": {
+ "hosts": []
+ },
+ "nova_api_os_compute": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_api_os_compute_container": {
+ "hosts": []
+ },
+ "nova_api_placement": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_api_placement_container": {
+ "hosts": []
+ },
+ "nova_compute": {
+ "children": [],
+ "hosts": [ {% for host in computes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_compute_container": {
+ "hosts": []
+ },
+ "nova_conductor": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_conductor_container": {
+ "hosts": []
+ },
+ "nova_console": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_console_container": {
+ "hosts": []
+ },
+ "nova_scheduler": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "nova_scheduler_container": {
+ "hosts": []
+ },
+ "operator_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "operator_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "orchestration_all": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "orchestration_containers": {
+ "children": [ {% for host in controllers %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "orchestration_hosts": {
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "os-infra_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "os-infra_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "pkg_repo": {
+ "children": [],
+ "hosts": []
+ },
+ "rabbit_mq_container": {
+ "hosts": []
+ },
+ "rabbitmq": {
+ "children": [],
+ "hosts": [ {% for host in management_nodes %}{% if not vnf_embedded_deployment %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ },
+ "rabbitmq_all": {
+ "children": [
+ "rabbitmq"
+ ],
+ "hosts": []
+ },
+ "repo-infra_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "repo-infra_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "repo_all": {
+ "children": [
+ "pkg_repo"
+ ],
+ "hosts": []
+ },
+ "repo_container": {
+ "hosts": []
+ },
+ "rsyslog": {
+ "children": [],
+ "hosts": []
+ },
+ "rsyslog_all": {
+ "children": [
+ "rsyslog"
+ ],
+ "hosts": []
+ },
+ "rsyslog_container": {
+ "hosts": []
+ },
+ "shared-infra_hosts": {
+ "hosts": [ {% if not vnf_embedded_deployment %}{% for host in management_nodes %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %}{% endif %} ]
+ },
+ "storage-infra_all": {
+ "hosts": [ {% for host in storages %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "storage-infra_containers": {
+ "children": [ {% for host in storages %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "storage-infra_hosts": {
+ "hosts": [ {% for host in storages %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "storage_all": {
+ "hosts": [ {% for host in storages %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "storage_containers": {
+ "children": [ {% for host in storages %}"{{ host.name }}-host_containers"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "hosts": []
+ },
+ "storage_hosts": {
+ "hosts": [ {% for host in storages %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "unbound": {
+ "children": [],
+ "hosts": []
+ },
+ "unbound_all": {
+ "children": [
+ "unbound"
+ ],
+ "hosts": []
+ },
+ "unbound_container": {
+ "hosts": []
+ },
+ "unbound_containers": {
+ "children": [],
+ "hosts": []
+ },
+ "unbound_hosts": {
+ "children": [],
+ "hosts": []
+ },
+ "utility": {
+ "children": [],
+ "hosts": [ {% for host in controllers %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+ },
+ "utility_all": {
+ "children": [
+ "utility"
+ ],
+ "hosts": []
+ },
+ "utility_container": {
+ "hosts": []
+ },
+ "vnf-nodes": {
+ "hosts": [ {% for host in hosts %}{% if vnf_embedded_deployment %} "{{ host.name }}"{% if not loop.last %},{% endif %}{% endif %}{% endfor %} ]
+ }
+}
+"""
+
+class General:
+ def __init__(self):
+ self.dns_servers = []
+ self.ntp_servers = []
+ self.zone = None
+ self.admin = None
+ self.password = None
+ self.openstack_password = None
+
+class Network:
+ def __init__(self):
+ self.name = None
+ self.cidr = None
+ self.cidrs = set()
+ self.vlan = None
+ self.gateway = None
+
+class HostNetwork:
+ def __init__(self):
+ self.network = None
+ self.interface = None
+ self.ip_holding_interface = None
+ self.is_bonding = False
+ self.linux_bonding_options = None
+ self.members = []
+ self.ip = None
+
+class ProviderNetwork:
+ def __init__(self):
+ self.cidr = None
+ self.cidrs = None
+ self.interface = None
+ self.ip = None
+ self.gateway = None
+
+class Host:
+ def __init__(self):
+ self.name = None
+ self.is_controller = False
+ self.is_caas_master = False
+ self.is_compute = False
+ self.is_storage = False
+ self.is_management = False
+ self.networks = []
+ self.hwmgmt_address = None
+ self.hwmgmt_password = None
+ self.hwmgmt_user = None
+ self.mgmt_mac = None
+ self.is_performance = False
+ self.os_max_threads = 16
+
+
+ def get_network_ip(self, networkname):
+ for network in self.networks:
+ if network.network.name == networkname:
+ return network.ip.split('/')[0]
+
+ def get_network_ip_holding_interface(self, networkname):
+ for network in self.networks:
+ if network.network.name == networkname:
+ return network.ip_holding_interface
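+
+ # A minimal usage sketch of the two lookups above (hypothetical values, for
+ # illustration only; the real objects are built by the plugin below):
+ #
+ #     net = HostNetwork()
+ #     net.network = Network()
+ #     net.network.name = 'infra_internal'
+ #     net.ip = '192.168.1.10/24'
+ #     net.ip_holding_interface = 'br-infra'
+ #     host = Host()
+ #     host.networks = [net]
+ #     host.get_network_ip('infra_internal')                    # -> '192.168.1.10'
+ #     host.get_network_ip_holding_interface('infra_internal')  # -> 'br-infra'
+ #
+ # get_network_ip strips the CIDR suffix from the stored address, and both
+ # methods implicitly return None when no attached network matches the name.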
+
+
+class HAProxy:
+ def __init__(self):
+ self.internal_vip = None
+ self.external_vip = None
+
+class HAS:
+ def __init__(self):
+ self.haproxy = HAProxy()
+
+class openstackinventory(cmansibleinventoryconfig.CMAnsibleInventoryConfigPlugin):
+ def __init__(self, confman, inventory, ownhost):
+ super(openstackinventory, self).__init__(confman, inventory, ownhost)
+ self.networks = []
+ self.hosts = []
+ self.controllers = []
+ self.managements = []
+ self.caas_masters = []
+ self.computes = []
+ self.storages = []
+ self.neutron_agent_hosts = set()
+ self.has = HAS()
+ self.general = General()
+ self._init_jinja_environment()
+ self.orig_inventory = inventory.copy()
+
+
+ def handle_bootstrapping(self):
+ self.handle('bootstrapping')
+
+ def handle_provisioning(self):
+ self.handle('provisioning')
+
+ def handle_postconfig(self):
+ self.handle('postconfig')
+
+ def handle_setup(self):
+ try:
+ ownhostobj = None
+ for host in self.hosts:
+ if host.name == self.ownhost:
+ ownhostobj = host
+ break
+ if not ownhostobj:
+ raise cmerror.CMError('Invalid own host configuration %s' % self.ownhost)
+ text = Environment().from_string(json_text_setup).render(host=ownhostobj, installation_controller=self.ownhost, general=self.general)
+
+ inventory = json.loads(text)
+
+ # add some variables from the original inventory
+ self.inventory.update(inventory)
+ self.inventory['all'] = {'hosts': [self.ownhost]}
+ self.inventory['all']['vars'] = {}
+
+ setuphosts = {}
+ setupnetworking = {}
+ setupnetworkprofiles = {}
+
+ if 'hosts' in self.orig_inventory['all']['vars'] and self.ownhost in self.orig_inventory['all']['vars']['hosts']:
+ setuphosts = self.orig_inventory['all']['vars']['hosts'][self.ownhost]
+ if 'networking' in self.orig_inventory['all']['vars']:
+ setupnetworking = self.orig_inventory['all']['vars']['networking']
+ if 'network_profiles' in self.orig_inventory['all']['vars']:
+ setupnetworkprofiles = self.orig_inventory['all']['vars']['network_profiles']
+
+ if setuphosts:
+ self.inventory['all']['vars']['hosts'] = {self.ownhost: setuphosts}
+ if setupnetworking:
+ self.inventory['all']['vars']['networking'] = setupnetworking
+ if setupnetworkprofiles:
+ self.inventory['all']['vars']['network_profiles'] = setupnetworkprofiles
+
+ # add the networking configuration to our own host
+ if self.ownhost in self.orig_inventory['_meta']['hostvars'] and 'networking' in self.orig_inventory['_meta']['hostvars'][self.ownhost]:
+ self.inventory['_meta']['hostvars'][self.ownhost]['networking'] = self.orig_inventory['_meta']['hostvars'][self.ownhost]['networking']
+
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ def handle(self, phase):
+ try:
+ networkingconf = self.confman.get_networking_config_handler()
+ hostsconf = self.confman.get_hosts_config_handler()
+
+ infrainternal = networkingconf.get_infra_internal_network_name()
+ infraexternal = networkingconf.get_infra_external_network_name()
+
+ installation_controller = socket.gethostname()
+
+ # sort management nodes so that installation_controller is the first
+ modified_list = []
+ for entry in self.managements:
+ if entry.name == installation_controller:
+ modified_list.append(entry)
+
+ for entry in self.managements:
+ if entry.name != installation_controller:
+ modified_list.append(entry)
+
+ self.managements = modified_list
+
+ installation_controller_ip = networkingconf.get_host_ip(installation_controller, infrainternal)
+ installation_network_domain = hostsconf.get_host_network_domain(installation_controller)
+
+ virtual_environment = utils.is_virtualized()
+
+ openstackconfig = self.confman.get_openstack_config_handler()
+ storagebackend = openstackconfig.get_storage_backend()
+
+ # construct provider networks based on the installation controller
+ infra_mgmt = ProviderNetwork()
+ infra_external = ProviderNetwork()
+
+ host = self._get_host(installation_controller)
+
+ # The installation controller has to be the first host in the controllers list,
+ # since most OpenStack Ansible modules are executed on the first host in the list.
+ if self.controllers:
+ self.controllers.remove(host)
+ self.controllers.insert(0, host)
+
+ for hostnet in host.networks:
+ if hostnet.network.name == infrainternal:
+ infra_mgmt.cidr = hostnet.network.cidr
+ infra_mgmt.cidrs = hostnet.network.cidrs
+ infra_mgmt.interface = hostnet.ip_holding_interface
+ infra_mgmt.ip = networkingconf.get_host_ip(installation_controller, infrainternal)
+ elif hostnet.network.name == infraexternal:
+ infra_external.cidr = hostnet.network.cidr
+ infra_external.interface = hostnet.ip_holding_interface
+ infra_external.ip = networkingconf.get_host_ip(installation_controller, infraexternal)
+ infra_external.gateway = networkingconf.get_network_gateway(infraexternal, installation_network_domain)
+
+ caas_conf = self.confman.get_caas_config_handler()
+
+ text = Environment().from_string(json_text).render(
+ hosts=self.hosts, networks=self.networks, general=self.general, has=self.has,
+ virtual_environment=virtual_environment,
+ installation_controller=installation_controller,
+ installation_controller_ip=installation_controller_ip,
+ infra_mgmt=infra_mgmt, infra_external=infra_external,
+ controllers=self.controllers, computes=self.computes, storages=self.storages,
+ neutron_agent_hosts=self.neutron_agent_hosts, config_phase=phase,
+ hostsconf=hostsconf, networkingconf=networkingconf,
+ storagebackend=storagebackend,
+ vnf_embedded_deployment=caas_conf.get_vnf_flag(),
+ caas_only_deployment=caas_conf.get_caas_only(),
+ management_nodes=self.managements)
+ #print(text)
+ inventory = json.loads(text)
+
+ # process host vars
+ for host, hostvars in inventory['_meta']['hostvars'].iteritems():
+ for var, value in hostvars.iteritems():
+ self.add_host_var(host, var, value)
+
+ # process all vars
+ for var, value in inventory['all']['vars'].iteritems():
+ self.add_global_var(var, value)
+
+ # process groups
+ for var, value in inventory.iteritems():
+ if var in ('_meta', 'all'):
+ continue
+ self.inventory[var] = value
+
+ # create a mapping between service-groups and VIPs to be added to /etc/hosts
+ if phase == "postconfig":
+ sgvips = {}
+ sgvips['config-manager'] = networkingconf.get_internal_vip()
+ sgvips['haproxyvip'] = networkingconf.get_internal_vip()
+ self.add_global_var('extra_hosts_entries', sgvips)
+
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ def _is_host_controller(self, host):
+ hostsconf = self.confman.get_hosts_config_handler()
+ profile = profiles.Profiles.get_controller_service_profile()
+ return profile in hostsconf.get_service_profiles(host)
+
+ def _is_host_caas_master(self, host):
+ hostsconf = self.confman.get_hosts_config_handler()
+ profile = profiles.Profiles.get_caasmaster_service_profile()
+ return profile in hostsconf.get_service_profiles(host)
+
+ def _is_host_management(self, host):
+ hostsconf = self.confman.get_hosts_config_handler()
+ profile = profiles.Profiles.get_management_service_profile()
+ return profile in hostsconf.get_service_profiles(host)
+
+ def _is_host_compute(self, host):
+ hostsconf = self.confman.get_hosts_config_handler()
+ profile = profiles.Profiles.get_compute_service_profile()
+ return profile in hostsconf.get_service_profiles(host)
+
+ def _is_host_storage(self, host):
+ hostsconf = self.confman.get_hosts_config_handler()
+ profile = profiles.Profiles.get_storage_service_profile()
+ return profile in hostsconf.get_service_profiles(host)
+
+ def _get_network(self, name, host):
+ for network in self.networks:
+ if network.name == name:
+ return network
+
+ hostsconf = self.confman.get_hosts_config_handler()
+ domain = hostsconf.get_host_network_domain(host)
+ networkingconf = self.confman.get_networking_config_handler()
+ network = Network()
+ network.name = name
+ network.cidr = networkingconf.get_network_cidr(name, domain)
+ for dom in networkingconf.get_network_domains(name):
+ network.cidrs.add(networkingconf.get_network_cidr(name, dom))
+ network.vlan = None
+ try:
+ network.vlan = networkingconf.get_network_vlan_id(name, domain)
+ except configerror.ConfigError:
+ pass
+
+ network.gateway = None
+ try:
+ network.gateway = networkingconf.get_network_gateway(name, domain)
+ except configerror.ConfigError:
+ pass
+
+ self.networks.append(network)
+ return network
+
+ def _get_platform_cpus(self, host):
+ hostsconf = self.confman.get_hosts_config_handler()
+ cpus = 0
+ try:
+ perfprofconf = self.confman.get_performance_profiles_config_handler()
+ pprofile = hostsconf.get_performance_profiles(host.name)[0]
+ platform_cpus = perfprofconf.get_platform_cpus(pprofile)
+ if platform_cpus:
+ for alloc in platform_cpus.values():
+ cpus += int(alloc)
+ except (configerror.ConfigError, IndexError, KeyError):
+ pass
+ return cpus
+
+ def _get_host(self, name):
+ for host in self.hosts:
+ if host.name == name:
+ return host
+
+ hostsconf = self.confman.get_hosts_config_handler()
+ networkprofilesconf = self.confman.get_network_profiles_config_handler()
+ networkingconf = self.confman.get_networking_config_handler()
+
+ host = Host()
+ host.name = name
+ host.is_controller = self._is_host_controller(name)
+ host.is_caas_master = self._is_host_caas_master(name)
+ host.is_compute = self._is_host_compute(name)
+ host.is_storage = self._is_host_storage(name)
+ host.is_management = self._is_host_management(name)
+ host.hwmgmt_address = hostsconf.get_hwmgmt_ip(name)
+ host.hwmgmt_user = hostsconf.get_hwmgmt_user(name)
+ host.hwmgmt_password = hostsconf.get_hwmgmt_password(name)
+ host.mgmt_mac = hostsconf.get_mgmt_mac(name)
+
+ platform_cpus = self._get_platform_cpus(host)
+ if platform_cpus:
+ host.os_max_threads = platform_cpus
+ host.is_performance = True
+
+ hostnetprofiles = hostsconf.get_network_profiles(name)
+
+ hostnetnames = hostsconf.get_host_networks(name)
+ domain = hostsconf.get_host_network_domain(name)
+
+ for net in hostnetnames:
+ hostnetwork = HostNetwork()
+ hostnetwork.network = self._get_network(net, name)
+ hostnetwork.interface = hostsconf.get_host_network_interface(name, net)
+ hostnetwork.ip_holding_interface = hostsconf.get_host_network_ip_holding_interface(name, net)
+ hostnetwork.ip = networkingconf.get_host_ip(name, net)
+ mask = networkingconf.get_network_mask(net, domain)
+ hostnetwork.ip = hostnetwork.ip + '/' + str(mask)
+
+ hostnetwork.is_bonding = False
+
+ for profile in hostnetprofiles:
+ try:
+ bondinginterfaces = networkprofilesconf.get_profile_bonding_interfaces(profile)
+ if hostnetwork.interface in bondinginterfaces:
+ hostnetwork.is_bonding = True
+ hostnetwork.members = networkprofilesconf.get_profile_bonded_interfaces(profile, hostnetwork.interface)
+ hostnetwork.linux_bonding_options = networkprofilesconf.get_profile_linux_bonding_options(profile)
+ break
+ except configerror.ConfigError:
+ pass
+ host.networks.append(hostnetwork)
+
+ self.hosts.append(host)
+ if host.is_controller:
+ self.controllers.append(host)
+ self.neutron_agent_hosts.add(host)
+ if host.is_caas_master:
+ self.caas_masters.append(host)
+ if host.is_management:
+ self.managements.append(host)
+ if host.is_compute:
+ self.computes.append(host)
+ self.neutron_agent_hosts.add(host)
+ if host.is_storage:
+ self.storages.append(host)
+
+ def _init_jinja_environment(self):
+ # initialize networks and hosts
+ networkingconf = self.confman.get_networking_config_handler()
+ networks = networkingconf.get_networks()
+ hostsconf = self.confman.get_hosts_config_handler()
+ hosts = hostsconf.get_enabled_hosts()
+ for net in networks:
+ for host in hosts:
+ self._get_network(net, host)
+ self._get_host(host)
+
+ # initialize HAS
+ self.has.haproxy.external_vip = networkingconf.get_external_vip()
+ self.has.haproxy.internal_vip = networkingconf.get_internal_vip()
+
+ # initialize general
+ self.general.dns_servers = networkingconf.get_dns()
+ timeconf = self.confman.get_time_config_handler()
+ self.general.ntp_servers = timeconf.get_ntp_servers()
+ self.general.zone = timeconf.get_zone()
+ usersconf = self.confman.get_users_config_handler()
+ self.general.admin = usersconf.get_admin_user()
+ self.general.password = usersconf.get_admin_user_password()
+ caas_conf = self.confman.get_caas_config_handler()
+ if caas_conf.get_caas_only():
+ self.general.openstack_password = usersconf.get_admin_password()
+ else:
+ openstackconfighandler = self.confman.get_openstack_config_handler()
+ self.general.openstack_password = openstackconfighandler.get_admin_password()
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=missing-docstring,invalid-name,too-few-public-methods,too-many-instance-attributes,too-many-lines
+import os
+import json
+from jinja2 import Environment
+from cmframework.apis import cmansibleinventoryconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+from serviceprofiles import profiles
+import hw_detector.hw_detect_lib as hw
+
+
+import math
+
+NEAREST_POWER_OF_2_PERCENTAGE = 0.25
+
+TARGET_PGS_PER_OSD_NO_INCREASE_EXPECTED = 100
+TARGET_PGS_PER_OSD_UP_TO_DOUBLE_SIZE_INCREASE_EXPECTED = 200
+TARGET_PGS_PER_OSD_TWO_TO_THREE_TIMES_SIZE_INCREASE_EXPECTED = 300
+# Please visit ceph.com/pgcalc for details on previous values
+
+MINIMUM_PG_NUM = 32
+
+
+class PGNum(object):
+ """Calculates the pg_num for the given attributes."""
+
+ def __init__(self, number_of_pool_osds, pool_data_percentage, number_of_replicas):
+ self._number_of_pool_osds = number_of_pool_osds
+ self._pool_data_percentage = pool_data_percentage
+ self._number_of_replicas = number_of_replicas
+
+ @staticmethod
+ def _round_up_to_closest_power_of_2(num):
+ """Smallest power of 2 greater than or equal to num."""
+ return 2**(num-1).bit_length() if num > 0 else 1
+
+ @staticmethod
+ def _round_down_to_closest_power_of_2(num):
+ """Largest power of 2 less than or equal to num."""
+ return 2**(num.bit_length()-1) if num > 0 else 1
+
+ @staticmethod
+ def _check_percentage_of_values(diff_to_lower, org_pgnum):
+ """ If the nearest power of 2 is more than 25% below the original value,
+ the next higher power of 2 is used. Please visit ceph.com/pgcalc
+ """
+ return float(float(diff_to_lower) / float(org_pgnum)) > NEAREST_POWER_OF_2_PERCENTAGE
+
+ def _rounded_pgnum_to_the_nearest_power_of_2(self, pgnum):
+ higher_power = self._round_up_to_closest_power_of_2(pgnum)
+ lower_power = self._round_down_to_closest_power_of_2(pgnum)
+ diff_to_lower = pgnum - lower_power
+ if pgnum != 0 and self._check_percentage_of_values(diff_to_lower, pgnum):
+ return higher_power
+ return lower_power
+
+ def _calculate_pg_num_formula(self, number_of_pool_osds, pool_percentage):
+ return TARGET_PGS_PER_OSD_UP_TO_DOUBLE_SIZE_INCREASE_EXPECTED \
+ * number_of_pool_osds * float(pool_percentage) / self._number_of_replicas
+
+ def _select_pgnum_formula_result(self, number_of_pool_osds, pool_percentage):
+ pgnum = self._calculate_pg_num_formula(number_of_pool_osds, pool_percentage)
+ return int(math.ceil(max(pgnum, MINIMUM_PG_NUM)))
+
+ def calculate(self):
+ """ The formula of the calculation can be found from ceph.com/pgcalc.
+
+ pgnum = (target_pgs x number_of_osds_in_pool x pool_percentage)/number_of_replicas
+ return : rounded pgnum to the nearest power of 2
+
+ """
+ pgnum = self._select_pgnum_formula_result(
+ self._number_of_pool_osds, self._pool_data_percentage)
+ return self._rounded_pgnum_to_the_nearest_power_of_2(pgnum)
+
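The PGNum class above follows the ceph.com/pgcalc procedure: apply the target-PGs formula, floor the result at MINIMUM_PG_NUM, then snap to a power of 2, rounding up whenever the nearest lower power would be more than 25% below the raw value. A standalone sketch of the same logic (independent of this class, for illustration only; the `pgcalc` name and defaults are this sketch's own):

```python
import math

def pgcalc(osds, pool_percentage, replicas,
           target_pgs=200, minimum=32, threshold=0.25):
    # raw pgcalc formula, floored at the minimum pg_num
    raw = int(math.ceil(max(target_pgs * osds * float(pool_percentage) / replicas,
                            minimum)))
    lower = 2 ** (raw.bit_length() - 1)   # largest power of 2 <= raw
    higher = 2 ** (raw - 1).bit_length()  # smallest power of 2 >= raw
    # round down unless that loses more than 25% of the raw value
    return higher if float(raw - lower) / raw > threshold else lower

# e.g. 9 OSDs, volumes pool holding 69% of the data, 3 replicas:
# raw = ceil(200 * 9 * 0.69 / 3) = 414; 256 is ~38% below, so round up to 512
```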
+
+NUMBER_OF_POOLS = 4
+SUPPORTED_INSTANCE_BACKENDS = ['default', 'cow', 'lvm']
+ALL_DEFAULT_INSTANCE_BACKENDS = SUPPORTED_INSTANCE_BACKENDS + ['rbd']
+
+DEFAULT_INSTANCE_LV_PERCENTAGE = "100"
+
+USER_SECRETS = "/etc/openstack_deploy/user_secrets.yml"
+
+# Ceph PG share percentages for Openstack pools
+OSD_POOL_IMAGES_PG_NUM_PERCENTAGE = 0.09
+OSD_POOL_VOLUMES_PG_NUM_PERCENTAGE = 0.69
+OSD_POOL_VMS_PG_NUM_PERCENTAGE = 0.20
+OSD_POOL_SHARED_PG_NUM_PERCENTAGE = 0.02
+# Ceph PG share percentages for CaaS pools
+OSD_POOL_CAAS_PG_NUM_PERCENTAGE = 1.0
+
+DEFAULT_ROOTDISK_DEVICE = "/dev/sda"
+# root disk partition 2 system volume group VG percentages
+INSTANCE_NODE_VG_PERCENTAGE = 0.47
+NOT_INSTANCE_NODE_VG_PERCENTAGE = 1
+"""
+/dev/sda1 fixed partition size: 50 GiB fixed size (10% of the total disk size)
+/dev/sda2 system VG partition size: 47% of the remaining disk size (42% of total)
+/dev/sda3 instance partition size: 53% of the remaining disk size (47% of total)
+"""
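A quick arithmetic check of the split described above, assuming a hypothetical 500 GiB root disk (chosen so that the fixed 50 GiB /dev/sda1 is 10% of the total; the variable names are illustrative only):

```python
total = 500.0                 # hypothetical disk size, GiB
sda1 = 50.0                   # fixed partition, 10% of total
remaining = total - sda1      # 450 GiB left for sda2/sda3
sda2 = 0.47 * remaining       # system VG: 211.5 GiB = 42.3% of total
sda3 = remaining - sda2       # instances: 238.5 GiB = 47.7% of total
```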
+
+
+JSON_EXTERNAL_CEPH_CINDER_BACKEND_HOST_VAR = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "ext_ceph_user": "{{ ext_ceph_user }}",
+ "ext_ceph_user_key": "{{ ext_ceph_user_key }}",
+ "cephkeys_access_group": "cephkeys",
+
+ "ceph_mons": [
+ {% for host in hosts %}
+ "{{ host.name }}"
+ {% if not loop.last %},{% endif %}
+ {% endfor %}],
+
+ "ext_ceph_fsid": "{{ ext_ceph_fsid }}",
+ "ext_ceph_mon_hosts": "{{ ext_ceph_mon_hosts }}",
+
+ "cinder_service_hostname": "{{ host.name }}",
+ "cinder_backends": {
+ "rbd": {
+ "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
+ "rbd_pool": "{{ cinder_pool_name }}",
+ "rbd_ceph_conf": "/etc/ceph/ceph.conf",
+ "ceph_conf": "/etc/ceph/ceph.conf",
+ "rbd_flatten_volume_from_snapshot": "false",
+ "rbd_max_clone_depth": "5",
+ "rbd_store_chunk_size": "4",
+ "rados_connect_timeout": "-1",
+ "volume_backend_name": "RBD",
+ "rbd_secret_uuid": "{{ cinder_ceph_client_uuid }}",
+ "rbd_user": "{{ ext_ceph_user }}",
+ "backend_host": "controller",
+ "rbd_exclusive_cinder_pool": "True"
+ }
+ },
+
+ "ext_openstack_pools": [
+ "{{ glance_pool_name }}",
+ "{{ cinder_pool_name }}",
+ "{{ nova_pool_name }}",
+ "{{ platform_pool_name }}"
+ ],
+
+ "cinder_ceph_client": "{{ ext_ceph_user }}",
+ "nova_ceph_client": "{{ ext_ceph_user }}",
+
+ "glance_default_store": "rbd",
+ "glance_additional_stores": ["http", "cinder", "file"],
+ "glance_rbd_store_pool": "{{ glance_pool_name }}",
+ "glance_rbd_store_chunk_size": "8",
+ "glance_ceph_client": "{{ ext_ceph_user }}",
+ "ceph_conf": "/etc/ceph/ceph.conf"
+
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+JSON_CINDER_BACKENDS_HOST_VAR = """
+{
+ {%- set loopvar = {'first_entry': True} %}
+ {% for host in hosts %}
+ {% if host.is_controller %}
+ {%- if not loopvar.first_entry %},{%- endif %}
+ {%- if loopvar.update({'first_entry': False}) %}{%- endif %}
+ "{{ host.name }}": {
+ "cinder_service_hostname": "{{ host.name }}",
+ "cinder_backends": {
+ {% if openstack_storage == 'ceph' %}
+ "rbd": {
+ "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
+ "rbd_pool": "{{ cinder_pool_name }}",
+ "rbd_ceph_conf": "/etc/ceph/ceph.conf",
+ "ceph_conf": "/etc/ceph/ceph.conf",
+ "rbd_flatten_volume_from_snapshot": "false",
+ "rbd_max_clone_depth": "5",
+ "rbd_store_chunk_size": "4",
+ "rados_connect_timeout": "-1",
+ "volume_backend_name": "volumes_hdd",
+ "rbd_secret_uuid": "{{ cinder_ceph_client_uuid }}",
+ "rbd_user": "cinder",
+ "backend_host": "controller",
+ "rbd_exclusive_cinder_pool": "True"
+ }
+ {% endif %}
+ {% if openstack_storage == 'lvm' %}
+ "lvm": {
+ "iscsi_ip_address": "{{ installation_controller_ip }}",
+ "volume_backend_name": "LVM_iSCSI",
+ "volume_driver": "cinder.volume.drivers.lvm.LVMVolumeDriver",
+ "volume_group": "cinder-volumes"
+ }
+ {% endif %}
+ }
+ }
+ {% endif %}
+ {% endfor %}
+}
+"""
+
+JSON_STORAGE_HOST_VAR = """
+{
+ {%- set loopvar = {'first_entry': True} %}
+ {% for host in hosts %}
+ {% if host.is_rbd_ceph %}
+ {%- if not loopvar.first_entry %},{%- endif %}
+ {%- if loopvar.update({'first_entry': False}) %}{%- endif %}
+ "{{ host.name }}": {
+ "devices": [
+ {% for disk in host.ceph_osd_disks %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}{% endfor %}]
+ }
+ {% endif %}
+ {% endfor %}
+}
+"""
+
+JSON_STORAGE_HOST_DISK_CONFIGURATION = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "by_path_disks":
+ { "os" : "{{ host.os_disk }}",
+ "osd" : "{{ host.ceph_osd_disks }}",
+ "osd_disks_ids" : "{{ host.osd_disks_ids }}"
+ },
+ "rootdisk_vg_percentage": "{{ host.vg_percentage }}",
+ "default_rootdisk_device": "{{ rootdisk_device }}"
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+
+JSON_LVM_STORAGE_HOST_VAR = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "devices": [
+ {% for disk in host.cinder_disks %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}{% endfor %}],
+ "cinder_physical_volumes": [
+ {% for disk in host.cinder_physical_volumes %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}{% endfor %}]
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+
+JSON_BARE_LVM_STORAGE_HOST_VAR = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ {% if host.is_bare_lvm %}
+ "bare_lvm": {
+ "disks": [
+ {% for disk in host.bare_lvm_disks %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}{% endfor %}],
+ "physical_volumes": [
+ {% for disk in host.bare_lvm_physical_volumes %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}{% endfor %}],
+ "mount_options": "{{ host.mount_options }}",
+ "mount_dir": "{{ host.mount_dir }}",
+ "name": "{{ host.bare_lvm_lv_name }}"
+ }
+ {% endif %}
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+JSON_DEVICE_HOST_VAR = """
+{
+ {%- set loopvar = {'first_entry': True} %}
+ {% for host in hosts %}
+ {% if host.instance_physical_volumes %}
+ {%- if not loopvar.first_entry %},{%- endif %}
+ {%- if loopvar.update({'first_entry': False}) %}{%- endif %}
+ "{{ host.name }}": {
+ "instance_disks": [
+ {% for disk in host.instance_disks %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}
+ {% endfor %}],
+ "instance_physical_volumes": [
+ {% for disk in host.instance_physical_volumes %}
+ "{{disk}}"
+ {%if not loop.last %},{% endif %}
+ {% endfor %}],
+ "instance_lv_percentage": "{{ host.instance_lv_percentage }}"
+ }
+ {% endif %}
+ {% endfor %}
+}
+"""
+
+# /etc/ansible/roles/os_nova/templates/nova.conf.j2
+JSON_NOVA_RBD_HOST_VAR = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "nova_libvirt_images_rbd_pool": "{{ nova_pool_name }}",
+ "nova_ceph_client": "{{ nova_ceph_client }}"
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+
+#
+# /opt/ceph-ansible/group_vars/osds.yml
+JSON_OVERRIDE = """
+{
+ "ceph_conf_overrides": {
+ "global": {
+ "mon_max_pg_per_osd": "400",
+ "mon_pg_warn_max_object_skew": "-1",
+ "osd_pool_default_size": "{{ osd_pool_default_size }}",
+ "osd_pool_default_min_size": "{{ osd_pool_default_min_size }}",
+ "osd_pool_default_pg_num": "{{ osd_pool_default_pg_num }}",
+ "osd_pool_default_pgp_num": "{{ osd_pool_default_pg_num }}",
+ "osd_heartbeat_grace": "3",
+ "osd_heartbeat_interval": "2",
+ "mon_osd_min_down_reporters": "1",
+ "mon_osd_adjust_heartbeat_grace": "false",
+ "auth_client_required": "cephx"
+ },
+ "mgr": {
+ "mgr_modules": "dashboard"
+ },
+ "mon": {
+ "mon_health_preluminous_compat_warning": "false",
+ "mon_health_preluminous_compat": "true",
+ "mon_timecheck_interval": "60",
+ "mon_sd_reporter_subtree_level": "device",
+ "mon_clock_drift_allowed": "0.1"
+ },
+ "osd": {
+ "osd_mon_heartbeat_interval": "10",
+ "osd_mon_report_interval_min": "1",
+ "osd_mon_report_interval_max": "15"
+ }
+ }
+}
+"""
+JSON_OVERRIDE_CACHE = """
+{
+ "ceph_conf_overrides": {
+ "global": {
+ "mon_max_pg_per_osd": "400",
+ "mon_pg_warn_max_object_skew": "-1",
+ "osd_pool_default_size": "{{ osd_pool_default_size }}",
+ "osd_pool_default_min_size": "{{ osd_pool_default_min_size }}",
+ "osd_pool_default_pg_num": "{{ osd_pool_default_pg_num }}",
+ "osd_pool_default_pgp_num": "{{ osd_pool_default_pg_num }}",
+ "osd_heartbeat_grace": "3",
+ "osd_heartbeat_interval": "2",
+ "mon_osd_adjust_heartbeat_grace": "false",
+ "bluestore_cache_size": "1073741824",
+ "auth_client_required": "cephx"
+ },
+ "mgr": {
+ "mgr_modules": "dashboard"
+ },
+ "mon": {
+ "mon_health_preluminous_compat_warning": "false",
+ "mon_health_preluminous_compat": "true",
+ "mon_timecheck_interval": "60",
+ "mon_sd_reporter_subtree_level": "device",
+ "mon_clock_drift_allowed": "0.1"
+ },
+ "osd": {
+ "osd_mon_heartbeat_interval": "10",
+ "osd_mon_report_interval_min": "1",
+ "osd_mon_report_interval_max": "15"
+ }
+ }
+}
+"""
+JSON_OVERRIDE_3CONTROLLERS = """
+{
+ "ceph_conf_overrides": {
+ "global": {
+ "mon_max_pg_per_osd": "400",
+ "mon_pg_warn_max_object_skew": "-1",
+ "osd_pool_default_size": "{{ osd_pool_default_size }}",
+ "osd_pool_default_min_size": "{{ osd_pool_default_min_size }}",
+ "osd_pool_default_pg_num": "{{ osd_pool_default_pg_num }}",
+ "osd_pool_default_pgp_num": "{{ osd_pool_default_pg_num }}",
+ "osd_heartbeat_grace": "3",
+ "osd_heartbeat_interval": "2",
+ "mon_osd_adjust_heartbeat_grace": "false",
+ "bluestore_cache_size": "1073741824",
+ "auth_client_required": "cephx"
+ },
+ "mgr": {
+ "mgr_modules": "dashboard"
+ },
+ "mon": {
+ "mon_health_preluminous_compat_warning": "false",
+ "mon_health_preluminous_compat": "true",
+ "mon_lease": "1.0",
+ "mon_election_timeout": "2",
+ "mon_lease_renew_interval_factor": "0.4",
+ "mon_lease_ack_timeout_factor": "1.5",
+ "mon_timecheck_interval": "60",
+ "mon_sd_reporter_subtree_level": "device",
+ "mon_clock_drift_allowed": "0.1"
+ },
+ "osd": {
+ "osd_mon_heartbeat_interval": "10",
+ "osd_mon_report_interval_min": "1",
+ "osd_mon_report_interval_max": "15"
+ }
+ }
+}
+"""
+
+JSON_NETWORK = """
+{
+ "public_network": "{{ public_networks }}",
+ "cluster_network": "{{ cluster_networks }}"
+}
+"""
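Each JSON_* constant in this file is consumed the same way the inventory plugins use their templates: render the Jinja2 template to text, then parse the result with json.loads. A minimal sketch of that pattern using a JSON_NETWORK-style template (the CIDR values are made-up examples):

```python
import json
from jinja2 import Environment

template = """
{
    "public_network": "{{ public_networks }}",
    "cluster_network": "{{ cluster_networks }}"
}
"""

# render the template, then parse the resulting JSON into a dict
text = Environment().from_string(template).render(
    public_networks='10.0.0.0/24', cluster_networks='10.0.1.0/24')
data = json.loads(text)
```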
+
+JSON_OS_TUNING = """
+{
+ "os_tuning_params": [{
+ "name": "vm.min_free_kbytes",
+ "value": "1048576"
+ }]
+}
+"""
+
+JSON_OSD_POOL_PGNUMS = """
+{
+ "osd_pool_images_pg_num": "{{ osd_pool_images_pg_num }}",
+ "osd_pool_volumes_pg_num": "{{ osd_pool_volumes_pg_num }}",
+ "osd_pool_vms_pg_num": "{{ osd_pool_vms_pg_num }}",
+ "osd_pool_shared_pg_num": "{{ osd_pool_shared_pg_num }}"{%- if 0 < osd_pool_caas_pg_num %},
+ "osd_pool_caas_pg_num": "{{ osd_pool_caas_pg_num }}"
+{% endif %}
+}
+"""
+
+JSON_CEPH_HOSTS = """
+{
+ "ceph-mon": [ {% for host in mons %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "ceph-mon_hosts": [ {% for host in mons %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "mons": [ {% for host in mons %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "ceph_mons": [ {% for host in mons %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "ceph-osd": [ {% for host in osds %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "ceph-osd_hosts": [ {% for host in osds %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "osds": [ {% for host in osds %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "mgrs": [ {% for host in mgrs %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ],
+ "ceph-mgr": [ {% for host in mgrs %}"{{ host.name }}"{% if not loop.last %},{% endif %}{% endfor %} ]
+}
+"""
+# "storage_backend": ceph
+
+
+# Replaces variables in /opt/openstack-ansible/playbooks/inventory/group_vars/glance_all.yml
+JSON_GLANCE_CEPH_ALL_GROUP_VARS = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "glance_default_store": "rbd",
+ "glance_additional_stores": ["http", "cinder", "file"],
+ "glance_rbd_store_pool": "{{ glance_pool_name }}",
+ "glance_rbd_store_chunk_size": "8",
+ "ceph_conf": "/etc/ceph/ceph.conf"
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+JSON_GLANCE_LVM_ALL_GROUP_VARS = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "glance_default_store": "file"
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+# ceph-ansible variables must be set at the host_vars level;
+# the ceph-ansible samples define them in group_vars
+# (group_vars - all.yml.sample)
+JSON_CEPH_ANSIBLE_ALL_HOST_VARS = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "mon_group_name": "mons",
+ "osd_group_name": "osds",
+ "mgr_group_name": "mgrs",
+ "ceph_stable_release": "luminous",
+ "generate_fsid": "true",
+ "cephx": "true",
+ "journal_size": "10240",
+ "osd_objectstore": "bluestore"
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+# pylint: disable=line-too-long
+# ceph-ansible
+# group_vars - mons.yml.sample
+JSON_CEPH_ANSIBLE_MONS_HOST_VARS = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "monitor_secret": "{{ '{{ monitor_keyring.stdout }}' }}",
+ "openstack_config": true,
+ "cephkeys_access_group": "cephkeys",
+ "openstack_pools": [
+ {
+ "name": "{{ platform_pool }}",
+ "pg_num": "{{ osd_pool_shared_pg_num }}",
+ "rule_name": ""
+ }{% if is_openstack_deployment %},
+ {
+ "name": "{{ glance_pool }}",
+ "pg_num": "{{ osd_pool_images_pg_num }}",
+ "rule_name": ""
+ },
+ {
+ "name": "{{ cinder_pool }}",
+ "pg_num": "{{ osd_pool_volumes_pg_num }}",
+ "rule_name": ""
+ },
+ {
+ "name": "{{ nova_pool }}",
+ "pg_num": "{{ osd_pool_vms_pg_num }}",
+ "rule_name": ""
+ }
+ {%- endif %}
+ {%- if is_caas_deployment and 0 < osd_pool_caas_pg_num %},
+ {
+ "name": "caas",
+ "pg_num": "{{ osd_pool_caas_pg_num }}",
+ "rule_name": ""
+ }
+ {%- endif %}
+ ],
+ "openstack_keys": [
+ {
+ "acls": [],
+ "key": "$(ceph-authtool --gen-print-key)",
+ "mode": "0600",
+ "mon_cap": "allow r",
+ "name": "client.shared",
+ "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool={{ platform_pool }}"
+ }{% if is_openstack_deployment %},
+ {
+ "acls": [],
+ "key": "$(ceph-authtool --gen-print-key)",
+ "mode": "0640",
+ "mon_cap": "allow r",
+ "name": "client.glance",
+ "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool={{ glance_pool }}"
+ },
+ {
+ "acls": [],
+ "key": "$(ceph-authtool --gen-print-key)",
+ "mode": "0640",
+ "mon_cap": "allow r, allow command \\\\\\\\\\\\\\"osd blacklist\\\\\\\\\\\\\\"",
+ "name": "client.cinder",
+ "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool={{ cinder_pool }}, allow rwx pool={{ nova_pool }}, allow rx pool={{ glance_pool }}"
+ }
+ {%- endif %}
+ {%- if is_caas_deployment and 0 < osd_pool_caas_pg_num %},
+ {
+ "acls": [],
+ "key": "$(ceph-authtool --gen-print-key)",
+ "mode": "0600",
+ "mon_cap": "allow r",
+ "name": "client.caas",
+ "osd_cap": "allow class-read object_prefix rbd_children, allow rwx pool=caas"
+ }
+ {%- endif %}
+ ]
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+# pylint: enable=line-too-long
+
+# ceph-ansible
+# group_vars - osds.yml.sample
+JSON_CEPH_ANSIBLE_OSDS_HOST_VARS = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "raw_journal_devices": [],
+ "journal_collocation": true,
+ "raw_multi_journal": false,
+ "dmcrytpt_journal_collocation": false,
+ "dmcrypt_dedicated_journal": false,
+ "osd_scenario": "collocated",
+ "dedicated_devices": []
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+
+JSON_SINGLE_CONTROLLER_VAR = """
+{
+ {% for host in hosts %}
+ "{{ host.name }}": {
+ "single_controller_host": true
+ } {% if not loop.last %},{% endif %}
+ {% endfor %}
+}
+"""
+
+
+class Host(object):
+ def __init__(self):
+ self.name = None
+ self.is_lvm = None
+ self.is_osd = None
+ self.is_mon = None
+ self.is_mgr = None
+ self.is_rbd_ceph = None
+ self.ceph_osd_disks = []
+ self.lvm_disks = []
+ self.cinder_disks = []
+ self.is_controller = False
+ self.is_compute = False
+ self.is_storage = False
+ self.instance_physical_volumes = []
+ self.cinder_physical_volumes = []
+ self.instance_disks = []
+ self.instance_lv_percentage = ""
+ self.os_disk = ""
+ self.osd_disks_ids = []
+ self.vg_percentage = NOT_INSTANCE_NODE_VG_PERCENTAGE
+ self.mount_dir = ""
+ self.bare_lvm_disks = None
+ self.is_bare_lvm = None
+ self.bare_lvm_physical_volumes = None
+ self.mount_options = None
+ self.bare_lvm_lv_name = None
+
+
+class storageinventory(cmansibleinventoryconfig.CMAnsibleInventoryConfigPlugin):
+
+ def __init__(self, confman, inventory, ownhost):
+ super(storageinventory, self).__init__(confman, inventory, ownhost)
+ self.hosts = []
+ self.storage_hosts = []
+ self.compute_hosts = []
+ self.controller_hosts = []
+ self._mon_hosts = []
+ self._osd_hosts = []
+ self._mgr_hosts = []
+ self.single_node_config = False
+ self._networking_config_handler = self.confman.get_networking_config_handler()
+ self._hosts_config_handler = self.confman.get_hosts_config_handler()
+ self._storage_config_handler = self.confman.get_storage_config_handler()
+ self._openstack_config_handler = self.confman.get_openstack_config_handler()
+ self._sp_config_handler = self.confman.get_storage_profiles_config_handler()
+ self._caas_config_handler = self.confman.get_caas_config_handler()
+ self._ceph_caas_pg_proportion = 0.0
+ self._ceph_openstack_pg_proportion = 0.0
+ self._cinder_pool_name = 'volumes'
+ self._glance_pool_name = 'images'
+ self._nova_pool_name = 'vms'
+ self._platform_pool_name = 'shared'
+ self._storage_profile_attribute_properties = {
+ 'lvm_cinder_storage_partitions': {
+ 'backends': ['lvm'],
+ 'getter': self._sp_config_handler.get_profile_lvm_cinder_storage_partitions
+ },
+ 'mount_options': {
+ 'backends': ['bare_lvm'],
+ 'getter': self._sp_config_handler.get_profile_bare_lvm_mount_options
+ },
+ 'mount_dir': {
+ 'backends': ['bare_lvm'],
+ 'getter': self._sp_config_handler.get_profile_bare_lvm_mount_dir
+ },
+ 'lv_name': {
+ 'backends': ['bare_lvm'],
+ 'getter': self._sp_config_handler.get_profile_bare_lvm_lv_name
+ },
+ 'nr_of_ceph_osd_disks': {
+ 'backends': ['ceph'],
+ 'getter': self._sp_config_handler.get_profile_nr_of_ceph_osd_disks
+ },
+ 'lvm_instance_storage_partitions': {
+ 'backends': ['lvm', 'bare_lvm'],
+ 'getter': self._sp_config_handler.get_profile_lvm_instance_storage_partitions
+ },
+ 'lvm_instance_cow_lv_storage_percentage': {
+ 'backends': ['lvm'],
+ 'getter': self._sp_config_handler.get_profile_lvm_instance_cow_lv_storage_percentage
+ },
+ 'openstack_pg_proportion': {
+ 'backends': ['ceph'],
+ 'getter': self._sp_config_handler.get_profile_ceph_openstack_pg_proportion
+ },
+ 'caas_pg_proportion': {
+ 'backends': ['ceph'],
+ 'getter': self._sp_config_handler.get_profile_ceph_caas_pg_proportion
+ },
+ }
+
+ def _is_host_managment(self, host):
+ return self._is_profile_in_hosts_profiles(profiles.Profiles.get_management_service_profile(), host)
+
+ def _is_host_controller(self, host):
+ return self._is_profile_in_hosts_profiles(profiles.Profiles.get_controller_service_profile(), host)
+
+ def _is_profile_in_hosts_profiles(self, profile, host):
+ node_service_profiles = self._hosts_config_handler.get_service_profiles(host)
+ return profile in node_service_profiles
+
+ def _is_host_compute(self, host):
+ return self._is_profile_in_hosts_profiles(profiles.Profiles.get_compute_service_profile(), host)
+
+ def _is_host_caas_master(self, host):
+ return self._is_profile_in_hosts_profiles(profiles.Profiles.get_caasmaster_service_profile(), host)
+
+ def _is_host_storage(self, host):
+ return self._is_profile_in_hosts_profiles(profiles.Profiles.get_storage_service_profile(), host)
+
+ def _is_controller_has_compute(self):
+ return bool(set(self.compute_hosts) & set(self.controller_hosts))
+
+ def _is_collocated_controller_node_config(self):
+ return bool(set(self.storage_hosts) & set(self.controller_hosts))
+
+ def _is_collocated_3controllers_config(self):
+ return (self._is_collocated_controller_node_config() and
+ len(self.controller_hosts) == 3 and len(self.hosts) == 3)
+
+ def _is_dedicated_storage_config(self):
+ collocated_config = set.intersection(set(self.storage_hosts), set(self.controller_hosts))
+ if collocated_config and (collocated_config == set(self.controller_hosts)):
+ return False
+ elif self.storage_hosts:
+ return True
+ else:
+ return False
+
+ def handle_bootstrapping(self):
+ self.handle('bootstrapping')
+
+ def handle_provisioning(self):
+ self.handle('provisioning')
+
+ def handle_postconfig(self):
+ self.handle('postconfig')
+
+ def handle_setup(self):
+ pass
+
+ def _template_and_add_vars_to_hosts(self, template, **variables):
+ try:
+ text = Environment().from_string(template).render(variables)
+ if text:
+ self._add_vars_for_hosts(text)
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ def _add_vars_for_hosts(self, inventory_text):
+ inventory = json.loads(inventory_text)
+ for host in inventory.keys():
+ for var, value in inventory[host].iteritems():
+ self.add_host_var(host, var, value)
+
+ @staticmethod
+ def _read_cinder_ceph_client_uuid():
+ if os.path.isfile(USER_SECRETS):
+ with open(USER_SECRETS) as secrets_file:
+ d = dict(line.split(':', 1) for line in secrets_file)
+ return d['cinder_ceph_client_uuid'].strip()
+ raise cmerror.CMError("The file {} does not exist.".format(USER_SECRETS))
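The secrets file read above is a plain `key: value` store; splitting each line on the first colon only means values may themselves contain colons. A minimal sketch of that parsing, with the file contents passed in as a string (the function name and sample keys are hypothetical):

```python
def read_secret(contents, key):
    # Split on the first ':' only, so the value itself may contain colons.
    entries = dict(line.split(':', 1)
                   for line in contents.splitlines() if line.strip())
    return entries[key].strip()
```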
+
+ def _add_cinder_backends(self):
+ self._template_and_add_vars_to_hosts(
+ JSON_CINDER_BACKENDS_HOST_VAR,
+ hosts=self.controller_hosts,
+ installation_controller_ip=self._installation_host_ip,
+ cinder_ceph_client_uuid=self._read_cinder_ceph_client_uuid(),
+ openstack_storage=self._openstack_config_handler.get_storage_backend(),
+ cinder_pool_name=self._cinder_pool_name)
+
+ def _add_external_ceph_cinder_backends(self):
+ handler = self._storage_config_handler
+ self._template_and_add_vars_to_hosts(
+ JSON_EXTERNAL_CEPH_CINDER_BACKEND_HOST_VAR,
+ hosts=self.hosts,
+ cinder_ceph_client_uuid=self._read_cinder_ceph_client_uuid(),
+ ext_ceph_user=handler.get_ext_ceph_ceph_user(),
+ ext_ceph_user_key=handler.get_ext_ceph_ceph_user_key(),
+ ext_ceph_fsid=handler.get_ext_ceph_fsid(),
+ ext_ceph_mon_hosts=", ".join(handler.get_ext_ceph_mon_hosts()),
+ nova_pool_name=self._nova_pool_name,
+ glance_pool_name=self._glance_pool_name,
+ cinder_pool_name=self._cinder_pool_name,
+ platform_pool_name=self._platform_pool_name)
+
+ def _add_storage_nodes_configs(self):
+ rbdhosts = []
+ for host in self.hosts:
+ if host.is_rbd_ceph:
+ rbdhosts.append(host)
+ self._template_and_add_vars_to_hosts(JSON_STORAGE_HOST_VAR, hosts=rbdhosts)
+
+ def _add_hdd_storage_configs(self):
+ self._template_and_add_vars_to_hosts(
+ JSON_STORAGE_HOST_DISK_CONFIGURATION,
+ hosts=self.hosts,
+ rootdisk_device=DEFAULT_ROOTDISK_DEVICE)
+
+ def _add_lvm_storage_configs(self):
+ self._template_and_add_vars_to_hosts(JSON_LVM_STORAGE_HOST_VAR, hosts=self.hosts)
+
+ def _add_bare_lvm_storage_configs(self):
+ self._template_and_add_vars_to_hosts(JSON_BARE_LVM_STORAGE_HOST_VAR, hosts=self.hosts)
+
+ def _add_instance_devices(self):
+ self._template_and_add_vars_to_hosts(JSON_DEVICE_HOST_VAR, hosts=self.compute_hosts)
+
+ def _add_ceph_hosts(self):
+ self._add_host_group(
+ Environment().from_string(JSON_CEPH_HOSTS).render(
+ mons=self._mon_hosts,
+ osds=self._osd_hosts,
+ mgrs=self._mgr_hosts))
+
+ self._add_global_parameters(
+ Environment().from_string(JSON_CEPH_HOSTS).render(
+ mons=self._mon_hosts,
+ osds=self._osd_hosts,
+ mgrs=self._mgr_hosts))
+
+ def _add_glance(self):
+ if self.is_ceph_backend:
+ self._template_and_add_vars_to_hosts(
+ JSON_GLANCE_CEPH_ALL_GROUP_VARS,
+ hosts=self.hosts,
+ glance_pool_name=self._glance_pool_name)
+ elif self.is_lvm_backend:
+ self._template_and_add_vars_to_hosts(JSON_GLANCE_LVM_ALL_GROUP_VARS, hosts=self.hosts)
+
+ def _add_ceph_ansible_all_sample_host_vars(self):
+ self._template_and_add_vars_to_hosts(JSON_CEPH_ANSIBLE_ALL_HOST_VARS, hosts=self.hosts)
+
+ def _add_ceph_ansible_mons_sample_host_vars(self):
+ self._template_and_add_vars_to_hosts(
+ JSON_CEPH_ANSIBLE_MONS_HOST_VARS,
+ hosts=self.hosts,
+ **self._get_ceph_vars())
+
+ def _get_ceph_vars(self):
+ return {
+ 'osd_pool_images_pg_num': self._calculated_images_pg_num,
+ 'osd_pool_volumes_pg_num': self._calculated_volumes_pg_num,
+ 'osd_pool_vms_pg_num': self._calculated_vms_pg_num,
+ 'osd_pool_shared_pg_num': self._calculated_shared_pg_num,
+ 'osd_pool_caas_pg_num': self._calculated_caas_pg_num,
+ 'is_openstack_deployment': self._is_openstack_deployment,
+ 'is_caas_deployment': self._is_caas_deployment,
+ 'is_hybrid_deployment': self._is_hybrid_deployment,
+ 'nova_pool': self._nova_pool_name,
+ 'glance_pool': self._glance_pool_name,
+ 'cinder_pool': self._cinder_pool_name,
+ 'platform_pool': self._platform_pool_name
+ }
+
+ def _add_ceph_ansible_osds_sample_host_vars(self):
+ self._template_and_add_vars_to_hosts(JSON_CEPH_ANSIBLE_OSDS_HOST_VARS, hosts=self.hosts)
+
+ def _add_nova(self):
+ if self.is_external_ceph_backend:
+ nova_ceph_client = self._storage_config_handler.get_ext_ceph_ceph_user()
+ else:
+ nova_ceph_client = 'cinder'
+
+ self._template_and_add_vars_to_hosts(
+ JSON_NOVA_RBD_HOST_VAR, hosts=self.compute_hosts,
+ nova_pool_name=self._nova_pool_name,
+ nova_ceph_client=nova_ceph_client)
+
+ def _add_single_controller_host_var(self):
+ self._template_and_add_vars_to_hosts(
+ JSON_SINGLE_CONTROLLER_VAR, hosts=self.controller_hosts)
+
+ def _add_global_parameters(self, text):
+ try:
+ inventory = json.loads(text)
+ for var, value in inventory.iteritems():
+ self.add_global_var(var, value)
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ def _add_host_group(self, text):
+ try:
+ inventory = json.loads(text)
+ for var, value in inventory.iteritems():
+ self.add_host_group(var, value)
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ @property
+ def cluster_network_cidrs(self):
+ cidrs = []
+ network = self._networking_config_handler.get_infra_storage_cluster_network_name()
+ for domain in self._networking_config_handler.get_network_domains(network):
+ cidrs.append(self._networking_config_handler.get_network_cidr(network, domain))
+ return ','.join(cidrs)
+
+ @property
+ def public_network_cidrs(self):
+ cidrs = set()
+ cluster_network = self._networking_config_handler.get_infra_storage_cluster_network_name()
+ public_network = self._networking_config_handler.get_infra_internal_network_name()
+ for domain in self._networking_config_handler.get_network_domains(cluster_network):
+ cidrs.add(self._networking_config_handler.get_network_cidr(public_network, domain))
+ for host in self._mon_hosts:
+ domain = self._hosts_config_handler.get_host_network_domain(host.name)
+ cidrs.add(self._networking_config_handler.get_network_cidr(public_network, domain))
+ return ','.join(cidrs)
+
+ def _add_networks(self):
+ self._add_global_parameters(
+ Environment().from_string(JSON_NETWORK).render(
+ public_networks=self.public_network_cidrs,
+ cluster_networks=self.cluster_network_cidrs))
+
+ def _add_monitor_address(self):
+ infra_storage_network = self._networking_config_handler.get_infra_internal_network_name()
+ for host in self._mon_hosts:
+ monitor_address = \
+ self._networking_config_handler.get_host_ip(host.name, infra_storage_network)
+ self.add_host_var(host.name, "monitor_address", monitor_address)
+
+ def _add_override_settings(self):
+ ceph_osd_pool_size = self._storage_config_handler.get_ceph_osd_pool_size()
+
+ if self._is_collocated_3controllers_config():
+ self._add_global_parameters(
+ Environment().from_string(JSON_OVERRIDE_3CONTROLLERS).render(
+ osd_pool_default_size=ceph_osd_pool_size,
+ osd_pool_default_min_size=str(ceph_osd_pool_size-1),
+ osd_pool_default_pg_num=self._calculated_default_pg_num))
+
+ self._add_global_parameters(
+ Environment().from_string(JSON_OS_TUNING).render())
+
+ elif self._is_controller_has_compute():
+ self._add_global_parameters(
+ Environment().from_string(JSON_OVERRIDE_CACHE).render(
+ osd_pool_default_size=ceph_osd_pool_size,
+ osd_pool_default_min_size=str(ceph_osd_pool_size-1),
+ osd_pool_default_pg_num=self._calculated_default_pg_num))
+
+ self._add_global_parameters(
+ Environment().from_string(JSON_OS_TUNING).render())
+ else:
+ self._add_global_parameters(
+ Environment().from_string(JSON_OVERRIDE).render(
+ osd_pool_default_size=ceph_osd_pool_size,
+ osd_pool_default_min_size=str(ceph_osd_pool_size-1),
+ osd_pool_default_pg_num=self._calculated_default_pg_num))
+
+ def _calculate_pg_num(self, pool_data_percentage):
+ pgnum = PGNum(self._total_number_of_osds,
+ pool_data_percentage,
+ self._number_of_replicas)
+ return pgnum.calculate()
+
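`PGNum` itself is defined outside this chunk; presumably it implements the standard Ceph sizing rule of thumb, where a per-OSD placement-group budget (commonly 100) is scaled by the pool's share of the data, divided by the replica count, and rounded up to the next power of two. A sketch under that assumption (all names are hypothetical):

```python
def calculate_pg_num(total_osds, pool_percentage, replicas, target_pgs_per_osd=100):
    """Rule-of-thumb Ceph PG count: scale a per-OSD placement-group budget
    by the pool's share of the data, divide by the replica count, and
    round up to the next power of two."""
    raw = (total_osds * target_pgs_per_osd * pool_percentage) / float(replicas)
    pg = 1
    while pg < raw:
        pg *= 2
    return pg
```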
+ @property
+ def _calculated_default_pg_num(self):
+ return self._calculate_pg_num(self._pool_data_percentage)
+
+ @property
+ def _calculated_volumes_pg_num(self):
+ return self._calculate_pg_num(
+ OSD_POOL_VOLUMES_PG_NUM_PERCENTAGE * self._ceph_openstack_pg_proportion)
+
+ @property
+ def _calculated_images_pg_num(self):
+ return self._calculate_pg_num(
+ OSD_POOL_IMAGES_PG_NUM_PERCENTAGE * self._ceph_openstack_pg_proportion)
+
+ @property
+ def _calculated_vms_pg_num(self):
+ return self._calculate_pg_num(
+ OSD_POOL_VMS_PG_NUM_PERCENTAGE * self._ceph_openstack_pg_proportion)
+
+ @property
+ def _calculated_shared_pg_num(self):
+ return self._calculate_pg_num(
+ OSD_POOL_SHARED_PG_NUM_PERCENTAGE)
+
+ @property
+ def _calculated_caas_pg_num(self):
+ if self._ceph_caas_pg_proportion > 0:
+ return self._calculate_pg_num(
+ (OSD_POOL_CAAS_PG_NUM_PERCENTAGE - OSD_POOL_SHARED_PG_NUM_PERCENTAGE) *
+ self._ceph_caas_pg_proportion)
+ return 0
+
+ def _add_osd_pool_pg_nums(self):
+ self._add_global_parameters(
+ Environment().from_string(JSON_OSD_POOL_PGNUMS).render(**self._get_ceph_vars()))
+
+ @property
+ def _installation_host(self):
+ return self._hosts_config_handler.get_installation_host()
+
+ @property
+ def _infra_internal_network_name(self):
+ return self._networking_config_handler.get_infra_internal_network_name()
+
+ @property
+ def _installation_host_ip(self):
+ return self._networking_config_handler.get_host_ip(
+ self._installation_host, self._infra_internal_network_name)
+
+ @property
+ def is_ceph_backend(self):
+ return self._storage_config_handler.is_ceph_enabled()
+
+ @property
+ def is_external_ceph_backend(self):
+ return (self._storage_config_handler.is_external_ceph_enabled() and
+ self._ceph_is_openstack_storage_backend)
+
+ def _set_external_ceph_pool_names(self):
+ if self.is_external_ceph_backend:
+ h = self._storage_config_handler
+ self._nova_pool_name = h.get_ext_ceph_nova_pool()
+ self._cinder_pool_name = h.get_ext_ceph_cinder_pool()
+ self._glance_pool_name = h.get_ext_ceph_glance_pool()
+ self._platform_pool_name = h.get_ext_ceph_platform_pool()
+
+ @property
+ def _lvm_is_openstack_storage_backend(self):
+ return self._openstack_config_handler.get_storage_backend() == 'lvm'
+
+ @property
+ def _ceph_is_openstack_storage_backend(self):
+ return self._openstack_config_handler.get_storage_backend() == 'ceph'
+
+ @property
+ def is_lvm_backend(self):
+ return (self._storage_config_handler.is_lvm_enabled() and
+ self._lvm_is_openstack_storage_backend)
+
+ @property
+ def instance_default_backend(self):
+ return self._openstack_config_handler.get_instance_default_backend()
+
+ @property
+ def _hosts_with_ceph_storage_profile(self):
+ return [host for host in self.hosts if host.is_rbd_ceph]
+
+ @property
+ def _is_openstack_deployment(self):
+ return self._caas_config_handler.is_openstack_deployment()
+
+ @property
+ def _is_caas_deployment(self):
+ return self._caas_config_handler.is_caas_deployment()
+
+ @property
+ def _is_hybrid_deployment(self):
+ return self._caas_config_handler.is_hybrid_deployment()
+
+ def handle(self, phase):
+ self._init_jinja_environment()
+ self.add_global_var("external_ceph_configured", self.is_external_ceph_backend)
+ self.add_global_var("ceph_configured", self.is_ceph_backend)
+ self.add_global_var("lvm_configured", self.is_lvm_backend)
+ self._add_hdd_storage_configs()
+ if phase != 'bootstrapping':
+ if self.is_external_ceph_backend:
+ self._set_external_ceph_pool_names()
+ self._add_external_ceph_cinder_backends()
+ else:
+ if self._is_openstack_deployment:
+ self._add_cinder_backends()
+ self._add_glance()
+
+ ceph_hosts = self._hosts_with_ceph_storage_profile
+ if ceph_hosts:
+ self._set_ceph_pg_proportions(ceph_hosts)
+ self._add_ceph_ansible_all_sample_host_vars()
+ self._add_ceph_ansible_mons_sample_host_vars()
+ self._add_ceph_ansible_osds_sample_host_vars()
+ self._add_ceph_hosts()
+ self._add_storage_nodes_configs()
+ self._add_monitor_address()
+ self._add_override_settings()
+ self._add_osd_pool_pg_nums()
+ self._add_networks()
+ self.add_global_var("cinder_ceph_client_uuid", self._read_cinder_ceph_client_uuid())
+ if self.is_lvm_backend:
+ self._add_lvm_storage_configs()
+ self._add_bare_lvm_storage_configs()
+
+ self.add_global_var("instance_default_backend", self.instance_default_backend)
+ self.add_global_var("storage_single_node_config", self.single_node_config)
+ self.add_global_var("one_controller_node_config", self._is_one_controller_node_config)
+ if self._is_one_controller_node_config:
+ self._add_single_controller_host_var()
+ self.add_global_var("collocated_controller_node_config",
+ self._is_collocated_controller_node_config())
+ self.add_global_var("dedicated_storage_node_config",
+ self._is_dedicated_storage_config())
+ self.add_global_var("storage_one_controller_multi_nodes_config",
+ self._is_one_controller_multi_nodes_config)
+ if self.instance_default_backend == 'rbd':
+ self._add_nova()
+ elif self.instance_default_backend in SUPPORTED_INSTANCE_BACKENDS:
+ self._add_instance_devices()
+
+ def _set_ceph_pg_proportions(self, ceph_hosts):
+ # FIXME: First storage host's storage profile assumed to get pg proportion values
+ hostname = ceph_hosts[0].name
+ if self._is_hybrid_deployment:
+ self._ceph_openstack_pg_proportion = self._get_ceph_openstack_pg_proportion(hostname)
+ self._ceph_caas_pg_proportion = self._get_ceph_caas_pg_proportion(hostname)
+ elif self._is_openstack_deployment:
+ self._ceph_openstack_pg_proportion = 1.0
+ self._ceph_caas_pg_proportion = 0.0
+ elif self._is_caas_deployment:
+ self._ceph_openstack_pg_proportion = 0.0
+ self._ceph_caas_pg_proportion = 1.0
+
+ def _init_host_data(self):
+ hosts = self._hosts_config_handler.get_enabled_hosts()
+ self.single_node_config = len(hosts) == 1
+ for name in hosts:
+ host = self._initialize_host_object(name)
+ self.hosts.append(host)
+ if host.is_osd:
+ self._osd_hosts.append(host)
+ if host.is_mon:
+ self._mon_hosts.append(host)
+ if host.is_mgr:
+ self._mgr_hosts.append(host)
+
+ for host in self.hosts:
+ if host.is_compute:
+ self.compute_hosts.append(host)
+ if host.is_controller:
+ self.controller_hosts.append(host)
+ if host.is_storage:
+ self.storage_hosts.append(host)
+
+ @property
+ def _number_of_osd_hosts(self):
+ return len(self._osd_hosts)
+
+ @property
+ def _is_one_controller_multi_nodes_config(self):
+ return len(self.controller_hosts) == 1 and not self.single_node_config
+
+ @property
+ def _is_one_controller_node_config(self):
+ return len(self.controller_hosts) == 1
+
+ @property
+ def _number_of_osds_per_host(self):
+ first_osd_host = self._osd_hosts[0].name
+ return self._get_nr_of_ceph_osd_disks(first_osd_host)
+
+ @property
+ def _total_number_of_osds(self):
+ return self._number_of_osds_per_host * self._number_of_osd_hosts
+
+ @property
+ def _number_of_pools(self):
+ """TODO: Get dynamically"""
+ return NUMBER_OF_POOLS
+
+ @property
+ def _pool_data_percentage(self):
+ return float(1.0 / self._number_of_pools)
+
+ @property
+ def _number_of_replicas(self):
+ num = self._storage_config_handler.get_ceph_osd_pool_size()
+ return 2 if num == 0 else num
+
+ def _init_jinja_environment(self):
+ self._init_host_data()
+
+ def _is_backend_configured(self, backend, host_name):
+ try:
+ return bool(self._get_storage_profile_for_backend(host_name, backend))
+ except configerror.ConfigError:
+ return False
+
+ def _get_storage_profile_for_backend(self, host_name, *backends):
+ storage_profiles = self._hosts_config_handler.get_storage_profiles(host_name)
+ sp_handler = self._sp_config_handler
+ for storage_profile in storage_profiles:
+ if sp_handler.get_profile_backend(storage_profile) in backends:
+ return storage_profile
+ return None
+
+ def _get_nr_of_ceph_osd_disks(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'nr_of_ceph_osd_disks')
+
+ def _get_storage_profile_attribute(self, host_name, attribute):
+ attribute_properties = self._storage_profile_attribute_properties[attribute]
+ storage_profile = self._get_storage_profile_for_backend(host_name,
+ *attribute_properties['backends'])
+ if storage_profile:
+ return attribute_properties['getter'](storage_profile)
+ raise cmerror.CMError("Failed to get %s" % attribute)
+
+ def _get_ceph_openstack_pg_proportion(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'openstack_pg_proportion')
+
+ def _get_ceph_caas_pg_proportion(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'caas_pg_proportion')
+
+ def _get_lvm_instance_storage_partitions(self, host_name):
+ try:
+ if self.instance_default_backend in SUPPORTED_INSTANCE_BACKENDS:
+ return self._get_storage_profile_attribute(
+ host_name, 'lvm_instance_storage_partitions')
+ except configerror.ConfigError:
+ pass
+
+ if self.instance_default_backend not in ALL_DEFAULT_INSTANCE_BACKENDS:
+ raise cmerror.CMError(
+ "Unsupported instance_default_backend %s"
+ % self.instance_default_backend)
+ return []
+
+ def _get_lvm_cinder_storage_partitions(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'lvm_cinder_storage_partitions')
+
+ def _get_bare_lvm_mount_options(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'mount_options')
+
+ def _get_bare_lvm_mount_dir(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'mount_dir')
+
+ def _get_bare_lvm_lv_name(self, host_name):
+ return self._get_storage_profile_attribute(host_name, 'lv_name')
+
+ def _get_instance_lv_percentage(self, host_name):
+ try:
+ if self.instance_default_backend in SUPPORTED_INSTANCE_BACKENDS:
+ return self._get_storage_profile_attribute(
+ host_name, 'lvm_instance_cow_lv_storage_percentage')
+ except configerror.ConfigError:
+ return DEFAULT_INSTANCE_LV_PERCENTAGE
+ raise cmerror.CMError("Failed to find lvm in storage_profiles")
+
+ def _is_osd_host(self, name):
+ try:
+ return name in self._hosts_config_handler.get_service_profile_hosts('storage')
+ except configerror.ConfigError:
+ return False
+
+ def _is_rbd_ceph_configured(self, host_name):
+ return self._is_backend_configured('ceph', host_name)
+
+ def _is_lvm_configured(self, host_name):
+ return self._is_backend_configured('lvm', host_name)
+
+ def _is_bare_lvm_configured(self, host_name):
+ return self._is_backend_configured('bare_lvm', host_name)
+
+ def _get_hw_type(self, name):
+ hwmgmt_addr = self._hosts_config_handler.get_hwmgmt_ip(name)
+ hwmgmt_user = self._hosts_config_handler.get_hwmgmt_user(name)
+ hwmgmt_pass = self._hosts_config_handler.get_hwmgmt_password(name)
+ return hw.get_hw_type(hwmgmt_addr, hwmgmt_user, hwmgmt_pass)
+
+ @staticmethod
+ def _get_os_disk(hw_type):
+ return hw.get_os_hd(hw_type)
+
+ def _get_osd_disks_for_embedded_deployment(self, host_name):
+ return self._hosts_config_handler.get_ceph_osd_disks(host_name)
+
+ @staticmethod
+ def _get_osd_disks(hw_type):
+ return hw.get_hd_with_usage(hw_type, "osd")
+
+ def _by_path_disks(self, hw_type, nr_of_disks):
+ return self._get_osd_disks(hw_type)[0:nr_of_disks]
+
+ @staticmethod
+ def _is_by_path_disks(disk_list):
+ return [disk for disk in disk_list if "by-path" in disk]
+
+ def _get_physical_volumes(self, disk_list):
+ partition_nr = "1"
+ if self._is_by_path_disks(disk_list):
+ return [disk+"-part"+partition_nr for disk in disk_list]
+ else:
+ return [disk+partition_nr for disk in disk_list]
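For illustration, the partition-naming convention `_get_physical_volumes` relies on: udev `by-path` symlinks name partitions with a `-partN` suffix, while plain block devices simply append the partition number. A small standalone sketch (device paths are hypothetical):

```python
def first_partitions(disks, nr="1"):
    # by-path symlinks: /dev/disk/by-path/...-part1; plain devices: /dev/sda1
    if any("by-path" in d for d in disks):
        return [d + "-part" + nr for d in disks]
    return [d + nr for d in disks]
```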
+
+ def _initialize_host_object(self, name):
+ host = Host()
+ host.name = name
+ host.is_mgr = self._is_host_managment(host.name)
+ host.is_controller = self._is_host_controller(host.name)
+ host.is_compute = self._is_host_compute(host.name)
+ host.is_storage = self._is_host_storage(host.name)
+ host.is_rbd_ceph = self._is_rbd_ceph_configured(host.name)
+ host.is_lvm = self._is_lvm_configured(host.name)
+ host.is_bare_lvm = self._is_bare_lvm_configured(host.name)
+ host.is_osd = self._is_osd_host(host.name)
+ host.is_mon = host.is_mgr
+ hw_type = self._get_hw_type(name)
+ host.os_disk = self._get_os_disk(hw_type)
+ if host.is_bare_lvm:
+ partitions = self._get_lvm_instance_storage_partitions(host.name)
+ host.bare_lvm_disks = self._by_path_disks(hw_type, len(partitions))
+ host.bare_lvm_physical_volumes = self._get_physical_volumes(host.bare_lvm_disks)
+ host.mount_options = self._get_bare_lvm_mount_options(host.name)
+ host.mount_dir = self._get_bare_lvm_mount_dir(host.name)
+ host.bare_lvm_lv_name = self._get_bare_lvm_lv_name(host.name)
+
+ if host.is_compute and self.instance_default_backend != 'rbd':
+ host.vg_percentage = INSTANCE_NODE_VG_PERCENTAGE
+
+ if self.is_lvm_backend and host.is_controller:
+ nr_of_cinder_disks = len(self._get_lvm_cinder_storage_partitions(host.name))
+ nr_of_nova_disks = len(self._get_lvm_instance_storage_partitions(host.name))
+ nr_of_all_disks = nr_of_cinder_disks + nr_of_nova_disks
+ if nr_of_nova_disks > 0:
+ host.cinder_disks = \
+ self._by_path_disks(hw_type, nr_of_all_disks)[-nr_of_cinder_disks:]
+ else:
+ host.cinder_disks = self._by_path_disks(hw_type, nr_of_cinder_disks)
+ host.cinder_physical_volumes = self._get_physical_volumes(host.cinder_disks)
+
+ if host.is_rbd_ceph:
+ nr_of_osd_disks = self._get_nr_of_ceph_osd_disks(host.name)
+ if self._caas_config_handler.is_vnf_embedded_deployment():
+ host.ceph_osd_disks = \
+ self._get_osd_disks_for_embedded_deployment(host.name)[0:nr_of_osd_disks]
+ else:
+ host.ceph_osd_disks = self._get_osd_disks(hw_type)[0:nr_of_osd_disks]
+ host.osd_disks_ids = range(1, nr_of_osd_disks+1)
+
+ if host.is_lvm and host.is_compute:
+ partitions = self._get_lvm_instance_storage_partitions(host.name)
+ host.instance_disks = self._by_path_disks(hw_type, len(partitions))
+ host.instance_physical_volumes = self._get_physical_volumes(host.instance_disks)
+ host.instance_lv_percentage = self._get_instance_lv_percentage(host.name)
+ return host
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+Name: userconfighandlers
+Version: %{_version}
+Release: 1%{?dist}
+Summary: Basic user configuration handlers
+License: %{_platform_licence}
+Source0: %{name}-%{version}.tar.gz
+Vendor: %{_platform_vendor}
+
+BuildArch: noarch
+
+%define PKG_BASE_DIR /opt/cmframework/userconfighandlers
+
+%description
+User configuration handlers
+
+
+%prep
+%autosetup
+
+%build
+
+%install
+mkdir -p %{buildroot}/%{PKG_BASE_DIR}/
+find userconfighandlers -name '*.py' -exec cp {} %{buildroot}/%{PKG_BASE_DIR}/ \;
+
+%files
+%defattr(0755,root,root,0755)
+%{PKG_BASE_DIR}/*.py*
+
+%preun
+
+
+%postun
+
+%clean
+rm -rf %{buildroot}
+
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+
+import os
+import re
+
+"""
+This plugin is used to populate the CaaS configuration with its default values.
+"""
+class caashandler(cmuserconfig.CMUserConfigPlugin):
+ def __init__(self):
+ super(caashandler,self).__init__()
+
+ def handle(self, confman):
+ try:
+ caasconf = confman.get_caas_config_handler()
+ caasconf.add_defaults()
+
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import netifaces as ni
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+from cmdatahandlers.api import utils
+"""
+This plugin is used to define the installation node in the system
+"""
+class installhandler(cmuserconfig.CMUserConfigPlugin):
+ def __init__(self):
+ super(installhandler,self).__init__()
+
+ def handle(self, confman):
+ try:
+ hostsconf = confman.get_hosts_config_handler()
+ hostname = 'controller-1'
+ if not utils.is_virtualized():
+ ownip = utils.get_own_hwmgmt_ip()
+ hostname = hostsconf.get_host_having_hwmgmt_address(ownip)
+ else:
+ mgmt_addr = {}
+
+ for host in hostsconf.get_hosts():
+ try:
+ mgmt_addr[host] = hostsconf.get_mgmt_mac(host)[0]
+ except IndexError:
+ pass
+ for interface in ni.interfaces():
+ addresses = ni.ifaddresses(interface)
+ mac_list = []
+ for mac in addresses.get(ni.AF_LINK, []):
+ mac_list.append(mac.get('addr', None))
+ for host, mgmt_mac in mgmt_addr.iteritems():
+ if mgmt_mac in mac_list:
+ hostsconf.set_installation_host(host)
+ return
+
+ hostsconf.set_installation_host(hostname)
+ except configerror.ConfigError as exp:
+ raise cmerror.CMError(str(exp))
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+"""
+This plugin is used to add IP addresses for all the hosts defined in the
+user configuration. Addresses are allocated to each host according to the
+networks that host actually uses.
+It also takes care of allocating the IPMI console port and the vbmc port.
+"""
+class iphandler(cmuserconfig.CMUserConfigPlugin):
+ def __init__(self):
+ super(iphandler,self).__init__()
+
+ def handle(self, confman):
+ try:
+ hostsconf = confman.get_hosts_config_handler()
+ netconf = confman.get_networking_config_handler()
+ hosts = hostsconf.get_hosts()
+ installation_host = hostsconf.get_installation_host()
+ # Installation host has to be the first one in the list
+ # this so that the IP address of the installation host
+ # does not change during the deployment.
+ hosts.remove(installation_host)
+ hosts.insert(0, installation_host)
+ for host in hosts:
+ netconf.add_host_networks(host)
+ hostsconf.add_vbmc_port(host)
+ hostsconf.add_ipmi_terminal_port(host)
+ # add the vip(s)
+ netconf.add_external_vip()
+ netconf.add_internal_vip()
+ except configerror.ConfigError as exp:
+ raise cmerror.CMError(str(exp))
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import yaml
+
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from serviceprofiles import profiles
+
+extra_localstoragedict = {'cephcontroller':{}}
+
+
+class localstorage(cmuserconfig.CMUserConfigPlugin):
+ localstorageconfdir = '/etc/opt/localstorage'
+
+ def __init__(self):
+ super(localstorage, self).__init__()
+ profs = profiles.Profiles()
+ allprofs = profs.get_profiles()
+ self.host_group_localstoragedict = {}
+ for name, prof in allprofs.iteritems():
+ self.host_group_localstoragedict[name] = {}
+ self.host_group_localstoragedict.update(extra_localstoragedict)
+
+ def handle(self, confman):
+ try:
+ localstorageconf = confman.get_localstorage_config_handler()
+ deploy_type_dir = os.path.join(self.localstorageconfdir,
+ self._get_deployment_type(confman))
+ for localstoragefile in os.listdir(deploy_type_dir):
+ localstoragefilepath = os.path.join(deploy_type_dir, localstoragefile)
+ with open(localstoragefilepath) as localstoragefd:
+ localstorageconfdict = yaml.safe_load(localstoragefd)
+ logical_volumes = localstorageconfdict.get("logical_volumes", [])
+ for host_group in localstorageconfdict.get("service_profiles", []):
+ if host_group not in self.host_group_localstoragedict.keys():
+ raise cmerror.CMError(
+ "%s: Not a valid host group. Check configuration in %s"
+ % (host_group, localstoragefilepath))
+ self._add_logical_volumes_to_host_group(logical_volumes, host_group)
+
+ localstorageconf.add_localstorage(self.host_group_localstoragedict)
+
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
+
+ def _get_deployment_type(self, confman):
+ caasconf = confman.get_caas_config_handler()
+ hostsconf = confman.get_hosts_config_handler()
+ if caasconf.get_caas_only():
+ return "caas"
+ if (hostsconf.get_service_profile_hosts('controller')
+ and hostsconf.get_service_profile_hosts('caas_master')):
+ return "multinode_hybrid"
+ return "openstack"
+
+ def _add_logical_volumes_to_host_group(self, lvs, host_group):
+ lv_data = {lv["lvm_name"]: lv for lv in lvs}
+ self.host_group_localstoragedict[host_group].update(lv_data)
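The `_get_deployment_type` logic above reduces to a small decision table. A standalone sketch of the same selection (the boolean parameters stand in for the `confman` handler calls, which are assumed, not reproduced):

```python
def deployment_type(caas_only, has_controller, has_caas_master):
    # Mirrors localstorage._get_deployment_type: caas-only clouds use the
    # "caas" templates, clouds with both controller and caas_master hosts
    # use "multinode_hybrid", everything else falls back to "openstack".
    if caas_only:
        return "caas"
    if has_controller and has_caas_master:
        return "multinode_hybrid"
    return "openstack"
```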
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+from serviceprofiles import profiles
+
+
+"""
+This plugin is used to set up OVS defaults
+"""
+class ovshandler(cmuserconfig.CMUserConfigPlugin):
+ def __init__(self):
+ super(ovshandler, self).__init__()
+
+ def handle(self, confman):
+ try:
+ hostsconf = confman.get_hosts_config_handler()
+ netconf = confman.get_networking_config_handler()
+
+ hosts = hostsconf.get_hosts()
+ for host in hosts:
+ node_service_profiles = hostsconf.get_service_profiles(host)
+ for profile in node_service_profiles:
+ if profile == profiles.Profiles.get_compute_service_profile():
+ netconf.add_ovs_config_defaults(host)
+ except Exception as exp:
+ raise cmerror.CMError(str(exp))
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+
+
+class storagehandler(cmuserconfig.CMUserConfigPlugin):
+
+ def __init__(self):
+ super(storagehandler, self).__init__()
+ self.hosts_config_handler = None
+ self.storage_config_handler = None
+ self.openstack_config_handler = None
+
+ @property
+ def _managements(self):
+ return self.hosts_config_handler.get_service_profile_hosts('management')
+
+ @property
+ def _storages(self):
+ return self.hosts_config_handler.get_service_profile_hosts('storage')
+
+ @property
+ def _backend(self):
+ return self.openstack_config_handler.get_storage_backend()
+
+ @property
+ def _storage_backends(self):
+ return self.storage_config_handler.get_storage_backends()
+
+ def _set_handlers(self, confman):
+ self.storage_config_handler = confman.get_storage_config_handler()
+ self.hosts_config_handler = confman.get_hosts_config_handler()
+ self.openstack_config_handler = confman.get_openstack_config_handler()
+
+ def handle(self, confman):
+ """TODO: Set these dynamically according to user configuration instead."""
+ try:
+ self._set_handlers(confman)
+ if (self._backend == 'ceph') and ('ceph' in self._storage_backends):
+ if self.storage_config_handler.is_ceph_enabled():
+ self.storage_config_handler.set_mons(self._managements)
+ self.storage_config_handler.set_ceph_mons(self._managements)
+ self.storage_config_handler.set_osds(self._storages)
+
+ except configerror.ConfigError as exp:
+ raise cmerror.CMError(str(exp))
--- /dev/null
+#! /usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmuserconfig
+from cmframework.apis import cmerror
+from cmdatahandlers.api import configerror
+
+"""
+This plugin is used to add default values for auth_type and serverkeys_path parameters in user_config
+if they are not present
+"""
+class timehandler(cmuserconfig.CMUserConfigPlugin):
+ def __init__(self):
+ super(timehandler, self).__init__()
+
+ def handle(self, confman):
+ try:
+ timeconf = confman.get_time_config_handler()
+ ROOT = 'cloud.time'
+ if 'auth_type' not in timeconf.config[ROOT]:
+ timeconf.config[ROOT]['auth_type'] = 'none'
+ if 'serverkeys_path' not in timeconf.config[ROOT]:
+ timeconf.config[ROOT]['serverkeys_path'] = ''
+ except configerror.ConfigError as exp:
+ raise cmerror.CMError(str(exp))
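The two membership checks above are the usual fill-in-defaults pattern; `dict.setdefault` expresses each in a single call. A minimal sketch, detached from the confman API (the `defaults` mapping mirrors the values the plugin injects):

```python
TIME_DEFAULTS = {'auth_type': 'none', 'serverkeys_path': ''}

def apply_time_defaults(time_conf):
    # Fill in missing optional keys without overwriting user-supplied values.
    for key, value in TIME_DEFAULTS.items():
        time_conf.setdefault(key, value)
    return time_conf
```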
--- /dev/null
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+Name: validators
+Version: %{_version}
+Release: 1%{?dist}
+Summary: Configuration validators
+License: %{_platform_licence}
+Source0: %{name}-%{version}.tar.gz
+Vendor: %{_platform_vendor}
+BuildArch: noarch
+BuildRequires: python
+
+Requires: python-django, python-ipaddr
+
+%define PKG_BASE_DIR /opt/cmframework/validators
+
+%description
+This RPM contains source code for configuration validators
+
+%prep
+%autosetup
+
+%install
+mkdir -p %{buildroot}/%{PKG_BASE_DIR}/
+find validators/src -name '*.py' -exec cp {} %{buildroot}/%{PKG_BASE_DIR}/ \;
+
+%files
+%defattr(0755,root,root,0755)
+%{PKG_BASE_DIR}/*.py*
+
+%pre
+
+%post
+
+%preun
+
+%postun
+
+%clean
+rm -rf %{buildroot}
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import re
+import base64
+import logging
+import ipaddr
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+from cmdatahandlers.api import configerror
+
+
+class CaasValidationError(configerror.ConfigError):
+ def __init__(self, description):
+ configerror.ConfigError.__init__(
+ self, 'Validation error in caas_validation: {}'.format(description))
+
+
+class CaasValidationUtils(object):
+
+ def __init__(self):
+ pass
+
+ @staticmethod
+ def check_key_in_dict(key, dictionary):
+ if key not in dictionary:
+ raise CaasValidationError("{} cannot be found in {}".format(key, dictionary))
+
+ def get_every_key_occurrence(self, var, key):
+ if hasattr(var, 'iteritems'):
+ for k, v in var.iteritems():
+ if k == key:
+ yield v
+ if isinstance(v, dict):
+ for result in self.get_every_key_occurrence(v, key):
+ yield result
+ elif isinstance(v, list):
+ for d in v:
+ for result in self.get_every_key_occurrence(d, key):
+ yield result
+
+ @staticmethod
+ def is_optional_param_present(key, dictionary):
+ if key not in dictionary:
+ logging.info('{} key is not in the config dictionary, since this is an optional '
+ 'parameter, validation is skipped.'.format(key))
+ return False
+ if not dictionary[key]:
+ logging.info('Although {} key is in the config dictionary, the corresponding value is '
+ 'empty; since this is an optional parameter, '
+ 'validation is skipped.'.format(key))
+ return False
+ return True
+
+
+class CaasValidation(cmvalidator.CMValidator):
+
+ SUBSCRIPTION = r'^(cloud\.caas|cloud\.hosts|cloud\.networking)$'
+ CAAS_DOMAIN = 'cloud.caas'
+ NETW_DOMAIN = 'cloud.networking'
+ HOSTS_DOMAIN = 'cloud.hosts'
+
+ SERV_PROF = 'service_profiles'
+ CAAS_PROFILE_PATTERN = 'caas_master|caas_worker'
+ CIDR = 'cidr'
+
+ DOCKER_SIZE_QOUTA = "docker_size_quota"
+ DOCKER_SIZE_QOUTA_PATTERN = r"^\d+[GMK]$"
+
+ CHART_NAME = "chart_name"
+ CHART_NAME_PATTERN = r"^[A-Za-z0-9._-]+$"
+
+ CHART_VERSION = "chart_version"
+ CHART_VERSION_PATTERN = r"^\d+\.\d+\.\d+$"
+
+ HELM_OP_TIMEOUT = "helm_operation_timeout"
+
+ DOCKER0_CIDR = "docker0_cidr"
+
+ INSTANTIATION_TIMEOUT = "instantiation_timeout"
+
+ HELM_PARAMETERS = "helm_parameters"
+
+ ENCRYPTED_CA = "encrypted_ca"
+ ENCRYPTED_CA_KEY = "encrypted_ca_key"
+
+ def __init__(self):
+ cmvalidator.CMValidator.__init__(self)
+ self.validation_utils = validation.ValidationUtils()
+ self.conf = None
+ self.caas_conf = None
+ self.caas_utils = CaasValidationUtils()
+
+ def get_subscription_info(self):
+ return self.SUBSCRIPTION
+
+ def validate_set(self, props):
+ if not self.is_caas_mandatory(props):
+ logging.info("{} not found in {}, caas validation is not needed.".format(
+ self.CAAS_PROFILE_PATTERN, self.HOSTS_DOMAIN))
+ return
+ self.props_pre_check(props)
+ self.validate_docker_size_quota()
+ self.validate_chart_name()
+ self.validate_chart_version()
+ self.validate_helm_operation_timeout()
+ self.validate_docker0_cidr(props)
+ self.validate_instantiation_timeout()
+ self.validate_helm_parameters()
+ self.validate_encrypted_ca(self.ENCRYPTED_CA)
+ self.validate_encrypted_ca(self.ENCRYPTED_CA_KEY)
+
+ def is_caas_mandatory(self, props):
+ hosts_conf = json.loads(props[self.HOSTS_DOMAIN])
+ service_profiles = self.caas_utils.get_every_key_occurrence(hosts_conf, self.SERV_PROF)
+ pattern = re.compile(self.CAAS_PROFILE_PATTERN)
+ for profile in service_profiles:
+ if filter(pattern.match, profile):
+ return True
+ return False
+
+ def props_pre_check(self, props):
+ if not isinstance(props, dict):
+ raise CaasValidationError('The given input: {} is not a dictionary!'.format(props))
+ if self.CAAS_DOMAIN not in props:
+ raise CaasValidationError(
+ '{} configuration is missing from {}!'.format(self.CAAS_DOMAIN, props))
+ self.caas_conf = json.loads(props[self.CAAS_DOMAIN])
+ self.conf = {self.CAAS_DOMAIN: self.caas_conf}
+ if not self.caas_conf:
+ raise CaasValidationError('{} is an empty dictionary!'.format(self.CAAS_DOMAIN))
+
+ def validate_docker_size_quota(self):
+ if not self.caas_utils.is_optional_param_present(self.DOCKER_SIZE_QOUTA, self.caas_conf):
+ return
+ if not re.match(self.DOCKER_SIZE_QOUTA_PATTERN, self.caas_conf[self.DOCKER_SIZE_QOUTA]):
+ raise CaasValidationError('{} is not a valid {}!'.format(
+ self.caas_conf[self.DOCKER_SIZE_QOUTA],
+ self.DOCKER_SIZE_QOUTA))
+
+ def validate_chart_name(self):
+ if not self.caas_utils.is_optional_param_present(self.CHART_NAME, self.caas_conf):
+ return
+ if not re.match(self.CHART_NAME_PATTERN, self.caas_conf[self.CHART_NAME]):
+ raise CaasValidationError('{} is not a valid {} !'.format(
+ self.caas_conf[self.CHART_NAME],
+ self.CHART_NAME))
+
+ def validate_chart_version(self):
+ if not self.caas_utils.is_optional_param_present(self.CHART_VERSION, self.caas_conf):
+ return
+ if not self.caas_conf.get(self.CHART_NAME):
+ logging.warn('{} shall be set only, when {} is set.'.format(
+ self.CHART_VERSION, self.CHART_NAME))
+ if not re.match(self.CHART_VERSION_PATTERN, self.caas_conf[self.CHART_VERSION]):
+ raise CaasValidationError('{} is not a valid {} !'.format(
+ self.caas_conf[self.CHART_VERSION],
+ self.CHART_VERSION))
+
+ def validate_helm_operation_timeout(self):
+ if not self.caas_utils.is_optional_param_present(self.HELM_OP_TIMEOUT, self.caas_conf):
+ return
+ if not isinstance(self.caas_conf[self.HELM_OP_TIMEOUT], int):
+ raise CaasValidationError('{}:{} is not an integer'.format(
+ self.HELM_OP_TIMEOUT,
+ self.caas_conf[self.HELM_OP_TIMEOUT]))
+
+ def get_docker0_cidr_netw_obj(self, subnet):
+ try:
+ return ipaddr.IPNetwork(subnet)
+ except ValueError as exc:
+ raise CaasValidationError('{} is an invalid subnet address: {}'.format(
+ self.DOCKER0_CIDR, exc))
+
+ def check_docker0_cidr_overlaps_with_netw_subnets(self, docker0_cidr, props):
+ netw_conf = json.loads(props[self.NETW_DOMAIN])
+ cidrs = self.caas_utils.get_every_key_occurrence(netw_conf, self.CIDR)
+ for cidr in cidrs:
+ if docker0_cidr.overlaps(ipaddr.IPNetwork(cidr)):
+ raise CaasValidationError(
+ 'CIDR configured for {} shall be an unused IP range, '
+ 'but it overlaps with {} from {}.'.format(
+ self.DOCKER0_CIDR, cidr, self.NETW_DOMAIN))
+
+ def validate_docker0_cidr(self, props):
+ if not self.caas_utils.is_optional_param_present(self.DOCKER0_CIDR, self.caas_conf):
+ return
+ docker0_cidr_obj = self.get_docker0_cidr_netw_obj(self.caas_conf[self.DOCKER0_CIDR])
+ self.check_docker0_cidr_overlaps_with_netw_subnets(docker0_cidr_obj, props)
+
+ def validate_instantiation_timeout(self):
+ if not self.caas_utils.is_optional_param_present(self.INSTANTIATION_TIMEOUT,
+ self.caas_conf):
+ return
+ if not isinstance(self.caas_conf[self.INSTANTIATION_TIMEOUT], int):
+ raise CaasValidationError('{}:{} is not an integer'.format(
+ self.INSTANTIATION_TIMEOUT,
+ self.caas_conf[self.INSTANTIATION_TIMEOUT]))
+
+ def validate_helm_parameters(self):
+ if not self.caas_utils.is_optional_param_present(self.HELM_PARAMETERS, self.caas_conf):
+ return
+ if not isinstance(self.caas_conf[self.HELM_PARAMETERS], dict):
+ raise CaasValidationError('The given input: {} is not a dictionary!'.format(
+ self.caas_conf[self.HELM_PARAMETERS]))
+
+ def validate_encrypted_ca(self, enc_ca):
+ self.caas_utils.check_key_in_dict(enc_ca, self.caas_conf)
+ enc_ca_str = self.caas_conf[enc_ca][0]
+ if not enc_ca_str:
+ raise CaasValidationError('{} shall not be empty !'.format(enc_ca))
+ try:
+ base64.b64decode(enc_ca_str)
+ except TypeError as exc:
+ raise CaasValidationError('Invalid {}: {}'.format(enc_ca, exc))
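The `docker_size_quota` values accepted by the validator are a digit run followed by a G/M/K unit. A quick check with the character class written without the stray commas (the original `[G,M,K]` also accepts a literal comma, and `\d*` accepts a bare unit with no digits):

```python
import re

# Intended form: one or more digits followed by a single G, M or K unit.
DOCKER_SIZE_QUOTA_PATTERN = r"^\d+[GMK]$"

def is_valid_quota(value):
    # re.match anchors at the start; the trailing $ rejects extra characters.
    return re.match(DOCKER_SIZE_QUOTA_PATTERN, value) is not None
```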
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import json
+import re
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+
+
+class HostOSValidation(cmvalidator.CMValidator):
+ domain = 'cloud.host_os'
+ GRUB2_PASSWORD_PATTERN = r"^grub\.pbkdf2\.sha512\.\d+\.[0-9A-F]+\.[0-9A-F]+$"
+
+ def get_subscription_info(self):
+ logging.debug('get_subscription info called')
+ return r'^cloud\.host_os$'
+
+ def validate_set(self, dict_key_value):
+ grub2pass_attr = 'grub2_password'
+ lockout_time_attr = 'lockout_time'
+ failed_login_attempts_attr = 'failed_login_attempts'
+ logging.debug('validate_set called with %s' % str(dict_key_value))
+
+ value_str = dict_key_value.get(self.domain, None)
+ logging.debug('{0} domain value: {1}'.format(self.domain, value_str))
+ if value_str is not None:
+ value_dict = json.loads(value_str)
+
+ if not isinstance(value_dict, dict):
+ raise validation.ValidationError('%s value is not a dict' % self.domain)
+
+ passwd = value_dict.get(grub2pass_attr)
+ if passwd:
+ self.validate_passwd_hash(passwd)
+
+ lockout_t = value_dict.get(lockout_time_attr)
+ if lockout_t:
+ self.validate_lockout_time(lockout_t)
+
+ failed_login_a = value_dict.get(failed_login_attempts_attr)
+ if failed_login_a:
+ self.validate_failed_login_attempts(failed_login_a)
+ else:
+ raise validation.ValidationError('Missing domain: %s' % self.domain)
+
+ def validate_delete(self, dict_key_value):
+ logging.debug('validate_delete called with %s' % str(dict_key_value))
+ raise validation.ValidationError('%s cannot be deleted' % self.domain)
+
+ def validate_passwd_hash(self, passwd_hash):
+ if not re.match(self.GRUB2_PASSWORD_PATTERN, passwd_hash):
+ raise validation.ValidationError('The passwd hash: "%s" is not a valid hash!' % passwd_hash)
+
+ def validate_lockout_time(self, _lockout_time):
+ if not re.match(r"^[0-9]+$", str(_lockout_time)):
+ raise validation.ValidationError('The lockout time: "%s" is not valid!' % _lockout_time)
+
+ def validate_failed_login_attempts(self, _failed_login_attempts):
+ if not re.match(r"^[0-9]+$", str(_failed_login_attempts)):
+ raise validation.ValidationError('The failed login attempts: "%s" is not valid!' % _failed_login_attempts)
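`GRUB2_PASSWORD_PATTERN` accepts the `grub.pbkdf2.sha512.<rounds>.<salt>.<hash>` string produced by `grub2-mkpasswd-pbkdf2`, with uppercase hex fields. An illustration of the match behavior (the hash below is a shortened dummy, not real tool output):

```python
import re

GRUB2_PASSWORD_PATTERN = r"^grub\.pbkdf2\.sha512\.\d+\.[0-9A-F]+\.[0-9A-F]+$"

def looks_like_grub2_hash(value):
    # Accepts rounds as digits and salt/hash as uppercase hex only.
    return re.match(GRUB2_PASSWORD_PATTERN, value) is not None
```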
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import json
+import re
+from netaddr import IPRange
+from netaddr import IPNetwork
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+from cmdatahandlers.api import utils
+from serviceprofiles import profiles as service_profiles
+
+
+class ConfigurationDoesNotExist(Exception):
+ pass
+
+
+class HostsValidation(cmvalidator.CMValidator):
+ domain = 'cloud.hosts'
+ management_profile = 'management'
+ controller_profile = 'controller'
+ caas_master_profile = 'caas_master'
+ caas_worker_profile = 'caas_worker'
+ base_profile = 'base'
+ storage_profile = 'storage'
+
+ storage_profile_attr = 'cloud.storage_profiles'
+ network_profile_attr = 'cloud.network_profiles'
+ performance_profile_attr = 'cloud.performance_profiles'
+ networking_attr = 'cloud.networking'
+ MIN_PASSWORD_LENGTH = 8
+
+ def get_subscription_info(self):
+ logging.debug('get_subscription info called')
+ hosts = r'cloud\.hosts'
+ net_profiles = r'cloud\.network_profiles'
+ storage_profiles = r'cloud\.storage_profiles'
+ perf_profiles = r'cloud\.performance_profiles'
+ net = r'cloud\.networking'
+ return '^(%s|%s|%s|%s|%s)$' % (hosts, net_profiles, storage_profiles, perf_profiles, net)
+
+ def validate_set(self, dict_key_value):
+ logging.debug('HostsValidation: validate_set called with %s', dict_key_value)
+
+ for key, value in dict_key_value.iteritems():
+ value_dict = {} if not value else json.loads(value)
+ if not value_dict:
+ if key != self.storage_profile_attr:
+ raise validation.ValidationError('No value for %s' % key)
+
+ if key == self.domain:
+ if not isinstance(value_dict, dict):
+ raise validation.ValidationError('%s value is not a dict' % self.domain)
+
+ net_profile_dict = self.get_domain_dict(dict_key_value,
+ self.network_profile_attr)
+ storage_profile_dict = self.get_domain_dict(dict_key_value,
+ self.storage_profile_attr)
+ perf_profile_dict = self.get_domain_dict(dict_key_value,
+ self.performance_profile_attr)
+ networking_dict = self.get_domain_dict(dict_key_value,
+ self.networking_attr)
+ self.validate_hosts(value_dict,
+ net_profile_dict,
+ storage_profile_dict,
+ perf_profile_dict,
+ networking_dict)
+
+ self.validate_scale_in(dict_key_value)
+
+ elif key == self.network_profile_attr:
+ profile_list = [] if not value_dict else value_dict.keys()
+
+ host_dict = self.get_domain_dict(dict_key_value, self.domain)
+ perf_profile_config = self.get_domain_dict(dict_key_value,
+ self.performance_profile_attr)
+ storage_profile_config = self.get_domain_dict(dict_key_value,
+ self.storage_profile_attr)
+ net_profile_dict = self.get_domain_dict(dict_key_value,
+ self.network_profile_attr)
+ networking_dict = self.get_domain_dict(dict_key_value,
+ self.networking_attr)
+
+ self.validate_network_ranges(host_dict, net_profile_dict, networking_dict)
+
+ for host_name, host_data in host_dict.iteritems():
+ attr = 'network_profiles'
+ profiles = host_data.get(attr)
+ self.validate_profile_list(profiles, profile_list, host_name, attr)
+ profile_name = profiles[0]
+
+ performance_profiles = host_data.get('performance_profiles')
+
+ if self.is_provider_type_ovs_dpdk(profile_name, value_dict):
+ if self.base_profile not in host_data['service_profiles']:
+ reason = 'Missing base service profile with ovs_dpdk'
+ reason += ' type provider network'
+ raise validation.ValidationError(reason)
+ if not performance_profiles:
+ reason = \
+ 'Missing performance profiles with ovs_dpdk type provider network'
+ raise validation.ValidationError(reason)
+ self.validate_performance_profile(perf_profile_config,
+ performance_profiles[0])
+
+ if self.is_provider_type_sriov(profile_name, value_dict):
+ if not self.is_sriov_allowed_for_host(host_data['service_profiles']):
+ reason = 'Missing base or caas_* service profile'
+ reason += ' with SR-IOV type provider network'
+ raise validation.ValidationError(reason)
+
+ subnet_name = 'infra_internal'
+ if not self.network_is_mapped(value_dict.get(profile_name), subnet_name):
+ raise validation.ValidationError('%s is not mapped for %s' % (subnet_name,
+ host_name))
+ if self.management_profile in host_data['service_profiles']:
+ subnet_name = 'infra_external'
+ if not self.network_is_mapped(value_dict.get(profile_name), subnet_name):
+ raise validation.ValidationError('%s is not mapped for %s' %
+ (subnet_name, host_name))
+ else:
+ subnet_name = 'infra_external'
+ if self.network_is_mapped(value_dict.get(profile_name), subnet_name):
+ raise validation.ValidationError('%s is mapped for %s' %
+ (subnet_name, host_name))
+
+ if self.storage_profile in host_data['service_profiles']:
+ storage_profile_list = host_data.get('storage_profiles')
+ subnet_name = 'infra_storage_cluster'
+ if not self.network_is_mapped(value_dict.get(profile_name), subnet_name) \
+ and self.is_ceph_profile(storage_profile_config,
+ storage_profile_list):
+ raise validation.ValidationError('%s is not mapped for %s' %
+ (subnet_name, host_name))
+
+ elif key == self.storage_profile_attr:
+ profile_list = [] if not value_dict else value_dict.keys()
+
+ host_dict = self.get_domain_dict(dict_key_value, self.domain)
+
+ for host_name, host_data in host_dict.iteritems():
+ attr = 'storage_profiles'
+ profiles = host_data.get(attr)
+ if profiles:
+ self.validate_profile_list(profiles, profile_list, host_name, attr)
+
+ elif key == self.performance_profile_attr:
+ profile_list = [] if not value_dict else value_dict.keys()
+
+ host_dict = self.get_domain_dict(dict_key_value, self.domain)
+ network_profile_config = self.get_domain_dict(dict_key_value,
+ self.network_profile_attr)
+
+ for host_name, host_data in host_dict.iteritems():
+ attr = 'performance_profiles'
+ profiles = host_data.get(attr)
+ if profiles:
+ self.validate_profile_list(profiles, profile_list, host_name, attr)
+ self.validate_nonempty_performance_profile(value_dict, profiles[0],
+ host_name)
+
+ network_profiles = host_data.get('network_profiles')
+ if network_profiles and self.is_provider_type_ovs_dpdk(network_profiles[0], network_profile_config):
+ if not profiles:
+ reason = \
+ 'Missing performance profiles with ovs_dpdk type provider network'
+ raise validation.ValidationError(reason)
+ self.validate_performance_profile(value_dict,
+ profiles[0])
+ elif key == self.networking_attr:
+ networking_dict = value_dict
+
+ hosts_dict = self.get_domain_dict(dict_key_value, self.domain)
+ profile_config = self.get_domain_dict(dict_key_value,
+ self.network_profile_attr)
+
+ self.validate_network_ranges(hosts_dict, profile_config, networking_dict)
+
+ else:
+ raise validation.ValidationError('Unexpected configuration %s' % key)
+
+ def validate_delete(self, props):
+ logging.debug('validate_delete called with %s', props)
+ if self.domain in props:
+ raise validation.ValidationError('%s cannot be deleted' % self.domain)
+ else:
+ raise validation.ValidationError('References in %s, cannot be deleted' % self.domain)
+
+ def validate_hosts(self, hosts_config, nw_profile_config,
+ storage_profile_config, perf_profile_config,
+ networking_config):
+ net_profile_list = [] if not nw_profile_config \
+ else nw_profile_config.keys()
+ storage_profile_list = [] if not storage_profile_config else storage_profile_config.keys()
+ performance_profile_list = [] if not perf_profile_config else perf_profile_config.keys()
+
+ service_profile_list = service_profiles.Profiles().get_service_profiles()
+
+ bases = []
+ storages = []
+ caas_masters = []
+ managements = []
+
+ for key, value in hosts_config.iteritems():
+ # Hostname
+ if not re.match(r'^[\da-z][\da-z-]*$', key) or len(key) > 63:
+ raise validation.ValidationError('Invalid hostname %s' % key)
+
+ # Network domain
+ attr = 'network_domain'
+ network_domain = value.get(attr)
+ if not network_domain:
+ reason = 'Missing %s for %s' % (attr, key)
+ raise validation.ValidationError(reason)
+
+ # Network profiles
+ attr = 'network_profiles'
+ profiles = value.get(attr)
+ self.validate_profile_list(profiles, net_profile_list, key, attr)
+ if len(profiles) != 1:
+ reason = 'More than one %s defined for %s' % (attr, key)
+ raise validation.ValidationError(reason)
+
+ nw_profile_name = profiles[0]
+ subnet_name = 'infra_internal'
+ if not self.network_is_mapped(nw_profile_config.get(nw_profile_name), subnet_name):
+ raise validation.ValidationError('%s is not mapped for %s' % (subnet_name, key))
+
+ # Performance profiles
+ attr = 'performance_profiles'
+ perf_profile = None
+ profiles = value.get(attr)
+ if profiles:
+ self.validate_profile_list(profiles, performance_profile_list,
+ key, attr)
+ if len(profiles) != 1:
+ reason = 'More than one %s defined for %s' % (attr, key)
+ raise validation.ValidationError(reason)
+ perf_profile = profiles[0]
+ self.validate_nonempty_performance_profile(perf_profile_config, perf_profile, key)
+
+ if self.is_provider_type_ovs_dpdk(nw_profile_name, nw_profile_config):
+ if not profiles:
+ reason = 'Missing performance profiles with ovs_dpdk type provider network'
+ raise validation.ValidationError(reason)
+ self.validate_performance_profile(perf_profile_config, perf_profile)
+
+ # Service profiles
+ attr = 'service_profiles'
+ profiles = value.get(attr)
+ self.validate_profile_list(profiles, service_profile_list, key, attr)
+ if self.is_provider_type_ovs_dpdk(nw_profile_name, nw_profile_config):
+ if self.base_profile not in profiles:
+ reason = 'Missing base service profile with ovs_dpdk type provider network'
+ raise validation.ValidationError(reason)
+ if self.is_provider_type_sriov(nw_profile_name, nw_profile_config):
+ if not self.is_sriov_allowed_for_host(profiles):
+ reason = 'Missing base or caas_* service profile'
+ reason += ' with SR-IOV type provider network'
+ raise validation.ValidationError(reason)
+ if perf_profile:
+ if not self.is_perf_allowed_for_host(profiles):
+ reason = 'Missing base or caas_* service profile'
+ reason += ' with performance profile host'
+ raise validation.ValidationError(reason)
+ if self.management_profile in profiles:
+ managements.append(key)
+ subnet_name = 'infra_external'
+ if not self.network_is_mapped(nw_profile_config.get(nw_profile_name), subnet_name):
+ raise validation.ValidationError('%s is not mapped for %s' % (subnet_name, key))
+ else:
+ subnet_name = 'infra_external'
+ if self.network_is_mapped(nw_profile_config.get(nw_profile_name), subnet_name):
+ raise validation.ValidationError('%s is mapped for %s' % (subnet_name, key))
+
+ if self.base_profile in profiles:
+ bases.append(key)
+ if self.caas_master_profile in profiles:
+ caas_masters.append(key)
+
+ if self.storage_profile in profiles:
+ storages.append(key)
+ st_profiles = value.get('storage_profiles')
+ self.validate_profile_list(st_profiles, storage_profile_list,
+ key, 'storage_profiles')
+ subnet_name = 'infra_storage_cluster'
+ if not self.network_is_mapped(nw_profile_config.get(nw_profile_name), subnet_name) \
+ and self.is_ceph_profile(storage_profile_config, st_profiles):
+ raise validation.ValidationError('%s is not mapped for %s' % (subnet_name, key))
+
+ # HW management
+ self.validate_hwmgmt(value.get('hwmgmt'), key)
+
+ # MAC address
+ self.validate_mac_list(value.get('mgmt_mac'))
+
+ # Preallocated IP validation
+ self.validate_preallocated_ips(value, nw_profile_config, networking_config)
+
+ # Check duplicated Preallocated IPs
+ self.search_for_duplicate_ips(hosts_config)
+
+ # There should be at least one management node
+ if not managements and not caas_masters:
+ reason = 'No management node defined'
+ raise validation.ValidationError(reason)
+
+ # Number of caas_masters 1 or 3
+ if caas_masters:
+ if len(caas_masters) != 1 and len(caas_masters) != 3:
+ reason = 'Unexpected number of caas_master nodes %d' % len(caas_masters)
+ raise validation.ValidationError(reason)
+
+ # Number of management nodes 1 or 3
+ if managements:
+ if len(managements) != 1 and len(managements) != 3:
+ reason = 'Unexpected number of management nodes %d' % len(managements)
+ raise validation.ValidationError(reason)
+
+ # All managements must be in same network domain
+ management_network_domain = None
+ for management in managements:
+ if management_network_domain is None:
+ management_network_domain = hosts_config[management].get('network_domain')
+ else:
+ if not management_network_domain == hosts_config[management].get('network_domain'):
+ reason = 'All management nodes must belong to the same networking domain'
+ raise validation.ValidationError(reason)
+
+ if len(managements) == 3 and len(storages) < 2:
+ raise validation.ValidationError('There are not enough storage nodes')
+
+ self.validate_network_ranges(hosts_config, nw_profile_config, networking_config)
+
+ def validate_network_ranges(self, hosts_config, nw_profile_config, networking_config):
+ host_counts = {} # (infra_network, network_domain) as a key, mapped host count as a value
+ for host_conf in hosts_config.itervalues():
+ if (isinstance(host_conf, dict) and
+ host_conf.get('network_profiles') and
+ isinstance(host_conf['network_profiles'], list) and
+ host_conf['network_profiles']):
+ domain = host_conf.get('network_domain')
+ profile = nw_profile_config.get(host_conf['network_profiles'][0])
+ if (isinstance(profile, dict) and
+ profile.get('interface_net_mapping') and
+ isinstance(profile['interface_net_mapping'], dict)):
+ for infras in profile['interface_net_mapping'].itervalues():
+ if isinstance(infras, list):
+ for infra in infras:
+ key = (infra, domain)
+ host_counts[key] = host_counts.get(key, 0) + 1
+ for (infra, domain), count in host_counts.iteritems():
+ self.validate_infra_network_range(infra, domain, networking_config, count)
+
+ def validate_infra_network_range(self, infra, network_domain, networking_config, host_count):
+ infra_conf = networking_config.get(infra)
+ if not isinstance(infra_conf, dict):
+ return
+
+ domains_conf = infra_conf.get('network_domains')
+ if not isinstance(domains_conf, dict) or network_domain not in domains_conf:
+ reason = '%s does not contain %s network domain configuration' % \
+ (infra, network_domain)
+ raise validation.ValidationError(reason)
+ cidr = domains_conf[network_domain].get('cidr')
+ start = domains_conf[network_domain].get('ip_range_start')
+ end = domains_conf[network_domain].get('ip_range_end')
+
+ if not start and cidr:
+ start = str(IPNetwork(cidr)[1])
+ if not end and cidr:
+ end = str(IPNetwork(cidr)[-2])
+ required = host_count if infra != 'infra_external' else host_count + 1
+ if len(IPRange(start, end)) < required:
+ reason = 'IP range %s - %s does not contain %d addresses' % (start, end, required)
+ raise validation.ValidationError(reason)
+
+ def validate_profile_list(self, profile_list, profile_defs, host, attribute):
+ if not profile_list:
+ raise validation.ValidationError('Missing %s for %s' % (attribute, host))
+ if not isinstance(profile_list, list):
+ raise validation.ValidationError('%s %s value must be a list' % (host, attribute))
+ for profile in profile_list:
+ if profile not in profile_defs:
+ raise validation.ValidationError('Unknown %s %s for %s' %
+ (attribute, profile, host))
+
+ def validate_hwmgmt(self, hwmgmt, host):
+ if not hwmgmt:
+ raise validation.ValidationError('Missing hwmgmt configuration for %s' % host)
+ if not hwmgmt.get('user'):
+ raise validation.ValidationError('Missing hwmgmt username for %s' % host)
+ if not hwmgmt.get('password'):
+ raise validation.ValidationError('Missing hwmgmt password for %s' % host)
+ validationutils = validation.ValidationUtils()
+ validationutils.validate_ip_address(hwmgmt.get('address'))
+
+ def validate_nonempty_performance_profile(self, config, profile_name, host_name):
+ profile = config.get(profile_name)
+ if not isinstance(profile, dict) or not profile:
+ reason = 'Empty performance profile %s defined for %s' % (profile_name, host_name)
+ raise validation.ValidationError(reason)
+
+ def validate_performance_profile(self, config, profile_name):
+ attributes = ['default_hugepagesz', 'hugepagesz', 'hugepages',
+ 'ovs_dpdk_cpus']
+ profile = config.get(profile_name)
+ if not profile:
+ profile = {}
+ for attr in attributes:
+ if not profile.get(attr):
+ raise validation.ValidationError('Missing %s value for performance profile %s'
+ % (attr, profile_name))
+
+ def validate_mac_list(self, mac_list):
+ if not mac_list:
+ return
+
+ if not isinstance(mac_list, list):
+ raise validation.ValidationError('mgmt_mac value must be a list')
+
+ pattern = re.compile('[0-9a-f]{2}([-:])[0-9a-f]{2}(\\1[0-9a-f]{2}){4}$')
+ for mac in mac_list:
+ if not mac or not pattern.match(mac.lower()):
+ raise validation.ValidationError('Invalid mac address syntax %s' % mac)
+
+ def validate_preallocated_ips(self, host, nw_profile_config, networking_config):
+ if not self.host_has_preallocated_ip(host):
+ return
+ validationutils = validation.ValidationUtils()
+ for network_name, ip in host["pre_allocated_ips"].iteritems():
+ for net_profile_name in host["network_profiles"]:
+ if not self.is_network_in_net_profile(
+ network_name, nw_profile_config.get(net_profile_name)):
+ raise validation.ValidationError(
+ "Network %s is missing from network profile %s" %
+ (network_name, net_profile_name))
+ network_domains = networking_config.get(network_name).get("network_domains")
+ host_network_domain = host["network_domain"]
+ subnet = network_domains.get(host_network_domain)["cidr"]
+ validationutils.validate_ip_address(ip)
+ utils.validate_ip_in_network(ip, subnet)
+
+ def host_has_preallocated_ip(self, host):
+ ips_field = "pre_allocated_ips"
+ # True when the host defines a non-empty pre_allocated_ips dict with no empty keys
+ return bool(host.get(ips_field)) and all(host[ips_field])
+
+ def is_network_in_net_profile(self, network_name, network_profile):
+ for networks in network_profile["interface_net_mapping"].itervalues():
+ if network_name in networks:
+ return True
+ return False
+
+ def search_for_duplicate_ips(self, hosts):
+ ips_field = "pre_allocated_ips"
+ hosts_with_preallocated_ip = {name: attributes
+ for name, attributes in hosts.iteritems()
+ if self.host_has_preallocated_ip(attributes)}
+ for host_name, host in hosts_with_preallocated_ip.iteritems():
+ other_hosts = {name: attributes
+ for name, attributes in hosts_with_preallocated_ip.iteritems()
+ if name != host_name}
+ for other_host_name, other_host in other_hosts.iteritems():
+ logging.debug(
+ "Checking %s and %s for duplicated preallocated IPs",
+ host_name, other_host_name)
+ duplicated_ip = self.is_ip_duplicated(host[ips_field], other_host[ips_field])
+ if duplicated_ip:
+ raise validation.ValidationError(
+ "%s and %s have a duplicated IP address: %s" %
+ (host_name, other_host_name, duplicated_ip))
+
+ def is_ip_duplicated(self, ips, other_host_ips):
+ logging.debug("Checking for IP duplication from %s to %s", ips, other_host_ips)
+ for network_name, ip in ips.iteritems():
+ if (network_name in other_host_ips and
+ ip == other_host_ips[network_name]):
+ return ip
+ return False
+
+ def get_attribute_value(self, config, name_list):
+ # Walk the nested dictionaries along name_list; stop and return None
+ # as soon as a level is missing or is not a dictionary
+ value = config
+ for name in name_list:
+ value = None if not isinstance(value, dict) else value.get(name)
+ if not value:
+ break
+ return value
+
+ def get_domain_dict(self, config, domain_name):
+ client = self.get_plugin_client()
+ str_value = config.get(domain_name)
+ if not str_value:
+ str_value = client.get_property(domain_name)
+ dict_value = {} if not str_value else json.loads(str_value)
+ return dict_value
+
+ def is_provider_type_ovs_dpdk(self, profile_name, profile_config):
+ path = [profile_name, 'provider_network_interfaces']
+ provider_ifs = self.get_attribute_value(profile_config, path)
+ if provider_ifs:
+ for value in provider_ifs.values():
+ if value.get('type') == 'ovs-dpdk':
+ return True
+ return False
+
+ def is_provider_type_sriov(self, profile_name, profile_config):
+ path = [profile_name, 'sriov_provider_networks']
+ if self.get_attribute_value(profile_config, path):
+ return True
+ return False
+
+ def is_sriov_allowed_for_host(self, profiles):
+ return (self.base_profile in profiles or
+ self.caas_worker_profile in profiles or
+ self.caas_master_profile in profiles)
+
+ def is_perf_allowed_for_host(self, profiles):
+ return self.is_sriov_allowed_for_host(profiles)
+
+ def network_is_mapped(self, network_profile, name):
+ mapping = network_profile.get('interface_net_mapping')
+ if isinstance(mapping, dict):
+ for networks in mapping.values():
+ if name in networks:
+ return True
+ return False
+
+ def is_ceph_profile(self, storage_profiles, profile_list):
+ ceph = 'ceph'
+ for profile in profile_list:
+ backend = storage_profiles[profile].get('backend')
+ if backend == ceph:
+ return True
+ return False
+
+ def _get_type_of_nodes(self, nodetype, config):
+ nodes = [k for k, v in config.iteritems() if nodetype in v['service_profiles']]
+ return nodes
+
+ def _get_storage_nodes(self, config):
+ return self._get_type_of_nodes(self.storage_profile, config)
+
+ def _get_changed_hosts_config(self, config, domain_name):
+ str_value = config.get(domain_name)
+ return {} if not str_value else json.loads(str_value)
+
+ def _get_running_hosts_config(self):
+ return self.get_domain_dict({}, self.domain)
+
+ def _get_number_of_changed_storage_hosts(self, changes):
+ conf = self._get_changed_hosts_config(changes, self.domain)
+ num = len(self._get_storage_nodes(conf))
+ logging.debug(
+ 'HostsValidator: number of changed storage hosts: %s', str(num))
+ return num
+
+ def _get_number_of_old_storage_hosts(self):
+ conf = self._get_running_hosts_config()
+ if conf:
+ num = len(self._get_storage_nodes(conf))
+ logging.debug(
+ 'HostsValidator: number of existing storage hosts: %s', str(num))
+ return num
+ raise ConfigurationDoesNotExist(
+ "The running hosts configuration does not exist -> deployment ongoing.")
+
+ def _validate_only_one_storage_host_removed(self, changes):
+ num_existing_storage_hosts = self._get_number_of_old_storage_hosts()
+ if self._get_number_of_changed_storage_hosts(changes) < num_existing_storage_hosts - 1:
+ raise validation.ValidationError(
+ "Only one storage node can be scaled in at a time.")
+
+ def validate_scale_in(self, changes):
+ try:
+ self._validate_only_one_storage_host_removed(changes)
+ except ConfigurationDoesNotExist as exc:
+ logging.debug(str(exc))
+ return
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import re
+
+from cmdatahandlers.api import validation
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import utils
+
+
+class NetworkProfilesValidation(cmvalidator.CMValidator):
+ SUBSCRIPTION = r'^cloud\.(network_profiles|networking)$'
+ DOMAIN = 'cloud.network_profiles'
+ NETWORKING = 'cloud.networking'
+
+ MAX_IFACE_NAME_LEN = 15
+ IFACE_NAME_MATCH = r'^[a-z][\da-z]+$'
+ BOND_NAME_MATCH = r'^bond[\d]+$'
+
+ INTERFACE_NET_MAPPING = 'interface_net_mapping'
+ PROVIDER_NETWORK_INTERFACES = 'provider_network_interfaces'
+ PROVIDER_NETWORKS = 'provider_networks'
+ SRIOV_PROVIDER_NETWORKS = 'sriov_provider_networks'
+ INTERFACES = 'interfaces'
+ TRUSTED = 'trusted'
+ VF_COUNT = 'vf_count'
+ TYPE = 'type'
+ DPDK_MAX_RX_QUEUES = 'dpdk_max_rx_queues'
+ BONDING_INTERFACES = 'bonding_interfaces'
+ LINUX_BONDING_OPTIONS = 'linux_bonding_options'
+ OVS_BONDING_OPTIONS = 'ovs_bonding_options'
+
+ TYPE_OVS = 'ovs'
+ TYPE_OVS_DPDK = 'ovs-dpdk'
+ TYPE_OVS_OFFLOAD_SRIOV = "ovs-offload-sriov"
+ TYPE_OVS_OFFLOAD_VIRTIO = "ovs-offload-virtio"
+ VALID_TYPES = [TYPE_OVS, TYPE_OVS_DPDK, TYPE_OVS_OFFLOAD_SRIOV, TYPE_OVS_OFFLOAD_VIRTIO]
+
+ MODE_LACP = 'mode=lacp'
+ MODE_LACP_LAYER34 = 'mode=lacp-layer34'
+ MODE_AB = 'mode=active-backup'
+ VALID_BONDING_OPTIONS = [MODE_LACP, MODE_LACP_LAYER34, MODE_AB]
+
+ VLAN_RANGES = 'vlan_ranges'
+ VLAN = 'vlan'
+ MTU = 'mtu'
+ DEFAULT_MTU = 1500
+ NETWORK_DOMAINS = 'network_domains'
+
+ UNTAGGED = 'untagged'
+
+ INPUT_ERR_CONTEXT = 'validate_set() input'
+ ERR_INPUT_NOT_DICT = 'Invalid %s, not a dictionary' % INPUT_ERR_CONTEXT
+
+ ERR_MISSING = 'Missing {1} configuration in {0}'
+ ERR_NOT_DICT = 'Invalid {1} value in {0}: Empty or not a dictionary'
+ ERR_NOT_LIST = 'Invalid {1} value in {0}: Empty, contains duplicates or not a list'
+ ERR_NOT_STR = 'Invalid {1} value in {0}: Not a string'
+ ERR_NOT_INT = 'Invalid {1} value in {0}: Not an integer'
+ ERR_NOT_BOOL = 'Invalid {1} value in {0}: Not a boolean value'
+
+ ERR_INVALID_IFACE_NAME = 'Invalid interface name in {}'
+ ERR_IFACE_NAME_LEN = 'Too long interface name in {}, max %s chars' % MAX_IFACE_NAME_LEN
+ ERR_IFACE_VLAN = 'Interface in {0} cannot be vlan interface: {1}'
+ ERR_IFACE_BOND = 'Interface in {0} cannot be bond interface: {1}'
+ ERR_IFACE_NOT_BOND = 'Invalid bonding interface name {1} in {0}'
+ ERR_NET_MAPPING_CONFLICT = 'Network {1} mapped to multiple interfaces in {0}'
+ ERR_UNTAGGED_INFRA_CONFLICT = 'Multiple untagged networks on interface {1} in {0}'
+ ERR_UNTAGGED_MTU_SIZE = 'Untagged network {1} in {0} has too small MTU, ' + \
+ 'VLAN tagged networks with bigger MTU exist on the same interface'
+
+ ERR_INVALID_TYPE = 'Invalid provider network type for interface {}, valid types: %s' % \
+ VALID_TYPES
+ ERR_DPDK_MAX_RX_QUEUES = 'Invalid %s value {}, must be positive integer' % DPDK_MAX_RX_QUEUES
+ ERR_MISSPLACED_MTU = 'Misplaced MTU inside %s interface {}' % PROVIDER_NETWORK_INTERFACES
+ ERR_OVS_TYPE_CONFLICT = 'Cannot have both %s and %s types of provider networks in {}' % \
+ (TYPE_OVS, TYPE_OVS_DPDK)
+ ERR_DPDK_SRIOV_CONFLICT = 'Cannot have both %s and sr-iov on same interface in {}' % \
+ TYPE_OVS_DPDK
+ ERR_OFFLOAD_SRIOV_CONFLICT = 'Cannot have both %s and sr-iov on same profile in {}' % \
+ TYPE_OVS_OFFLOAD_SRIOV
+ ERR_OFFLOAD_DPDK_CONFLICT = 'Cannot have both %s and %s types of provider networks in {}' % \
+ (TYPE_OVS_OFFLOAD_SRIOV, TYPE_OVS_DPDK)
+
+ ERR_INVALID_BONDING_OPTIONS = 'Invalid {1} in {0}, valid options: %s' % VALID_BONDING_OPTIONS
+ ERR_MISSING_BOND = 'Missing bonding interface definition for {1} in {0}'
+ ERR_LACP_SLAVE_COUNT = 'Invalid bonding slave interface count for {1} in {0}, ' + \
+ 'at least two interfaces required with %s' % MODE_LACP
+ ERR_AB_SLAVE_COUNT = 'Invalid bonding slave interface count for {1} in {0}, ' + \
+ 'exactly two interfaces required with %s' % MODE_AB
+ ERR_SLAVE_CONFLICT = 'Same interface mapped to multiple bond interfaces in {}'
+ ERR_SLAVE_IN_NET = 'Network physical interface {1} mapped also as part of bond in {0}'
+
+ ERR_SRIOV_MTU_SIZE = 'SR-IOV network {0} MTU {1} cannot be greater than interface {2} MTU {3}'
+ ERR_SRIOV_INFRA_VLAN_CONFLICT = \
+ 'SR-IOV network {} vlan range is conflicting with infra network vlan'
+ ERR_SRIOV_PROVIDER_VLAN_CONFLICT = \
+ 'SR-IOV network {} vlan range is conflicting with other provider network vlan'
+ ERR_SINGLE_NIC_VIOLATION = \
+ 'Provider and infra networks on the same interface in {}: ' + \
+ 'Supported only if all networks are on the same interface'
+ ERR_SINGLE_NIC_DPDK = \
+ 'Provider and infra networks on the same interface in {}: ' + \
+ 'Not supported for %s type of provider networks' % TYPE_OVS_DPDK
+ ERR_INFRA_PROVIDER_VLAN_CONFLICT = \
+ 'Provider network {} vlan range is conflicting with infra network vlan'
+ ERR_INFRA_PROVIDER_UNTAGGED_CONFLICT = \
+ 'Sharing untagged infra and provider network {} not supported'
+ ERR_SRIOV_LACP_CONFLICT = 'Bonding mode %s not supported with SR-IOV networks' % MODE_LACP
+ ERR_SRIOV_IFACE_CONFLICT = 'Same interface mapped to multiple SR-IOV networks in {}'
+ ERR_VF_COUNT = 'SR-IOV network {} %s must be positive integer' % VF_COUNT
+
+ ERR_PROVIDER_VLAN_CONFLICT = 'Provider network vlan ranges conflicting on interface {}'
+
+ @staticmethod
+ def err_input_not_dict():
+ err = NetworkProfilesValidation.ERR_INPUT_NOT_DICT
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_missing(context, key):
+ err = NetworkProfilesValidation.ERR_MISSING.format(context, key)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_not_dict(context, key):
+ err = NetworkProfilesValidation.ERR_NOT_DICT.format(context, key)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_not_list(context, key):
+ err = NetworkProfilesValidation.ERR_NOT_LIST.format(context, key)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_not_str(context, key):
+ raise validation.ValidationError(NetworkProfilesValidation.ERR_NOT_STR.format(context, key))
+
+ @staticmethod
+ def err_not_int(context, key):
+ raise validation.ValidationError(NetworkProfilesValidation.ERR_NOT_INT.format(context, key))
+
+ @staticmethod
+ def err_not_bool(context, key):
+ err = NetworkProfilesValidation.ERR_NOT_BOOL.format(context, key)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_invalid_iface_name(context):
+ err = NetworkProfilesValidation.ERR_INVALID_IFACE_NAME.format(context)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_iface_name_len(context):
+ err = NetworkProfilesValidation.ERR_IFACE_NAME_LEN.format(context)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_iface_vlan(context, iface):
+ err = NetworkProfilesValidation.ERR_IFACE_VLAN.format(context, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_iface_bond(context, iface):
+ err = NetworkProfilesValidation.ERR_IFACE_BOND.format(context, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_provnet_type(iface):
+ err = NetworkProfilesValidation.ERR_INVALID_TYPE.format(iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_dpdk_max_rx_queues(value):
+ err = NetworkProfilesValidation.ERR_DPDK_MAX_RX_QUEUES.format(value)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_missplaced_mtu(iface):
+ err = NetworkProfilesValidation.ERR_MISSPLACED_MTU.format(iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_iface_not_bond(context, iface):
+ err = NetworkProfilesValidation.ERR_IFACE_NOT_BOND.format(context, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_bonding_options(profile, options_type):
+ err = NetworkProfilesValidation.ERR_INVALID_BONDING_OPTIONS.format(profile, options_type)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_missing_bond_def(profile, iface):
+ err = NetworkProfilesValidation.ERR_MISSING_BOND.format(profile, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_lacp_slave_count(profile, iface):
+ err = NetworkProfilesValidation.ERR_LACP_SLAVE_COUNT.format(profile, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_ab_slave_count(profile, iface):
+ err = NetworkProfilesValidation.ERR_AB_SLAVE_COUNT.format(profile, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_slave_conflict(profile):
+ err = NetworkProfilesValidation.ERR_SLAVE_CONFLICT.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_slave_in_net(profile, iface):
+ err = NetworkProfilesValidation.ERR_SLAVE_IN_NET.format(profile, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_ovs_type_conflict(profile):
+ err = NetworkProfilesValidation.ERR_OVS_TYPE_CONFLICT.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_offload_dpdk_conflict(profile):
+ err = NetworkProfilesValidation.ERR_OFFLOAD_DPDK_CONFLICT.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_dpdk_sriov_conflict(profile):
+ err = NetworkProfilesValidation.ERR_DPDK_SRIOV_CONFLICT.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_offload_sriov_conflict(profile):
+ err = NetworkProfilesValidation.ERR_OFFLOAD_SRIOV_CONFLICT.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_net_mapping_conflict(profile, network):
+ err = NetworkProfilesValidation.ERR_NET_MAPPING_CONFLICT.format(profile, network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_untagged_infra_conflict(profile, iface):
+ err = NetworkProfilesValidation.ERR_UNTAGGED_INFRA_CONFLICT.format(profile, iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_untagged_mtu_size(context, network):
+ err = NetworkProfilesValidation.ERR_UNTAGGED_MTU_SIZE.format(context, network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_sriov_mtu_size(sriov_net, sriov_mtu, phys_iface, iface_mtu):
+ err = NetworkProfilesValidation.ERR_SRIOV_MTU_SIZE.format(sriov_net, sriov_mtu,
+ phys_iface, iface_mtu)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_sriov_infra_vlan_conflict(network):
+ err = NetworkProfilesValidation.ERR_SRIOV_INFRA_VLAN_CONFLICT.format(network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_sriov_provider_vlan_conflict(network):
+ err = NetworkProfilesValidation.ERR_SRIOV_PROVIDER_VLAN_CONFLICT.format(network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_single_nic_violation(profile):
+ err = NetworkProfilesValidation.ERR_SINGLE_NIC_VIOLATION.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_single_nic_dpdk(profile):
+ err = NetworkProfilesValidation.ERR_SINGLE_NIC_DPDK.format(profile)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_infra_provider_vlan_conflict(network):
+ err = NetworkProfilesValidation.ERR_INFRA_PROVIDER_VLAN_CONFLICT.format(network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_infra_provider_untagged_conflict(network):
+ err = NetworkProfilesValidation.ERR_INFRA_PROVIDER_UNTAGGED_CONFLICT.format(network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_sriov_lacp_conflict():
+ err = NetworkProfilesValidation.ERR_SRIOV_LACP_CONFLICT
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_sriov_iface_conflict():
+ err = NetworkProfilesValidation.ERR_SRIOV_IFACE_CONFLICT
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_vf_count(network):
+ err = NetworkProfilesValidation.ERR_VF_COUNT.format(network)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_provider_vlan_conflict(iface):
+ err = NetworkProfilesValidation.ERR_PROVIDER_VLAN_CONFLICT.format(iface)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def is_dict(conf):
+ return isinstance(conf, dict)
+
+ @staticmethod
+ def is_bond_iface(iface):
+ return re.match(NetworkProfilesValidation.BOND_NAME_MATCH, iface)
+
+ @staticmethod
+ def is_non_empty_dict(conf):
+ return isinstance(conf, dict) and len(conf) > 0
+
+ @staticmethod
+ def key_exists(conf_dict, key):
+ return key in conf_dict
+
+ @staticmethod
+ def val_is_int(conf_dict, key):
+ return isinstance(conf_dict[key], (int, long))
+
+ @staticmethod
+ def val_is_bool(conf_dict, key):
+ return isinstance(conf_dict[key], bool)
+
+ @staticmethod
+ def val_is_str(conf_dict, key):
+ return isinstance(conf_dict[key], basestring)
+
+ @staticmethod
+ def val_is_non_empty_list(conf_dict, key):
+ return (isinstance(conf_dict[key], list) and
+ len(conf_dict[key]) > 0 and
+ len(conf_dict[key]) == len(set(conf_dict[key])))
+
+ @staticmethod
+ def val_is_non_empty_dict(conf_dict, key):
+ return NetworkProfilesValidation.is_non_empty_dict(conf_dict[key])
+
+ @staticmethod
+ def key_must_exist(conf_dict, entry, key):
+ if not NetworkProfilesValidation.key_exists(conf_dict[entry], key):
+ NetworkProfilesValidation.err_missing(entry, key)
+
+ @staticmethod
+ def must_be_str(conf_dict, entry, key):
+ NetworkProfilesValidation.key_must_exist(conf_dict, entry, key)
+ if not NetworkProfilesValidation.val_is_str(conf_dict[entry], key):
+ NetworkProfilesValidation.err_not_str(entry, key)
+
+ @staticmethod
+ def must_be_list(conf_dict, entry, key):
+ NetworkProfilesValidation.key_must_exist(conf_dict, entry, key)
+ if not NetworkProfilesValidation.val_is_non_empty_list(conf_dict[entry], key):
+ NetworkProfilesValidation.err_not_list(entry, key)
+
+ @staticmethod
+ def must_be_dict(conf_dict, entry, key):
+ NetworkProfilesValidation.key_must_exist(conf_dict, entry, key)
+ if not NetworkProfilesValidation.val_is_non_empty_dict(conf_dict[entry], key):
+ NetworkProfilesValidation.err_not_dict(entry, key)
+
+ @staticmethod
+ def exists_as_dict(conf_dict, entry, key):
+ if not NetworkProfilesValidation.key_exists(conf_dict[entry], key):
+ return False
+ if not NetworkProfilesValidation.val_is_non_empty_dict(conf_dict[entry], key):
+ NetworkProfilesValidation.err_not_dict(entry, key)
+ return True
+
+ @staticmethod
+ def exists_as_int(conf_dict, entry, key):
+ if not NetworkProfilesValidation.key_exists(conf_dict[entry], key):
+ return False
+ if not NetworkProfilesValidation.val_is_int(conf_dict[entry], key):
+ NetworkProfilesValidation.err_not_int(entry, key)
+ return True
+
+ @staticmethod
+ def are_overlapping(ranges1, ranges2):
+ # Inclusive (low, high) ranges overlap unless one ends before the other begins
+ for range1 in ranges1:
+ for range2 in ranges2:
+ if not (range1[0] > range2[1] or range1[1] < range2[0]):
+ return True
+ return False
+
+ def __init__(self):
+ cmvalidator.CMValidator.__init__(self)
+ self.conf = None
+ self.networking = None
+
+ def get_subscription_info(self):
+ return self.SUBSCRIPTION
+
+ def validate_set(self, props):
+ if not self.is_dict(props):
+ self.err_input_not_dict()
+
+ if not (self.key_exists(props, self.DOMAIN) or
+ self.key_exists(props, self.NETWORKING)):
+ self.err_missing(self.INPUT_ERR_CONTEXT,
+ '{} or {}'.format(self.DOMAIN, self.NETWORKING))
+
+ if self.key_exists(props, self.DOMAIN):
+ if not props[self.DOMAIN]:
+ self.err_not_dict(self.INPUT_ERR_CONTEXT, self.DOMAIN)
+ self.conf = json.loads(props[self.DOMAIN])
+ else:
+ self.conf = json.loads(self.get_plugin_client().get_property(self.DOMAIN))
+
+ if not self.is_non_empty_dict(self.conf):
+ self.err_not_dict(self.INPUT_ERR_CONTEXT, self.DOMAIN)
+
+ if self.key_exists(props, self.NETWORKING):
+ if not props[self.NETWORKING]:
+ self.err_not_dict(self.INPUT_ERR_CONTEXT, self.NETWORKING)
+ self.networking = json.loads(props[self.NETWORKING])
+ else:
+ self.networking = json.loads(self.get_plugin_client().get_property(self.NETWORKING))
+
+ if not self.is_non_empty_dict(self.networking):
+ self.err_not_dict(self.INPUT_ERR_CONTEXT, self.NETWORKING)
+
+ self.validate()
+
+ def validate(self):
+ for profile_name in self.conf:
+ if not self.val_is_non_empty_dict(self.conf, profile_name):
+ self.err_not_dict(self.DOMAIN, profile_name)
+ self.validate_network_profile(profile_name)
+
+ def validate_network_profile(self, profile_name):
+ self.validate_interface_net_mapping(profile_name)
+ self.validate_bonding_interfaces(profile_name)
+ self.validate_bonding_options(profile_name)
+ self.validate_provider_net_ifaces(profile_name)
+ self.validate_network_integrity(profile_name)
+ self.validate_sriov_provider_networks(profile_name)
+ self.validate_provider_networks(profile_name)
+
+ def validate_interface_net_mapping(self, profile_name):
+ self.must_be_dict(self.conf, profile_name, self.INTERFACE_NET_MAPPING)
+ networks = []
+ for iface in self.conf[profile_name][self.INTERFACE_NET_MAPPING]:
+ self.validate_iface_name(self.INTERFACE_NET_MAPPING, iface)
+ self.validate_not_vlan(self.INTERFACE_NET_MAPPING, iface)
+ self.must_be_list(self.conf[profile_name], self.INTERFACE_NET_MAPPING, iface)
+ iface_nets = self.conf[profile_name][self.INTERFACE_NET_MAPPING][iface]
+ self.validate_used_infra_networks_defined(iface_nets)
+ for domain in self.get_network_domains(iface_nets):
+ self.validate_untagged_infra_integrity(iface_nets, iface, profile_name, domain)
+ networks.extend(iface_nets)
+ self.validate_networks_mapped_only_once(profile_name, networks)
+
+ def validate_used_infra_networks_defined(self, networks):
+ for net in networks:
+ if not self.key_exists(self.networking, net):
+ self.err_missing(self.NETWORKING, net)
+ self.must_be_dict(self.networking, net, self.NETWORK_DOMAINS)
+ for domain in self.networking[net][self.NETWORK_DOMAINS]:
+ self.must_be_dict(self.networking[net], self.NETWORK_DOMAINS, domain)
+
+ def get_network_domains(self, networks):
+ domains = set()
+ for net in networks:
+ domains.update(self.networking[net][self.NETWORK_DOMAINS].keys())
+ return domains
+
+ def validate_untagged_infra_integrity(self, iface_nets, iface, profile_name, network_domain):
+ untagged_infras = []
+ untagged_mtu = None
+ max_vlan_mtu = 0
+ default_mtu = self.get_default_mtu()
+
+ for net in iface_nets:
+ if self.key_exists(self.networking[net][self.NETWORK_DOMAINS], network_domain):
+ if not self.key_exists(self.networking[net][self.NETWORK_DOMAINS][network_domain],
+ self.VLAN):
+ untagged_infras.append(net)
+ if self.exists_as_int(self.networking, net, self.MTU):
+ untagged_mtu = self.networking[net][self.MTU]
+ else:
+ untagged_mtu = default_mtu
+ else:
+ if self.exists_as_int(self.networking, net, self.MTU):
+ mtu = self.networking[net][self.MTU]
+ else:
+ mtu = default_mtu
+ if mtu > max_vlan_mtu:
+ max_vlan_mtu = mtu
+
+ if not utils.is_virtualized():
+ if len(untagged_infras) > 1:
+ self.err_untagged_infra_conflict(profile_name, iface)
+
+ if untagged_mtu and untagged_mtu < max_vlan_mtu:
+ self.err_untagged_mtu_size(self.NETWORKING, untagged_infras[0])
+
+ def validate_bonding_interfaces(self, profile_name):
+ slaves = []
+ if self.exists_as_dict(self.conf, profile_name, self.BONDING_INTERFACES):
+ for iface in self.conf[profile_name][self.BONDING_INTERFACES]:
+ self.validate_iface_name(self.BONDING_INTERFACES, iface)
+ if not self.is_bond_iface(iface):
+ self.err_iface_not_bond(self.BONDING_INTERFACES, iface)
+ self.must_be_list(self.conf[profile_name], self.BONDING_INTERFACES, iface)
+ for slave in self.conf[profile_name][self.BONDING_INTERFACES][iface]:
+ self.validate_bond_slave(iface, slave)
+ slaves.append(slave)
+ if len(slaves) != len(set(slaves)):
+ self.err_slave_conflict(profile_name)
+
+ def validate_bond_slave(self, iface, slave):
+ self.validate_iface_name(iface, slave)
+ self.validate_not_vlan(iface, slave)
+ self.validate_not_bond(iface, slave)
+
+ def validate_not_bond(self, context, iface):
+ # Reject any interface whose name contains 'bond': slaves cannot be bond interfaces
+ if 'bond' in iface:
+ self.err_iface_bond(context, iface)
+
+ def validate_bonding_options(self, profile_name):
+ self.validate_bonding_option(profile_name, self.LINUX_BONDING_OPTIONS)
+ self.validate_bonding_option(profile_name, self.OVS_BONDING_OPTIONS)
+
+ def validate_bonding_option(self, profile_name, options_type):
+ if self.key_exists(self.conf[profile_name], options_type):
+ if self.conf[profile_name][options_type] not in self.VALID_BONDING_OPTIONS:
+ self.err_bonding_options(profile_name, options_type)
+
+ def validate_provider_net_ifaces(self, profile_name):
+ if self.exists_as_dict(self.conf, profile_name, self.PROVIDER_NETWORK_INTERFACES):
+ types = set()
+ networks = []
+ for iface in self.conf[profile_name][self.PROVIDER_NETWORK_INTERFACES]:
+ self.validate_iface_name(self.PROVIDER_NETWORK_INTERFACES, iface)
+ self.validate_not_vlan(self.PROVIDER_NETWORK_INTERFACES, iface)
+ provnet_ifaces_conf = self.conf[profile_name][self.PROVIDER_NETWORK_INTERFACES]
+ self.validate_provider_net_type(provnet_ifaces_conf, iface)
+ self.validate_provider_net_vf_count(provnet_ifaces_conf, iface)
+ self.validate_dpdk_max_rx_queues(provnet_ifaces_conf, iface)
+ self.validate_no_mtu(provnet_ifaces_conf, iface)
+ self.must_be_list(provnet_ifaces_conf, iface, self.PROVIDER_NETWORKS)
+ types.add(provnet_ifaces_conf[iface][self.TYPE])
+ networks.extend(provnet_ifaces_conf[iface][self.PROVIDER_NETWORKS])
+ if self.TYPE_OVS_DPDK in types and self.TYPE_OVS in types:
+ self.err_ovs_type_conflict(profile_name)
+ if self.TYPE_OVS_DPDK in types and self.TYPE_OVS_OFFLOAD_SRIOV in types:
+ self.err_offload_dpdk_conflict(profile_name)
+ self.validate_networks_mapped_only_once(profile_name, networks)
+ self.validate_used_provider_networks_defined(networks)
+
+ def validate_sriov_provider_networks(self, profile_name):
+ if self.exists_as_dict(self.conf, profile_name, self.SRIOV_PROVIDER_NETWORKS):
+ networks = self.conf[profile_name][self.SRIOV_PROVIDER_NETWORKS]
+ self.validate_used_provider_networks_defined(networks)
+ sriov_ifaces = []
+ for network in networks:
+ if (self.exists_as_int(networks, network, self.VF_COUNT) and
+ networks[network][self.VF_COUNT] < 1):
+ self.err_vf_count(network)
+ if (self.key_exists(networks[network], self.TRUSTED) and
+ not self.val_is_bool(networks[network], self.TRUSTED)):
+ self.err_not_bool(network, self.TRUSTED)
+ self.must_be_list(networks, network, self.INTERFACES)
+ for iface in networks[network][self.INTERFACES]:
+ sriov_ifaces.append(iface)
+ self.validate_iface_name(network, iface)
+ self.validate_not_vlan(network, iface)
+ self.validate_not_bond(network, iface)
+ self.validate_not_part_of_lacp(self.conf[profile_name], iface)
+ infra_info = self.get_iface_infra_info(self.conf[profile_name], iface)
+ if infra_info is not None:
+ self.validate_shared_sriov_infra(network, iface, infra_info)
+ provider_info = self.get_iface_provider_info(self.conf[profile_name], iface)
+ if provider_info[self.TYPE] == self.TYPE_OVS_DPDK:
+ self.err_dpdk_sriov_conflict(profile_name)
+ if provider_info[self.TYPE] == self.TYPE_OVS_OFFLOAD_SRIOV:
+ self.err_offload_sriov_conflict(profile_name)
+ if provider_info[self.VLAN_RANGES]:
+ self.validate_shared_sriov_provider(network,
+ provider_info[self.VLAN_RANGES])
+ if len(sriov_ifaces) != len(set(sriov_ifaces)):
+ self.err_sriov_iface_conflict()
+
+ def validate_provider_networks(self, profile_name):
+ if self.key_exists(self.conf[profile_name], self.PROVIDER_NETWORK_INTERFACES):
+ for iface in self.conf[profile_name][self.PROVIDER_NETWORK_INTERFACES]:
+ iface_info = self.conf[profile_name][self.PROVIDER_NETWORK_INTERFACES][iface]
+                vlan_ranges_list = []
+                infra_info = self.get_iface_infra_info(self.conf[profile_name], iface)
+                for network in iface_info[self.PROVIDER_NETWORKS]:
+                    vlan_ranges = self.get_vlan_ranges(network)
+                    vlan_ranges_list.append(vlan_ranges)
+                    if infra_info is not None:
+                        # check every provider network sharing this infra
+                        # interface, not only the last one iterated
+                        self.validate_shared_infra_provider(network, infra_info,
+                                                           vlan_ranges)
+                if infra_info is not None:
+                    if (len(self.conf[profile_name][self.PROVIDER_NETWORK_INTERFACES]) > 1 or
+                            len(self.conf[profile_name][self.INTERFACE_NET_MAPPING]) > 1):
+                        self.err_single_nic_violation(profile_name)
+                    if iface_info[self.TYPE] == self.TYPE_OVS_DPDK:
+                        self.err_single_nic_dpdk(profile_name)
+ for idx, ranges1 in enumerate(vlan_ranges_list):
+ for ranges2 in vlan_ranges_list[(idx+1):]:
+ if self.are_overlapping(ranges1, ranges2):
+ self.err_provider_vlan_conflict(iface)
+
+ def validate_not_part_of_lacp(self, profile_conf, iface):
+ if self.key_exists(profile_conf, self.PROVIDER_NETWORK_INTERFACES):
+ for provider_iface in profile_conf[self.PROVIDER_NETWORK_INTERFACES]:
+ if self.is_bond_iface(provider_iface):
+ if iface in profile_conf[self.BONDING_INTERFACES][provider_iface]:
+ if profile_conf[self.OVS_BONDING_OPTIONS] == self.MODE_LACP:
+ self.err_sriov_lacp_conflict()
+                        # iface is part of an OVS bond: do not check Linux
+                        # bonding options even if shared with infra networks
+                        return
+ for infra_iface in profile_conf[self.INTERFACE_NET_MAPPING]:
+ if self.is_bond_iface(infra_iface):
+ if iface in profile_conf[self.BONDING_INTERFACES][infra_iface]:
+ if profile_conf[self.LINUX_BONDING_OPTIONS] == self.MODE_LACP:
+ self.err_sriov_lacp_conflict()
+ break
+
+ def validate_shared_sriov_infra(self, sriov_net, iface, infra_info):
+ sriov_info = self.get_sriov_info(sriov_net)
+ if sriov_info[self.MTU] > infra_info[self.MTU]:
+ self.err_sriov_mtu_size(sriov_net, sriov_info[self.MTU], iface, infra_info[self.MTU])
+ for vlan_range in sriov_info[self.VLAN_RANGES]:
+ for infra_vlan in infra_info[self.VLAN]:
+ if not (infra_vlan < vlan_range[0] or infra_vlan > vlan_range[1]):
+ self.err_sriov_infra_vlan_conflict(sriov_net)
+
+ def validate_shared_sriov_provider(self, sriov_net, ovs_vlan_ranges):
+ sriov_vlan_ranges = self.get_vlan_ranges(sriov_net)
+ if self.are_overlapping(sriov_vlan_ranges, ovs_vlan_ranges):
+ self.err_sriov_provider_vlan_conflict(sriov_net)
+
+ def validate_shared_infra_provider(self, provider_net, infra_info, vlan_ranges):
+ if infra_info[self.UNTAGGED]:
+ self.err_infra_provider_untagged_conflict(provider_net)
+ for vlan in infra_info[self.VLAN]:
+ for vlan_range in vlan_ranges:
+ if not (vlan_range[0] > vlan or vlan_range[1] < vlan):
+ self.err_infra_provider_vlan_conflict(provider_net)
+
+ def get_iface_infra_info(self, profile_conf, iface):
+ infra_info = {self.VLAN: [], self.MTU: 0, self.UNTAGGED: False}
+ default_mtu = self.get_default_mtu()
+ infra_iface = self.get_master_iface(profile_conf, iface)
+
+ if self.key_exists(profile_conf[self.INTERFACE_NET_MAPPING], infra_iface):
+ for infra in profile_conf[self.INTERFACE_NET_MAPPING][infra_iface]:
+ for domain in self.networking[infra][self.NETWORK_DOMAINS].itervalues():
+ if self.key_exists(domain, self.VLAN):
+ infra_info[self.VLAN].append(domain[self.VLAN])
+ else:
+ infra_info[self.UNTAGGED] = True
+ if self.exists_as_int(self.networking, infra, self.MTU):
+ mtu = self.networking[infra][self.MTU]
+ else:
+ mtu = default_mtu
+ if mtu > infra_info[self.MTU]:
+ infra_info[self.MTU] = mtu
+
+ if infra_info[self.MTU] == 0:
+ return None
+
+ return infra_info
+
+ def get_iface_provider_info(self, profile_conf, iface):
+ provider_info = {self.TYPE: None, self.VLAN_RANGES: []}
+ provider_iface = self.get_master_iface(profile_conf, iface)
+
+ if self.key_exists(profile_conf, self.PROVIDER_NETWORK_INTERFACES):
+ if self.key_exists(profile_conf[self.PROVIDER_NETWORK_INTERFACES], provider_iface):
+ iface_info = profile_conf[self.PROVIDER_NETWORK_INTERFACES][provider_iface]
+ provider_info[self.TYPE] = iface_info[self.TYPE]
+ for network in iface_info[self.PROVIDER_NETWORKS]:
+ provider_info[self.VLAN_RANGES].extend(self.get_vlan_ranges(network))
+
+ return provider_info
+
+ def get_master_iface(self, profile_conf, slave_iface):
+ if self.key_exists(profile_conf, self.BONDING_INTERFACES):
+ for bond in profile_conf[self.BONDING_INTERFACES]:
+ if slave_iface in profile_conf[self.BONDING_INTERFACES][bond]:
+ return bond
+ return slave_iface
+
+ def get_sriov_info(self, network):
+ sriov_info = {self.VLAN_RANGES: []}
+ if self.exists_as_int(self.networking[self.PROVIDER_NETWORKS], network, self.MTU):
+ sriov_info[self.MTU] = self.networking[self.PROVIDER_NETWORKS][network][self.MTU]
+ else:
+ sriov_info[self.MTU] = self.get_default_mtu()
+ sriov_info[self.VLAN_RANGES] = self.get_vlan_ranges(network)
+ return sriov_info
+
+ def get_vlan_ranges(self, network):
+ vlan_ranges = []
+ networks = self.networking[self.PROVIDER_NETWORKS]
+ self.must_be_str(networks, network, self.VLAN_RANGES)
+ for vlan_range in networks[network][self.VLAN_RANGES].split(','):
+ vids = vlan_range.split(':')
+            if len(vids) != 2:
+                # tolerate malformed entries here; format errors are
+                # reported by the cloud.networking validator
+                break
+ try:
+ start = int(vids[0])
+ end = int(vids[1])
+ except ValueError:
+ break
+ if end >= start:
+ vlan_ranges.append([start, end])
+ return vlan_ranges
+
+ def get_default_mtu(self):
+ if (self.key_exists(self.networking, self.MTU) and
+ self.val_is_int(self.networking, self.MTU)):
+ return self.networking[self.MTU]
+ return self.DEFAULT_MTU
+
+ def validate_iface_name(self, context, iface):
+ if not isinstance(iface, basestring) or not re.match(self.IFACE_NAME_MATCH, iface):
+ self.err_invalid_iface_name(context)
+ if len(iface) > self.MAX_IFACE_NAME_LEN:
+ self.err_iface_name_len(context)
+
+ def validate_not_vlan(self, context, iface):
+ if 'vlan' in iface:
+ self.err_iface_vlan(context, iface)
+
+ def validate_provider_net_type(self, provnet_ifaces_conf, iface):
+ self.must_be_str(provnet_ifaces_conf, iface, self.TYPE)
+ if provnet_ifaces_conf[iface][self.TYPE] not in self.VALID_TYPES:
+ self.err_provnet_type(iface)
+
+ def validate_provider_net_vf_count(self, provnet_ifaces_conf, iface):
+ if self.exists_as_int(provnet_ifaces_conf, iface, self.VF_COUNT):
+ value = provnet_ifaces_conf[iface][self.VF_COUNT]
+ if value < 1:
+ self.err_vf_count(iface)
+
+ def validate_dpdk_max_rx_queues(self, provnet_ifaces_conf, iface):
+ if self.exists_as_int(provnet_ifaces_conf, iface, self.DPDK_MAX_RX_QUEUES):
+ value = provnet_ifaces_conf[iface][self.DPDK_MAX_RX_QUEUES]
+ if value < 1:
+ self.err_dpdk_max_rx_queues(value)
+
+ def validate_no_mtu(self, provnet_ifaces_conf, iface):
+ if self.key_exists(provnet_ifaces_conf[iface], self.MTU):
+ self.err_missplaced_mtu(iface)
+
+ def validate_networks_mapped_only_once(self, profile_name, networks):
+ prev_net = None
+ for net in sorted(networks):
+ if net == prev_net:
+ self.err_net_mapping_conflict(profile_name, net)
+ prev_net = net
+
+ def validate_used_provider_networks_defined(self, networks):
+ for net in networks:
+ self.key_must_exist(self.networking, self.PROVIDER_NETWORKS, net)
+
+ def validate_network_integrity(self, profile_name):
+ provider_ifaces = []
+ if self.key_exists(self.conf[profile_name], self.PROVIDER_NETWORK_INTERFACES):
+ for iface in self.conf[profile_name][self.PROVIDER_NETWORK_INTERFACES]:
+ self.validate_net_iface_integrity(profile_name, iface, self.OVS_BONDING_OPTIONS)
+ provider_ifaces.append(iface)
+ for iface in self.conf[profile_name][self.INTERFACE_NET_MAPPING]:
+ if iface not in provider_ifaces:
+ self.validate_net_iface_integrity(profile_name, iface, self.LINUX_BONDING_OPTIONS)
+
+ def validate_net_iface_integrity(self, profile_name, iface, bonding_type):
+ if self.is_bond_iface(iface):
+ if (not self.key_exists(self.conf[profile_name], self.BONDING_INTERFACES) or
+ iface not in self.conf[profile_name][self.BONDING_INTERFACES]):
+ self.err_missing_bond_def(profile_name, iface)
+ self.key_must_exist(self.conf, profile_name, bonding_type)
+ self.validate_bond_slave_count(profile_name, iface,
+ self.conf[profile_name][bonding_type])
+ elif self.key_exists(self.conf[profile_name], self.BONDING_INTERFACES):
+ for bond in self.conf[profile_name][self.BONDING_INTERFACES]:
+ for slave in self.conf[profile_name][self.BONDING_INTERFACES][bond]:
+ if iface == slave:
+ self.err_slave_in_net(profile_name, iface)
+
+ def validate_bond_slave_count(self, profile_name, iface, bonding_mode):
+ slave_count = len(self.conf[profile_name][self.BONDING_INTERFACES][iface])
+ if bonding_mode == self.MODE_AB and slave_count != 2:
+ self.err_ab_slave_count(profile_name, iface)
+ elif bonding_mode == self.MODE_LACP and slave_count < 2:
+ self.err_lacp_slave_count(profile_name, iface)
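
The validators above repeatedly test whether two inclusive VLAN ranges overlap (via `are_overlapping`, whose definition falls outside this hunk): ranges `[a, b]` and `[c, d]` are disjoint exactly when `a > d` or `b < c`. A standalone sketch of that logic, with hypothetical helper names not taken from the patch:

```python
def are_overlapping(range1, range2):
    # inclusive ranges [start, end]; disjoint iff one ends
    # before the other begins
    return not (range1[0] > range2[1] or range1[1] < range2[0])

def any_overlap(ranges):
    # pairwise scan, mirroring validate_vlan_ranges_not_overlapping
    for idx, range1 in enumerate(ranges):
        for range2 in ranges[idx + 1:]:
            if are_overlapping(range1, range2):
                return True
    return False

print(any_overlap([[100, 200], [300, 400]]))  # False: disjoint ranges
print(any_overlap([[100, 200], [150, 160]]))  # True: nested ranges overlap
```

Note that the ranges are inclusive on both ends, so `[100, 200]` and `[200, 300]` count as overlapping.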
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import re
+from netaddr import IPNetwork
+
+from cmdatahandlers.api import validation
+from cmframework.apis import cmvalidator
+
+
+class NetworkingValidation(cmvalidator.CMValidator):
+ SUBSCRIPTION = r'^cloud\.networking$'
+ DOMAIN = 'cloud.networking'
+
+ MAX_MTU = 9000
+ MIN_MTU = 1280
+ MIN_VLAN = 2
+ MAX_VLAN = 4094
+ MAX_PROVNET_LEN = 64
+ MAX_DNS = 2
+ PROVNET_NAME_MATCH = r'^[a-zA-Z][\da-zA-Z-_]+[\da-zA-Z]$'
+ NET_DOMAIN_MATCH = PROVNET_NAME_MATCH
+ MAX_NET_DOMAIN_LEN = MAX_PROVNET_LEN
+ DEFAULT_ROUTE_DEST = '0.0.0.0/0'
+
+ NETWORK_DOMAINS = 'network_domains'
+ INFRA_EXTERNAL = 'infra_external'
+ INFRA_INTERNAL = 'infra_internal'
+ INFRA_STORAGE_CLUSTER = 'infra_storage_cluster'
+ INFRA_NETWORKS = [INFRA_EXTERNAL,
+ INFRA_INTERNAL,
+ INFRA_STORAGE_CLUSTER]
+
+ DNS = 'dns'
+ MTU = 'mtu'
+ VLAN = 'vlan'
+ GATEWAY = 'gateway'
+ CIDR = 'cidr'
+ IP_START = 'ip_range_start'
+ IP_END = 'ip_range_end'
+ ROUTES = 'routes'
+ TO = 'to'
+ VIA = 'via'
+
+ PROVIDER_NETWORKS = 'provider_networks'
+ VLAN_RANGES = 'vlan_ranges'
+ SHARED = 'shared'
+
+ INPUT_ERR_CONTEXT = 'validate_set() input'
+ ERR_INPUT_NOT_DICT = 'Invalid %s, not a dictionary' % INPUT_ERR_CONTEXT
+
+ ERR_MISSING = 'Missing {1} configuration in {0}'
+ ERR_NOT_DICT = 'Invalid {1} value in {0}: Empty or not a dictionary'
+ ERR_NOT_LIST = 'Invalid {1} value in {0}: Empty, contains duplicates or not a list'
+ ERR_NOT_STR = 'Invalid {1} value in {0}: Not a string'
+ ERR_NOT_INT = 'Invalid {1} value in {0}: Not an integer'
+ ERR_NOT_BOOL = 'Invalid {1} value in {0}: Not a boolean value'
+
+ ERR_MTU = 'Invalid {} mtu: Not in range %i - %i' % (MIN_MTU, MAX_MTU)
+ ERR_VLAN = 'Invalid {} vlan: Not in range %i - %i' % (MIN_VLAN, MAX_VLAN)
+ ERR_DUPLICATE_INFRA_VLAN = 'Same VLAN ID {} used for multiple infra networks'
+ ERR_CIDRS_OVERLAPPING = 'Network CIDR values {} and {} are overlapping'
+ ERR_GW_NOT_SUPPORTED = 'Gateway address not supported for {}'
+ ERR_INVALID_ROUTES = 'Invalid static routes format for {0} {1}'
+ ERR_DEFAULT_ROUTE = 'Default route not supported for {0} {1}'
+
+ ERR_VLAN_RANGES_FORMAT = 'Invalid {} vlan_ranges format'
+ ERR_VLAN_RANGES_OVERLAPPING = 'Provider network vlan ranges {} and {} are overlapping'
+
+ ERR_INVALID_PROVNET_NAME = 'Invalid provider network name'
+ ERR_PROVNET_LEN = 'Too long provider network name, max %s chars' % MAX_PROVNET_LEN
+ ERR_SHARED_NETWORKS = 'Only one provider network can be configured as shared'
+
+ ERR_INVALID_NET_DOMAIN_NAME = 'Invalid network domain name'
+ ERR_NET_DOMAIN_LEN = 'Too long network domain name, max %s chars' % MAX_NET_DOMAIN_LEN
+
+ ERR_TOO_MANY_DNS = 'Too many DNS server IP addresses, max %i supported' % MAX_DNS
+
+    ERR_MTU_INSIDE_NETWORK_DOMAIN = 'Misplaced MTU inside {} network domain {}'
+
+ @staticmethod
+ def err_input_not_dict():
+ raise validation.ValidationError(NetworkingValidation.ERR_INPUT_NOT_DICT)
+
+ @staticmethod
+ def err_missing(context, key):
+ raise validation.ValidationError(NetworkingValidation.ERR_MISSING.format(context, key))
+
+ @staticmethod
+ def err_not_dict(context, key):
+ raise validation.ValidationError(NetworkingValidation.ERR_NOT_DICT.format(context, key))
+
+ @staticmethod
+ def err_not_list(context, key):
+ raise validation.ValidationError(NetworkingValidation.ERR_NOT_LIST.format(context, key))
+
+ @staticmethod
+ def err_not_str(context, key):
+ raise validation.ValidationError(NetworkingValidation.ERR_NOT_STR.format(context, key))
+
+ @staticmethod
+ def err_not_int(context, key):
+ raise validation.ValidationError(NetworkingValidation.ERR_NOT_INT.format(context, key))
+
+ @staticmethod
+ def err_not_bool(context, key):
+ raise validation.ValidationError(NetworkingValidation.ERR_NOT_BOOL.format(context, key))
+
+ @staticmethod
+ def err_mtu(context):
+ raise validation.ValidationError(NetworkingValidation.ERR_MTU.format(context))
+
+ @staticmethod
+ def err_vlan(context):
+ raise validation.ValidationError(NetworkingValidation.ERR_VLAN.format(context))
+
+ @staticmethod
+ def err_duplicate_vlan(vid):
+ raise validation.ValidationError(NetworkingValidation.ERR_DUPLICATE_INFRA_VLAN.format(vid))
+
+ @staticmethod
+ def err_vlan_ranges_format(provnet):
+ err = NetworkingValidation.ERR_VLAN_RANGES_FORMAT.format(provnet)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_vlan_ranges_overlapping(range1, range2):
+ ranges = sorted([range1, range2])
+ err = NetworkingValidation.ERR_VLAN_RANGES_OVERLAPPING.format(ranges[0], ranges[1])
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_invalid_provnet_name():
+ raise validation.ValidationError(NetworkingValidation.ERR_INVALID_PROVNET_NAME)
+
+ @staticmethod
+ def err_provnet_len():
+ raise validation.ValidationError(NetworkingValidation.ERR_PROVNET_LEN)
+
+ @staticmethod
+ def err_invalid_net_domain_name():
+ raise validation.ValidationError(NetworkingValidation.ERR_INVALID_NET_DOMAIN_NAME)
+
+ @staticmethod
+ def err_net_domain_len():
+ raise validation.ValidationError(NetworkingValidation.ERR_NET_DOMAIN_LEN)
+
+ @staticmethod
+ def err_cidrs_overlapping(cidr1, cidr2):
+ cidrs = sorted([cidr1, cidr2])
+ err = NetworkingValidation.ERR_CIDRS_OVERLAPPING.format(cidrs[0], cidrs[1])
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_gw_not_supported(network):
+ raise validation.ValidationError(NetworkingValidation.ERR_GW_NOT_SUPPORTED.format(network))
+
+ @staticmethod
+ def err_invalid_routes(network, domain):
+ err = NetworkingValidation.ERR_INVALID_ROUTES.format(network, domain)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_default_route(network, domain):
+ err = NetworkingValidation.ERR_DEFAULT_ROUTE.format(network, domain)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def err_too_many_dns():
+ raise validation.ValidationError(NetworkingValidation.ERR_TOO_MANY_DNS)
+
+ @staticmethod
+ def err_shared_networks():
+ raise validation.ValidationError(NetworkingValidation.ERR_SHARED_NETWORKS)
+
+ @staticmethod
+ def err_mtu_inside_network_domain(infra, domain):
+ err = NetworkingValidation.ERR_MTU_INSIDE_NETWORK_DOMAIN.format(infra, domain)
+ raise validation.ValidationError(err)
+
+ @staticmethod
+ def is_dict(conf):
+ return isinstance(conf, dict)
+
+ @staticmethod
+ def key_exists(conf_dict, key):
+ return key in conf_dict
+
+ @staticmethod
+ def val_is_str(conf_dict, key):
+ return isinstance(conf_dict[key], basestring)
+
+ @staticmethod
+ def val_is_list(conf_dict, key):
+ return isinstance(conf_dict[key], list)
+
+ @staticmethod
+ def val_is_non_empty_list(conf_dict, key):
+ return (isinstance(conf_dict[key], list) and
+ len(conf_dict[key]) > 0 and
+ len(conf_dict[key]) == len(set(conf_dict[key])))
+
+ @staticmethod
+ def val_is_non_empty_dict(conf_dict, key):
+ return isinstance(conf_dict[key], dict) and len(conf_dict[key]) > 0
+
+ @staticmethod
+ def val_is_int(conf_dict, key):
+ return isinstance(conf_dict[key], (int, long))
+
+ @staticmethod
+ def val_is_bool(conf_dict, key):
+ return isinstance(conf_dict[key], bool)
+
+ @staticmethod
+ def key_must_exist(conf_dict, entry, key):
+ if not NetworkingValidation.key_exists(conf_dict[entry], key):
+ NetworkingValidation.err_missing(entry, key)
+
+ @staticmethod
+ def must_be_str(conf_dict, entry, key):
+ NetworkingValidation.key_must_exist(conf_dict, entry, key)
+ if not NetworkingValidation.val_is_str(conf_dict[entry], key):
+ NetworkingValidation.err_not_str(entry, key)
+
+ @staticmethod
+ def must_be_list(conf_dict, entry, key):
+ NetworkingValidation.key_must_exist(conf_dict, entry, key)
+ if not NetworkingValidation.val_is_non_empty_list(conf_dict[entry], key):
+ NetworkingValidation.err_not_list(entry, key)
+
+ @staticmethod
+ def must_be_dict(conf_dict, entry, key):
+ NetworkingValidation.key_must_exist(conf_dict, entry, key)
+ if not NetworkingValidation.val_is_non_empty_dict(conf_dict[entry], key):
+ NetworkingValidation.err_not_dict(entry, key)
+
+ @staticmethod
+ def exists_as_dict(conf_dict, entry, key):
+ if not NetworkingValidation.key_exists(conf_dict[entry], key):
+ return False
+ if not NetworkingValidation.val_is_non_empty_dict(conf_dict[entry], key):
+ NetworkingValidation.err_not_dict(entry, key)
+ return True
+
+ @staticmethod
+ def exists_as_int(conf_dict, entry, key):
+ if not NetworkingValidation.key_exists(conf_dict[entry], key):
+ return False
+ if not NetworkingValidation.val_is_int(conf_dict[entry], key):
+ NetworkingValidation.err_not_int(entry, key)
+ return True
+
+ @staticmethod
+ def exists_as_bool(conf_dict, entry, key):
+ if not NetworkingValidation.key_exists(conf_dict[entry], key):
+ return False
+ if not NetworkingValidation.val_is_bool(conf_dict[entry], key):
+ NetworkingValidation.err_not_bool(entry, key)
+ return True
+
+ def __init__(self):
+ cmvalidator.CMValidator.__init__(self)
+ self.utils = validation.ValidationUtils()
+ self.conf = None
+ self.net_conf = None
+
+ def get_subscription_info(self):
+ return self.SUBSCRIPTION
+
+ def validate_set(self, props):
+ self.prepare_validate(props)
+ self.validate()
+
+ def prepare_validate(self, props):
+ if not self.is_dict(props):
+ self.err_input_not_dict()
+
+ if not self.key_exists(props, self.DOMAIN):
+ self.err_missing(self.INPUT_ERR_CONTEXT, self.DOMAIN)
+
+ self.net_conf = json.loads(props[self.DOMAIN])
+ self.conf = {self.DOMAIN: self.net_conf}
+
+ if not self.val_is_non_empty_dict(self.conf, self.DOMAIN):
+ self.err_not_dict(self.INPUT_ERR_CONTEXT, self.DOMAIN)
+
+ def validate(self):
+ self.validate_dns()
+ self.validate_default_mtu()
+ self.validate_infra_networks()
+ self.validate_provider_networks()
+ self.validate_no_overlapping_cidrs()
+
+ def validate_dns(self):
+ self.must_be_list(self.conf, self.DOMAIN, self.DNS)
+ for server in self.net_conf[self.DNS]:
+ self.utils.validate_ip_address(server)
+ if len(self.net_conf[self.DNS]) > self.MAX_DNS:
+ self.err_too_many_dns()
+
+ def validate_default_mtu(self):
+ self.validate_mtu(self.conf, self.DOMAIN)
+
+ def validate_infra_networks(self):
+ self.validate_infra_internal()
+ self.validate_infra_external()
+ self.validate_infra_storage_cluster()
+ self.validate_no_duplicate_infra_vlans()
+
+ def validate_infra_internal(self):
+ self.validate_network_exists(self.INFRA_INTERNAL)
+ self.validate_infra_network(self.INFRA_INTERNAL)
+ self.validate_no_gateway(self.INFRA_INTERNAL)
+
+ def validate_infra_external(self):
+ self.validate_network_exists(self.INFRA_EXTERNAL)
+ self.validate_infra_network(self.INFRA_EXTERNAL)
+ self.validate_gateway(self.INFRA_EXTERNAL)
+
+ def validate_infra_storage_cluster(self):
+ if self.network_exists(self.INFRA_STORAGE_CLUSTER):
+ self.validate_network_domains(self.INFRA_STORAGE_CLUSTER)
+ self.validate_infra_network(self.INFRA_STORAGE_CLUSTER)
+ self.validate_no_gateway(self.INFRA_STORAGE_CLUSTER)
+
+ def validate_infra_network(self, network, vlan_must_exist=False):
+ self.validate_mtu(self.net_conf, network)
+ self.validate_cidr(network)
+ self.validate_vlan(network, vlan_must_exist)
+ self.validate_ip_range(network)
+ self.validate_routes(network)
+ self.validate_no_mtu_inside_network_domain(network)
+
+ def validate_no_duplicate_infra_vlans(self):
+ domvids = {}
+ for network in self.INFRA_NETWORKS:
+ if self.key_exists(self.net_conf, network):
+ for domain, domain_conf in self.net_conf[network][self.NETWORK_DOMAINS].iteritems():
+ if self.key_exists(domain_conf, self.VLAN):
+ if domain not in domvids:
+ domvids[domain] = []
+ domvids[domain].append(domain_conf[self.VLAN])
+ for vids in domvids.itervalues():
+ prev_vid = 0
+ for vid in sorted(vids):
+ if vid == prev_vid:
+ self.err_duplicate_vlan(vid)
+ prev_vid = vid
+
+ def validate_no_overlapping_cidrs(self):
+ cidrs = []
+ for network in self.INFRA_NETWORKS:
+ if self.key_exists(self.net_conf, network):
+ for domain_conf in self.net_conf[network][self.NETWORK_DOMAINS].itervalues():
+ cidrs.append(IPNetwork(domain_conf[self.CIDR]))
+ for idx, cidr1 in enumerate(cidrs):
+ for cidr2 in cidrs[(idx+1):]:
+ if not (cidr1[0] > cidr2[-1] or cidr1[-1] < cidr2[0]):
+ self.err_cidrs_overlapping(str(cidr1), str(cidr2))
+
+ def validate_ip_range(self, network):
+ domains = self.net_conf[network][self.NETWORK_DOMAINS]
+ for domain in domains:
+ ip_start = self.get_ip_range_start(domains, domain)
+ ip_end = self.get_ip_range_end(domains, domain)
+ self.utils.validate_ip_range(ip_start, ip_end)
+
+ def get_ip_range_start(self, domains, domain):
+ if self.key_exists(domains[domain], self.IP_START):
+ self.validate_ip_range_limiter(domains, domain, self.IP_START)
+ return domains[domain][self.IP_START]
+ return str(IPNetwork(domains[domain][self.CIDR])[1])
+
+ def get_ip_range_end(self, domains, domain):
+ if self.key_exists(domains[domain], self.IP_END):
+ self.validate_ip_range_limiter(domains, domain, self.IP_END)
+ return domains[domain][self.IP_END]
+ return str(IPNetwork(domains[domain][self.CIDR])[-2])
+
+ def validate_ip_range_limiter(self, domains, domain, key):
+ self.must_be_str(domains, domain, key)
+ self.utils.validate_ip_address(domains[domain][key])
+ self.utils.validate_ip_in_subnet(domains[domain][key],
+ domains[domain][self.CIDR])
+
+ def validate_provider_networks(self):
+ if self.network_exists(self.PROVIDER_NETWORKS):
+ for netname in self.net_conf[self.PROVIDER_NETWORKS]:
+ self.validate_providernet(netname)
+ self.validate_shared_provider_network(self.net_conf[self.PROVIDER_NETWORKS])
+
+ def validate_providernet(self, netname):
+ self.validate_providernet_name(netname)
+ self.must_be_dict(self.net_conf, self.PROVIDER_NETWORKS, netname)
+ self.validate_mtu(self.net_conf[self.PROVIDER_NETWORKS], netname)
+ self.validate_vlan_ranges(self.net_conf[self.PROVIDER_NETWORKS], netname)
+
+ def validate_shared_provider_network(self, provider_conf):
+ shared_counter = 0
+ for netname in provider_conf:
+ if self.exists_as_bool(provider_conf, netname, self.SHARED):
+ if provider_conf[netname][self.SHARED] is True:
+ shared_counter += 1
+ if shared_counter > 1:
+ self.err_shared_networks()
+
+ def validate_mtu(self, conf, network):
+ if self.exists_as_int(conf, network, self.MTU):
+ mtu = conf[network][self.MTU]
+ if mtu < self.MIN_MTU or mtu > self.MAX_MTU:
+ self.err_mtu(network)
+
+ def validate_no_mtu_inside_network_domain(self, network):
+ domains = self.net_conf[network][self.NETWORK_DOMAINS]
+ for domain in domains:
+ if self.key_exists(domains[domain], self.MTU):
+ self.err_mtu_inside_network_domain(network, domain)
+
+ def validate_vlan(self, network, must_exist=False):
+ domains = self.net_conf[network][self.NETWORK_DOMAINS]
+ for domain in domains:
+ if must_exist and not self.key_exists(domains[domain], self.VLAN):
+ self.err_missing(network, self.VLAN)
+ if self.exists_as_int(domains, domain, self.VLAN):
+ self.validate_vlan_id(network, domains[domain][self.VLAN])
+
+ def validate_network_exists(self, network):
+ self.must_be_dict(self.conf, self.DOMAIN, network)
+ self.validate_network_domains(network)
+
+ def validate_network_domains(self, network):
+ self.must_be_dict(self.net_conf, network, self.NETWORK_DOMAINS)
+ for domain in self.net_conf[network][self.NETWORK_DOMAINS]:
+ self.validate_net_domain_name(domain)
+
+ def validate_net_domain_name(self, domain_name):
+ if (not isinstance(domain_name, basestring) or
+ not re.match(self.NET_DOMAIN_MATCH, domain_name)):
+ self.err_invalid_net_domain_name()
+ if len(domain_name) > self.MAX_NET_DOMAIN_LEN:
+ self.err_net_domain_len()
+
+ def network_exists(self, network):
+ return self.exists_as_dict(self.conf, self.DOMAIN, network)
+
+ def validate_cidr(self, network):
+ domains = self.net_conf[network][self.NETWORK_DOMAINS]
+ for domain in domains:
+ self.must_be_str(domains, domain, self.CIDR)
+ self.utils.validate_subnet_address(domains[domain][self.CIDR])
+
+ def validate_gateway(self, network):
+ domains = self.net_conf[network][self.NETWORK_DOMAINS]
+ for domain in domains:
+ self.must_be_str(domains, domain, self.GATEWAY)
+ self.utils.validate_ip_address(domains[domain][self.GATEWAY])
+ self.utils.validate_ip_in_subnet(domains[domain][self.GATEWAY],
+ domains[domain][self.CIDR])
+ self.utils.validate_ip_not_in_range(domains[domain][self.GATEWAY],
+ self.get_ip_range_start(domains, domain),
+ self.get_ip_range_end(domains, domain))
+
+ def validate_no_gateway(self, network):
+ for domain_conf in self.net_conf[network][self.NETWORK_DOMAINS].itervalues():
+ if self.key_exists(domain_conf, self.GATEWAY):
+ self.err_gw_not_supported(network)
+
+ def validate_routes(self, network):
+ domains = self.net_conf[network][self.NETWORK_DOMAINS]
+ for domain in domains:
+ if self.key_exists(domains[domain], self.ROUTES):
+ if (not self.val_is_list(domains[domain], self.ROUTES) or
+ not domains[domain][self.ROUTES]):
+ self.err_invalid_routes(network, domain)
+ for route in domains[domain][self.ROUTES]:
+ self.validate_route(network, domain, route)
+ self.utils.validate_ip_in_subnet(route[self.VIA],
+ domains[domain][self.CIDR])
+ self.utils.validate_ip_not_in_range(route[self.VIA],
+ self.get_ip_range_start(domains, domain),
+ self.get_ip_range_end(domains, domain))
+
+ def validate_route(self, network, domain, route):
+ if (not self.is_dict(route) or
+ self.TO not in route or
+ self.VIA not in route or
+ not self.val_is_str(route, self.TO) or
+ not self.val_is_str(route, self.VIA)):
+ self.err_invalid_routes(network, domain)
+ self.utils.validate_subnet_address(route[self.TO])
+ self.utils.validate_ip_address(route[self.VIA])
+ if route[self.TO] == self.DEFAULT_ROUTE_DEST:
+ self.err_default_route(network, domain)
+
+ def validate_providernet_name(self, netname):
+ if not isinstance(netname, basestring) or not re.match(self.PROVNET_NAME_MATCH, netname):
+ self.err_invalid_provnet_name()
+ if len(netname) > self.MAX_PROVNET_LEN:
+ self.err_provnet_len()
+
+ def validate_vlan_ranges(self, provnet_conf, provnet):
+ self.must_be_str(provnet_conf, provnet, self.VLAN_RANGES)
+ vlan_ranges = []
+ for vlan_range in provnet_conf[provnet][self.VLAN_RANGES].split(','):
+ vids = vlan_range.split(':')
+ if len(vids) != 2:
+ self.err_vlan_ranges_format(provnet)
+ try:
+ start = int(vids[0])
+ end = int(vids[1])
+ except ValueError:
+ self.err_vlan_ranges_format(provnet)
+ self.validate_vlan_id(provnet, start)
+ self.validate_vlan_id(provnet, end)
+ if end < start:
+ self.err_vlan_ranges_format(provnet)
+ vlan_ranges.append([start, end])
+ self.validate_vlan_ranges_not_overlapping(vlan_ranges)
+
+ def validate_vlan_ranges_not_overlapping(self, vlan_ranges):
+ for idx, range1 in enumerate(vlan_ranges):
+ for range2 in vlan_ranges[(idx+1):]:
+ if not (range1[0] > range2[1] or range1[1] < range2[0]):
+ self.err_vlan_ranges_overlapping(range1, range2)
+
+ def validate_vlan_id(self, network, vid):
+ if vid < self.MIN_VLAN or vid > self.MAX_VLAN:
+ self.err_vlan(network)
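
A `vlan_ranges` value is a comma-separated string of `start:end` pairs (e.g. `'100:200,300:400'`). A minimal standalone parser applying the same rules as `validate_vlan_ranges` above (a sketch for illustration, not code from the patch):

```python
MIN_VLAN, MAX_VLAN = 2, 4094

def parse_vlan_ranges(value):
    """Parse 'start:end,start:end' into [[start, end], ...].

    Raises ValueError for the conditions validate_vlan_ranges rejects:
    wrong field count, non-integer fields, end < start, or a VLAN id
    outside MIN_VLAN..MAX_VLAN.
    """
    ranges = []
    for chunk in value.split(','):
        fields = chunk.split(':')
        if len(fields) != 2:
            raise ValueError('expected start:end, got %r' % chunk)
        start, end = int(fields[0]), int(fields[1])
        if end < start:
            raise ValueError('end < start in %r' % chunk)
        for vid in (start, end):
            if vid < MIN_VLAN or vid > MAX_VLAN:
                raise ValueError('VLAN id %d out of range' % vid)
        ranges.append([start, end])
    return ranges

print(parse_vlan_ranges('100:200,300:400'))  # [[100, 200], [300, 400]]
```

The class above additionally rejects overlapping pairs after parsing, which the disjoint-range test handles.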
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=invalid-name, missing-docstring, too-few-public-methods,
+# pylint: disable=logging-not-lazy, too-many-locals
+
+import logging
+import json
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+
+
+class OpenstackValidationError(validation.ValidationError):
+ pass
+
+
+class OpenstackValidation(cmvalidator.CMValidator):
+ domain = "cloud.openstack"
+
+ def get_subscription_info(self): # pylint: disable=no-self-use
+ logging.debug('get_subscription info called')
+ return r'^cloud\.openstack$'
+
+ def validate_set(self, dict_key_value):
+ logging.debug('validate_set called with %s' % str(dict_key_value))
+
+ client = self.get_plugin_client()
+
+        for key, value in dict_key_value.iteritems():
+            value_dict = json.loads(value)
+
+            if key == self.domain:
+                if not isinstance(value_dict, dict):
+                    raise validation.ValidationError('%s value is not a dict' % self.domain)
+                openstack_config = value_dict
+            else:
+                raise validation.ValidationError('Unexpected configuration %s' % key)
+        self.validate_openstack(openstack_config)
+
+ def validate_delete(self, properties):
+ logging.debug('validate_delete called with %s' % str(properties))
+ if self.domain in properties:
+ raise validation.ValidationError('%s cannot be deleted' % self.domain)
+ else:
+ raise validation.ValidationError('References in %s, cannot be deleted' % self.domain)
+
+ def validate_openstack(self, openstack_config):
+ if not openstack_config:
+ raise validation.ValidationError('No value for %s' % self.domain)
+
+ self.validate_admin_password(openstack_config)
+
+ @staticmethod
+ def validate_admin_password(openstack_config):
+ password = 'admin_password'
+ passwd = openstack_config.get(password)
+ if not passwd:
+ raise validation.ValidationError('Missing %s' % password)
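
`validate_set` receives the configuration as a dict keyed by domain name, each value a JSON-encoded string. A minimal illustration of the admin-password check on that input shape (a standalone sketch mirroring the class above; the helper name is hypothetical):

```python
import json

def check_admin_password(props, domain='cloud.openstack'):
    # props maps domain name -> JSON-encoded configuration string
    config = json.loads(props[domain])
    if not isinstance(config, dict) or not config:
        raise ValueError('No value for %s' % domain)
    if not config.get('admin_password'):
        raise ValueError('Missing admin_password')
    return True

print(check_admin_password({'cloud.openstack': '{"admin_password": "s3cret"}'}))  # True
```

The real plugin reports these failures through `validation.ValidationError` instead of `ValueError`.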
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+
+from cmdatahandlers.api import validation
+from cmframework.apis import cmvalidator
+
+
+class PerformanceProfilesValidation(cmvalidator.CMValidator):
+ SUBSCRIPTION = r'^cloud\.performance_profiles$'
+
+ HUGEPAGESZ = 'hugepagesz'
+ DEFAULT_HUGEPAGESZ = 'default_hugepagesz'
+ HUGEPAGES = 'hugepages'
+ PLATFORM_CPUS = 'platform_cpus'
+ DPDK_CPUS = 'ovs_dpdk_cpus'
+ CAAS_CPU_POOLS = 'caas_cpu_pools'
+ CAAS_CPU_POOL_ATTRIBUTES = ['exclusive_pool_percentage', 'shared_pool_percentage']
+ CAAS_CPU_POOL_SHARE = 'caas_cpu_pool_share'
+
+ NUMA0 = 'numa0'
+ NUMA1 = 'numa1'
+ NUMA_VALUES = [NUMA0, NUMA1]
+
+ HUGEPAGESZ_VALUES = ['2M', '1G']
+
+ INFO_HUGEPAGESZ = 'Valid values: %s' % HUGEPAGESZ_VALUES
+ INFO_HUGEPAGES = 'Must be positive integer'
+ INFO_CPUS = 'Must be zero or positive integer'
+ INFO_PLATFORM_CPUS = 'Platform requires at least one core from NUMA0'
+
+ ERR_MISSING_DATA = 'Performance profiles validation input does not contain {} data'
+ ERR_INVALID_VALUE = 'Invalid %s value in performance profile {}: %s'
+
+ ERR_HUGEPAGESZ = ERR_INVALID_VALUE % (HUGEPAGESZ, INFO_HUGEPAGESZ)
+ ERR_DEFAULT_HUGEPAGESZ = ERR_INVALID_VALUE % (DEFAULT_HUGEPAGESZ, INFO_HUGEPAGESZ)
+ ERR_HUGEPAGES = ERR_INVALID_VALUE % (HUGEPAGES, INFO_HUGEPAGES)
+
+ ERR_NUMA = "Invalid NUMA value in performance profile {}"
+ ERR_CPUS = ERR_INVALID_VALUE % ("platform/ovs_dpdk cpu", INFO_CPUS)
+ ERR_PLATFORM_CPUS = ERR_INVALID_VALUE % ("platform_cpus", INFO_PLATFORM_CPUS)
+ ERR_CPU_POOL_RATIO = 'caas_cpu_pools total cpu percentage exceeded'
+ ERR_CAAS_CPU_POOL_TYPE = 'caas_cpu_pools percentage values should be integers'
+ ERR_CAAS_DEFAULT_POOL = 'caas_cpu_pool_share value should be an integer between 0 and 100'
+
+ @staticmethod
+ def raise_error(context, err_type):
+ raise validation.ValidationError(err_type.format(context))
+
+ def get_subscription_info(self):
+ return self.SUBSCRIPTION
+
+ def validate_set(self, props):
+ conf = self.get_conf(props)
+ if isinstance(conf, dict):
+ self.validate(conf)
+
+ def get_conf(self, props):
+ domain = 'cloud.performance_profiles'
+ if not isinstance(props, dict) or domain not in props:
+ self.raise_error(domain, self.ERR_MISSING_DATA)
+ return json.loads(props[domain])
+
+ def validate(self, conf):
+ for profile, entries in conf.iteritems():
+ if isinstance(entries, dict):
+ self.validate_profile(profile, entries)
+
+ def validate_profile(self, profile, entries):
+ for key, value in entries.iteritems():
+ self.validate_value(profile, key, value)
+
+ def validate_value(self, profile, key, value):
+ if key == self.HUGEPAGESZ:
+ self.validate_hugepagesz(profile, value)
+ elif key == self.DEFAULT_HUGEPAGESZ:
+ self.validate_default_hugepagesz(profile, value)
+ elif key == self.HUGEPAGES:
+ self.validate_hugepages(profile, value)
+ elif key == self.PLATFORM_CPUS:
+ self.validate_platform_cpus(profile, value)
+ elif key == self.DPDK_CPUS:
+ self.validate_ovs_dpdk_cpus(profile, value)
+ elif key == self.CAAS_CPU_POOLS:
+ self.validate_caas_cpu_pools(profile, value)
+ elif key == self.CAAS_CPU_POOL_SHARE:
+ self.validate_caas_cpu_pool_share(value)
+
+ def validate_hugepagesz(self, profile, value):
+ if value not in self.HUGEPAGESZ_VALUES:
+ self.raise_error(profile, self.ERR_HUGEPAGESZ)
+
+ def validate_default_hugepagesz(self, profile, value):
+ if value not in self.HUGEPAGESZ_VALUES:
+ self.raise_error(profile, self.ERR_DEFAULT_HUGEPAGESZ)
+
+ def validate_hugepages(self, profile, value):
+ if not (isinstance(value, (int, long)) and value > 0):
+ self.raise_error(profile, self.ERR_HUGEPAGES)
+
+ def validate_numa_names(self, profile, cpus):
+ if isinstance(cpus, dict):
+ for key in cpus.keys():
+ if key not in self.NUMA_VALUES:
+ self.raise_error(profile, self.ERR_NUMA)
+
+ def validate_cpu_values(self, profile, cpus):
+ if isinstance(cpus, dict):
+ for value in cpus.values():
+ if not (isinstance(value, (int, long)) and value >= 0):
+ self.raise_error(profile, self.ERR_CPUS)
+
+ def validate_platform_cpus(self, profile, cpus):
+ self.validate_numa_names(profile, cpus)
+ # NUMA1 platform cpus require a non-zero NUMA0 allocation
+ if cpus.get(self.NUMA1) is not None and not cpus.get(self.NUMA0):
+ self.raise_error(profile, self.ERR_PLATFORM_CPUS)
+ self.validate_cpu_values(profile, cpus)
+
+ def validate_ovs_dpdk_cpus(self, profile, cpus):
+ self.validate_numa_names(profile, cpus)
+ self.validate_cpu_values(profile, cpus)
+
+ def validate_caas_cpu_pools(self, profile, pools):
+ sum_ratio = 0
+ self.allowed_attributes(profile, pools, self.CAAS_CPU_POOL_ATTRIBUTES)
+ self.is_attribute_present(profile, pools, self.CAAS_CPU_POOL_ATTRIBUTES)
+ for value in pools.values():
+ if not isinstance(value, int) or (value > 100) or (value < 0):
+ self.raise_error(profile, self.ERR_CAAS_CPU_POOL_TYPE)
+ sum_ratio += value
+ if sum_ratio > 100:
+ self.raise_error(profile, self.ERR_CPU_POOL_RATIO)
+
+ def allowed_attributes(self, profile, entries, allowed_attributes):
+ for key in entries.keys():
+ if key not in allowed_attributes:
+ self.raise_error(profile, 'Attribute %s is not allowed in profile %s, '
+ 'allowed attributes: "%s"' %
+ (key, profile, ",".join(allowed_attributes)))
+
+ def is_attribute_present(self, profile, entries, attributes):
+ is_present = False
+ for key in entries.keys():
+ if key in attributes:
+ is_present = True
+ if not is_present:
+ self.raise_error(profile, 'Profile: %s should contain at least one of the following '
+ 'attributes: "%s"' % (profile, ",".join(attributes)))
+
+ def validate_caas_cpu_pool_share(self, value):
+ if not isinstance(value, int) or (value > 100) or (value < 0):
+ self.raise_error(value, self.ERR_CAAS_DEFAULT_POOL)
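Note the two-stage message construction above: `%` interpolation bakes the attribute name and hint into the template when the class body is evaluated, and `str.format` adds the profile name when `raise_error` fires. A standalone sketch of that pattern together with the pool-percentage check (hypothetical names, Python 3 for illustration):

```python
# Stage 1: bake attribute name and hint into the template at definition time.
ERR_INVALID_VALUE = 'Invalid %s value in performance profile {}: %s'
INFO_HUGEPAGES = 'Must be positive integer'
ERR_HUGEPAGES = ERR_INVALID_VALUE % ('hugepages', INFO_HUGEPAGES)
# Stage 2 happens at raise time: ERR_HUGEPAGES.format(profile_name)


def check_caas_cpu_pools(pools):
    """Standalone equivalent of the percentage checks in validate_caas_cpu_pools."""
    total = 0
    for value in pools.values():
        if not isinstance(value, int) or not 0 <= value <= 100:
            raise ValueError('caas_cpu_pools percentage values should be integers')
        total += value
    if total > 100:
        raise ValueError('caas_cpu_pools total cpu percentage exceeded')
```

A pool split of 60/40 sums to exactly 100 and passes; 70/40 exceeds the budget and raises.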
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+
+
+class SectionValidation(cmvalidator.CMValidator):
+
+ Required = ['cloud.name', 'cloud.version', 'cloud.time', 'cloud.users', 'cloud.networking',
+ 'cloud.storage', 'cloud.hosts', 'cloud.network_profiles',
+ 'cloud.storage_profiles', 'cloud.host_os']
+
+ filterstr = r'^cloud\.'
+
+ def get_subscription_info(self):
+ logging.debug('get_subscription_info called')
+ return self.filterstr
+
+ def validate_set(self, dict_key_value):
+ logging.debug('validate_set called with %s', str(dict_key_value))
+
+ key_list = dict_key_value.keys()
+ self.validate_sections(key_list)
+
+ def validate_delete(self, prop):
+ # Domain specific validators should take care of validating deletion
+ pass
+
+ def validate_sections(self, sections):
+ names = []
+ missing = ''
+ client = self.get_plugin_client()
+
+ for name in self.Required:
+ if name not in sections:
+ names.append(name)
+ properties = client.get_properties(self.filterstr)
+ keys = properties.keys()
+ for name in names:
+ if name not in keys:
+ missing += ', ' + name if missing else name
+ if missing:
+ raise validation.ValidationError('Mandatory sections missing from configuration: %s'
+ % missing)
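The two-pass lookup in `validate_sections` (drop required names present in the change set, then drop those already stored; whatever remains is missing) reduces to two list comprehensions. A hypothetical standalone helper, Python 3, with an abbreviated required list:

```python
REQUIRED = ['cloud.name', 'cloud.version', 'cloud.time', 'cloud.users']  # abbreviated


def missing_sections(changed_keys, stored_keys, required=REQUIRED):
    """Required section names found neither in this change nor in storage."""
    candidates = [name for name in required if name not in changed_keys]
    return [name for name in candidates if name not in stored_keys]
```

For example, `missing_sections(['cloud.name'], ['cloud.time'])` leaves `['cloud.version', 'cloud.users']` to report.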
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# pylint: disable=line-too-long
+
+import logging
+import json
+import pytz
+import yaml
+import requests
+from django.core.validators import URLValidator
+from django.core.exceptions import ValidationError
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+
+
+class TimeValidation(cmvalidator.CMValidator):
+ domain = 'cloud.time'
+ supported_authentication_types = ['none', 'crypto', 'symmetric']
+
+ def get_subscription_info(self):
+ logging.debug('get_subscription_info called')
+ return r'^cloud\.time$'
+
+ def validate_set(self, dict_key_value):
+ ntp_attr = 'ntp_servers'
+ logging.debug('validate_set called with %s' % str(dict_key_value))
+
+ for key, value in dict_key_value.iteritems():
+ value_dict = json.loads(value)
+ if not value_dict:
+ raise validation.ValidationError('No value for %s' % key)
+ if not isinstance(value_dict, dict):
+ raise validation.ValidationError('%s value is not a dict' % self.domain)
+
+ if key == self.domain:
+ ntp_list = value_dict.get(ntp_attr)
+
+ self.validate_ntp(ntp_list)
+
+ attr = 'zone'
+ zone = value_dict.get(attr)
+ if zone:
+ self.validate_timezone(zone)
+ else:
+ raise validation.ValidationError('Missing timezone %s' % attr)
+
+ auth_type = value_dict.get('auth_type')
+ if auth_type:
+ self.validate_authtype(auth_type)
+ else:
+ raise validation.ValidationError('Missing authentication type for NTP')
+
+ filepath = value_dict.get('serverkeys_path')
+ # get() returns None when the key is absent, so test falsiness, not ''
+ if auth_type != 'none' and not filepath:
+ raise validation.ValidationError('The serverkeys_path is missing')
+ elif auth_type == 'none':
+ pass
+ else:
+ self.validate_filepath(filepath)
+ self.validate_yaml_format(filepath, auth_type)
+ else:
+ raise validation.ValidationError('Unexpected configuration %s' % key)
+
+ def validate_delete(self, dict_key_value):
+ logging.debug('validate_delete called with %s' % str(dict_key_value))
+ raise validation.ValidationError('%s cannot be deleted' % self.domain)
+
+ def validate_ntp(self, ntp_list):
+ if not ntp_list:
+ raise validation.ValidationError('Missing NTP configuration')
+
+ if not isinstance(ntp_list, list):
+ raise validation.ValidationError('NTP servers value must be a list')
+ utils = validation.ValidationUtils()
+ for ntp in ntp_list:
+ utils.validate_ip_address(ntp)
+
+ def validate_timezone(self, value):
+ try:
+ pytz.timezone(value)
+ except pytz.UnknownTimeZoneError as exc:
+ raise validation.ValidationError("Invalid time zone: {0}".format(exc))
+
+ def validate_authtype(self, auth_type):
+ if auth_type not in TimeValidation.supported_authentication_types:
+ raise validation.ValidationError(
+ 'The provided authentication method for NTP is not supported')
+
+ def validate_filepath(self, filepath):
+ try:
+ val = URLValidator()
+ val(filepath)
+ except ValidationError:
+ raise validation.ValidationError('The url: "%s" is not a valid url!' % filepath)
+
+ def validate_yaml_format(self, url, auth_type):
+ if url.startswith("file://"):
+ # str.lstrip strips a set of characters, not a prefix; slice the scheme off
+ path = url[len("file://"):]
+ try:
+ with open(path) as f:
+ f_content = f.read()
+ except IOError:
+ raise validation.ValidationError('The file: "%s" is not present on the system!'
+ % url)
+ else:
+ try:
+ r = requests.get(url)
+ if r.status_code != 200:
+ raise requests.exceptions.ConnectionError()
+ f_content = r.content
+ except requests.exceptions.ConnectionError:
+ raise validation.ValidationError('The url: "%s" is not reachable!' % url)
+ try:
+ yaml_content = yaml.safe_load(f_content)
+ except yaml.YAMLError:
+ raise validation.ValidationError('The validation of the yamlfile failed!')
+ for item in yaml_content:
+ srv = item.keys()[0]
+ if auth_type == 'symmetric':
+ if not isinstance(item[srv], str):
+ raise validation.ValidationError('The yamlfile contains invalid data! '
+ '(The authentication method looks like it\'s symmetric.)')
+ elif auth_type == 'crypto':
+ if not isinstance(item[srv], dict):
+ raise validation.ValidationError('The yamlfile contains invalid data!')
+ if item[srv].get('type') not in ('iff', 'gq', 'mv') \
+ and not isinstance(item[srv].get('keys'), list):
+ raise validation.ValidationError('The yamlfile contains invalid data! '
+ '(The authentication method looks like it\'s crypto.)')
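The per-server entry rules (symmetric needs a key string; crypto needs a dict whose `type` is a known scheme or whose `keys` is a list) can be exercised without PyYAML on already-parsed data. A standalone sketch with hypothetical names, in Python 3:

```python
NTP_KEY_TYPES = ('iff', 'gq', 'mv')


def check_serverkey_entries(entries, auth_type):
    """Validate parsed serverkeys data: a list of one-key dicts."""
    for item in entries:
        srv, value = list(item.items())[0]
        if auth_type == 'symmetric':
            if not isinstance(value, str):
                raise ValueError('invalid symmetric entry for %s' % srv)
        elif auth_type == 'crypto':
            if not isinstance(value, dict):
                raise ValueError('invalid crypto entry for %s' % srv)
            if value.get('type') not in NTP_KEY_TYPES \
                    and not isinstance(value.get('keys'), list):
                raise ValueError('invalid crypto entry for %s' % srv)
```

A symmetric entry like `{'ntp1.example.com': 'AAAAB3'}` passes, as does a crypto entry carrying `{'type': 'iff', 'keys': [...]}`.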
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import json
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+
+
+class UsersValidation(cmvalidator.CMValidator):
+ domain = 'cloud.users'
+
+ def get_subscription_info(self):
+ logging.debug('get_subscription_info called')
+ return r'^cloud\.users$'
+
+ def validate_set(self, dict_key_value):
+ user_attr = 'admin_user_name'
+ passwd_attr = 'admin_user_password'
+ init_user_attr = 'initial_user_name'
+ init_passwd_attr = 'initial_user_password'
+
+ logging.debug('validate_set called with %s' % str(dict_key_value))
+
+ value_str = dict_key_value.get(self.domain)
+ value_dict = {} if not value_str else json.loads(value_str)
+ if not value_dict:
+ raise validation.ValidationError('No value for %s' % self.domain)
+ if not isinstance(value_dict, dict):
+ raise validation.ValidationError('%s value is not a dict' % self.domain)
+
+ utils = validation.ValidationUtils()
+ user = value_dict.get(user_attr)
+ if user:
+ utils.validate_username(user)
+ else:
+ raise validation.ValidationError('Missing %s' % user_attr)
+ um_user = value_dict.get(init_user_attr)
+ if um_user:
+ utils.validate_username(um_user)
+ else:
+ raise validation.ValidationError('Missing %s' % init_user_attr)
+
+ if not value_dict.get(passwd_attr):
+ raise validation.ValidationError('Missing %s' % passwd_attr)
+ if not value_dict.get(init_passwd_attr):
+ raise validation.ValidationError('Missing %s' % init_passwd_attr)
+
+ def validate_delete(self, dict_key_value):
+ logging.debug('validate_delete called with %s' % str(dict_key_value))
+ raise validation.ValidationError('%s cannot be deleted' % self.domain)
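The four sequential field checks above can also be written table-driven, which keeps the error messages uniform as fields are added; a hypothetical standalone sketch in Python 3 (username-format validation omitted, since `ValidationUtils` lives in `cmdatahandlers`):

```python
REQUIRED_USER_FIELDS = ('admin_user_name', 'admin_user_password',
                        'initial_user_name', 'initial_user_password')


def check_users_config(config):
    """Require every user field to be present and non-empty."""
    for field in REQUIRED_USER_FIELDS:
        if not config.get(field):
            raise ValueError('Missing %s' % field)
```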
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import json
+
+from cmframework.apis import cmvalidator
+from cmdatahandlers.api import validation
+
+
+class VersionValidation(cmvalidator.CMValidator):
+ domain = 'cloud.version'
+ version = [2, 0, 0]
+
+ # Should be same as 'version' in release build
+ devel_version = [2, 0, 0]
+
+ # Example:
+ # {1: 'This is the first change requiring new template version (1.1.0)',
+ # 2: 'This is the second change requiring new template version (1.2.0)',
+ # 3: 'This is the third change requiring new template version (1.3.0)'}
+ change_log = {}
+
+ def get_subscription_info(self):
+ logging.debug('get_subscription_info called')
+ return r'^cloud\.version$'
+
+ def validate_set(self, dict_key_value):
+ logging.debug('validate_set called with %s' % str(dict_key_value))
+
+ for key, value in dict_key_value.iteritems():
+ version = json.loads(value)
+ if key == self.domain:
+ self.validate_version(version)
+ else:
+ raise validation.ValidationError('Unexpected configuration %s' % key)
+
+ def validate_delete(self, prop):
+ logging.debug('validate_delete called with %s' % str(prop))
+ raise validation.ValidationError('%s cannot be deleted' % self.domain)
+
+ def validate_version(self, version_str):
+ if not version_str:
+ raise validation.ValidationError('Missing configuration template version')
+ if not isinstance(version_str, basestring):
+ raise validation.ValidationError('Version configuration should be a string')
+ data = version_str.split('.')
+ if len(data) != 3:
+ raise validation.ValidationError('Invalid version data syntax in configuration')
+ version = []
+ for i in data:
+ if not i.isdigit():
+ raise validation.ValidationError('Version data does not consist of numbers')
+ version.append(int(i))
+ if self.version != self.devel_version and version == self.devel_version:
+ msg = 'Accepting development version %s' % version_str
+ logging.warning(msg)
+ elif version[0] != self.version[0]:
+ reason = 'Major configuration template version mismatch (%s does not match with %s)' \
+ % (version_str, str(self.version))
+ raise validation.ValidationError(reason)
+ elif version[1] != self.version[1]:
+ reason = 'Configuration template version mismatch (%s does not match with %s)' \
+ % (version_str, str(self.version))
+ self.log_changes(version[1])
+ raise validation.ValidationError(reason)
+ elif version[2] != self.version[2]:
+ msg = 'Minor configuration template version mismatch, check the latest template changes'
+ logging.warning(msg)
+
+ def log_changes(self, version):
+ for key, log in self.change_log.iteritems():
+ if key > version:
+ logging.warning('Changes in template version %s.%s.0: %s' % (str(self.version[0]),
+ str(key), log))
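The comparison policy above is: a major or middle version mismatch rejects the configuration, while a patch-level mismatch only warns. A standalone sketch of that policy (hypothetical names, Python 3, without the development-version escape hatch or change-log reporting):

```python
def compare_template_version(version_str, expected):
    """Return 'ok' or 'warn', or raise ValueError, mirroring validate_version.

    `expected` is a [major, middle, minor] list such as [2, 0, 0].
    """
    parts = version_str.split('.')
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError('Invalid version data syntax in configuration')
    version = [int(p) for p in parts]
    if version[:2] != expected[:2]:
        raise ValueError('Configuration template version mismatch (%s vs %s)'
                         % (version_str, expected))
    return 'warn' if version[2] != expected[2] else 'ok'
```

So `'2.0.1'` against `[2, 0, 0]` is accepted with a warning, while `'1.0.0'` is rejected outright.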
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmvalidator
+
+class managelinuxuservalidator(cmvalidator.CMValidator):
+
+ def __init__(self):
+ super(managelinuxuservalidator, self).__init__()
+
+ def get_subscription_info(self):
+ return r'^cloud\.linuxuser$'
+
+ def validate_set(self, props):
+ pass
+
+ def validate_delete(self, props):
+ pass
+
+ def get_plugin_client(self):
+ return self.plugin_client
\ No newline at end of file
--- /dev/null
+#!/usr/bin/python
+# Copyright 2019 Nokia
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from cmframework.apis import cmvalidator
+
+class manageuservalidator(cmvalidator.CMValidator):
+
+ def __init__(self):
+ super(manageuservalidator, self).__init__()
+
+ def get_subscription_info(self):
+ return r'^cloud\.chroot$'
+
+ def validate_set(self, props):
+ pass
+
+ def validate_delete(self, props):
+ pass
+
+ def get_plugin_client(self):
+ return self.plugin_client
\ No newline at end of file