From: James E. Blair
Date: Fri, 13 Nov 2020 23:06:18 +0000 (-0800)
Subject: Run podman cleanup at start of baremetal CI job
X-Git-Url: https://gerrit.akraino.org/r/gitweb?a=commitdiff_plain;h=0fbcc122c9f56b87b43388ebb23cf30a338b3087;p=kni%2Finstaller.git

Run podman cleanup at start of baremetal CI job

The filesystem mounts for the containers running dnsmasq, etc., in
support of the virtual baremetal system can end up in the mounts of
the virtual machines, which means cleaning them all up needs to be
sequenced correctly: first the VMs should be removed, then the
containers.

There's an edge case where, if a container is removed while the VM is
still running, podman will fail to remove the underlying storage, but
the container will no longer appear in the list of containers. That
will prevent a replacement with the same name from being created. To
handle this case as well, we run 'podman rm -f --storage' after the
normal removal. This should allow us to clean up the host correctly
even if an attempt to clean up in the wrong order was made.

Signed-off-by: James E. Blair
Change-Id: Ie089ebab65f8a70732def5c882abf465561bebec
---

diff --git a/ci/kni_deploy_baremetal.sh b/ci/kni_deploy_baremetal.sh
index 8002757..ab78741 100755
--- a/ci/kni_deploy_baremetal.sh
+++ b/ci/kni_deploy_baremetal.sh
@@ -24,10 +24,38 @@ LANG="en_US.UTF-8"
 LC_ALL="en_US.UTF-8"
 PRESERVE_CLUSTER="${PRESERVE_CLUSTER:-true}"
 
+# Stop the VMs before the containers because the container filesystems
+# appear in the VM filesystem mounts.
 wget https://raw.githubusercontent.com/openshift/installer/master/scripts/maintenance/virsh-cleanup.sh
 chmod a+x ./virsh-cleanup.sh
 sudo -E bash -c "yes Y | ./virsh-cleanup.sh"
 
+# Stop the containers so they can be removed and their names re-used later.
+podman stop kni-dnsmasq-prov || /bin/true
+podman stop kni-dnsmasq-bm || /bin/true
+podman stop kni-haproxy || /bin/true
+podman stop kni-coredns || /bin/true
+podman stop kni-matchbox || /bin/true
+
+# Remove the stopped containers.
+podman rm kni-dnsmasq-prov || /bin/true
+podman rm kni-dnsmasq-bm || /bin/true
+podman rm kni-haproxy || /bin/true
+podman rm kni-coredns || /bin/true
+podman rm kni-matchbox || /bin/true
+
+# If a container was removed while a VM was still running, it will no
+# longer appear in CLI output as a container that podman knows about,
+# but the storage will remain and an entry will still be present in
+# containers.json, which will prevent creating a container with the
+# same name. This cleans up that situation, but is otherwise not
+# normally necessary.
+podman rm -f --storage kni-dnsmasq-prov || /bin/true
+podman rm -f --storage kni-dnsmasq-bm || /bin/true
+podman rm -f --storage kni-haproxy || /bin/true
+podman rm -f --storage kni-coredns || /bin/true
+podman rm -f --storage kni-matchbox || /bin/true
+
 rm -rf $HOME/.kni/$SITE_NAME || true
 pushd $HOME/go/src/gerrit.akraino.org/kni/installer
 ./bin/knictl fetch_requirements file://${WORKSPACE}/kni-blueprint-pae/sites/$SITE_NAME
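
For reference, the repeated per-container commands in the diff are
equivalent to looping over the container names, one phase at a time. A
minimal sketch of that consolidation (not part of the committed change;
it uses only the names and flags that appear above):

    # Sketch: the same stop/rm/rm --storage sequence as the committed
    # script, phase by phase, over the containers it names.
    CONTAINERS="kni-dnsmasq-prov kni-dnsmasq-bm kni-haproxy kni-coredns kni-matchbox"

    # Stop everything first (the VMs have already been cleaned up).
    for name in $CONTAINERS; do podman stop "$name" || true; done

    # Remove the stopped containers so their names can be re-used.
    for name in $CONTAINERS; do podman rm "$name" || true; done

    # Clear any leftover storage entries (the edge case described in
    # the commit message).
    for name in $CONTAINERS; do podman rm -f --storage "$name" || true; done

Keeping the phases separate preserves the committed script's ordering:
all containers are stopped before any are removed.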
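
To see the stale containers.json entries the commit message describes,
one possible diagnostic is to read the storage metadata directly. A
hedged sketch, assuming rootful podman with the default overlay storage
driver (the file lives elsewhere for rootless podman or other drivers):

    # List all names recorded in container storage, including entries
    # that 'podman ps -a' no longer reports.
    sudo jq -r '.[].names[]' /var/lib/containers/storage/overlay-containers/containers.json

Any kni-* name that shows up here but not in 'podman ps -a' output is
the situation that 'podman rm -f --storage' is meant to clean up.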