## Intel Rook infrastructure for Ceph cluster deployment

By default, OSDs are created on the folder /var/lib/rook/storage-dir, and the
Ceph cluster information is stored under /var/lib/rook.

# Preconditions

1. Compute node disk space: at least 20GB of free disk space.

2. Kubernetes version: Kubernetes >= 1.13 is required by Ceph CSI v1.0.
   The following is the upgrade patch for the kud code in https://github.com/onap/multicloud-k8s:

```diff
$ git diff
diff --git a/kud/deployment_infra/playbooks/kud-vars.yml b/kud/deployment_infra/playbooks/kud-vars.yml
index 9b36547..5c29fa4 100644
--- a/kud/deployment_infra/playbooks/kud-vars.yml
+++ b/kud/deployment_infra/playbooks/kud-vars.yml
@@ -58,7 +58,7 @@ ovn4nfv_version: adc7b2d430c44aa4137ac7f9420e14cfce3fa354
 ovn4nfv_url: "https://git.opnfv.org/ovn4nfv-k8s-plugin/"

 go_version: '1.12.5'
-kubespray_version: 2.8.2
-helm_client_version: 2.9.1
+kubespray_version: 2.9.0
+helm_client_version: 2.13.1
 # kud playbooks not compatible with 2.8.0 - see MULTICLOUD-634
 ansible_version: 2.7.10
diff --git a/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml b/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
index 9966ba8..cacb4b3 100644
--- a/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
+++ b/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
@@ -48,7 +48,7 @@ local_volumes_enabled: true
 local_volume_provisioner_enabled: true

 ## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.12.3
+kube_version: v1.13.5

 # Helm deployment
 helm_enabled: true
```

After the upgrade, the Kubernetes version is as follows:
```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
```

If the kubectl server version is not upgraded, you can upgrade it manually with:
```console
$ kubeadm upgrade apply v1.13.5
```

# Deployment

To bring up the Rook operator (v1.0) and the Ceph cluster (Mimic 13.2.2), run:

```console
cd yaml
./install.sh
```
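
After the script finishes, you can verify the deployment by checking that the operator, monitor, and OSD pods are running. This assumes the default `rook-ceph` namespace used elsewhere in this guide:

```console
kubectl get pods -n rook-ceph
```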

# Test

To test the Ceph sample workload, follow the steps below:

1. Bring up the Rook operator and Ceph cluster.
2. Create the storage class:

```console
kubectl create -f ./test/rbd/storageclass.yaml
```
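   For reference, a minimal sketch of what such a StorageClass may look like is shown below. The class name `csi-rbd` and pool `rbd` match what is used later in this guide; the provisioner name and the remaining parameters (monitors, provisioner/node secrets) are assumptions that must match the actual ./test/rbd/storageclass.yaml.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com    # assumed Ceph CSI RBD driver name
parameters:
  pool: rbd                      # matches the "pool=rbd" capability granted to client.kube in step 3
  # Monitor addresses and provisioner/node-publish secret references are also
  # required here; see ./test/rbd/storageclass.yaml for the exact parameter names.
reclaimPolicy: Delete
```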

3. Create the RBD secret.
```console
kubectl exec -ti -n rook-ceph rook-ceph-operator-948f8f84c-749zb -- bash -c \
  "ceph -c /var/lib/rook/rook-ceph/rook-ceph.config auth get-or-create-key client.kube mon \"allow profile rbd\" osd \"profile rbd pool=rbd\""
```
   Replace the pod name with your own rook-ceph-operator pod; see `kubectl get pod -n rook-ceph`.
   Then get the base64-encoded keys of the admin and client users by executing the following inside the operator pod:
```console
ceph auth get-key client.admin | base64
ceph auth get-key client.kube | base64
```
   Then fill the keys into secret.yaml and create the secret:
```console
kubectl create -f ./test/rbd/secret.yaml
```
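   For reference, secret.yaml is expected to look roughly like the sketch below; the secret name, namespace, and field names (`admin`, `kube`) are assumptions based on the Ceph CSI v1.0 RBD examples, so check the actual file in ./test/rbd before filling in the keys.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret                # assumed name; must match the secret referenced by the StorageClass
  namespace: default
data:
  admin: BASE64-ENCODED-ADMIN-KEY     # output of "ceph auth get-key client.admin | base64"
  kube: BASE64-ENCODED-KUBE-KEY       # output of "ceph auth get-key client.kube | base64"
```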
4. Create the RBD Persistent Volume Claim:
```console
kubectl create -f ./test/rbd/pvc.yaml
```
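   The claim is expected to look roughly like the sketch below; the name, size, access mode, and storage class correspond to the `kubectl get pvc` output shown in step 6.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce              # shown as RWO by "kubectl get pvc"
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd      # the storage class created in step 2
```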
5. Create the RBD demo pod:
```console
kubectl create -f ./test/rbd/pod.yaml
```
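   A sketch of such a demo pod is shown below; the pod name and mount path match the output in step 6, while the container image is only an assumption (any long-running image works).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csirbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx                       # assumed image; use the one in ./test/rbd/pod.yaml
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html   # appears as the /dev/rbd0 mount in step 6
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc               # the PVC created in step 4
```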
6. Check the volumes created and the application mount status:
```console
$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pvc   Bound    pvc-98f50bec-8a4f-434d-8def-7b69b628d427   1Gi        RWO            csi-rbd        84m
$ kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
csirbd-demo-pod   1/1     Running   0          84m
$ kubectl exec -ti csirbd-demo-pod -- bash
root@csirbd-demo-pod:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         733G   35G  662G   5% /
tmpfs            64M     0   64M   0% /dev
tmpfs            32G     0   32G   0% /sys/fs/cgroup
/dev/sda2       733G   35G  662G   5% /etc/hosts
shm              64M     0   64M   0% /dev/shm
/dev/rbd0       976M  2.6M  958M   1% /var/lib/www/html
tmpfs            32G   12K   32G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0   32G   0% /proc/acpi
tmpfs            32G     0   32G   0% /proc/scsi
tmpfs            32G     0   32G   0% /sys/firmware
```
7. Create the RBD snapshot class:
```console
kubectl create -f ./test/rbd/snapshotclass.yaml
```
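   A sketch of the snapshot class under the v1alpha1 snapshot API (the version available on Kubernetes 1.13) is shown below; the class name matches the `kubectl get volumesnapshotclass` output in step 8, while the snapshotter name and any parameters are assumptions to be checked against ./test/rbd/snapshotclass.yaml.

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
snapshotter: rbd.csi.ceph.com    # assumed Ceph CSI RBD driver name
# Pool, monitor, and secret parameters are also required here;
# see ./test/rbd/snapshotclass.yaml for the exact values.
```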
8. Create a volume snapshot and verify it:
```console
kubectl create -f ./test/rbd/snapshot.yaml

$ kubectl get volumesnapshotclass
NAME                      AGE
csi-rbdplugin-snapclass   51s
$ kubectl get volumesnapshot
NAME               AGE
rbd-pvc-snapshot   33s
```
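   The snapshot itself is expected to look roughly like this sketch (names match the output above; the v1alpha1 API is assumed, as on Kubernetes 1.13):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  snapshotClassName: csi-rbdplugin-snapclass   # the class created in step 7
  source:
    name: rbd-pvc                              # the PVC created in step 4
    kind: PersistentVolumeClaim
```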
9. Restore the snapshot to a new PVC and verify it:
```console
kubectl create -f ./test/rbd/pvc-restore.yaml

$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
rbd-pvc           Bound    pvc-98f50bec-8a4f-434d-8def-7b69b628d427   1Gi        RWO            csi-rbd        42h
rbd-pvc-restore   Bound    pvc-530a4939-e4c0-428d-a072-c9c39d110d7a   1Gi        RWO            csi-rbd        5s
```
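   The restore claim uses the snapshot as its data source; below is a hedged sketch matching the names and size shown above (restoring from a snapshot on Kubernetes 1.13 also requires the alpha VolumeSnapshotDataSource feature gate):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: csi-rbd
  dataSource:
    name: rbd-pvc-snapshot                     # the snapshot created in step 8
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```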