openshift-ansible: Gluster Block missing deployment steps
Description
When installing gluster-block with CNS, some of the steps are missing and need to be "automated".
If these are set:
openshift_storage_glusterfs_registry_block_deploy: true
openshift_storage_glusterfs_block_deploy: true
We expect GlusterFS to be deployed with the gluster-block functionality.
The result is that we have:
- the gluster-block provisioner is deployed.
- GlusterFS is deployed with glusterblock enabled.
To have a working solution we are missing a few configuration details:
- The StorageClass secret for heketi needs to be created with the gluster.org type (ansible change):
apiVersion: v1
data:
  key: NkJqZVRtMG5EcFhYRC9EdWJpMDc2YnZONCtRNldQZEw3UjhTWVFFSzdEZz0=
kind: Secret
metadata:
  creationTimestamp: 2018-01-12T19:14:32Z
  name: heketi-storage-admin-secret-block
  namespace: glusterfs
  resourceVersion: "7074"
  selfLink: /api/v1/namespaces/glusterfs/secrets/heketi-storage-admin-secret-block
  uid: d22bc846-f7cc-11e7-ae9c-028542dcc35e
type: gluster.org/glusterblock
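A minimal Ansible sketch for this step, assuming the heketi admin key is available in a variable such as glusterfs_heketi_admin_key (the variable name is illustrative) and that oc is on the PATH:
- name: Create heketi admin secret for the glusterblock provisioner
  command: >
    oc create secret generic heketi-storage-admin-secret-block
    --type=gluster.org/glusterblock
    --from-literal=key={{ glusterfs_heketi_admin_key }}
    --namespace=glusterfs
  register: secret_create
  # tolerate reruns when the secret already exists
  failed_when: secret_create.rc != 0 and 'already exists' not in secret_create.stderr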
- The StorageClass needs to be generated with the cluster ID and secret (ansible change):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterblock
  selfLink: /apis/storage.k8s.io/v1/storageclasses/glusterblock
parameters:
  chapauthenabled: "true"
  clusterids: 4690bc83f8c06bc09d840ede8e2f3784
  hacount: "3"
  restsecretname: heketi-storage-admin-secret-block
  restsecretnamespace: glusterfs
  resturl: http://heketi-storage-glusterfs.apps.example.com
  restuser: admin
provisioner: gluster.org/glusterblock
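One possible Ansible shape for this, assuming a glusterblock-storageclass.yml.j2 template (hypothetical file name) that renders the StorageClass above with the cluster ID, resturl, and secret name filled in:
- name: Render glusterblock StorageClass
  template:
    src: glusterblock-storageclass.yml.j2
    dest: /tmp/glusterblock-storageclass.yml

- name: Create glusterblock StorageClass
  command: oc apply -f /tmp/glusterblock-storageclass.yml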
- iptables/firewalld needs to be configured to open the required ports (ansible change):
# firewall-cmd --zone=public --add-port=24010/tcp --add-port=3260/tcp --add-port=111/tcp --add-port=22/tcp --add-port=24007/tcp --add-port=24008/tcp --add-port=49152-49664/tcp --permanent
# firewall-cmd --reload
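A sketch of an Ansible equivalent using the firewalld module (port list taken from the command above; whether this belongs in the existing os_firewall handling is an open question):
- name: Open gluster-block / iSCSI ports
  firewalld:
    port: "{{ item }}"
    zone: public
    permanent: true
    immediate: true
    state: enabled
  with_items:
    - 24010/tcp
    - 3260/tcp
    - 111/tcp
    - 22/tcp
    - 24007/tcp
    - 24008/tcp
    - 49152-49664/tcp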
- Add systemd dependencies:
systemctl add-wants multi-user.target rpcbind.service
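As an Ansible task this could look roughly like the following; the creates guard points at the symlink that add-wants drops into /etc/systemd/system, which I believe is the path it uses but should be verified:
- name: Make rpcbind wanted by multi-user.target
  command: systemctl add-wants multi-user.target rpcbind.service
  args:
    creates: /etc/systemd/system/multi-user.target.wants/rpcbind.service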
- Packages need to be installed on the nodes (I think this should be part of the image and we need to enhance mountpoints). TBD how this might work on Atomic and RHEL:
yum install iscsi-initiator-utils device-mapper-multipath
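For the RPM-based (non-Atomic) case this could be a simple package task; openshift_is_atomic is the fact name I believe openshift-ansible uses for the Atomic check, but that is an assumption:
- name: Install iSCSI initiator and multipath tools
  package:
    name:
      - iscsi-initiator-utils
      - device-mapper-multipath
    state: present
  when: not openshift_is_atomic | bool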
- Multipath needs to be enabled:
mpathconf --enable
- The multipath config needs to be created with default settings and potential tunables:
cat > /etc/multipath.conf <<EOF
# LIO iSCSI
devices {
  device {
    vendor "LIO-ORG"
    user_friendly_names "yes"        # names like mpatha
    path_grouping_policy "failover"  # one path per group
    path_selector "round-robin 0"
    failback immediate
    path_checker "tur"
    prio "const"
    no_path_retry 120
    rr_weight "uniform"
  }
}
EOF
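The two multipath steps together could look roughly like this in Ansible, assuming the multipath.conf content above is shipped as a file in the role (multipath.conf is an assumed file name):
- name: Enable multipathd configuration
  command: mpathconf --enable
  args:
    creates: /etc/multipath.conf

- name: Deploy LIO iSCSI multipath configuration
  copy:
    src: multipath.conf
    dest: /etc/multipath.conf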
- Kernel modules need to be loaded (requires one of the newest RHEL 7.3 kernel cuts, as previous kernels did not have target_core_user, so an ansible check might be needed):
# load on first install
modprobe dm_thin_pool     # existing module in older kernels
modprobe dm_multipath     # existing module in older kernels
modprobe target_core_user # new module, so a newer RHEL is needed for this one
# create /etc/modules-load.d/<name>.conf for each module so they are loaded again after restart
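A hedged Ansible sketch for point 8, if we decide to do it from the host side, using the modprobe module plus modules-load.d files for persistence:
- name: Load kernel modules needed by gluster-block
  modprobe:
    name: "{{ item }}"
    state: present
  with_items:
    - dm_thin_pool
    - dm_multipath
    - target_core_user

- name: Persist kernel modules across reboots
  copy:
    dest: "/etc/modules-load.d/{{ item }}.conf"
    content: "{{ item }}\n"
  with_items:
    - dm_thin_pool
    - dm_multipath
    - target_core_user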
I would think that points 1-3 are ansible changes. Points 5-7 would be image changes. Point 8 I am not sure about. Are we doing any kernel module loading from containers, or are we modifying the host?
Ping @jarrpa @ckyriakidou as per email
About this issue
- State: closed
- Created 6 years ago
- Comments: 24 (23 by maintainers)
Commits related to this issue
- Merge pull request #1 from ckyriakidou/issue-#6801 Issue #6801 — committed to mjudeikis/openshift-ansible by deleted user 6 years ago
Cherry-pick prepared https://github.com/openshift/openshift-ansible/pull/7213
@dmesser Hmm… I think this should be sufficient to make use of, then, yes?
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_storage_glusterfs/templates/v3.9/heketi.json.j2#L36-L40