Cleaning up Rook Ceph

Official teardown documentation: https://rook.io/docs/rook/v1.8/ceph-teardown.html

Note the following resources that need to be cleaned up:

  • rook-ceph namespace: The Rook operator and the cluster created by operator.yaml and cluster.yaml (the cluster CRD)
  • /var/lib/rook: Path on each host in the cluster where configuration is cached by the Ceph mons and OSDs

Delete the Block and File artifacts

# These resources come from the examples in the official documentation;
# skip this step if you never created them.
kubectl delete -f ../wordpress.yaml
kubectl delete -f ../mysql.yaml
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete storageclass rook-ceph-block
kubectl delete -f csi/cephfs/kube-registry.yaml
kubectl delete storageclass csi-cephfs

Delete the CephCluster CRD

kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'

kubectl -n rook-ceph delete cephcluster rook-ceph
kubectl -n rook-ceph get cephcluster
With the cleanup policy confirmation set above, deleting the CephCluster also causes the operator to:

  • delete the /var/lib/rook directory (or the path specified by dataDirHostPath) on every node
  • wipe the data on the drives of every node in this cluster that ran OSDs
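If the cephcluster delete hangs and it still shows up in kubectl -n rook-ceph get cephcluster, the teardown documentation's troubleshooting section describes removing the CephCluster finalizer as a last resort. A minimal sketch of that workaround, dry-run by default (set APPLY=1 to actually patch; only do this once you are sure the cluster should be destroyed):

```shell
#!/usr/bin/env bash
# Last-resort workaround from the Rook teardown docs: clear the finalizer
# so a stuck CephCluster object can be garbage-collected.
# Dry-run by default -- set APPLY=1 to actually run the patch.
APPLY="${APPLY:-0}"
PATCH='{"metadata":{"finalizers":[]}}'
if [ "$APPLY" = "1" ]; then
  kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p "$PATCH"
else
  echo "DRY RUN: kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p $PATCH"
fi
```

Note that clearing the finalizer skips the operator's normal cleanup, so the on-host data removal in the later sections still has to be done manually.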

Delete the Operator and related Resources

kubectl delete -f operator.yaml
kubectl delete -f common.yaml
kubectl delete -f crds.yaml

Delete the data on hosts

Connect to each machine and delete /var/lib/rook, or the path specified by dataDirHostPath.
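This per-host step can be scripted over SSH. A sketch, assuming passwordless SSH to each node; the node names in NODES are hypothetical placeholders, and DATA_DIR must match the dataDirHostPath in your cluster.yaml. Dry-run by default; set APPLY=1 to actually delete:

```shell
#!/usr/bin/env bash
# Remove the cached Rook data dir on every node over SSH.
# NODES is a hypothetical list -- substitute your real node names.
NODES="${NODES:-node1 node2 node3}"
DATA_DIR="${DATA_DIR:-/var/lib/rook}"   # must match dataDirHostPath
APPLY="${APPLY:-0}"
for n in $NODES; do
  if [ "$APPLY" = "1" ]; then
    ssh "$n" "sudo rm -rf ${DATA_DIR}"
  else
    echo "DRY RUN: ssh $n sudo rm -rf ${DATA_DIR}"
  fi
done
```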

Wipe the disk data

#!/usr/bin/env bash
DISK="/dev/sdb" # adjust to your environment (must be the raw OSD disk)

# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)

# You will have to run this step for all disks.
sgdisk --zap-all $DISK

# Clean hdds with dd
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync

# Clean disks such as ssd with blkdiscard instead of dd
blkdiscard $DISK

# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %

# ceph-volume setup can leave ceph-<UUID> directories in /dev and /dev/mapper (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*

# Inform the OS of partition table changes
partprobe $DISK
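Since the zap/overwrite/partprobe steps above have to be repeated for every OSD disk, they can be wrapped in a loop. A sketch, dry-run by default so it can be reviewed before destroying anything; the device names in DISKS are hypothetical and must be replaced with your actual OSD disks (set APPLY=1 to execute):

```shell
#!/usr/bin/env bash
# Wipe several OSD disks in one pass. DISKS holds hypothetical device
# names -- substitute your own. Dry-run unless APPLY=1.
DISKS="${DISKS:-/dev/sdb /dev/sdc}"
APPLY="${APPLY:-0}"
run() { if [ "$APPLY" = "1" ]; then "$@"; else echo "DRY RUN: $*"; fi; }
for d in $DISKS; do
  run sgdisk --zap-all "$d"                                       # clear GPT and MBR
  run dd if=/dev/zero of="$d" bs=1M count=100 oflag=direct,dsync  # overwrite start of disk
  run partprobe "$d"                                              # re-read partition table
done
```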

Original article: https://www.cnblogs.com/sanduzxcvbnm/p/15718855.html