While looking at ceph health detail you only see which PGs are affected or on which OSDs you have slow requests. Given that you might have tons of OSDs located on a lot of nodes, it is not straightforward to find and restart them all. You will find below a simple script that can do this for you.

After the container images have been pulled and validated, restart the appropriate services:

    saltmaster:~ # ceph orch restart osd
    saltmaster:~ # ceph orch restart mds

Use "ceph orch ps | grep error" to look for processes that could be affected.

    saltmaster:~ # ceph -s
      cluster:
        id:     c064a3f0-de87-4721-bf4d-f44d39cee754
        health: HEALTH_OK
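The script itself is not reproduced above, so here is a minimal sketch of such a restart helper. It assumes the slow-request lines of ceph health detail contain an osd.N token (the exact wording varies across Ceph releases) and that OSDs run as systemd ceph-osd@N units; both are assumptions, not guaranteed by the original post.

```shell
#!/bin/sh
# Sketch: find OSDs reporting slow requests and restart them.
# Assumes an "osd.N" token appears in the slow-request lines of
# `ceph health detail`; the exact wording varies by release.

find_slow_osds() {
    # Read health output on stdin, print unique OSD names, one per line.
    grep -i 'slow request' | grep -oE 'osd\.[0-9]+' | sort -u
}

restart_slow_osds() {
    # Assumes OSDs are managed as systemd units named ceph-osd@<id>.
    for osd in $(ceph health detail | find_slow_osds); do
        echo "restarting ${osd}"
        systemctl restart "ceph-osd@${osd#osd.}"   # osd.12 -> ceph-osd@12
    done
}
```

Separating the parsing step into its own function keeps it testable against captured health output without touching a live cluster.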
Fixing OSDs that are down in a Ceph cluster (CSDN blog)
You may need to restore the metadata of a Ceph OSD node after a failure, for example if the primary disk fails or the data in the Ceph-related directories, such as /var/lib/ceph/, on the OSD node is lost. To restore the metadata of a Ceph OSD node, verify that the Ceph OSD node is up and running and connected to the Salt …

After restarting the server, the OSD container status was stuck at Restarting; below is the container log:

    static: does not generate config
    HEALTH_ERR 128 pgs are stuck inactive for more than 300 seconds; 16 pgs degraded; 144 pgs stale; 128 pgs stuck stale; 16 pgs stuck unclean; 16 pgs undersized; recovery 243/486 objects degraded (50.000%); …
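When watching a recovery like the one in the log above, it helps to track a single problem count rather than rereading the whole summary. A minimal sketch, assuming the "N pgs stuck stale" wording shown in that HEALTH_ERR line (newer Ceph releases phrase their health summaries differently):

```shell
#!/bin/sh
# Sketch: pull the "pgs stuck stale" count out of a `ceph health`
# summary line. Wording assumed from the log above; newer releases
# report this differently.

stuck_stale_count() {
    # Read a health summary on stdin, print the stuck-stale PG count.
    grep -oE '[0-9]+ pgs stuck stale' | grep -oE '^[0-9]+'
}

# Typical use, polling until the stale PGs clear:
#   while [ -n "$(ceph health detail | stuck_stale_count)" ]; do sleep 10; done
```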
ceph status reports OSD "down" even though OSD process is … (GitHub issue)
Frequently performed full cluster shutdown and power on. After one such cluster shutdown and power on, even though all OSD pods came up, ceph status kept reporting one OSD as "down".

    OS (e.g. from /etc/os-release): RHEL 7.6
    Kernel (e.g. uname -a): 3.10.0-957.5.1.el7.x86_64
    Cloud provider or hardware configuration:

Kubernetes version (use kubectl version): 1.20. Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): bare metal (provisioned by k0s). Storage backend status (e.g. for Ceph, use ceph health in the Rook Ceph toolbox): the dashboard is in HEALTH_WARN, but I assume the warnings are benign for the following reasons:

I checked the source code; it seems that using osd_ceph_disk executes the following steps:

1. Set OSD_TYPE="disk" and call the function start_osd.
2. In start_osd, call osd_disk.
3. In osd_disk, call osd_disk_prepare.
4. In osd_disk_prepare, the following will always be executed:
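For the symptom reported above, where ceph status shows one OSD down after a power cycle, the first step is identifying which OSD it is. A minimal sketch that picks the down OSDs out of ceph osd tree output; the column layout is assumed from typical releases, and the rook-ceph-tools toolbox deployment name in the usage note is the common default, not something stated in the report:

```shell
#!/bin/sh
# Sketch: list OSD ids shown as "down" in `ceph osd tree` output.
# Scans all fields for the osd.N name rather than hard-coding a
# column index, since the layout varies slightly between releases.

down_osds() {
    awk '/ down / { for (i = 1; i <= NF; i++) if ($i ~ /^osd\.[0-9]+$/) print $i }'
}

# Typical use from a Rook cluster (toolbox deployment name assumed):
#   kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree | down_osds
```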