
I have a Proxmox cluster with 5 nodes, all running the latest 7.3 version with Ceph v17.2.5 (Quincy).

Today, all OSDs on node 3 suddenly went down. I tried to start them, but they would not come up, and I found the errors below in the logs:

ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-62: (2) No such file or directory

bluestore(/var/lib/ceph/osd/ceph-61/block) _read_bdev_label failed to read from /var/lib/ceph/osd/ceph-61/block: (5) Input/output error
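
Before restarting anything, it can be worth checking whether the underlying disks are actually failing, since the "(5) Input/output error" often points at the block device itself rather than at Ceph. A rough diagnostic sketch, assuming the OSD IDs from the logs above (61 and 62); /dev/sdX is a placeholder for the suspect device:

```shell
# Check kernel logs for disk-level I/O errors
dmesg -T | grep -iE 'i/o error|blk_update'

# Map OSDs to their backing devices (ceph-volume ships with Ceph)
ceph-volume lvm list

# SMART health of the suspect disk -- /dev/sdX is a placeholder
smartctl -a /dev/sdX | grep -iE 'health|reallocated|pending'

# Confirm the OSD's block symlink target actually exists
ls -l /var/lib/ceph/osd/ceph-61/block
```

If SMART reports reallocated or pending sectors, the disk itself is likely the problem and a reboot will only mask it temporarily.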

Thank you

2 Answers

Migrate all VMs and containers off the node, then reboot it.
It worked!! Thank you so much.
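
For reference, the migrate-and-reboot step can also be done from the shell. A sketch assuming a hypothetical target node named pve1 and placeholder guest IDs 100 and 101:

```shell
# Live-migrate a VM (ID 100 is a placeholder) to another node
qm migrate 100 pve1 --online

# Migrate a container (ID 101 is a placeholder); containers are restarted on migration
pct migrate 101 pve1 --restart

# Once the node is empty, reboot it
reboot
```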

If restarting the node resolved the issue, it suggests that there may have been a temporary problem or misconfiguration on the node that was causing the error. Here are a few potential reasons why restarting the node might have fixed the problem:

  1. Temporary system glitch: It's possible that a temporary system glitch or software hiccup occurred on the node, which caused the I/O error or the inability to read the block device label. Restarting the node can clear any temporary issues and restore the proper functionality of the system components.

  2. Resource contention: If the node was experiencing resource contention, such as high CPU or memory usage, it could have affected the I/O operations and resulted in the error. Restarting the node releases any locked resources and allows the system to start fresh with available resources.

  3. Block device initialization: Restarting the node may trigger the initialization process of the block device or the Ceph OSD service, which could resolve any inconsistencies or misconfigurations that were causing the error.

  4. Network connectivity: In some cases, network connectivity issues between the node and the storage devices can lead to I/O errors. Restarting the node might have reestablished the network connection and resolved the problem.

  5. Ceph OSD service restart: Restarting the Ceph OSD service during the node restart can help resolve certain issues related to the OSD's operation and communication with other components of the Ceph cluster.
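
If a full reboot is not possible, points 3 and 5 can often be attempted on their own first. A sketch assuming OSD ID 62 from the error logs:

```shell
# Restart just the affected OSD daemon
systemctl restart ceph-osd@62

# Check whether the daemon came back up
systemctl status ceph-osd@62 --no-pager

# Verify the OSD rejoins the cluster and the cluster returns to health
ceph osd tree
ceph -s
```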

...