OSD prepare fails for plain partitions #14503
Comments
When setting …
Thanks for the tip. Yes, setting it to 1 allowed the cluster to be created, and it's now healthy.
For other readers: this is intended behavior, to ensure Rook does not overwrite user data.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Is this a bug report or feature request?
Deviation from expected behavior:
When using a plain partition (e.g. /dev/nvme0n1p2), OSD prepare fails at the "lvm batch" command (see logs below).
The error states that either a PV or a raw block device must be given.
Alternatively: if one makes the partition an empty/unused PV with pvcreate, the OSD prepare script incorrectly skips/ignores the partition, believing it to already be in use (see logs below).
Expected behavior:
Be able to configure an OSD to use a plain partition
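For context, a configuration along these lines should be able to target a plain partition. This is a hypothetical sketch of the relevant part of the Helm values for the rook-ceph-cluster chart; the node name is a placeholder, not the exact value from my cluster:

```yaml
cephClusterSpec:
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: "node1"          # placeholder node name
        devices:
          - name: "nvme0n1p2"  # plain partition intended for an OSD
          - name: "sda1"       # partition on the USB SSD (device name assumed)
```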
How to reproduce it (minimal and precise):
I will document my setup, which is not the minimal setup. In addition to the NVMe I have a USB SSD, giving tiered storage: one fast tier, and one slower but higher-capacity tier for long-term data.
The NVMe has two partitions: the first is a PV for another volume group unrelated to Rook/Ceph; the second is the one assigned to Rook/Ceph. The USB SSD has one partition, all of it assigned to Rook/Ceph.
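The layout described above looks roughly like this (hypothetical lsblk-style sketch; the device name for the USB disk is assumed):

```console
$ lsblk
NAME        TYPE
nvme0n1     disk
├─nvme0n1p1 part   # PV for an unrelated volume group
└─nvme0n1p2 part   # intended for Rook/Ceph
sda         disk   # USB SSD (device name assumed)
└─sda1      part   # intended for Rook/Ceph
```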
Relevant config snippet from Helm values file for the cluster:
For reference:
Logs to submit:
When "nvme0n1p2" is empty (after running wipefs -a):
A fix for this LVM error would be to run pvcreate before the lvm command.
However, if I run pvcreate in advance of installing the cluster CR, I get:
i.e. the PV "nvme0n1p2" is skipped/ignored.
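To summarize the two states I tried on the partition before the prepare job ran (commands as described above; do not run these against a disk with data you care about):

```console
# Approach 1: wipe the partition completely
# -> prepare job fails at "lvm batch" (PV or raw block device required)
wipefs -a /dev/nvme0n1p2

# Approach 2: additionally pre-create an empty, unused PV
# -> prepare job skips the partition as "already in use"
pvcreate /dev/nvme0n1p2
pvs   # confirms the PV exists and belongs to no VG
```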
Cluster Status to submit:
The prepare jobs all fail
Environment:
**Further info**
I did try to seek help with this in Slack about a week ago (https://rook-io.slack.com/archives/CK9CF5H2R/p1721549460098239),
in both #ceph and #general, but got no response, so I decided to file this bug report.