CephFilesystem CRD with CephBlockPoolRados Namespace #14506

Open
NoctivagusObitus opened this issue Jul 28, 2024 · 2 comments

@NoctivagusObitus

Is this a bug report or feature request?

  • Feature Request

What should the feature do:
As stated in the Ceph documentation, multiple CephFS filesystems can only be properly isolated on a single pool if each one is confined to its own dedicated RADOS namespace. I would like to see an option to specify such a RADOS namespace in the CephFilesystem CRD, similar to how it is done in the StorageClass example.
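
To make the request concrete, something roughly like the following is what I have in mind; the `radosNamespace` field below is purely hypothetical and does not exist in the current CephFilesystem CRD:

```yaml
# Hypothetical sketch only: radosNamespace is NOT an existing CephFilesystem
# field; the rest follows the usual Rook filesystem example.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: userfs-a
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
      # proposed: confine this filesystem's data objects to a dedicated
      # RADOS namespace within the shared pool (hypothetical field)
      radosNamespace: userfs-a
  metadataServer:
    activeCount: 1
    activeStandby: true
```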

What is use case behind this feature:
This request has a classic X -> Y relation. My actual goal is to use a Rook cluster and its (very nice) declarative nature to manage cloud-like storage for multiple users. This means having an arbitrary number of users connect to the Ceph cluster over the internet and mount their own CephFS, with no way to access any objects of any other CephFS / user. Other suggestions for solving this would be appreciated as well.

Environment:
None

@travisn
Member

travisn commented Jul 29, 2024

@NoctivagusObitus I'd suggest considering subvolume groups.
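
For example, a subvolume group can be created declaratively with the CephFilesystemSubVolumeGroup CRD, roughly like this (a minimal sketch, assuming a filesystem named myfs; see the Rook docs for the full set of fields):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubVolumeGroup
metadata:
  # one group per tenant, e.g. group-a and group-b
  name: group-a
  namespace: rook-ceph
spec:
  # the CephFilesystem in which to create the subvolume group
  filesystemName: myfs
```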

@NoctivagusObitus
Author

@travisn thanks for your response.
It took me some time to read up on subvolume groups. From what I understand, they are an abstraction one level above subvolumes, as stated here. What I do not yet understand is whether such hierarchical, filesystem-based permissioning is sufficient. My reluctance stems from reading this part of the Ceph documentation: as far as I understand it, the only reliable way to protect data from unwanted access is some pool-level configuration. I may be misunderstanding this... Apparently, in the end, file layouts determine where data objects are placed inside RADOS. Subvolume groups do manage some of these concepts, but from the CLI documentation they seem to primarily handle quotas. There is a --uid option, but so far I have not seen how it is used or whether it suits my case.

The point of writing down all the information I have read up on is to ask for help in better understanding these concepts. I have not yet given up on subvolume groups. Could someone give me a hint on how to configure them properly, given that I want 2 subvolume groups and 2 users, each able to exclusively access only one of the groups? (I do have a local test cluster running by now.)
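
For reference, this is roughly the declarative shape I have in mind, using Rook's CephClient CRD to create path-restricted cephx users, one per subvolume group. The filesystem name, group names, and especially the cap strings are my own guesses (loosely modelled on what `ceph fs authorize` seems to generate), so corrections are welcome:

```yaml
# Untested sketch: one CephClient per tenant, each restricted to its own
# subvolume group path under /volumes. "myfs", "group-a"/"group-b" and the
# cap strings are placeholders/assumptions, not verified syntax.
apiVersion: ceph.rook.io/v1
kind: CephClient
metadata:
  name: user-a
  namespace: rook-ceph
spec:
  caps:
    mon: "allow r"
    mds: "allow rw fsname=myfs path=/volumes/group-a"
    osd: "allow rw tag cephfs data=myfs"
---
apiVersion: ceph.rook.io/v1
kind: CephClient
metadata:
  name: user-b
  namespace: rook-ceph
spec:
  caps:
    mon: "allow r"
    mds: "allow rw fsname=myfs path=/volumes/group-b"
    osd: "allow rw tag cephfs data=myfs"
```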

Thanks a lot.
