Ceph Storage is a free and open-source, software-defined, distributed storage solution designed to be massively scalable for modern data analytics, artificial intelligence (AI), machine learning (ML), and emerging mission-critical workloads. In this article, we will look at how you can create a Ceph pool with a custom number of placement groups (PGs).
In Ceph terms, Placement groups (PGs) are shards or fragments of a logical object pool that place objects as a group into OSDs. Placement groups reduce the amount of per-object metadata when Ceph stores the data in OSDs.
A larger number of placement groups (e.g., 100 per OSD) leads to better balancing. The Ceph client will calculate which placement group an object should be in. It does this by hashing the object ID and applying an operation based on the number of PGs in the defined pool and the ID of the pool. See Mapping PGs to OSDs for details.
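You can see this mapping in action by asking Ceph where a given object would be placed. The pool and object names below are placeholders; any object name works, since the placement is computed from its hash:

$ sudo ceph osd map pool-name object-name

The output shows the placement group the object hashes to and the set of OSDs (the acting set) that the PG currently maps to.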
Calculating the Total Number of Placement Groups
            (OSDs * 100)
Total PGs = ------------
             pool size
For example, let’s say your cluster has 9 OSDs and a default pool size of 3. Your total PG count would be:
            9 * 100
Total PGs = ------- = 300
               3
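Both inputs can be read from a running cluster. As a quick sketch, the OSD count comes from the OSD status summary, and the replica count (size) can be read from any existing pool:

--- Number of OSDs ---
$ sudo ceph osd stat
--- Replica count of an existing pool ---
$ sudo ceph osd pool get pool-name size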
Create a Pool
The syntax for creating a pool is:
ceph osd pool create pool-name pg-num
Where:
- pool-name – The name of the pool. It must be unique.
- pg-num – The total number of placement groups for the pool.
I’ll create a new pool named k8s-uat with a placement group count of 100.
$ sudo ceph osd pool create k8s-uat 100
pool 'k8s-uat' created
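You can read the placement group count back from the pool to confirm it took effect:

$ sudo ceph osd pool get k8s-uat pg_num
pg_num: 100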
Now list available pools to confirm it was created.
$ sudo ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
5 k8s-uat
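For a more detailed listing that shows each pool's replica count, PG count, and application tags, use:

$ sudo ceph osd pool ls detail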
Associate Pool to Application
Pools need to be associated with an application before use. Pools that will be used with CephFS or pools that are automatically created by RGW are automatically associated.
--- Ceph Filesystem ---
$ sudo ceph osd pool application enable pool-name cephfs
--- Ceph Block Device ---
$ sudo ceph osd pool application enable pool-name rbd
--- Ceph Object Gateway ---
$ sudo ceph osd pool application enable pool-name rgw
Example:
$ sudo ceph osd pool application enable k8s-uat-rbd rbd
enabled application 'rbd' on pool 'k8s-uat-rbd'
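To verify which applications are enabled on a pool, use:

$ sudo ceph osd pool application get k8s-uat-rbd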
Pools that are intended for use with RBD should be initialized using the rbd tool:
sudo rbd pool init k8s-uat-rbd
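Once initialized, the pool can hold RBD images. As a quick sketch (the image name test-disk and the size of 1024 MB are just examples), create and list an image:

$ sudo rbd create --size 1024 k8s-uat-rbd/test-disk
$ sudo rbd ls k8s-uat-rbd
test-disk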
To disable an application on a pool, use:
ceph osd pool application disable pool-name app-name --yes-i-really-mean-it
To obtain I/O information for a specific pool, or for all pools, execute:
$ sudo ceph osd pool stats [pool-name]
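For example, on an idle pool the output will look something like:

$ sudo ceph osd pool stats k8s-uat
pool k8s-uat id 5
  nothing is going on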
Doing it from the Ceph Dashboard
Log in to your Ceph Management Dashboard and create a new pool under Pools > Create.
Delete a Pool
To delete a pool, execute:
sudo ceph osd pool delete pool-name [pool-name --yes-i-really-really-mean-it]
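Note that the monitors refuse to delete pools unless pool deletion has been explicitly allowed. A minimal sketch, deleting the k8s-uat pool created earlier (the pool name is given twice as confirmation):

$ sudo ceph config set mon mon_allow_pool_delete true
$ sudo ceph osd pool delete k8s-uat k8s-uat --yes-i-really-really-mean-it
pool 'k8s-uat' removed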
More articles on Ceph will be published in the coming weeks. Stay connected.