Use Rook to Orchestrate Distributed Open Source Storage
Rook provides cloud-native storage orchestration for the Ceph distributed open source storage system. With Rook, Ceph runs on Kubernetes and provides object, block, and file interfaces from a single cluster. Together, they offer a way to automate and manage the large amounts of storage typical of big data storage centers. This guide demonstrates how to install and use Rook to orchestrate open source storage on a Kubernetes cluster.
What Is Rook and How Does It Work?
Rook automates the deployment and management of Ceph to create self-managing, self-scaling, and self-healing storage services. This combination makes managing large amounts of storage easier than doing so manually.
CRUSH: Ceph provides file system, object, and block storage on a single cluster, controlled by the Controlled Replication Under Scalable Hashing (CRUSH) algorithm. CRUSH distributes data so that the load placed on storage locations such as rows, racks, chassis, hosts, and devices remains consistent. A CRUSH map defines rules that specify how CRUSH stores and retrieves data.
OSDs: Object Storage Daemons (OSDs) each manage a single device. Rook simplifies device management and performs tasks such as verifying OSD health.
CRDs: Rook can also create and customize storage clusters through Custom Resource Definitions (CRDs). There are four different modes in which to create a cluster (a minimal host storage cluster manifest is sketched after this list):
- Host Storage Cluster consumes storage from host paths and raw devices.
- Persistent Volume Claim (PVC) Storage Cluster dynamically provisions storage underneath Rook by specifying a storage class from which Rook consumes storage via PVCs.
- Stretched Storage Cluster distributes Ceph Monitors (MONs) across three zones, while storage (i.e. OSDs) is only configured in two zones.
- External Ceph Cluster connects your Kubernetes applications to an external Ceph cluster.
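For reference, below is a minimal sketch of a host storage cluster manifest. The field names follow the CephCluster examples in the Rook repository, but the image tag and values here are illustrative; use the versions pinned by the Rook release you deploy:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # Illustrative Ceph image; match the tag shipped with your Rook release.
    image: quay.io/ceph/ceph:v18
  # Host path where Rook persists cluster configuration on each node.
  dataDirHostPath: /var/lib/rook
  mon:
    # Three monitors spread across three nodes for quorum.
    count: 3
    allowMultiplePerNode: false
  storage:
    # Consume every empty raw device Rook discovers on every node.
    useAllNodes: true
    useAllDevices: true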
Before You Begin
Before you can work with Ceph and Rook, you need a Kubernetes cluster running Kubernetes version 1.28 or later with kubectl configured to communicate with your cluster. The recommended minimum Kubernetes setup for Rook includes three nodes with 4 GB memory and 2 CPUs each. To create a cluster and configure kubectl, follow the instructions in our Linode Kubernetes Engine - Get Started guide.
The Ceph storage cluster configured by Rook requires one of the following local storage options as a prerequisite:
- Raw Devices
- Raw Partitions
- LVM Logical Volumes
- Encrypted Devices
- Multipath Devices
- Persistent Volumes
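Rook only consumes raw devices and partitions that contain no filesystem or partition table. You can verify that a device is empty by checking that its FSTYPE column in the output of lsblk is blank:

lsblk -f

If a device holds leftover filesystem signatures, you can clear them with wipefs. This destroys any data on the device; /dev/sdX below is a placeholder for your actual device:

sudo wipefs -a /dev/sdX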
This guide utilizes Linode Block Storage Volumes attached to individual cluster nodes to demonstrate initial Rook and Ceph configuration. For production environments, we recommend utilizing persistent volumes or another local storage option.
Some commands in this guide require elevated privileges and are prefixed with sudo. If you're not familiar with the sudo command, see the Users and Groups guide.
Creating and Attaching Volumes
Once your Kubernetes cluster is set up, use the steps below to create and attach a volume for each node. This satisfies one of the local storage prerequisites required by Rook for Ceph configuration.
1. Open the Cloud Manager and select Linodes from the left menu.
2. Select one of the Kubernetes nodes in your cluster. The node should have a name such as lke116411-172761-649b48bf69f8.
3. Open the Storage tab.
4. Click Create Volume.
5. In the Label field, enter a name for the volume, such as rook-volume-1.
6. Select a volume size of at least 40 GB.
7. Click Create Volume. After several minutes, the volume should show up as Active.
8. Repeat Steps 1 through 7 for the remaining nodes in your cluster.
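Alternatively, if you prefer the command line to the Cloud Manager, a volume can be created and attached with the Linode CLI. This is a sketch; the label and Linode ID below are placeholders for your own values:

# Create a 40 GB volume and attach it to the node with the given Linode ID.
linode-cli volumes create --label rook-volume-1 --size 40 --linode_id 12345678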
Rook and Ceph Installation
Once the local storage requirement has been met for the cluster, the next step is to install Rook and Ceph.
Use the following commands to install Rook and Ceph on the cluster:
git clone --single-branch --branch v1.14.8 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
Once complete, you should see:
... cephcluster.ceph.rook.io/rook-ceph created
Verify the status of the Rook-Ceph cluster:
kubectl -n rook-ceph get pod
Sample output:
NAME                                                              READY   STATUS      RESTARTS        AGE
csi-cephfsplugin-2rqlv                                            2/2     Running     0               2m57s
csi-cephfsplugin-4dl9l                                            2/2     Running     0               2m57s
csi-cephfsplugin-provisioner-868bf46b56-2kt4d                     5/5     Running     0               2m58s
csi-cephfsplugin-provisioner-868bf46b56-sx7fj                     5/5     Running     1 (2m21s ago)   2m58s
csi-cephfsplugin-wjmmw                                            2/2     Running     0               2m57s
csi-rbdplugin-4bq8k                                               2/2     Running     0               2m57s
csi-rbdplugin-58wvf                                               2/2     Running     0               2m57s
csi-rbdplugin-gdvjk                                               2/2     Running     0               2m57s
csi-rbdplugin-provisioner-d9b9d694c-bl94s                         5/5     Running     2 (2m ago)      2m58s
csi-rbdplugin-provisioner-d9b9d694c-vmhw2                         5/5     Running     0               2m58s
rook-ceph-crashcollector-lke199763-288009-0c458f4a0000-7fcwlc5s   1/1     Running     0               81s
rook-ceph-crashcollector-lke199763-288009-1f9ed47e0000-85b788df   1/1     Running     0               84s
rook-ceph-crashcollector-lke199763-288009-28c3e5450000-978tzmcx   1/1     Running     0               69s
rook-ceph-exporter-lke199763-288009-0c458f4a0000-7ffb6fbc5kmlbn   1/1     Running     0               81s
rook-ceph-exporter-lke199763-288009-1f9ed47e0000-6dc9d57cbwj8m6   1/1     Running     0               84s
rook-ceph-exporter-lke199763-288009-28c3e5450000-78cd58d665667r   1/1     Running     0               69s
rook-ceph-mgr-a-b75b4b47-xxs4p                                    3/3     Running     0               85s
rook-ceph-mgr-b-d77c5f556-nxw4w                                   3/3     Running     0               83s
rook-ceph-mon-a-6f5865b7f8-577jw                                  2/2     Running     0               2m37s
rook-ceph-mon-b-75ff6d4875-qq9nb                                  2/2     Running     0               106s
rook-ceph-mon-c-6c7d4864b4-blnr8                                  2/2     Running     0               96s
rook-ceph-operator-7d8898f668-nfp8w                               1/1     Running     0               3m45s
rook-ceph-osd-prepare-lke199763-288009-0c458f4a0000-l665b         0/1     Completed   0               28s
rook-ceph-osd-prepare-lke199763-288009-1f9ed47e0000-qlv2s         0/1     Completed   0               25s
rook-ceph-osd-prepare-lke199763-288009-28c3e5450000-lm8ds         0/1     Completed   0               22s
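The osd-prepare pods report Completed once Rook has provisioned OSDs on the attached volumes. You can also confirm that the CephCluster resource created from cluster.yaml reaches the Ready phase:

kubectl -n rook-ceph get cephcluster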
Installing and Using the Rook-Ceph Toolbox
Once you have Rook and Ceph installed and configured, you can install and use the Ceph Toolbox:
Create the toolbox deployment:
kubectl create -f toolbox.yaml
deployment.apps/rook-ceph-tools created
Check the deployment status:
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
deployment "rook-ceph-tools" successfully rolled out
Access the toolbox pod:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
Check the Ceph status:
ceph status
At this point, your Rook-Ceph cluster should be in the HEALTH_OK state:

cluster:
  id:     21676ecd-7f25-466d-9b90-a4ff13d2c0b5
  health: HEALTH_OK

services:
  mon: 3 daemons, quorum a,b,c (age 103m)
  mgr: a(active, since 101m), standbys: b
  osd: 3 osds: 3 up (since 101m), 3 in (since 102m)

data:
  pools:   1 pools, 1 pgs
  objects: 2 objects, 577 KiB
  usage:   26 MiB used, 120 GiB / 120 GiB avail
  pgs:     1 active+clean
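If the status instead reports a warning, you can list the specific health checks behind it before consulting the note below:

ceph health detail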
Cluster HEALTH Statuses: Alternatively, the HEALTH_WARN state indicates that the cluster has no storage objects. While this doesn't mean the cluster won't work, all I/O stops if it enters a HEALTH_ERROR state. Consult the Ceph common issues troubleshooting guide to run diagnostics.
Check the Object Storage Daemon (OSD) status for Ceph's distributed file system:
ceph osd status
ID  HOST                           USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  lke195367-280905-1caa0a3f0000  9004k  39.9G       0        0       0        0  exists,up
 1  lke195367-280905-1d23f6860000  9004k  39.9G       0        0       0        0  exists,up
 2  lke195367-280905-55236f3a0000  8940k  39.9G       0        0       0        0  exists,up
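To see how these OSDs map into the CRUSH hierarchy described earlier, you can print the tree of hosts and OSDs and list the placement rules in use:

ceph osd tree
ceph osd crush rule ls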
Check the Ceph disk usage:
ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
ssd    120 GiB  120 GiB  26 MiB    26 MiB       0.02
TOTAL  120 GiB  120 GiB  26 MiB    26 MiB       0.02

--- POOLS ---
POOL  ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr   1    1  577 KiB        2  1.7 MiB      0     38 GiB
You can also use rados, a utility for interacting with Ceph object storage clusters, to check disk usage:
rados df
POOL_NAME     USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS       RD  WR_OPS       WR  USED COMPR  UNDER COMPR
.mgr       1.7 MiB        2       0       6                   0        0         0     222  315 KiB     201  2.9 MiB         0 B          0 B

total_objects    2
total_used       26 MiB
total_avail      120 GiB
total_space      120 GiB
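To confirm the cluster stores and retrieves data end to end, you can write a test object from inside the toolbox. The pool and object names below are arbitrary examples:

# Create a test pool and write a small object into it.
ceph osd pool create test-pool
echo "hello from rook" > /tmp/test-object
rados -p test-pool put test-object /tmp/test-object
# List the pool's objects and read the object back to stdout.
rados -p test-pool ls
rados -p test-pool get test-object -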
When complete, exit the toolbox pod, and return to the regular terminal prompt:
exit
If you wish to remove the Rook-Ceph Toolbox, use the following command:
kubectl -n rook-ceph delete deploy/rook-ceph-tools
deployment.apps "rook-ceph-tools" deleted
Testing the Rook-Ceph Cluster
With the Rook-Ceph cluster set up, the links below offer some options for testing your storage:
The Block Storage walkthrough is a recommended starting point for testing. This walkthrough creates a storage class, then starts mysql and wordpress in your Kubernetes cluster, allowing access to WordPress from a browser. If you need to expose the WordPress service outside the cluster, you can use the kubectl expose deployment command to accomplish this.
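As a preview of what the Block Storage walkthrough sets up, the sketch below defines a replicated CephBlockPool, a StorageClass backed by the Rook RBD provisioner, and a PVC that consumes it. The names mirror the defaults in the Rook example manifests (replicapool, rook-ceph-block) but are otherwise illustrative:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    # Keep three copies of each object, one per host.
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  imageFormat: "2"
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Applying these resources with kubectl create -f provisions a 1 GiB RBD-backed volume that a pod can then mount through the test-pvc claim.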