Preface

1. Docker deployment
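
A minimal single-node sketch, assuming cephadm (the upstream container-based deployment tool, which drives Docker or Podman under the hood). The IP address is a placeholder for your host.

# Install cephadm on the host (package name varies by distro), then
# bootstrap a one-node cluster; cephadm pulls the Ceph container image
# and starts the mon/mgr containers via the local container engine.
cephadm bootstrap --mon-ip 192.168.1.10

# Verify the Ceph containers are running
docker ps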

2. Kubernetes deployment
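
The crash entries in 3.4 show rook-ceph-* pod names, so this cluster runs on Rook. A minimal sketch of the standard Rook install, assuming the example manifests shipped in the rook repo (the version tag is a placeholder; pin it to the release you want):

git clone --single-branch --branch v1.12.3 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml

# Toolbox pod for running the ceph CLI -- this is where the
# sh-4.4$ prompts in the sections below come from
kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash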

3. Common commands

3.1 ceph -s

sh-4.4$ ceph -s
  cluster:
    id:     45492291-00c5-4f7d-a691-96774a85a6d0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum c,f,gu (age 2w)
    mgr: a(active, since 2w)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 9w), 3 in (since 9w)

  data:
    volumes: 1/1 healthy
    pools:   5 pools, 225 pgs
    objects: 120.47k objects, 198 GiB
    usage:   595 GiB used, 305 GiB / 900 GiB avail
    pgs:     225 active+clean

  io:
    client:   173 MiB/s rd, 5.1 MiB/s wr, 2.49k op/s rd, 249 op/s wr
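
For continuous monitoring rather than a one-shot snapshot, the same CLI offers:

# Stream cluster status and health events as they happen
ceph -w

# Machine-readable status for scripts and monitoring
ceph -s --format json-pretty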

3.2 ceph df

sh-4.4$ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    900 GiB  306 GiB  594 GiB   594 GiB      66.03
TOTAL  900 GiB  306 GiB  594 GiB   594 GiB      66.03

--- POOLS ---
POOL                   ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
device_health_metrics   1    1  1.1 MiB        3  3.3 MiB      0     87 GiB
.nfs                    2   32     16 B       13   12 KiB      0     87 GiB
bxrbd                   3  128  185 GiB   48.51k  556 GiB  68.16     87 GiB
bxfs-metadata           4   32  275 MiB   14.16k  825 MiB   0.31     87 GiB
bxfs-replicated         5   32   11 GiB   57.79k   34 GiB  11.67     87 GiB
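
Note that USED is roughly STORED times the pool's replica count: bxrbd stores 185 GiB and consumes 556 GiB raw, consistent with 3-way replication. A pool's replica count can be confirmed with (the size: 3 shown as a comment is inferred from the numbers above):

# Query the replication factor of a pool
ceph osd pool get bxrbd size
# expected output: size: 3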

3.3 ceph osd df

sh-4.4$ ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
 2    hdd  0.29300   1.00000  300 GiB  198 GiB  197 GiB   39 MiB  1.2 GiB  102 GiB  66.06  1.00  225      up
 1    hdd  0.29300   1.00000  300 GiB  199 GiB  197 GiB   39 MiB  1.5 GiB  101 GiB  66.17  1.00  225      up
 0    hdd  0.29300   1.00000  300 GiB  198 GiB  197 GiB   39 MiB  717 MiB  102 GiB  65.89  1.00  225      up
                       TOTAL  900 GiB  594 GiB  591 GiB  118 MiB  3.4 GiB  306 GiB  66.04
MIN/MAX VAR: 1.00/1.00  STDDEV: 0.11
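
To see the same per-OSD usage grouped by the CRUSH hierarchy, use the tree variant:

# Per-OSD usage organized by CRUSH tree (root, hosts, ...)
ceph osd df tree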

3.4 ceph crash ls

sh-4.4$ ceph crash ls
ID                                                                ENTITY  NEW
2025-06-27T13:05:27.160559Z_f114aa04-d7fe-4e6f-8a02-4a6b17469e3b  osd.0    *
2025-06-27T13:05:27.886837Z_b95da4d9-c92e-4666-8062-750dffdec27d  osd.0    *
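
The * in the NEW column marks crashes that have not been acknowledged yet; to list only those:

# List only crashes that have not been archived yet
ceph crash ls-new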

3.5 ceph crash info

ceph crash info 2025-06-27T13:05:27.160559Z_f114aa04-d7fe-4e6f-8a02-4a6b17469e3b
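
Once a crash has been reviewed, archive it so it no longer counts toward the RECENT_CRASH health warning:

# Acknowledge a single crash by ID
ceph crash archive 2025-06-27T13:05:27.160559Z_f114aa04-d7fe-4e6f-8a02-4a6b17469e3b

# Or acknowledge all crashes at once
ceph crash archive-all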

3.6 ceph health detail

sh-4.4$ ceph health detail
HEALTH_WARN 2 daemons have recently crashed
[WRN] RECENT_CRASH: 2 daemons have recently crashed
    osd.0 crashed on host rook-ceph-osd-0-7b8d55778d-h9vqj at 2025-06-27T13:05:27.160559Z
    osd.0 crashed on host rook-ceph-osd-0-7b8d55778d-h9vqj at 2025-06-27T13:05:27.886837Z
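
This HEALTH_WARN clears once the crashes are archived (see 3.5). If the cluster needs to report HEALTH_OK in the meantime, a warning code can also be muted temporarily:

# Mute the RECENT_CRASH warning for one week
ceph health mute RECENT_CRASH 1w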

4. Troubleshooting

4.1 Ceph cluster health is not OK: HEALTH_WARN mon o is low on available space

https://forum.proxmox.com/threads/solved-mon-is-low-on-available-space.134086/
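
Per the thread above, this warning fires when the filesystem holding the mon's data falls below the mon_data_avail_warn threshold (30% free by default). A sketch of the usual remedies; the path is the upstream default (under Rook the host path is /var/lib/rook), and the mon name o comes from the warning itself:

# Check free space on the filesystem holding the mon's data
df -h /var/lib/ceph/mon

# Compact the monitor's store to reclaim space
ceph tell mon.o compact

# If the host is simply small, lower the warning threshold (default 30)
ceph config set mon mon_data_avail_warn 20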