Onsite at a customer, a 36-bay OSD node went down in their 500TB cluster built with 4TB HDDs. When it came back online, the Ceph cluster started to recover and rebalance.
The problem was that recovery was dead slow. 78 Mb/s is not much when you have a 500TB cluster. So what to do?
There are several settings within Ceph you can adjust. Here are the two that worked for me.
osd max backfills:
Description: The maximum number of backfills allowed to or from a single OSD.
Default value: 1
I set it to 8, and recovery went up to 350 Mb/s. Set to 16, recovery reached 700 Mb/s, but clients were also affected, so 8 was the more moderate setting.
ceph tell 'osd.*' injectargs '--osd-max-backfills 8'
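While tuning, it helps to watch recovery throughput and confirm the new value actually landed. A minimal sketch, assuming a working `ceph` admin client; the exact status output format varies by Ceph release, and `ceph daemon` must be run on the host where that OSD lives:

```shell
# Watch recovery/rebalance progress and throughput every 5 seconds
watch -n 5 'ceph -s'

# Verify the runtime value on one OSD (run on the node hosting osd.0)
ceph daemon osd.0 config get osd_max_backfills
```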
osd recovery max active:
Description: The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but they place an increased load on the cluster.
Default value: 3
I set it up a notch to 4.
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
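One caveat worth noting: `injectargs` changes are runtime-only and are lost when an OSD restarts. A sketch of how to persist the values, assuming a release with the centralized config store (Mimic or later); on older releases the `ceph.conf` route applies instead:

```shell
# Persist via the monitor config database (Mimic and later)
ceph config set osd osd_max_backfills 8
ceph config set osd osd_recovery_max_active 4

# On older releases, put them in the [osd] section of ceph.conf instead:
# [osd]
# osd max backfills = 8
# osd recovery max active = 4

# Once recovery finishes, consider reverting to the defaults
# to keep client traffic unaffected during normal operation:
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 3'
```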