Manual Deployment on FreeBSD

This is largely a copy of the regular Manual Deployment, with FreeBSD specifics. The difference lies in two parts: the underlying disk format, and the way to use the tools.
All Ceph clusters require at least one monitor, and at least as many OSDs as copies of an object stored on the cluster. Bootstrapping the initial monitor(s) is the first step in deploying a Ceph Storage Cluster. Monitor deployment also sets important criteria for the entire cluster, such as the number of replicas for pools, the number of placement groups per OSD, the heartbeat intervals, whether authentication is required, etc. Most of these values are set by default, so it's useful to know about them when setting up your cluster for production.
Following the same configuration as Installation (Quick), we will set up a cluster with node1 as the monitor node, and node2 and node3 as OSD nodes.
Current implementation works on ZFS pools
All Ceph data is created in /var/lib/ceph
Log files go into /var/log/ceph
PID files go into /var/run
One ZFS pool is allocated per OSD, like:
gpart create -s GPT ada1
gpart add -t freebsd-zfs -l osd1 ada1
zpool create -m /var/lib/ceph/osd/osd.1 osd1 gpt/osd1
Some cache and log (ZIL) can be attached. Please note that this is different from the Ceph journals. Cache and log are totally transparent to Ceph, and help the filesystem to keep the system consistent and help performance. Assuming that ada2 is an SSD:
gpart create -s GPT ada2
gpart add -t freebsd-zfs -l osd1-log -s 1G ada2
zpool add osd1 log gpt/osd1-log
gpart add -t freebsd-zfs -l osd1-cache -s 10G ada2
zpool add osd1 cache gpt/osd1-cache
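A quick way to confirm the resulting layout is to inspect the pool (assuming the pool is named osd1 as above):
zpool status osd1     # shows the data vdev plus the attached log and cache devices
zpool list osd1       # shows pool size and free space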
Note: UFS2 does not allow large xattribs
As per FreeBSD defaults, parts of extra software go into /usr/local/. This means that for /etc/ceph.conf the default location is /usr/local/etc/ceph/ceph.conf. The smartest thing to do is to create a softlink from /etc/ceph to /usr/local/etc/ceph:

ln -s /usr/local/etc/ceph /etc/ceph
A sample file is provided in /usr/local/share/doc/ceph/sample.ceph.conf. Note that /usr/local/etc/ceph/ceph.conf will be found by most tools; linking it to /etc/ceph/ceph.conf will also help with any scripts found in extra tools, scripts, and/or discussion lists.
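If you would rather start from the shipped sample than from an empty file, one way (using the sample path above) is:
cp /usr/local/share/doc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf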
Bootstrapping a monitor (a Ceph Storage Cluster, in theory) requires a number of things:
Unique Identifier: The fsid is a unique identifier for the cluster, and stands for File System ID from the days when the Ceph Storage Cluster was principally for the Ceph Filesystem. Ceph now supports native interfaces, block devices, and object storage gateway interfaces too, so fsid is a bit of a misnomer.
Cluster Name: Ceph clusters have a cluster name, which is a simple string without spaces. The default cluster name is ceph, but you may specify a different cluster name. Overriding the default cluster name is especially useful when you are working with multiple clusters and you need to clearly understand which cluster you are working with.
For example, when you run multiple clusters in a federated architecture, the cluster name (e.g., us-west, us-east) identifies the cluster for the current CLI session. Note: To identify the cluster name on the command line interface, specify a Ceph configuration file with the cluster name (e.g., ceph.conf, us-west.conf, us-east.conf, etc.). Also see CLI usage (ceph --cluster {cluster-name}).
Monitor Name: Each monitor instance within a cluster has a unique name. In common practice, the Ceph Monitor name is the hostname (we recommend one Ceph Monitor per host, and no commingling of Ceph OSD Daemons with Ceph Monitors). You may retrieve the short hostname with hostname -s.
Monitor Map: Bootstrapping the initial monitor(s) requires you to generate a monitor map. The monitor map requires the fsid, the cluster name (or uses the default), and at least one hostname and its IP address.
Monitor Keyring: Monitors communicate with each other via a secret key. You must generate a keyring with a monitor secret and provide it when bootstrapping the initial monitor(s).
Administrator Keyring: To use the ceph CLI tools, you must have a client.admin user. So you must generate the admin user and keyring, and you must also add the client.admin user to the monitor keyring.
The foregoing requirements do not imply the creation of a Ceph configuration file. However, as a best practice, we recommend creating a Ceph configuration file and populating it with the fsid, the mon initial members and the mon host settings.
You can get and set all of the monitor settings at runtime as well. However, a Ceph configuration file may contain only those settings that override the default values. When you add settings to a Ceph configuration file, these settings override the default settings. Maintaining those settings in a Ceph configuration file makes it easier to maintain your cluster.
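Once a monitor is up, the runtime get/set mentioned above can be illustrated roughly like this (a sketch, assuming a monitor named mon.node1 and a working client.admin keyring):
sudo ceph daemon mon.node1 config get mon_osd_full_ratio          # read the value currently in effect
sudo ceph tell mon.node1 injectargs '--mon-osd-full-ratio 0.90'   # change it at runtime without editing ceph.conf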
The procedure is as follows:
1. Log in to the initial monitor node(s):
ssh {hostname}
For example:
ssh node1
2. Ensure you have a directory for the Ceph configuration file. By default, Ceph uses /etc/ceph. When you install ceph, the installer will create the /etc/ceph directory automatically.
ls /etc/ceph
Note: Deployment tools may remove this directory when purging a cluster (e.g., ceph-deploy purgedata {node-name}, ceph-deploy purge {node-name}).
3. Create a Ceph configuration file. By default, Ceph uses ceph.conf, where ceph reflects the cluster name.
sudo vim /etc/ceph/ceph.conf
4. Generate a unique ID (i.e., fsid) for your cluster.
uuidgen
5. Add the unique ID to your Ceph configuration file.
fsid = {UUID}
For example:
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
6. Add the initial monitor(s) to your Ceph configuration file.
mon initial members = {hostname}[,{hostname}]
For example:
mon initial members = node1
7. Add the IP address(es) of the initial monitor(s) to your Ceph configuration file and save the file.
mon host = {ip-address}[,{ip-address}]
For example:
mon host = 192.168.0.1
Note: You may use IPv6 addresses instead of IPv4 addresses, but you must set ms bind ipv6 to true. See Network Configuration Reference for details about network configuration.
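For example, an IPv6 variant of the same settings might look like this (a sketch; 2001:db8::1 is just a documentation-prefix placeholder):
mon host = [2001:db8::1]
ms bind ipv6 = true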
8. Create a keyring for your cluster and generate a monitor secret key.
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
9. Generate an administrator keyring, generate a client.admin user and add the user to the keyring.
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
10. Add the client.admin key to the ceph.mon.keyring.
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
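Before building the monitor map it can be worth confirming that both keys made it into the combined keyring (using the path from the steps above):
ceph-authtool /tmp/ceph.mon.keyring --list     # should list entries for mon. and client.admin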
11. Generate a monitor map using the hostname(s), host IP address(es) and the FSID. Save it as /tmp/monmap:
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
For example:
monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
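You can inspect the resulting map before handing it to ceph-mon:
monmaptool --print /tmp/monmap     # prints the fsid and the monitor address just added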
12. Create a default data directory (or directories) on the monitor host(s).
sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
For example:
sudo mkdir /var/lib/ceph/mon/ceph-node1
See Monitor Config Reference - Data for details.
13. Populate the monitor daemon(s) with the monitor map and keyring.
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
For example:
sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
14. Consider settings for a Ceph configuration file. Common settings include the following:
[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]
public network = {network}[, {network}]
cluster network = {network}[, {network}]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = {n}
osd pool default size = {n}       # Write an object n times.
osd pool default min size = {n}   # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}
osd crush chooseleaf type = {n}
In the foregoing example, the [global] section of the configuration might look like this:
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.1
public network = 192.168.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
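The pg num / pgp num figures above are only example values. A commonly used rule of thumb is roughly (100 x number of OSDs) / pool size, rounded up to the next power of two; treat the arithmetic below as a sketch, not a setting taken from this guide:
# e.g. with the two OSDs and pool size 3 used in this walkthrough:
#   (100 * 2) / 3 = 66.7  ->  rounded up to the next power of two = 128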
15. Touch the done file.
Mark that the monitor is created and ready to be started:
sudo touch /var/lib/ceph/mon/ceph-node1/done
16. And for FreeBSD an entry for every monitor needs to be added to the config file. (This requirement will be removed in future releases.)
The entry should look like:
[mon]
[mon.node1]
host = node1        # this name can be resolved
17. Start the monitor(s).
For Ubuntu, use Upstart:
sudo start ceph-mon id=node1 [cluster={cluster-name}]
In this case, to allow the start of the daemon at each reboot you must create two empty files like this:
sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart
For example:
sudo touch /var/lib/ceph/mon/ceph-node1/upstart
For Debian/CentOS/RHEL, use sysvinit:
sudo /etc/init.d/ceph start mon.node1
For FreeBSD we use the rc.d init scripts (called bsdrc in Ceph):
sudo service ceph start mon.node1
For this to work /etc/rc.conf also needs the entry to enable ceph:
echo 'ceph_enable="YES"' >> /etc/rc.conf
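Appending to /etc/rc.conf with echo works, but sysrc is the more idiomatic FreeBSD tool for the same job:
sudo sysrc ceph_enable="YES"     # adds (or updates) ceph_enable="YES" in /etc/rc.conf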
18. Verify that Ceph created the default pools.
ceph osd lspools
You should see output like this:
0 data, 1 metadata, 2 rbd,
19. Verify that the monitor is running.
ceph -s
You should see output that the monitor you started is up and running, and you should see a health error indicating that placement groups are stuck inactive. It should look something like this:
cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
  health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
  monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
  osdmap e1: 0 osds: 0 up, 0 in
  pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
    0 kB used, 0 kB / 0 kB avail
    192 creating
Note: Once you add OSDs and start them, the placement group health errors should disappear. See the next section for details.
Once you have your initial monitor(s) running, you should add OSDs. Your cluster cannot reach an active + clean state until you have enough OSDs to handle the number of copies of an object (e.g., osd pool default size = 2 requires at least two OSDs). After bootstrapping your monitor, your cluster has a default CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to a Ceph Node.
Ceph provides the ceph-disk utility, which can prepare a disk, partition or directory for use with Ceph. The ceph-disk utility creates the OSD ID by incrementing the index. Additionally, ceph-disk will add the new OSD to the CRUSH map under the host for you. Execute ceph-disk -h for CLI details. The ceph-disk utility automates the steps of the Long Form below. To create the first two OSDs with the short form procedure, execute the following on node2 and node3:
1. Prepare the OSD.
On FreeBSD only existing directories can be used to create OSDs in:
ssh {node-name}
sudo ceph-disk prepare --cluster {cluster-name} --cluster-uuid {uuid} {path-to-ceph-osd-directory}
For example:
ssh node1
sudo ceph-disk prepare --cluster ceph --cluster-uuid a7f64266-0894-4f1e-a635-d0aeaca0e993 /var/lib/ceph/osd/osd.1
2. Activate the OSD:
sudo ceph-disk activate {data-path} [--activate-key {path}]
For example:
sudo ceph-disk activate /var/lib/ceph/osd/osd.1
Note: Use the --activate-key argument if you do not have a copy of /var/lib/ceph/bootstrap-osd/{cluster}.keyring on the Ceph Node.
FreeBSD does not autostart the OSDs, and also requires an entry in ceph.conf, one for each OSD:
[osd]
[osd.1]
host = node1        # this name can be resolved
Without the benefit of any helper utilities, create an OSD and add it to the cluster and CRUSH map with the following procedure. To create the first two OSDs with the long form procedure, execute the following on node2 and node3:
1. Connect to the OSD host.
ssh {node-name}
2. Generate a UUID for the OSD.
uuidgen
3. Create the OSD. If no UUID is given, it will be set automatically when the OSD starts up. The following command will output the OSD number, which you will need for subsequent steps.
ceph osd create [{uuid} [{id}]]
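Since the command prints the new OSD number, it can be handy to capture it for the following steps (a sketch; OSD_UUID and OSD_ID are just local shell variables):
OSD_UUID=$(uuidgen)
OSD_ID=$(ceph osd create ${OSD_UUID})
echo "created osd.${OSD_ID}"     # substitute ${OSD_ID} for {osd-num} in the steps below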
4. Create the default directory on your new OSD.
ssh {new-osd-host}
sudo mkdir /var/lib/ceph/osd/{cluster-name}-{osd-number}
Above are the ZFS instructions to do this for FreeBSD.
5. If the OSD is for a drive other than the OS drive, prepare it for use with Ceph, and mount it to the directory you just created.
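On FreeBSD with ZFS this amounts to the gpart/zpool commands from the disk layout section at the top, with the pool mounted on the directory from step 4. A condensed sketch for a hypothetical fresh disk ada1 and OSD number 0:
gpart create -s GPT ada1
gpart add -t freebsd-zfs -l osd0 ada1
zpool create -m /var/lib/ceph/osd/ceph-0 osd0 gpt/osd0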
6. Initialize the OSD data directory.
ssh {new-osd-host}
sudo ceph-osd -i {osd-num} --mkfs --mkkey --osd-uuid [{uuid}]
The directory must be empty before you can run ceph-osd with the --mkkey option. In addition, the ceph-osd tool requires specification of custom cluster names with the --cluster option.
7. Register the OSD authentication key. The value of ceph for ceph-{osd-num} in the path is the $cluster-$id. If your cluster name differs from ceph, use your cluster name instead:
sudo ceph auth add osd.{osd-num} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/{cluster-name}-{osd-num}/keyring
8. Add your Ceph Node to the CRUSH map.
ceph [--cluster {cluster-name}] osd crush add-bucket {hostname} host
For example:
ceph osd crush add-bucket node1 host
9. Place the Ceph Node under the root default.
ceph osd crush move node1 root=default
10. Add the OSD to the CRUSH map so that it can begin receiving data. You may also decompile the CRUSH map, add the OSD to the device list, add the host as a bucket (if it's not already in the CRUSH map), add the device as an item in the host, assign it a weight, recompile it and set it.
ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
For example:
ceph osd crush add osd.0 1.0 host=node1
11. After you add an OSD to Ceph, the OSD is in your configuration. However, it is not yet running. The OSD is down and in. You must start your new OSD before it can begin receiving data.
For Ubuntu, use Upstart:
sudo start ceph-osd id={osd-num} [cluster={cluster-name}]
For example:
sudo start ceph-osd id=0
sudo start ceph-osd id=1
For Debian/CentOS/RHEL, use sysvinit:
sudo /etc/init.d/ceph start osd.{osd-num} [--cluster {cluster-name}]
For example:
sudo /etc/init.d/ceph start osd.0
sudo /etc/init.d/ceph start osd.1
In this case, to allow the start of the daemon at each reboot you must create an empty file like this:
sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/sysvinit
For example:
sudo touch /var/lib/ceph/osd/ceph-0/sysvinit
sudo touch /var/lib/ceph/osd/ceph-1/sysvinit
Once you start your OSD, it is up and in.
For FreeBSD, use the rc.d init scripts.
After adding the OSD to ceph.conf:
sudo service ceph start osd.{osd-num}
For example:
sudo service ceph start osd.0
sudo service ceph start osd.1
In this case, to allow the start of the daemon at each reboot you must create an empty file like this:
sudo touch /var/lib/ceph/osd/{cluster-name}-{osd-num}/bsdrc
For example:
sudo touch /var/lib/ceph/osd/ceph-0/bsdrc
sudo touch /var/lib/ceph/osd/ceph-1/bsdrc
Once you start your OSD, it is up and in.
In the below instructions, {id} is an arbitrary name, such as the hostname of the machine.
1. Create the mds data directory:
mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
2. Create a keyring:
ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}
3. Import the keyring and set caps:
ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster-name}-{id}/keyring
4. Add to ceph.conf:
[mds.{id}]
host = {id}
5. Start the daemon the manual way:
ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
6. Start the daemon the right way (using the ceph.conf entry):
service ceph start
7. If starting the daemon fails with this error:
mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument
Then make sure you do not have a keyring set in ceph.conf in the global section; move it to the client section, or add a keyring setting specific to this mds daemon. And verify that you see the same key in the mds data directory and in the ceph auth get mds.{id} output.
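One way to do that comparison (assuming the default cluster name ceph):
sudo cat /var/lib/ceph/mds/ceph-{id}/keyring     # key stored in the mds data directory
ceph auth get mds.{id}                           # key registered with the cluster; the two must match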
8. Now you are ready to create a Ceph filesystem.
Once you have your monitor and two OSDs up and running, you can watch the placement groups peer by executing the following:
ceph -w
To view the tree, execute the following:
ceph osd tree

You should see output that looks something like this:
# id    weight  type name       up/down reweight
-1      2       root default
-2      2               host node1
0       1                       osd.0   up      1
-3      1               host node2
1       1                       osd.1   up      1
To add (or remove) additional monitors, see Add/Remove Monitors. To add (or remove) additional Ceph OSD Daemons, see Add/Remove OSDs.