Enterprise Private Cloud: Installing and Using Ceph Shared Storage on CentOS 7


Storage is a key component of any cloud infrastructure, so this article describes how I use Ceph for it.

The cloud software is OpenStack (Mitaka release), the operating system is CentOS 7.1, and the Ceph version is 10.2.2.

Ceph was chosen because it is free, open source, and widely supported, and because most cloud storage deployments use it.

This article also draws on http://www.vpsee.com/2015/07/install-ceph-on-centos-7/

Contents

I. Installing Ceph

II. Using the Ceph cluster in OpenStack

III. Using Ceph with Glance

IV. Removing an OSD node

V. Using mixed disks with Ceph

The installation steps follow.

You can also refer to the official guide: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

I. Installing Ceph

Host environment

One admin node, three monitor nodes, and three OSD nodes, with a replication factor of 2.

The following hosts entries are configured on every host:

10.10.128.18 ck-ceph-adm
10.10.128.19 ck-ceph-mon1
10.10.128.20 ck-ceph-mon2
10.10.128.21 ck-ceph-mon3
10.10.128.22 ck-ceph-osd1
10.10.128.23 ck-ceph-osd2
10.10.128.24 ck-ceph-osd3

Some tuning should also be applied to the mon and osd nodes:

# Pin device names
ll /sys/block/sd*|awk '{print $NF}'|sed 's/..//'|awk -F '/' '{print "DEVPATH==\""$0"\", NAME=\""$NF"\", MODE=\"0660\""}' > /etc/udev/rules.d/90-ceph-disk.rules
# Disable CPU power saving
for CPUFREQ in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do [ -f $CPUFREQ ] || continue; echo -n performance > $CPUFREQ; done
# Raise the pid limit
echo "kernel.pid_max = 4194303" | tee -a /etc/sysctl.conf
# Raise the maximum number of open files
echo "fs.file-max = 26234859" | tee -a /etc/sysctl.conf
# Increase sequential read-ahead
for READ_KB in /sys/block/sd*/queue/read_ahead_kb; do [ -f $READ_KB ] || continue; echo 8192 > $READ_KB; done
# Enlarge the I/O request queue
for REQUEST in /sys/block/sd*/queue/nr_requests; do [ -f $REQUEST ] || continue; echo 20480 > $REQUEST; done
# Set the I/O scheduler
for SCHEDULER in /sys/block/sd*/queue/scheduler; do [ -f $SCHEDULER ] || continue; echo deadline > $SCHEDULER; done
# Avoid swapping
echo "vm.swappiness = 0" | tee -a /etc/sysctl.conf

Each host's hostname should also be set to match its entry in the hosts file.

1. Create a user

useradd -m ceph-admin
su - ceph-admin
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cat << EOF > ~/.ssh/config
Host *
    Port 50020
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
chmod 600 ~/.ssh/config

Set up passwordless SSH:

ssh-keygen -t rsa -b 2048

then just press Enter at every prompt.

Copy id_rsa.pub into /home/ceph-admin/.ssh/authorized_keys on every other node, then:

chmod 600 .ssh/authorized_keys
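The copy step above can be scripted. A minimal sketch that only prints an ssh-copy-id command per node for review (hostnames taken from the hosts file above, port from the ~/.ssh/config; ssh-copy-id availability is an assumption):

```shell
# Print (not run) the key-distribution commands; pipe to sh to execute after review.
nodes="ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3"
for n in $nodes; do
    echo "ssh-copy-id -p 50020 ceph-admin@${n}"
done
```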

Then grant the ceph-admin user sudo privileges.

Edit /etc/sudoers and add:

ceph-admin ALL=(root) NOPASSWD:ALL

In the same file, disable the line

Defaults requiretty

by putting a # in front of it.

Format the disks on the OSD servers.

For a quick test you can use plain directories; for production, format the raw devices directly.

cat auto_parted.sh
#!/bin/bash
name="b c d e f g h i"
for i in ${name}; do
    echo "Creating partitions on /dev/sd${i} ..."
    parted -a optimal --script /dev/sd${i} -- mktable gpt
    parted -a optimal --script /dev/sd${i} -- mkpart primary xfs 0% 100%
    sleep 1
    mkfs.xfs -f /dev/sd${i}1 &
done

Then run the script.

2. Install EPEL (all nodes)

yum -y install epel-release

3. Install the Ceph repository (all nodes; do this if you are not installing via ceph-deploy, otherwise ceph-deploy installs it for you)

yum -y install yum-plugin-priorities
rpm --import https://download.ceph.com/keys/release.asc
rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
cd /etc/yum.repos.d/
sed -i 's@download.ceph.com@mirrors.ustc.edu.cn/ceph@' ceph.repo
yum -y install ceph ceph-radosgw

4. Configure the admin node

Install the deployment tool:

yum install ceph-deploy -y

Initialize the cluster:

su - ceph-admin
mkdir ck-ceph-cluster
cd ck-ceph-cluster
ceph-deploy new ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3

List as many mon nodes as you have.

Configuration:

echo "osd pool default size = 2" >> ceph.conf
echo "osd pool default min size = 2" >> ceph.conf
echo "public network = 10.10.0.0/16" >> ceph.conf
echo "cluster network = 172.16.0.0/16" >> ceph.conf

Note: if the hosts have multiple NICs, it is best to separate the public and cluster networks. The cluster network carries replication and recovery traffic between OSDs, while the public network serves monitors and clients.

Install Ceph on all nodes (do this if you want ceph-deploy to handle installation; skip it if you already did step 3):

ceph-deploy install ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3

Initialize the monitor nodes:

ceph-deploy mon create-initial

Initialize the data disks on the OSD nodes:

ceph-deploy disk zap ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy osd create ck-ceph-osd1:sdb ck-ceph-osd1:sdc ck-ceph-osd1:sdd ck-ceph-osd1:sde ck-ceph-osd1:sdf ck-ceph-osd1:sdg ck-ceph-osd1:sdh ck-ceph-osd1:sdi
ceph-deploy disk zap ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy osd create ck-ceph-osd2:sdb ck-ceph-osd2:sdc ck-ceph-osd2:sdd ck-ceph-osd2:sde ck-ceph-osd2:sdf ck-ceph-osd2:sdg ck-ceph-osd2:sdh ck-ceph-osd2:sdi
ceph-deploy disk zap ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
ceph-deploy osd create ck-ceph-osd3:sdb ck-ceph-osd3:sdc ck-ceph-osd3:sdd ck-ceph-osd3:sde ck-ceph-osd3:sdf ck-ceph-osd3:sdg ck-ceph-osd3:sdh ck-ceph-osd3:sdi
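Typing all 24 host:disk pairs by hand is error-prone. A small sketch that generates the same command lines from the host and disk lists, printing them so they can be reviewed before execution:

```shell
# Build the ceph-deploy argument lists instead of typing each osdN:sdX pair.
osd_hosts="ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3"
data_disks="sdb sdc sdd sde sdf sdg sdh sdi"
for h in $osd_hosts; do
    args=""
    for d in $data_disks; do
        args="$args ${h}:${d}"
    done
    echo "ceph-deploy disk zap$args"
    echo "ceph-deploy osd create$args"
done
```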

Push the configuration to every node:

ceph-deploy admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
ceph-deploy --overwrite-conf admin ck-ceph-adm ck-ceph-mon1 ck-ceph-mon2 ck-ceph-mon3 ck-ceph-osd1 ck-ceph-osd2 ck-ceph-osd3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Fix the permissions on /etc/ceph on every node:

sudo chown -R ceph:ceph /etc/ceph

Check the cluster status:

$ ceph -s
    cluster 2aafe304-2dd1-48be-a0fa-cb9c911c7c3b
     health HEALTH_OK
     monmap e1: 3 mons at {ck-ceph-mon1=10.10.128.19:6789/0,ck-ceph-mon2=10.10.128.20:6789/0,ck-ceph-mon3=10.10.128.21:6789/0}
            election epoch 6, quorum 0,1,2 ck-ceph-mon1,ck-ceph-mon2,ck-ceph-mon3
     osdmap e279: 40 osds: 40 up, 40 in
            flags sortbitwise
      pgmap v96866: 2112 pgs, 3 pools, 58017 MB data, 13673 objects
            115 GB used, 21427 GB / 21543 GB avail
                2112 active+clean

II. Using the Ceph cluster in OpenStack

See the official guide: http://docs.ceph.com/docs/master/rbd/rbd-openstack/

1. Create a pool

ceph osd pool create volumes 1024 1024

For the pg_num and pgp_num values (1024 here), see http://docs.ceph.com/docs/master/rados/operations/placement-groups/
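As a rule of thumb from that page, the total PG count for a cluster is roughly (number of OSDs × 100) / replica count, rounded up to the next power of two, then divided among the pools. A quick sketch for the 40-OSD, 2-replica cluster described above:

```shell
# Rule-of-thumb total PG budget: (OSDs * 100) / replicas, rounded up to a power of two.
osds=40
replicas=2
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"    # total PGs to spread across the cluster's pools
```

For this cluster that gives 2048, which is in line with the 2112 PGs across 3 pools shown in the ceph -s output earlier.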

2. Install the Ceph client tools

Install them on every cinder node and every compute node:

rpm -Uvh --replacepkgs http://mirrors.ustc.edu.cn/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
yum install ceph-common

3. Sync the configuration

Copy /etc/ceph/ceph.conf from the admin node to the cinder and compute nodes.

4. Authentication (on the ceph admin node)

Allow the cinder user to access Ceph:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'

5. Distribute the keys to the nodes (admin node)

ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring

6. Key management (admin node)

ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key

Add the key to libvirt:

Generate a UUID:

uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

Log in to the compute node and substitute the UUID generated above:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml

Restart the service:

systemctl restart openstack-nova-compute.service

Use virsh secret-list to confirm the secret exists.

If this is not done on every compute node, attaching a volume to an instance will fail with errors like the following in /var/log/nova/nova-compute.log:

2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher     rv = meth(*args, **kwargs)
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 554, in attachDeviceFlags
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2016-06-02 11:58:11.193 3004 ERROR oslo_messaging.rpc.dispatcher libvirtError: Secret not found: rbd no secret matches uuid '9c0e4528-bd0f-4fe8-a3cd-7b1b9bb21d63'

7. Configure cinder (on the cinder nodes)

Edit /etc/cinder/cinder.conf:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

and also set:

enabled_backends = ceph

Restart the services:

systemctl restart openstack-cinder-volume.service target.service

III. Using Ceph with Glance

1. Create a pool (on the ceph admin node)

ceph osd pool create images 128

2. Set permissions (on the ceph admin node)

ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

3. Install the Ceph client on the glance host

yum install ceph-common

4. Copy the Ceph configuration to the glance node

Sync /etc/ceph/ceph.conf from the admin node.

5. Configure authentication

ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

6. Configure glance

Edit /etc/glance/glance-api.conf:

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

7. Restart the services

systemctl restart openstack-glance-api.service openstack-glance-registry.service

8. Upload an image and verify

glance image-create --name centos64-test1 --disk-format qcow2 --container-format bare --visibility public --file /tmp/CentOS-6.4-x86_64.qcow2 --progress
# rados -p images ls
rbd_header.7eca70122ade
rbd_data.7eca70122ade.0000000000000000
rbd_directory
rbd_data.7eca70122ade.0000000000000001
rbd_data.7ee831dac577.0000000000000000
rbd_header.7ee831dac577
rbd_id.c7a81292-773f-457a-859c-2784d780544c
rbd_data.7ee831dac577.0000000000000001
rbd_data.7ee831dac577.0000000000000002
rbd_id.a5ae8722-698a-4a84-aa29-500144616001

IV. Removing an OSD node

1. Take it out of the cluster (run on the admin node)

ceph osd out 7    (in ceph osd tree, the REWEIGHT value drops to 0)

2. Stop the service (run on the target node)

systemctl stop ceph-osd@7    (in ceph osd tree, the status changes to DOWN)

3. Remove it from the CRUSH map

ceph osd crush remove osd.7

4. Delete its key

ceph auth del osd.7

5. Remove the OSD

ceph osd rm 7

6. Check whether the host still has other OSDs; if it does, go to step 7, otherwise also remove the host bucket:

ceph osd crush remove `hostname`

7. Update ceph.conf and sync it to the other nodes

vi /etc/ceph/ceph.conf

8. Delete the data directory

rm -rf /var/lib/ceph/osd/ceph-7
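The whole sequence can be captured in a small helper that prints the commands for a given OSD id so they can be reviewed before running (a sketch, not from the original article; it omits the conditional host-bucket removal and the ceph.conf edit in steps 6 and 7):

```shell
# Print the removal sequence for one OSD; review, then run line by line.
osd_id=7
cat <<EOF
ceph osd out ${osd_id}
systemctl stop ceph-osd@${osd_id}
ceph osd crush remove osd.${osd_id}
ceph auth del osd.${osd_id}
ceph osd rm ${osd_id}
rm -rf /var/lib/ceph/osd/ceph-${osd_id}
EOF
```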

V. Using mixed disks with Ceph

Below, 15k-rpm 600 GB SAS disks are mixed with 7.2k-rpm 4 TB SAS disks.

Status before the change:

$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    44436G     44434G        1844M             0
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0        22216G           0
$ rados df
pool name           KB    objects    clones    degraded    unfound    rd    rd KB    wr    wr KB
rbd                  0          0         0           0          0     0        0     0        0
  total used     1888268           0
  total avail    46592893204
  total space    46594781472

1. Fetch the current CRUSH map and decompile it

$ ceph osd getcrushmap -o default-crushmapdump
got crush map from osdmap epoch 238
$ crushtool -d default-crushmapdump -o default-crushmapdump-decompiled
$ cat default-crushmapdump-decompiled
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 device18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ck-ceph-osd1 {
    id -2        # do not change unnecessarily
    # weight 6.481
    alg straw
    hash 0       # rjenkins1
    item osd.0 weight 0.540
    item osd.1 weight 0.540
    item osd.2 weight 0.540
    item osd.3 weight 0.540
    item osd.4 weight 0.540
    item osd.5 weight 0.540
    item osd.6 weight 0.540
    item osd.7 weight 0.540
    item osd.8 weight 0.540
    item osd.9 weight 0.540
    item osd.10 weight 0.540
    item osd.11 weight 0.540
}
host ck-ceph-osd2 {
    id -3        # do not change unnecessarily
    # weight 8.641
    alg straw
    hash 0       # rjenkins1
    item osd.12 weight 0.540
    item osd.13 weight 0.540
    item osd.14 weight 0.540
    item osd.15 weight 0.540
    item osd.16 weight 0.540
    item osd.17 weight 0.540
    item osd.19 weight 0.540
    item osd.20 weight 0.540
    item osd.21 weight 0.540
    item osd.22 weight 0.540
    item osd.23 weight 0.540
    item osd.24 weight 0.540
    item osd.25 weight 0.540
    item osd.26 weight 0.540
    item osd.27 weight 0.540
    item osd.28 weight 0.540
}
host ck-ceph-osd3 {
    id -4        # do not change unnecessarily
    # weight 6.481
    alg straw
    hash 0       # rjenkins1
    item osd.29 weight 0.540
    item osd.30 weight 0.540
    item osd.31 weight 0.540
    item osd.32 weight 0.540
    item osd.33 weight 0.540
    item osd.34 weight 0.540
    item osd.35 weight 0.540
    item osd.36 weight 0.540
    item osd.37 weight 0.540
    item osd.38 weight 0.540
    item osd.39 weight 0.540
    item osd.40 weight 0.540
}
host ck-ceph-osd4 {
    id -5        # do not change unnecessarily
    # weight 21.789
    alg straw
    hash 0       # rjenkins1
    item osd.41 weight 3.631
    item osd.42 weight 3.631
    item osd.43 weight 3.631
    item osd.44 weight 3.631
    item osd.45 weight 3.631
    item osd.46 weight 3.631
}
root default {
    id -1        # do not change unnecessarily
    # weight 43.392
    alg straw
    hash 0       # rjenkins1
    item ck-ceph-osd1 weight 6.481
    item ck-ceph-osd2 weight 8.641
    item ck-ceph-osd3 weight 6.481
    item ck-ceph-osd4 weight 21.789
}
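The decompiled map above is where the original article breaks off. The usual continuation for a mixed-disk setup is to move the slow 4 TB host into its own root, add a CRUSH rule per tier, and then point each pool at the right rule. A hedged sketch of such additions (bucket id, rule number, and names here are illustrative, not from the article):

```
# Hypothetical additions to the decompiled map: a separate root for the
# 7.2k 4T host, and a rule that places replicas only under that root.
# "type osd" is used because this root holds a single host; with more
# slow hosts, use "type host" instead.
root sas-slow {
    id -6        # do not change unnecessarily
    alg straw
    hash 0       # rjenkins1
    item ck-ceph-osd4 weight 21.789
}

rule sas-slow-rule {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take sas-slow
    step chooseleaf firstn 0 type osd
    step emit
}
```

After editing, recompile and load the map with `crushtool -c default-crushmapdump-decompiled -o new-crushmap` followed by `ceph osd setcrushmap -i new-crushmap`, and bind a pool to the new rule with `ceph osd pool set <pool> crush_ruleset 1` (the crush_ruleset name applies to Jewel-era releases like the 10.2.2 used here).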
