Ceph-ansible Deployment


1. Deployment plan

Hostname             IP address      OS           Purpose / Notes
ansible-deployment   10.1.204.108    CentOS 7.5   Ansible deployment host (used as the deployment client)
node1                10.1.210.105    CentOS 7.5   One dedicated data disk added (vdb)
node2                10.1.210.106    CentOS 7.5   One dedicated data disk added (vdb)
node3                10.1.210.107    CentOS 7.5   One dedicated data disk added (vdb)
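
The node names above must be resolvable from the deployment host. If DNS does not already provide this, an /etc/hosts mapping like the following can be added on 10.1.204.108 (a minimal sketch using the IPs from the plan):

code

cat >> /etc/hosts <<'EOF'
10.1.210.105 node1
10.1.210.106 node2
10.1.210.107 node3
EOF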

 

2. Preflight (run on 10.1.204.108)

code

1. Download ceph-ansible

$ git clone https://github.com/ceph/ceph-ansible.git

 

2. Add the EPEL repo and install the Python requirements

yum install -y epel-release
yum install -y python-pip
cd ceph-ansible          # requirements.txt ships inside the cloned repository
pip install -r requirements.txt
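
A quick sanity check, assuming the steps above completed without errors: Ansible should now be on the PATH, and the sample files used in the later steps should be present in the repository.

code

ansible --version
ls site.yml.sample group_vars/all.yml.sample group_vars/osds.yml.sample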


3. Passwordless login (run on 10.1.204.108)

code

1. ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/ceph/.ssh/id_rsa):

Created directory '/home/ceph/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/ceph/.ssh/id_rsa.

Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:hbnRIFZUtgZb9xxhZ/hN5b0GsxESJYbOiwBx6lvCTXo ceph@node1

The key's randomart image is:

+---[RSA 2048]----+
|   ...oo=.==+o+o=|
|   .o. . @.oo+o=o|
|   ...  B =  +oo+|
|  o +.   B    = +|
|   = E. S .  . o |
|    =  . .    .  |
|   .             |
|                 |
|                 |
+----[SHA256]-----+

 

 

ssh-copy-id ceph@node1

ssh-copy-id ceph@node2

ssh-copy-id ceph@node3
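
A quick check that passwordless SSH now works from the deployment host (this assumes a ceph user exists on each node, as the key output above implies):

code

for h in node1 node2 node3; do
    ssh -o BatchMode=yes ceph@"$h" hostname
done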

 

4. Ceph inventory configuration

code

[root@node1 ceph-ansible]# cat hosts

### ceph

[mons]

node1

node2

node3

[osds]

node1

node2

node3

 

[mgrs]

node1

node2

node3

 

[mdss]

node1

node2

node3

 

[clients]

node1

node2

node3
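
With the inventory in place, connectivity can be verified before running any playbook. Run this from the ceph-ansible directory; add -u ceph or set ansible_user in the inventory if the remote user differs from the local one.

code

ansible -i hosts all -m ping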

 

5. Copy the configuration files

Copy the sample configuration files shipped with ceph-ansible:

cp group_vars/all.yml.sample group_vars/all.yml

cp group_vars/osds.yml.sample group_vars/osds.yml

cp site.yml.sample site.yml
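
The all.yml edited in the next step needs a cluster fsid. Generate one now with uuidgen (the value shown below is only an example):

code

uuidgen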

 

6. Configure group_vars/all.yml

group_vars/all.yml config

---

ceph_origin: repository

ceph_repository: community

ceph_mirror: http://mirrors.aliyun.com/ceph

ceph_stable_key: http://mirrors.aliyun.com/ceph/keys/release.asc

ceph_stable_release: luminous

ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"

 

fsid: 54d55c64-d458-4208-9592-36ce881cbcb7 ## generated with uuidgen

generate_fsid: false

 

cephx: true

 

public_network: 10.1.204.0/23

cluster_network: 10.1.204.0/23

monitor_interface: ens3

 

ceph_conf_overrides:

    global:

      rbd_default_features: 7

      auth cluster required: cephx

      auth service required: cephx

      auth client required: cephx

      osd journal size: 2048

      osd pool default size: 3

      osd pool default min size: 1

      mon_pg_warn_max_per_osd: 1024

      osd pool default pg num: 128

      osd pool default pgp num: 128

      max open files: 131072

      osd_deep_scrub_randomize_ratio: 0.01

 

    mgr:

      mgr modules: dashboard

 

    mon:

      mon_allow_pool_delete: true

 

    client:

      rbd_cache: true

      rbd_cache_size: 335544320

      rbd_cache_max_dirty: 134217728

      rbd_cache_max_dirty_age: 10

 

    osd:

      osd mkfs type: xfs

    # osd mount options xfs: "rw,noexec,nodev,noatime,nodiratime,nobarrier"

      ms_bind_port_max: 7100

      osd_client_message_size_cap: 2147483648

      osd_crush_update_on_start: true

      osd_deep_scrub_stride: 131072

      osd_disk_threads: 4

      osd_map_cache_bl_size: 128

      osd_max_object_name_len: 256

      osd_max_object_namespace_len: 64

      osd_max_write_size: 1024

      osd_op_threads: 8

 

      osd_recovery_op_priority: 1

      osd_recovery_max_active: 1

      osd_recovery_max_single_start: 1

      osd_recovery_max_chunk: 1048576

      osd_recovery_threads: 1

      osd_max_backfills: 4

      osd_scrub_begin_hour: 23

      osd_scrub_end_hour: 7
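
Because all.yml hard-codes monitor_interface: ens3 and the networks above, it is worth confirming that this interface name actually exists on the monitor nodes before deploying (a quick check using the inventory from step 4):

code

ansible -i hosts mons -m shell -a "ip -o -4 addr show ens3"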

7. Configure group_vars/osds.yml

group_vars/osds.yml config

---

devices:

  - /dev/vdb

osd_scenario: collocated

osd_objectstore: bluestore
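
Before running the playbook, confirm that /dev/vdb exists on every OSD node and carries no data you need, since the collocated bluestore scenario will consume the whole device:

code

ansible -i hosts osds -m shell -a "lsblk /dev/vdb"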

 

8. Modify site.yml (only the hosts list below is changed):

site.yml

---

# Defines deployment design and assigns role to server groups

 

- hosts:

  - mons

#  - agents

  - osds

  - mdss

#  - rgws

#  - nfss

#  - rbdmirrors

  - clients

  - mgrs

Install:

ansible-playbook -i hosts site.yml
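
Once the playbook completes, a quick health check can be run against the monitors (assuming the remote user can escalate privileges, which ceph-ansible generally requires anyway):

code

ansible -i hosts mons -m shell -a "ceph -s" --become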

If errors occur during installation, purge the cluster and then reinstall:

cp infrastructure-playbooks/purge-cluster.yml purge-cluster.yml  # must be copied to the project root directory

ansible-playbook -i hosts purge-cluster.yml
