# Installing Ceph

## Overview

ceph-nano is powered by Ceph and container technology: it runs a Ceph container internally and exposes a RADOS Gateway, giving developers an S3-compatible REST endpoint for working with the S3 API.

If you are not using MinIO for file storage, Ceph can serve as a replacement.
## Installing ceph-nano

### Prerequisites

Before installing ceph-nano, make sure Docker is installed on your server (Docker installation steps appear later in this page).
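Since Docker is the only prerequisite, a quick check up front can save a failed install later. A minimal sketch, assuming a POSIX shell (the `check_docker` helper name is made up here):

```bash
# Check that the docker CLI is on PATH before trying to install ceph-nano.
check_docker() {
  if command -v docker >/dev/null 2>&1; then
    echo "docker: installed"
  else
    echo "docker: missing"
    return 1
  fi
}

check_docker || echo "install Docker first (see the Docker section below)"
```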
### Installing cn

- Linux amd64

```bash
[root@localhost ~]# curl -L https://github.com/ceph/cn/releases/download/v2.3.1/cn-v2.3.1-linux-amd64 -o cn && chmod +x cn
```

- Linux arm64

```bash
[root@localhost ~]# curl -L https://github.com/ceph/cn/releases/download/v2.3.1/cn-v2.3.1-linux-arm64 -o cn && chmod +x cn
```
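The two download commands differ only in the architecture suffix, so a small sketch can pick the right one from `uname -m`. This assumes the v2.3.1 release URL pattern shown above; `cn_arch` is a hypothetical helper name:

```bash
# Map the kernel's machine name to the suffix used by cn release binaries.
cn_arch() {
  case "${1:-$(uname -m)}" in
    x86_64)        echo amd64 ;;
    aarch64|arm64) echo arm64 ;;
    *)             echo "unsupported architecture" >&2; return 1 ;;
  esac
}

# Build the download URL for this machine:
url="https://github.com/ceph/cn/releases/download/v2.3.1/cn-v2.3.1-linux-$(cn_arch)"
echo "$url"
# Then fetch it: curl -L "$url" -o cn && chmod +x cn
```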
### Verifying the installation

```bash
[root@localhost ~]# ./cn
Ceph Nano - One step S3 in container with Ceph.

  [ASCII-art Ceph logo]

Usage:
  cn [command]

Available Commands:
  cluster      Interact with a particular Ceph cluster
  s3           Interact with a particular S3 object server
  image        Interact with cn's container image(s)
  version      Print the version of cn
  kube         Outputs cn kubernetes template (cn kube > kube-cn.yml)
  update-check Print cn current and latest version number
  flavors      Interact with flavors
  completion   Generates bash completion scripts

Flags:
  -h, --help   help for cn

Use "cn [command] --help" for more information about a command.
```
## Getting started

- Start the cluster with /tmp as the working directory. The first start pulls the container image and can take a while.

```bash
# ./cn cluster start -d /tmp [cluster]
[root@localhost ~]# ./cn cluster start -d /tmp my-first-cluster
Running ceph-nano...
The container image is not present, pulling it.
This operation can take a few minutes..............................
Endpoint: http://10.36.116.164:8000
Dashboard: http://10.36.116.164:5001            // dashboard
Access key is: 9ZU1QBYX13KPLXXDDCY2
Secret key is: nthNG1xb7ta5IDKiJKM8626pQitqsalEo0ta7B9E
Working directory: /usr/share/ceph-nano         // working directory
```
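The endpoint and keys printed above are exactly what an S3 client needs, so for scripting it can help to extract them automatically. A sketch, assuming the field labels shown in the output (the `parse_cn_status` helper and `CEPH_*` variable names are made up here):

```bash
# Turn `cn cluster start`/`cn cluster status` output into shell assignments.
# Matches both "Access key is:" (start) and "Access key:" (status) wording.
parse_cn_status() {
  awk -F': ' '
    /^Endpoint:/  { print "CEPH_ENDPOINT="   $2 }
    /^Access key/ { print "CEPH_ACCESS_KEY=" $2 }
    /^Secret key/ { print "CEPH_SECRET_KEY=" $2 }
  '
}

# Usage: eval "$(./cn cluster status my-first-cluster | parse_cn_status)"
```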
```bash
# Flag reference:
[root@localhost ~]# ./cn cluster start -h

Examples:
  cn cluster start mycluster
  cn cluster start mycluster -f tiny
  cn cluster start mycluster --work-dir /tmp
  cn cluster start mycluster --image ceph/daemon:latest-luminous
  cn cluster start mycluster -b /dev/sdb
  cn cluster start mycluster -b /srv/nano -s 20GB

Flags:
  -d, --work-dir string   Directory to work from (default "/usr/share/ceph-nano")
  -i, --image string      USE AT YOUR OWN RISK. Ceph container image to use, format is 'registry/username/image:tag'.
                          The image name could also be an alias coming from the hardcoded values or the configuration file.
                          Use 'image show-aliases' to list all existing aliases. (default "ceph/daemon")
  -b, --data string       Configure Ceph Nano underlying storage with a specific directory or physical block device.
                          Block device support only works on Linux running under 'root'; a directory might also need running as 'root' if SELinux is enabled.
  -s, --size string       Configure Ceph Nano underlying storage size when using a specific directory
  -f, --flavor string     Select the container flavor. Use 'flavors ls' command to list available flavors. (default "default")
      --help              help for start
```
- Create a bucket

```bash
# ./cn s3 mb [cluster] [bucket]
[root@localhost ~]# ./cn s3 mb my-first-cluster my-bucket
Bucket 's3://my-bucket/' created
```
- Upload a file to a bucket

```bash
# ./cn s3 put [cluster] [file_path] [bucket]
[root@localhost ~]# ./cn s3 put my-first-cluster /etc/passwd my-bucket
upload: '/tmp/passwd' -> 's3://my-bucket/passwd'  [1 of 1]
 5925 of 5925   100% in    1s     4.57 kB/s  done
```
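`s3 put` takes one file at a time, so uploading a whole directory is easiest with a loop. A sketch (`upload_dir` is a hypothetical helper; `CN` points at the cn binary and defaults to `./cn`, so it can be overridden for a dry run):

```bash
# Upload every regular file in a directory to a bucket, one `s3 put` per file.
CN="${CN:-./cn}"

upload_dir() {
  cluster=$1; dir=$2; bucket=$3
  for f in "$dir"/*; do
    [ -f "$f" ] || continue    # skip subdirectories and special files
    "$CN" s3 put "$cluster" "$f" "$bucket" || return 1
  done
}

# Usage: upload_dir my-first-cluster /var/log/myapp my-bucket
```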
- Common commands

`./cn cluster ls`
Print the list of clusters:

```bash
[root@localhost ~]# ./cn cluster ls
+-----------+---------+--------------------+----------------+--------------------------------+--------+
| NAME      | STATUS  | IMAGE              | IMAGE RELEASE  | IMAGE CREATION TIME            | FLAVOR |
+-----------+---------+--------------------+----------------+--------------------------------+--------+
| mycluster | running | ceph/daemon:latest | master-dba849b | 2021-08-16T16:23:04.895173052Z | huge   |
+-----------+---------+--------------------+----------------+--------------------------------+--------+
```
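For health checks it can be handy to test the STATUS column from a script instead of reading the table. A sketch (`cluster_is_running` is a hypothetical helper that reads the `cluster ls` table from stdin; the name match is a simple regex containment, which is fine for a sketch):

```bash
# Exit 0 if the named cluster's row shows "running" in `cn cluster ls` output.
cluster_is_running() {
  awk -v name="$1" -F'|' '
    $2 ~ name && $3 ~ /running/ { found = 1 }
    END { exit !found }
  '
}

# Usage: ./cn cluster ls | cluster_is_running mycluster && echo "up"
```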
`./cn cluster start [cluster]`
Start a cluster.

INFO

The first start takes a few minutes; please wait for it to finish.

Optional flags:

```bash
  -d, --work-dir string   Directory to work from (default "/usr/share/ceph-nano")
  -i, --image string      USE AT YOUR OWN RISK. Ceph container image to use, format is 'registry/username/image:tag'.
                          The image name could also be an alias coming from the hardcoded values or the configuration file.
                          Use 'image show-aliases' to list all existing aliases. (default "ceph/daemon")
  -b, --data string       Configure Ceph Nano underlying storage with a specific directory or physical block device.
                          Block device support only works on Linux running under 'root'; a directory might also need running as 'root' if SELinux is enabled.
  -s, --size string       Configure Ceph Nano underlying storage size when using a specific directory
  -f, --flavor string     Select the container flavor. Use 'flavors ls' command to list available flavors. (default "default")
```
DANGER

Starting a cluster may fail with: Error response from daemon: driver failed programming external connectivity on endpoint ceph-nano-s3-cluster (d0489eb87c3df2036d38e281eef8fd1e0e8ca44e3b8faf446b0543cd5babf59a): Unable to enable DNAT rule: (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 5000 -j DNAT --to-destination 172.17.0.2:5000 ! -i docker0: iptables: No chain/target/match by that name. (exit status 1)). This usually means Docker's iptables chains were flushed, typically because firewalld was stopped or restarted after the Docker daemon started. Restart Docker to rebuild the chains, then retry: `[root@localhost ~]# systemctl restart docker`
`./cn cluster status [cluster]`
Show a cluster's status:

```bash
[root@localhost ~]# ./cn cluster status s3-cluster
Endpoint: http://192.168.1.150:8001
Dashboard: http://192.168.1.150:5000
Access key: 2XCQ7FA3Y5G5CNVA15Y9
Secret key: 81BiuYHnKxo0JkQymK32a3tBybnLosBlgznap6fi
Working directory: /usr/share/ceph-nano
```
`./cn cluster stop [cluster]`
Stop the given cluster.

`./cn cluster restart [cluster]`
Restart a cluster.

`./cn cluster purge [cluster]`
Delete a cluster.

DANGER

Purging a cluster deletes every file in every bucket of that cluster.
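Because purge is destructive, a confirmation wrapper can help in interactive use. A sketch (the `purge_cluster` helper is made up here; `CN` defaults to `./cn` and can be overridden):

```bash
# Require the operator to retype the cluster name before purging it,
# since purge removes every object in every bucket of the cluster.
CN="${CN:-./cn}"

purge_cluster() {
  cluster=$1
  printf 'Purge %s and ALL of its bucket data? Retype the name to confirm: ' "$cluster"
  read -r answer
  if [ "$answer" = "$cluster" ]; then
    "$CN" cluster purge "$cluster"
  else
    echo "aborted"
    return 1
  fi
}
```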
s3 subcommands:

```bash
Usage:
  cn s3 [command]

Available Commands:
  mb     Make bucket
  rb     Remove bucket
  ls     List objects or buckets
  la     List all object in all buckets
  put    Put file into bucket
  get    Get file out of a bucket
  del    Delete file from bucket
  du     Disk usage by buckets
  info   Get various information about Buckets or Files
  cp     Copy object
  mv     Move object
  sync   Synchronize a directory tree to S3
```
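The subcommands above compose into a typical object lifecycle. A sketch of a round trip (`s3_roundtrip` is a hypothetical helper; `CN` defaults to `./cn`; check each subcommand's `--help` for the exact argument shapes, especially the bucket/object path that `del` expects):

```bash
# Make a bucket, upload a file, list the bucket, then delete the object.
CN="${CN:-./cn}"

s3_roundtrip() {
  cluster=$1; bucket=$2; file=$3
  "$CN" s3 mb  "$cluster" "$bucket"         || return 1
  "$CN" s3 put "$cluster" "$file" "$bucket" || return 1
  "$CN" s3 ls  "$cluster" "$bucket"         || return 1
  "$CN" s3 del "$cluster" "$bucket/$(basename "$file")"
}

# Usage: s3_roundtrip my-first-cluster my-bucket /etc/hostname
```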
## Using with 织信

After a cluster is created, or at any time via `./cn cluster status [cluster]`, you can view the cluster's configuration information. For example:

```bash
[root@localhost ~]# ./cn cluster status s3-cluster
Endpoint: http://192.168.1.150:8001
Dashboard: http://192.168.1.150:5000
Access key: 2XCQ7FA3Y5G5CNVA15Y9
Secret key: 81BiuYHnKxo0JkQymK32a3tBybnLosBlgznap6fi
Working directory: /usr/share/ceph-nano
```
In the 织信 admin console, find the file storage settings and enter this configuration information.
## Installing Docker

Taking Red Hat (RHEL) as an example, the Docker installation steps are as follows:

### Removing old versions

Before installing, make sure your system is RHEL 8 or RHEL 9, then remove any conflicting packages:
```bash
[root@localhost ~]# sudo yum remove docker \
                        docker-client \
                        docker-client-latest \
                        docker-common \
                        docker-latest \
                        docker-latest-logrotate \
                        docker-logrotate \
                        docker-engine \
                        podman \
                        runc
```
### Setting up the repository

```bash
[root@localhost ~]# sudo yum install -y yum-utils
[root@localhost ~]# sudo yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
```
### Installing Docker Engine

```bash
[root@localhost ~]# sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
### Starting Docker

```bash
[root@localhost ~]# sudo systemctl start docker
```
### Verifying the installation

```bash
[root@localhost ~]# sudo docker run hello-world
```

This command downloads a test image and runs it in a container. When the container runs, it prints a confirmation message and exits.
INFO

For more Docker commands, see: https://www.runoob.com/docker/docker-command-manual.html