
Redis Cluster: Automated Installation, Expansion, and Shrinking



I previously wrote an article on automating Redis cluster installation with Python. Building a cluster from raw commands alone is quite tedious, which is why the official redis-trib.rb tool exists.
Although redis-trib.rb provides command-line tools for cluster creation, checking, repair, and rebalancing, I personally cannot accept it, because redis-trib.rb offers no way to customize the master/slave relationships between the nodes in the cluster.
For example, with six nodes A, B, C, D, E, F, creating the cluster necessarily means specifying which nodes are masters, which are slaves, and how they pair up; unfortunately redis-trib.rb gives you no control over this (see the screenshot below).
More often than not you need to state explicitly which machines (instances) act as masters and which as slaves, and redis-trib.rb cannot control that assignment.
Using redis-trib.rb also brings in the Ruby environment dependency, so I prefer not to build clusters with it.

Quoting Redis Development and Operations (《Redis 开发与运维》):
If the deployed nodes use different IP addresses, redis-trib.rb tries to ensure that masters and their slaves are not placed on the same machine, and therefore reorders the node list.
The node list order determines the master/slave roles: masters come first, then slaves.
This shows that with redis-trib.rb you cannot fully control the master/slave assignment yourself.

Later, in Redis 5.0, redis-cli --cluster implemented cluster creation natively, with no dependency on redis-trib.rb or a Ruby environment; redis-cli --cluster in Redis 5.0 covers cluster creation and the related functionality by itself.
Still, the raw commands remain fairly complex. In a more involved production environment, creating, expanding, or shrinking a cluster by hand means a series of manual operations and a number of unsafe steps.
Automated cluster creation, expansion, and shrinking is therefore worth having.

Test Environment

Everything here is based on Python 3 on top of the redis-cli --cluster commands, automating cluster creation, expansion, and shrinking.

The test environment runs multiple instances on a single machine, eight nodes in total:
1. Automated cluster creation: six nodes (10001~10006) become a cluster of 3 masters (10001~10003) and 3 slaves (10004~10006).
2. Automated expansion: add new node 10007 as a master, and add 10008 as the slave of node 10007.
3. Automated shrinking: the reverse of step 2, removing master 10007 and its slave 10008 from the cluster.
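For reference, the topology above can be written down as plain Python structures, in the same host/port/password shape the script later in this article uses (a sketch; the password is masked as everywhere else here):

```python
# Master and slave definitions for the 3-master / 3-slave test cluster
# (single machine, multiple instances).
list_master_node = [
    {'host': '127.0.0.1', 'port': 10001, 'password': '******'},
    {'host': '127.0.0.1', 'port': 10002, 'password': '******'},
    {'host': '127.0.0.1', 'port': 10003, 'password': '******'},
]
list_slave_node = [
    {'host': '127.0.0.1', 'port': 10004, 'password': '******'},
    {'host': '127.0.0.1', 'port': 10005, 'password': '******'},
    {'host': '127.0.0.1', 'port': 10006, 'password': '******'},
]

# Pair slave i with master i: this explicit pairing is exactly what
# redis-trib.rb does not let you control.
replication_pairs = list(zip(list_master_node, list_slave_node))
```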

 

Redis Cluster Creation

Creating the cluster essentially comes down to two groups of commands: one joins the master nodes into a cluster, the other attaches a slave node to each master in turn.
Along the way you need logic to look up each node's ID, which makes a manual run tedious.
The main commands are as follows:

################# create cluster #################
redis-cli --cluster create 127.0.0.1:10001 127.0.0.1:10002 127.0.0.1:10003 -a ****** --cluster-yes
################# add slave nodes #################
redis-cli --cluster add-node 127.0.0.1:10004 127.0.0.1:10001 --cluster-slave --cluster-master-id 6164025849a8ff9297664fc835bc851af5004f61 -a ******
redis-cli --cluster add-node 127.0.0.1:10005 127.0.0.1:10002 --cluster-slave --cluster-master-id 64e634307bdc339b503574f5a77f1b156c021358 -a ******
redis-cli --cluster add-node 127.0.0.1:10006 127.0.0.1:10003 --cluster-slave --cluster-master-id 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a -a ******
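Each --cluster-master-id above is the target master's node ID, obtained by running cluster myid against that master (with redis-py: conn.cluster('myid')). A minimal sketch of how the add-node command strings can be assembled; the helper name is my own, not part of the script:

```python
def build_add_slave_command(slave, master, master_node_id):
    # redis-cli --cluster add-node takes the joining slave first, then any
    # reachable cluster node (here the intended master), plus the master's ID.
    return ('redis-cli --cluster add-node {0}:{1} {2}:{3} '
            '--cluster-slave --cluster-master-id {4} -a ******').format(
        slave['host'], slave['port'],
        master['host'], master['port'],
        master_node_id)
```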

 

During creation, the Python script prints out the log of each redis-cli --cluster command:

[root@JD redis_install]# python3 create_redis_cluster.py
################# flush master/slave slots #################
################# create cluster #################
redis-cli --cluster create 127.0.0.1:10001 127.0.0.1:10002 127.0.0.1:10003   -a ****** --cluster-yes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 127.0.0.1:10001)
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
0
################# add slave nodes #################
redis-cli --cluster add-node 127.0.0.1:10004 127.0.0.1:10001 --cluster-slave --cluster-master-id 6164025849a8ff9297664fc835bc851af5004f61 -a ******
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 127.0.0.1:10004 to cluster 127.0.0.1:10001
>>> Performing Cluster Check (using node 127.0.0.1:10001)
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:10004 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:10001.
[OK] New node added correctly.
0
redis-cli --cluster add-node 127.0.0.1:10005 127.0.0.1:10002 --cluster-slave --cluster-master-id 64e634307bdc339b503574f5a77f1b156c021358 -a ******
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 127.0.0.1:10005 to cluster 127.0.0.1:10002
>>> Performing Cluster Check (using node 127.0.0.1:10002)
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
S: 026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004
   slots: (0 slots) slave
   replicates 6164025849a8ff9297664fc835bc851af5004f61
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:10005 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:10002.
[OK] New node added correctly.
0
redis-cli --cluster add-node 127.0.0.1:10006 127.0.0.1:10003 --cluster-slave --cluster-master-id 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a -a ******
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 127.0.0.1:10006 to cluster 127.0.0.1:10003
>>> Performing Cluster Check (using node 127.0.0.1:10003)
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005
   slots: (0 slots) slave
   replicates 64e634307bdc339b503574f5a77f1b156c021358
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004
   slots: (0 slots) slave
   replicates 6164025849a8ff9297664fc835bc851af5004f61
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:10006 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:10003.
[OK] New node added correctly.
0
################# cluster nodes info: #################
8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003@20003 myself,master - 0 1575947748000 53 connected 10923-16383
64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002@20002 master - 0 1575947748000 52 connected 5461-10922
23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005@20005 slave 64e634307bdc339b503574f5a77f1b156c021358 0 1575947746000 52 connected
6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001@20001 master - 0 1575947748103 51 connected 0-5460
026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004@20004 slave 6164025849a8ff9297664fc835bc851af5004f61 0 1575947749000 51 connected
9f265545ebb799d2773cfc20c71705cff9d733ae 127.0.0.1:10006@20006 slave 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 0 1575947749105 53 connected

[root@JD redis_install]#

 

Redis Cluster Expansion

Redis expansion has two main steps:
1. Add the new master node, and add a slave for that master.
2. Reshard slots onto the newly added master.
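The 1365 figure that appears in the reshard commands below falls out of simple arithmetic: each original master hands over the difference between its old and new even share of the 16384 slots. A quick sketch (the function name is an assumption; the full script has an equivalent helper):

```python
def slots_to_migrate(old_master_count, added_master_count=1, total_slots=16384):
    # Each original master migrates (old even share - new even share) slots
    # to the new master; integer division can leave the new master slightly
    # short of a perfectly even share.
    return (total_slots // old_master_count
            - total_slots // (old_master_count + added_master_count))

# 3 masters -> 4 masters: 16384 // 3 - 16384 // 4 = 5461 - 4096 = 1365
```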

The main commands are as follows:

Add the new master to the cluster:
redis-cli --cluster add-node 127.0.0.1:10007 127.0.0.1:10001 -a ******
Add a slave for the new master:
redis-cli --cluster add-node 127.0.0.1:10008 127.0.0.1:10007 --cluster-slave --cluster-master-id 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 -a ******

Reshard the slots:
############################ execute reshard #########################################
redis-cli -a redis@password --cluster reshard 127.0.0.1:10001 --cluster-from 6164025849a8ff9297664fc835bc851af5004f61 --cluster-to 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a redis@password --cluster reshard 127.0.0.1:10002 --cluster-from 64e634307bdc339b503574f5a77f1b156c021358 --cluster-to 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a redis@password --cluster reshard 127.0.0.1:10003 --cluster-from 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a --cluster-to 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1

################# cluster nodes info: #################
026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004@20004 slave 6164025849a8ff9297664fc835bc851af5004f61 0 1575960493000 64 connected
9f265545ebb799d2773cfc20c71705cff9d733ae 127.0.0.1:10006@20006 slave 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 0 1575960493849 66 connected
64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002@20002 master - 0 1575960494852 65 connected 6826-10922
23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005@20005 slave 64e634307bdc339b503574f5a77f1b156c021358 0 1575960492000 65 connected
4854375c501c3dbfb4e2d94d50e62a47520c4f12 127.0.0.1:10008@20008 slave 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 0 1575960493000 67 connected
8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003@20003 master - 0 1575960493000 66 connected 12288-16383
3645e00a8ec3a902bd6effb4fc20c56a00f2c982 127.0.0.1:10007@20007 myself,master - 0 1575960493000 67 connected 0-1364 5461-6825 10923-12287
6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001@20001 master - 0 1575960492848 64 connected 1365-5460
The new node has been assigned its share of slots: the cluster expansion succeeded.

Two points need attention if this is automated:
1. After each add-node (whether a master or a slave), sleep long enough (20 seconds here) so that every node in the cluster has met the new node; otherwise the expansion fails.
2. After resharding to the new node, sleep long enough as well (20 seconds here); otherwise resharding the next node's slots can cause the previous reshard to fail.
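The fixed 20-second sleeps work, but an alternative (a sketch of my own, not part of the script) is to poll every member's view of CLUSTER NODES until the new node shows up everywhere:

```python
import time

def wait_until_node_known(get_nodes_views, new_node_addr,
                          timeout=60, interval=1.0):
    # get_nodes_views: one callable per cluster member, each returning that
    # member's CLUSTER NODES view as a dict keyed by 'ip:port@cport'
    # (with redis-py: lambda: conn.cluster('nodes')).
    # Returns True once every member reports new_node_addr, False on timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if all(any(new_node_addr in key for key in view())
               for view in get_nodes_views):
            return True
        time.sleep(interval)
    return False
```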

 

The whole process looks like this:

[root@JD redis_install]# python3 create_redis_cluster.py
#########################cleanup instance#################################
#########################add node into cluster#################################
 redis-cli --cluster add-node 127.0.0.1:10007 127.0.0.1:10001  -a redis@password
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 127.0.0.1:10007 to cluster 127.0.0.1:10001
>>> Performing Cluster Check (using node 127.0.0.1:10001)
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 9f265545ebb799d2773cfc20c71705cff9d733ae 127.0.0.1:10006
   slots: (0 slots) slave
   replicates 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004
   slots: (0 slots) slave
   replicates 6164025849a8ff9297664fc835bc851af5004f61
S: 23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005
   slots: (0 slots) slave
   replicates 64e634307bdc339b503574f5a77f1b156c021358
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:10007 to make it join the cluster.
[OK] New node added correctly.
0
 redis-cli --cluster add-node 127.0.0.1:10008 127.0.0.1:10007 --cluster-slave --cluster-master-id 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 -a ******
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 127.0.0.1:10008 to cluster 127.0.0.1:10007
>>> Performing Cluster Check (using node 127.0.0.1:10007)
M: 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 127.0.0.1:10007
   slots: (0 slots) master
S: 026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004
   slots: (0 slots) slave
   replicates 6164025849a8ff9297664fc835bc851af5004f61
S: 9f265545ebb799d2773cfc20c71705cff9d733ae 127.0.0.1:10006
   slots: (0 slots) slave
   replicates 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a
M: 64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005
   slots: (0 slots) slave
   replicates 64e634307bdc339b503574f5a77f1b156c021358
M: 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:10008 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:10007.
[OK] New node added correctly.
0
#########################reshard slots#################################
############################ execute reshard #########################################
redis-cli -a redis@password --cluster reshard 127.0.0.1:10001 --cluster-from 6164025849a8ff9297664fc835bc851af5004f61 --cluster-to 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000   --cluster-replace  >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a redis@password --cluster reshard 127.0.0.1:10002 --cluster-from 64e634307bdc339b503574f5a77f1b156c021358 --cluster-to 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000   --cluster-replace  >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a redis@password --cluster reshard 127.0.0.1:10003 --cluster-from 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a --cluster-to 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000   --cluster-replace  >/dev/null 2>&1
################# cluster nodes info: #################
026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004@20004 slave 6164025849a8ff9297664fc835bc851af5004f61 0 1575960493000 64 connected
9f265545ebb799d2773cfc20c71705cff9d733ae 127.0.0.1:10006@20006 slave 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 0 1575960493849 66 connected
64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002@20002 master - 0 1575960494852 65 connected 6826-10922
23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005@20005 slave 64e634307bdc339b503574f5a77f1b156c021358 0 1575960492000 65 connected
4854375c501c3dbfb4e2d94d50e62a47520c4f12 127.0.0.1:10008@20008 slave 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 0 1575960493000 67 connected
8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003@20003 master - 0 1575960493000 66 connected 12288-16383
3645e00a8ec3a902bd6effb4fc20c56a00f2c982 127.0.0.1:10007@20007 myself,master - 0 1575960493000 67 connected 0-1364 5461-6825 10923-12287
6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001@20001 master - 0 1575960492848 64 connected 1365-5460

[root@JD redis_install]#

 

Redis Cluster Shrinking

Shrinking is, in principle, the reverse of expansion.
The built-in command makes this clear: del-node host:port node_id removes the given node and, on success, shuts that node's service down.
Shrinking should just be shrinking: removing a master from the cluster (cluster forget nodeid) ought to be enough, so why shut the instance down as well? For that reason this article does not shrink with redis-cli --cluster del-node, but with plain commands instead.

The custom shrinking here really has two steps:
1. Reshard the departing master's slots back to the other masters in the cluster; here four masters shrink to three, with the actual commands shown below.
2. Every node in the cluster executes cluster forget master_node_id (and slave_node_id) in turn.
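Step 2 means issuing cluster forget for both departing node IDs on every remaining node, masters and slaves alike. The (node, command) pairs can be generated like this (a sketch; the helper name is an assumption):

```python
def build_forget_commands(remaining_nodes, removed_node_ids):
    # Every remaining node must forget every removed node ID; skipping any
    # node lets gossip re-introduce the removed nodes within about a minute.
    return [(node, 'cluster forget {0}'.format(node_id))
            for node in remaining_nodes
            for node_id in removed_node_ids]
```

Each pair would then be executed against its node, e.g. with redis-py's execute_command, exactly as the forget log below shows.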

############################ execute reshard #########################################
redis-cli -a ****** --cluster reshard 127.0.0.1:10001 --cluster-from 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-to 6164025849a8ff9297664fc835bc851af5004f61 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a ****** --cluster reshard 127.0.0.1:10002 --cluster-from 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-to 64e634307bdc339b503574f5a77f1b156c021358 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a ****** --cluster reshard 127.0.0.1:10003 --cluster-from 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-to 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1

{'host': '127.0.0.1', 'port': 10001, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10001, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10002, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10002, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10003, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10003, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10004, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10004, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10005, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10005, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10006, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10006, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12

 

The full execution log is as follows:

[root@JD redis_install]# python3 create_redis_cluster.py
############################ execute reshard #########################################
redis-cli -a ****** --cluster reshard 127.0.0.1:10001 --cluster-from 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-to 6164025849a8ff9297664fc835bc851af5004f61 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a ****** --cluster reshard 127.0.0.1:10002 --cluster-from 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-to 64e634307bdc339b503574f5a77f1b156c021358 --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
############################ execute reshard #########################################
redis-cli -a ****** --cluster reshard 127.0.0.1:10003 --cluster-from 3645e00a8ec3a902bd6effb4fc20c56a00f2c982 --cluster-to 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1
{'host': '127.0.0.1', 'port': 10001, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10001, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10002, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10002, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10003, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10003, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10004, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10004, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10005, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10005, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
{'host': '127.0.0.1', 'port': 10006, 'password': '******'}--->cluster forget 3645e00a8ec3a902bd6effb4fc20c56a00f2c982
{'host': '127.0.0.1', 'port': 10006, 'password': '******'}--->cluster forget 4854375c501c3dbfb4e2d94d50e62a47520c4f12
################# cluster nodes info: #################
23e1871c4e1dc1047ce567326e74a6194589146c 127.0.0.1:10005@20005 slave 64e634307bdc339b503574f5a77f1b156c021358 0 1575968426000 76 connected
026f0179631f50ca858d46c2b2829b3af71af2c8 127.0.0.1:10004@20004 slave 6164025849a8ff9297664fc835bc851af5004f61 0 1575968422619 75 connected
6164025849a8ff9297664fc835bc851af5004f61 127.0.0.1:10001@20001 myself,master - 0 1575968426000 75 connected 0-5460
9f265545ebb799d2773cfc20c71705cff9d733ae 127.0.0.1:10006@20006 slave 8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 0 1575968425000 77 connected
8b75325c59a7242344d0ebe5ee1e0068c66ffa2a 127.0.0.1:10003@20003 master - 0 1575968427626 77 connected 10923-16383
64e634307bdc339b503574f5a77f1b156c021358 127.0.0.1:10002@20002 master - 0 1575968426000 76 connected 5461-10922

[root@JD redis_install]#

This is not quite the end, though. After shrinking, every node left in the cluster must successfully execute cluster forget master_node_id (and slave_node_id).
Otherwise the remaining nodes still carry heartbeat information for node 10007, and after about one minute the evicted node 10007 (along with its slave 10008) gets added back into the cluster.
I hit exactly this oddity at first: without running cluster forget on the slave nodes of the shrunken cluster, the removed nodes kept getting added back......
See: http://www.redis.cn/commands/cluster-forget.html

 

 

 

The complete code implementation is as follows:

import os
import time
import redis
from time import ctime,sleep


def create_redis_cluster(list_master_node,list_slave_node):
    print('################# flush master/slave slots #################')
    for node in list_master_node:
        currenrt_conn = redis.StrictRedis(host=node["host"], port=node["port"], password=node["password"], decode_responses=True)
        currenrt_conn.execute_command('flushall')
        currenrt_conn.execute_command('cluster reset')

    for node in list_slave_node:
        currenrt_conn = redis.StrictRedis(host=node["host"], port=node["port"], password=node["password"], decode_responses=True)
        #currenrt_conn.execute_command('flushall')
        currenrt_conn.execute_command('cluster reset')

    print('################# create cluster #################')
    master_nodes = ''
    for node in list_master_node:
        master_nodes = master_nodes + node["host"] + ':' + str(node["port"]) + ' '
    command = "redis-cli --cluster create {0}  -a ****** --cluster-yes".format(master_nodes)
    print(command)
    msg = os.system(command)
    print(msg)
    time.sleep(5)

    print('################# add slave nodes #################')
    counter = 0
    for node in list_master_node:
        currenrt_conn = redis.StrictRedis(host=node["host"], port=node["port"], password=node["password"], decode_responses=True)
        current_master_node = node["host"] + ':' + str(node["port"])
        current_slave_node = list_slave_node[counter]["host"] + ':' + str(list_slave_node[counter]["port"])
        myid = currenrt_conn.cluster('myid')
        # slave node first, then the master node
        command = "redis-cli --cluster add-node {0} {1} --cluster-slave --cluster-master-id {2} -a ****** ". format(current_slave_node,current_master_node,myid)
        print(command)
        msg = os.system(command)
        counter = counter + 1
        print(msg)
    # show cluster nodes info
    time.sleep(10)
    print("################# cluster nodes info: #################")
    cluster_nodes = currenrt_conn.execute_command('cluster nodes')
    print(cluster_nodes)

# Return how many slots each original master must migrate out after the expansion
def get_migrated_slot(list_master_node,n):
    migrated_slot_count = int(16384/len(list_master_node)) - int(16384/(len(list_master_node)+n))
    return migrated_slot_count

def redis_cluster_expansion(list_master_node,dict_master_node,dict_slave_node):
    new_master_node =  dict_master_node["host"] + ':' + str(dict_master_node["port"])
    new_slave_node = dict_slave_node["host"] + ':' + str(dict_slave_node["port"])

    print("#########################cleanup instance#################################")
    new_master_conn = redis.StrictRedis(host=dict_master_node["host"], port=dict_master_node["port"], password=dict_master_node["password"], decode_responses=True)
    new_master_conn.execute_command('flushall')
    new_master_conn.execute_command('cluster reset')
    new_master_id = new_master_conn.cluster('myid')

    new_slave_conn = redis.StrictRedis(host=dict_slave_node["host"], port=dict_slave_node["port"], password=dict_slave_node["password"], decode_responses=True)
    new_slave_conn.execute_command('cluster reset')
    new_slave_id = new_slave_conn.cluster('myid')
    #new_slave_conn.execute_command('slaveof no one')

    # Check whether the new nodes already belong to the current cluster.
    # If a node is already in the cluster but owns no slots, evict it first
    # (cluster forget nodeid); alternatively abort with a warning, whichever you prefer.
    # Connect to any node in the cluster
    cluster_node_conn = redis.StrictRedis(host=list_master_node[0]["host"], port=list_master_node[0]["port"], password=list_master_node[0]["password"],decode_responses=True)
    dict_node_info = cluster_node_conn.cluster('nodes')
    '''dict_node_info format example :
    {'127.0.0.1:10008@20008': {'node_id': '1d10c3ce3b9b7f956a26122980827fe6ce623d22', 'flags': 'master', 'master_id': '-','last_ping_sent': '0', 'last_pong_rcvd': '1575599442000', 'epoch': '8', 'slots': [], 'connected': True}, 
    '127.0.0.1:10002@20002': {'node_id': '64e634307bdc339b503574f5a77f1b156c021358', 'flags': 'master', 'master_id': '-', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599442000', 'epoch': '7', 'slots': [['5461', '10922']], 'connected': True}, 
    '127.0.0.1:10001@20001': {'node_id': '6164025849a8ff9297664fc835bc851af5004f61', 'flags': 'myself,master', 'master_id': '-', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599438000', 'epoch': '6', 'slots': [['0', '5460']], 'connected': True}, 
    '127.0.0.1:10007@20007': {'node_id': '307f589ec7b1eb7bd65c680527afef1e30ce2303', 'flags': 'master', 'master_id': '-', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599443599', 'epoch': '5', 'slots': [], 'connected': True}, 
    '127.0.0.1:10005@20005': {'node_id': '23e1871c4e1dc1047ce567326e74a6194589146c', 'flags': 'slave', 'master_id': '64e634307bdc339b503574f5a77f1b156c021358', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599441000', 'epoch': '7', 'slots': [], 'connected': True}, 
    '127.0.0.1:10004@20004': {'node_id': '026f0179631f50ca858d46c2b2829b3af71af2c8', 'flags': 'slave', 'master_id': '6164025849a8ff9297664fc835bc851af5004f61', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599440000', 'epoch': '6', 'slots': [], 'connected': True}, 
    '127.0.0.1:10006@20006': {'node_id': '9f265545ebb799d2773cfc20c71705cff9d733ae', 'flags': 'slave', 'master_id': '8b75325c59a7242344d0ebe5ee1e0068c66ffa2a', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599442000', 'epoch': '8', 'slots': [], 'connected': True}, 
    '127.0.0.1:10003@20003': {'node_id': '8b75325c59a7242344d0ebe5ee1e0068c66ffa2a', 'flags': 'master', 'master_id': '-', 'last_ping_sent': '0', 'last_pong_rcvd': '1575599442599', 'epoch': '8', 'slots': [['10923', '16383']], 'connected': True}
    }
    '''
    dict_master_node_in_cluster = 0
    dict_slave_node_in_cluster = 0

    for key_node in dict_node_info:
        if new_master_node in key_node:
            dict_master_node_in_cluster = 1
            if len(dict_node_info[key_node]['slots']) > 0:
                print('error: ' + new_master_node + ' already exists in the cluster and owns slots, aborting......')
                return
        if new_slave_node in key_node:
            dict_slave_node_in_cluster = 1
            if len(dict_node_info[key_node]['slots']) > 0:
                print('error: ' + new_slave_node + ' already exists in the cluster and owns slots, aborting......')
                return

    if dict_master_node_in_cluster == 1:
        for master_node in list_master_node:
            key_node_conn = redis.StrictRedis(host=master_node["host"], port=master_node["port"],password=master_node["password"], decode_responses=True)
            print('warning: ' + new_master_node + ' already exists in the cluster, cluster forget it......')
            forget_command = 'cluster forget {0}'.format(new_master_id)
            key_node_conn.execute_command(forget_command)
    if dict_slave_node_in_cluster == 1:
        for master_node in list_master_node:
            key_node_conn = redis.StrictRedis(host=master_node["host"], port=master_node["port"],password=master_node["password"], decode_responses=True)
            print('warning: ' + new_slave_node + ' already exists in the cluster, forget it......')
            forget_command = 'cluster forget {0}'.format(new_slave_id)
            key_node_conn.execute_command(forget_command)

    print("#########################add node into cluster#################################")
    try:
        cluster_node = list_master_node[0]["host"] + ':' + str(list_master_node[0]["port"])
        # 1. The joining node comes first; the second argument is any node already in the cluster
        add_node_command = " redis-cli --cluster add-node {0} {1}  -a ****** ".format(new_master_node,cluster_node)
        print(add_node_command)
        print(os.system(add_node_command))
        time.sleep(20)
        # slave node first, then the master node
        add_node_command = " redis-cli --cluster add-node {0} {1} --cluster-slave --cluster-master-id {2} -a ****** ". format(new_slave_node,new_master_node,new_master_id)
        print(add_node_command)
        print(os.system(add_node_command))
        time.sleep(20)
    except Exception as e:
        print('add new node error,the reason is:')
        print(e)

    print("#########################reshard slots#################################")
    migrated_slot_count = get_migrated_slot(list_master_node,1)
    for node in list_master_node:
        current_master_conn = redis.StrictRedis(host=node["host"], port=node["port"], password=node["password"], decode_responses=True)
        current_master_node = node["host"] + ':' + str(node["port"])
        current_master_node_id = current_master_conn.cluster('myid')
        '''
        example: 3 masters --> expanded to 4, each original master migrates 1365 slots
        '''
        try:
            command = r'''redis-cli -a ****** --cluster reshard {0} --cluster-from {1} --cluster-to {2} --cluster-slots {3} --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000   --cluster-replace  >/dev/null 2>&1 '''. format(current_master_node,current_master_node_id,new_master_id,migrated_slot_count)
            print('############################ execute reshard #########################################')
            print(command)
            msg = os.system(command)
            time.sleep(20)
        except Exception as e:
            print('reshard slots error,the reason is:')
            print(e)

    print("################# cluster nodes info: #################")
    cluster_nodes = new_master_conn.execute_command('cluster nodes')
    print(cluster_nodes)


def redis_cluster_shrinkage(list_master_node,list_slave_node,dict_master_node,dict_slave_node):
    # check whether the nodes to be removed belong to the current cluster;
    # if they do not, exit
    cluster_node_conn = redis.StrictRedis(host=list_master_node[0]["host"], port=list_master_node[0]["port"], password=list_master_node[0]["password"],decode_responses=True)
    dict_node_info = cluster_node_conn.cluster('nodes')

    removed_master_node = dict_master_node["host"] + ':' + str(dict_master_node["port"])+'@'+str(dict_master_node["port"]+10000)
    removed_slave_node = dict_slave_node["host"] + ':' + str(dict_slave_node["port"])+'@'+str(dict_slave_node["port"]+10000)

    if removed_master_node not in dict_node_info:
        print('Error: ' + str(removed_master_node) + ' not in cluster, exiting')
        return
    if removed_slave_node not in dict_node_info:
        print('Error: ' + str(removed_slave_node) + ' not in cluster, exiting')
        return

    removed_master_conn = redis.StrictRedis(host=dict_master_node["host"], port=dict_master_node["port"], password=dict_master_node["password"], decode_responses=True)
    removed_master_id = removed_master_conn.cluster('myid')
    removed_slave_conn = redis.StrictRedis(host=dict_slave_node["host"], port=dict_slave_node["port"], password=dict_slave_node["password"], decode_responses=True)
    removed_slave_id = removed_slave_conn.cluster('myid')
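The membership check above relies on redis-py returning `cluster('nodes')` as a dict keyed by `host:port@cluster-bus-port`, which is why the removed-node addresses are built with the `@port+10000` suffix. When working with the raw `CLUSTER NODES` text instead (e.g. captured from `redis-cli`), a small parser in the same spirit might look like this (a sketch, not part of the original script):

```python
def parse_cluster_nodes(raw_output):
    """Parse raw CLUSTER NODES text into {addr: {'id': ..., 'flags': [...]}}.

    Each line has the shape:
    <id> <ip:port@cport> <flags> <master-id> <ping> <pong> <epoch> <state> <slots...>
    """
    nodes = {}
    for line in raw_output.strip().splitlines():
        parts = line.split()
        node_id, addr, flags = parts[0], parts[1], parts[2]
        nodes[addr] = {'id': node_id, 'flags': flags.split(',')}
    return nodes
```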

    for node in list_master_node:
        current_master_conn = redis.StrictRedis(host=node["host"], port=node["port"], password=node["password"], decode_responses=True)
        current_master_node = node["host"] + ':' + str(node["port"])
        current_master_node_id = current_master_conn.cluster('myid')
        '''
        shrinking from 4 masters to 3: return the removed master's slots evenly
        to the three remaining masters (4096 // 3 = 1365 slots each)
        '''
        try:
            command = r'''redis-cli -a ****** --cluster reshard {0} --cluster-from {1} --cluster-to {2} --cluster-slots 1365 --cluster-yes --cluster-timeout 50000 --cluster-pipeline 10000 --cluster-replace >/dev/null 2>&1 '''.\
                format(current_master_node, removed_master_id, current_master_node_id)
            print('############################ execute reshard #########################################')
            print(command)
            msg = os.system(command)
            time.sleep(10)
        except Exception as e:
            print('reshard slots error, the reason is:')
            print(e)

    removed_master_conn.execute_command('cluster reset')
    removed_slave_conn.execute_command('cluster reset')

    for master_node in list_master_node:
        master_node_conn = redis.StrictRedis(host=master_node["host"], port=master_node["port"], password=master_node["password"], decode_responses=True)
        forget_master_command = 'cluster forget {0}'.format(removed_master_id)
        forget_slave_command = 'cluster forget {0}'.format(removed_slave_id)
        print(str(master_node) + '--->' + forget_master_command)
        print(str(master_node) + '--->' + forget_slave_command)
        master_node_conn.execute_command(forget_master_command)
        master_node_conn.execute_command(forget_slave_command)

    for slave_node in list_slave_node:
        slave_node_conn = redis.StrictRedis(host=slave_node["host"], port=slave_node["port"], password=slave_node["password"], decode_responses=True)
        forget_master_command = 'cluster forget {0}'.format(removed_master_id)
        forget_slave_command = 'cluster forget {0}'.format(removed_slave_id)
        print(str(slave_node) + '--->' + forget_master_command)
        print(str(slave_node) + '--->' + forget_slave_command)
        slave_node_conn.execute_command(forget_master_command)
        slave_node_conn.execute_command(forget_slave_command)

    print("################# cluster nodes info: #################")
    cluster_nodes = cluster_node_conn.execute_command('cluster nodes')
    print(cluster_nodes)


if __name__ == '__main__':
    # master
    node_1 = {'host': '127.0.0.1', 'port': 10001, 'password': '******'}
    node_2 = {'host': '127.0.0.1', 'port': 10002, 'password': '******'}
    node_3 = {'host': '127.0.0.1', 'port': 10003, 'password': '******'}
    # slave
    node_4 = {'host': '127.0.0.1', 'port': 10004, 'password': '******'}
    node_5 = {'host': '127.0.0.1', 'port': 10005, 'password': '******'}
    node_6 = {'host': '127.0.0.1', 'port': 10006, 'password': '******'}
    # the numbers of master and slave nodes must be equal
    list_master_node = [node_1, node_2, node_3]
    list_slave_node = [node_4, node_5, node_6]
    
    # automated cluster creation
    #create_redis_cluster(list_master_node,list_slave_node)

    # automated expansion: add 10007 as a new master and 10008 as its slave
    # (named node_7/node_8 to avoid shadowing the node_1/node_2 dicts above)
    node_7 = {'host': '127.0.0.1', 'port': 10007, 'password': '******'}
    node_8 = {'host': '127.0.0.1', 'port': 10008, 'password': '******'}
    redis_cluster_expansion(list_master_node, node_7, node_8)

    # automated shrinkage: remove 10007 and its slave 10008 from the cluster
    #redis_cluster_shrinkage(list_master_node, list_slave_node, node_7, node_8)

Copyright notice: original article on this site, published by 星锅 on 2022-01-22, 32502 characters in total.
Reposting: unless otherwise noted, articles on this site are released under the CC-4.0 license; please credit the source when reposting.