
PostgreSQL Logical Replication with Slony


Slony is one of the most widely used replication solutions in the PostgreSQL world. It is not only one of the oldest replication implementations, it also has the broadest support from external tools such as pgAdmin3. For many years, Slony was the only viable way to replicate data in PostgreSQL. Slony performs logical replication: Slony-I generally requires replicated tables to have a primary key or a unique key, and it works with triggers rather than the PostgreSQL transaction log, providing trigger-based logical replication for high availability. Besides Slony, PostgreSQL also has Londiste, BDR, and others, which will be covered in later articles.

1. Installing Slony

Download: http://www.slony.info. Installation steps:

# tar -jxvf slony1-2.2.5.tar.bz2
# cd slony1-2.2.5
# ./configure --with-pgconfigdir=/opt/pgsql96/bin
# make
# make install

Installation complete.

When running ./configure, the build checks whether the pg_config command can be found in the directory given by --with-pgconfigdir; in this example pg_config is located in /opt/pgsql96/bin.
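A quick sanity check before running ./configure can save a failed build; a minimal sketch (the paths are the ones used in this example and should be adjusted to your installation):

# confirm pg_config is where --with-pgconfigdir points
/opt/pgsql96/bin/pg_config --version      # prints the PostgreSQL version, e.g. 9.6.x
/opt/pgsql96/bin/pg_config --pkglibdir    # library directory, typically where Slony's shared objects get installed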

2. Slony Architecture

[Figure: Slony architecture diagram]

3. Replicating a Table

The test environment is as follows:

Hostname         IP              Role
PostgreSQL201    192.168.1.201   master
PostgreSQL202    192.168.1.202   slave

3.1 On both databases, create a superuser named slony dedicated to Slony

create user slony superuser password 'li0924';
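To verify that the role exists on each node, a quick check (a hypothetical verification step, not part of the original setup) can be run as a superuser:

psql -U postgres -d lottu -c "select rolname, rolsuper from pg_roles where rolname = 'slony';"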

3.2 Both hosts in this experiment have a lottu database; the table synctab in the lottu database is used as the test object. Create the table in the same way in both databases, because table structures are not replicated automatically.

create table synctab(id int primary key,name text);

3.3 Allow the Slony-I user to connect remotely on all nodes by adding the following line to pg_hba.conf

host    all            slony            192.168.1.0/24        trust
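The change only takes effect after the configuration is reloaded; one way to do this (assuming $PGDATA points at the data directory) is:

# reload pg_hba.conf on each node
/opt/pgsql96/bin/pg_ctl reload -D $PGDATA
# or, from an existing superuser session:
psql -U postgres -c "select pg_reload_conf();"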

3.4 Set up Slony (run on the master host)

Write a slonik script that registers the nodes; the script looks like this:

[postgres@Postgres201 ~]$ cat slony_setup.sh
#!/bin/sh
MASTERDB=lottu
SLAVEDB=lottu
HOST1=192.168.1.201
HOST2=192.168.1.202
DBUSER=slony
slonik<<_EOF_
cluster name = first_cluster;
# define nodes (this is needed by pretty much
# all slonik scripts)
node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';
# init cluster
init cluster (id=1, comment = 'Master Node');
# group tables into sets
create set (id=1, origin=1, comment='Our tables');
set add table (set id=1, origin=1, id=1, fully qualified name = 'lottu.synctab', comment='sample table');
store node (id=2, comment = 'Slave node', event node=1);
store path (server = 1, client = 2, conninfo='dbname=$MASTERDB host=$HOST1 user=$DBUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDB host=$HOST2 user=$DBUSER');
_EOF_

Now that this table is under Slony's control, we can subscribe the slave to it. The subscription script looks like this:

[postgres@Postgres201 ~]$ cat slony_subscribe.sh
#!/bin/sh
MASTERDB=lottu
SLAVEDB=lottu
HOST1=192.168.1.201
HOST2=192.168.1.202
DBUSER=slony
slonik<<_EOF_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
_EOF_

Run the scripts on the master host:

[postgres@Postgres201 ~]$ ./slony_setup.sh
[postgres@Postgres201 ~]$ ./slony_subscribe.sh &
[1] 1225
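The subscription itself is only processed once the slon daemons (started in the next step) are running, which is why slony_subscribe.sh is left in the background here. Afterwards it can be inspected through the sl_subscribe catalog table that Slony creates in the _first_cluster schema; a minimal check might look like:

psql -U slony -d lottu -h 192.168.1.201 -c "select * from _first_cluster.sl_subscribe;"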

Having defined what we want to replicate, we can start the slon daemon on each host:

slon first_cluster 'host=192.168.1.201 dbname=lottu user=slony' &
slon first_cluster 'host=192.168.1.202 dbname=lottu user=slony' &
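In practice it is handy to keep a log per daemon so that subscription progress and errors can be inspected later; a minimal sketch (plain nohup with redirected output, log paths chosen arbitrarily here) looks like this:

nohup slon first_cluster 'host=192.168.1.201 dbname=lottu user=slony' > /tmp/slon_node1.log 2>&1 &
nohup slon first_cluster 'host=192.168.1.202 dbname=lottu user=slony' > /tmp/slon_node2.log 2>&1 &
tail -f /tmp/slon_node1.log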

3.5 Verify that Slony-I is configured correctly

Run a DML statement on the master host:

[postgres@Postgres201 ~]$ psql lottu lottu
psql (9.6.0)
Type "help" for help.

lottu=# \d synctab
     Table "lottu.synctab"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 name   | text    |
Indexes:
    "synctab_pkey" PRIMARY KEY, btree (id)
Triggers:
    _first_cluster_logtrigger AFTER INSERT OR DELETE OR UPDATE ON synctab FOR EACH ROW EXECUTE PROCEDURE _first_cluster.logtrigger('_first_cluster', '1', 'k')
    _first_cluster_truncatetrigger BEFORE TRUNCATE ON synctab FOR EACH STATEMENT EXECUTE PROCEDURE _first_cluster.log_truncate('1')
Disabled user triggers:
    _first_cluster_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON synctab FOR EACH ROW EXECUTE PROCEDURE _first_cluster.denyaccess('_first_cluster')
    _first_cluster_truncatedeny BEFORE TRUNCATE ON synctab FOR EACH STATEMENT EXECUTE PROCEDURE _first_cluster.deny_truncate()

lottu=# insert into synctab values (1001,'lottu');
INSERT 0 1

Check on the slave host whether the change has been replicated:

[postgres@Postgres202 ~]$ psql
psql (9.6.0)
Type "help" for help.

postgres=# \c lottu lottu
You are now connected to database "lottu" as user "lottu".
lottu=> select * from synctab ;
  id  | name
------+-------
 1001 | lottu
(1 row)
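Replication lag can also be checked from the master through the sl_status view that Slony maintains in the cluster schema (the exact column set may differ slightly between Slony versions):

psql -U slony -d lottu -h 192.168.1.201 -c "select * from _first_cluster.sl_status;"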

4. Viewing Slony-I tables and views

4.1 After successful configuration, a dedicated schema is created in the replicated database

[postgres@Postgres201 ~]$ psql lottu lottu
psql (9.6.0)
Type "help" for help.

lottu=# \dn
      List of schemas
      Name      |  Owner
----------------+----------
 _first_cluster | slony
 lottu          | lottu
 public         | postgres
(3 rows)

4.2 View the nodes in the cluster

lottu=# select * from _first_cluster.sl_node;
 no_id | no_active | no_comment  | no_failed
-------+-----------+-------------+-----------
     1 | t         | Master Node | f
     2 | t         | Slave node  | f
(2 rows)

4.3 View the replication sets in the cluster

lottu=# select * from _first_cluster.sl_set;
 set_id | set_origin | set_locked | set_comment
--------+------------+------------+-------------
      1 |          1 |            | Our tables
(1 row)

4.4 View the replicated tables in the cluster

lottu=# select * from _first_cluster.sl_table;
-[ RECORD 1 ]--------------
tab_id      | 1
tab_reloid  | 57420
tab_relname | synctab
tab_nspname | lottu
tab_set     | 1
tab_idxname | synctab_pkey
tab_altered | f
tab_comment | sample table

5. Routine maintenance

5.1 Adding a replicated table to an existing Slony-I cluster

Take the table synctab2 as an example:

create table synctab2(id int primary key,name text,reg_time timestamp);

A table cannot simply be added to a set that is already subscribed, so we create a new replication set for it and then merge it into set 1. The script looks like this:

[postgres@Postgres201 ~]$ cat slony_add_table_set.sh
#!/bin/sh
MASTERDB=lottu
SLAVEDB=lottu
HOST1=192.168.1.201
HOST2=192.168.1.202
DBUSER=slony
slonik<<_EOF_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';
create set (id=2, origin=1, comment='a second replication set');
set add table (set id=2, origin=1, id=2, fully qualified name ='lottu.synctab2', comment='second table');
subscribe set(id=1, provider=1, receiver=2);
subscribe set(id=2, provider=1, receiver=2);
merge set(id=1, add id=2, origin=1);
_EOF_

Run the slony_add_table_set.sh script; merge set waits until the subscription of the new set has caught up, hence the "waiting" messages:

[postgres@Postgres201 ~]$ ./slony_add_table_set.sh
<stdin>:8 subscription in progress before mergeSet. waiting
<stdin>:8 subscription in progress before mergeSet. waiting

Check whether the table was added successfully:

lottu=# select * from _first_cluster.sl_table;
-[ RECORD 1 ]--------------
tab_id      | 1
tab_reloid  | 57420
tab_relname | synctab
tab_nspname | lottu
tab_set     | 1
tab_idxname | synctab_pkey
tab_altered | f
tab_comment | sample table
-[ RECORD 2 ]--------------
tab_id      | 2
tab_reloid  | 57840
tab_relname | synctab2
tab_nspname | lottu
tab_set     | 1
tab_idxname | synctab2_pkey
tab_altered | f
tab_comment | second table

5.2 Removing a replicated table from an existing Slony-I cluster

[postgres@Postgres201 ~]$ cat slony_drop_table.sh
#!/bin/sh
MASTERDB=lottu
SLAVEDB=lottu
HOST1=192.168.1.201
HOST2=192.168.1.202
DBUSER=slony
slonik<<_EOF_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';
set drop table (id=2, origin=1);
_EOF_

Run the slony_drop_table.sh script:

[postgres@Postgres201 ~]$ ./slony_drop_table.sh

Check whether the table was removed successfully:

lottu=# select * from _first_cluster.sl_table;
 tab_id | tab_reloid | tab_relname | tab_nspname | tab_set | tab_idxname  | tab_altered | tab_comment
--------+------------+-------------+-------------+---------+--------------+-------------+--------------
      1 |      57420 | synctab     | lottu       |       1 | synctab_pkey | f           | sample table
(1 row)
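To remove an entire replication set instead of a single table, slonik's drop set command can be used; a minimal sketch, reusing the same cluster and node definitions as above, would be:

slonik<<_EOF_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=lottu host=192.168.1.201 user=slony';
node 2 admin conninfo = 'dbname=lottu host=192.168.1.202 user=slony';
drop set (id = 2, origin = 1);
_EOF_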

5.3 Removing Slony

[postgres@Postgres201 ~]$ cat slony_drop_node.sh
#!/bin/sh
MASTERDB=lottu
SLAVEDB=lottu
HOST1=192.168.1.201
HOST2=192.168.1.202
DBUSER=slony
slonik<<_EOF_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';
uninstall node (id = 1);
uninstall node (id = 2);
_EOF_

Run the script:

[postgres@Postgres201 ~]$ ./slony_drop_node.sh
<stdin>:4: NOTICE:  Slony-I: Please drop schema "_first_cluster"
<stdin>:4: NOTICE:  drop cascades to 175 other objects
DETAIL:  drop cascades to table _first_cluster.sl_node
drop cascades to table _first_cluster.sl_nodelock
drop cascades to table _first_cluster.sl_set
drop cascades to table _first_cluster.sl_setsync
drop cascades to table _first_cluster.sl_table
drop cascades to table _first_cluster.sl_sequence
drop cascades to table _first_cluster.sl_path
drop cascades to table _first_cluster.sl_listen
drop cascades to table _first_cluster.sl_subscribe
drop cascades to table _first_cluster.sl_event
drop cascades to table _first_cluster.sl_confirm
drop cascades to table _first_cluster.sl_seqlog
drop cascades to function _first_cluster.sequencelastvalue(text)
drop cascades to table _first_cluster.sl_log_1
drop cascades to table _first_cluster.sl_log_2
drop cascades to table _first_cluster.sl_log_script
drop cascades to table _first_cluster.sl_registry
drop cascades to table _first_cluster.sl_apply_stats
drop cascades to view _first_cluster.sl_seqlastvalue
drop cascades to view _first_cluster.sl_failover_targets
drop cascades to sequence _first_cluster.sl_local_node_id
drop cascades to sequence _first_cluster.sl_event_seq
drop cascades to sequence _first_cluster.sl_action_seq
drop cascades to sequence _first_cluster.sl_log_status
drop cascades to table _first_cluster.sl_config_lock
drop cascades to table _first_cluster.sl_event_lock
drop cascades to table _first_cluster.sl_archive_counter
drop cascades to table _first_cluster.sl_components
drop cascades to type _first_cluster.vactables
drop cascades to function _first_cluster.createevent(name,text)
drop cascades to function _first_cluster.createevent(name,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text,text,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text,text,text,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text,text,text,text,text,text)
drop cascades to function _first_cluster.createevent(name,text,text,text,text,text,text,text,text,text)
drop cascades to function _first_cluster.denyaccess()
drop cascades to trigger _first_cluster_denyaccess on table lottu.synctab
drop cascades to function _first_cluster.lockedset()
drop cascades to function _first_cluster.getlocalnodeid(name)
drop cascades to function _first_cluster.getmoduleversion()
drop cascades to function _first_cluster.resetsession()
drop cascades to function _first_cluster.logapply()
drop cascades to function _first_cluster.logapplysetcachesize(integer)
drop cascades to function _first_cluster.logapplysavestats(name,integer,interval)
drop cascades to function _first_cluster.checkmoduleversion()
drop cascades to function _first_cluster.decode_tgargs(bytea)
drop cascades to function _first_cluster.logtrigger()
drop cascades to trigger _first_cluster_logtrigger on table lottu.synctab
drop cascades to function _first_cluster.terminatenodeconnections(integer)
drop cascades to function _first_cluster.killbackend(integer,text)
drop cascades to function _first_cluster.seqtrack(integer,bigint)
drop cascades to function _first_cluster.slon_quote_brute(text)
drop cascades to function _first_cluster.slon_quote_input(text)
drop cascades to function _first_cluster.slonyversionmajor()
drop cascades to function _first_cluster.slonyversionminor()
drop cascades to function _first_cluster.slonyversionpatchlevel()
drop cascades to function _first_cluster.slonyversion()
drop cascades to function _first_cluster.registry_set_int4(text,integer)
drop cascades to function _first_cluster.registry_get_int4(text,integer)
drop cascades to function _first_cluster.registry_set_text(text,text)
drop cascades to function _first_cluster.registry_get_text(text,text)
drop cascades to function _first_cluster.registry_set_timestamp(text,timestamp with time zone)
drop cascades to function _first_cluster.registry_get_timestamp(text,timestamp with time zone)
drop cascades to function _first_cluster.cleanupnodelock()
drop cascades to function _first_cluster.registernodeconnection(integer)
drop cascades to function _first_cluster.initializelocalnode(integer,text)
drop cascades to function _first_cluster.storenode(integer,text)
drop cascades to function _first_cluster.storenode_int(integer,text)
drop cascades to function _first_cluster.enablenode(integer)
drop cascades to function _first_cluster.enablenode_int(integer)
drop cascades to function _first_cluster.disablenode(integer)
drop cascades to function _first_cluster.disablenode_int(integer)
drop cascades to function _first_cluster.dropnode(integer[])
drop cascades to function _first_cluster.dropnode_int(integer)
drop cascades to function _first_cluster.prefailover(integer,boolean)
drop cascades to function _first_cluster.failednode(integer,integer,integer[])
drop cascades to function _first_cluster.failednode2(integer,integer,bigint,integer[])
drop cascades to function _first_cluster.failednode3(integer,integer,bigint)
drop cascades to function _first_cluster.failoverset_int(integer,integer,bigint)
drop cascades to function _first_cluster.uninstallnode()
drop cascades to function _first_cluster.clonenodeprepare(integer,integer,text)
drop cascades to function _first_cluster.clonenodeprepare_int(integer,integer,text)
drop cascades to function _first_cluster.clonenodefinish(integer,integer)
drop cascades to function _first_cluster.storepath(integer,integer,text,integer)
drop cascades to function _first_cluster.storepath_int(integer,integer,text,integer)
drop cascades to function _first_cluster.droppath(integer,integer)
drop cascades to function _first_cluster.droppath_int(integer,integer)
drop cascades to function _first_cluster.storelisten(integer,integer,integer)
drop cascades to function _first_cluster.storelisten_int(integer,integer,integer)
drop cascades to function _first_cluster.droplisten(integer,integer,integer)
drop cascades to function _first_cluster.droplisten_int(integer,integer,integer)
drop cascades to function _first_cluster.storeset(integer,text)
drop cascades to function _first_cluster.storeset_int(integer,integer,text)
drop cascades to function _first_cluster.lockset(integer)
drop cascades to function _first_cluster.unlockset(integer)
drop cascades to function _first_cluster.moveset(integer,integer)
drop cascades to function _first_cluster.moveset_int(integer,integer,integer,bigint)
and 75 other objects (see server log for list)
<stdin>:5: NOTICE:  Slony-I: Please drop schema "_first_cluster"
<stdin>:5: NOTICE:  drop cascades to 175 other objects
DETAIL:  (the same cascade listing as above, repeated for node 2)
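If the dedicated slony role created in step 3.1 is no longer needed, it can be dropped on both nodes as well (an optional cleanup step, assuming the role owns no remaining objects):

psql -U postgres -d lottu -h 192.168.1.201 -c "drop user slony;"
psql -U postgres -d lottu -h 192.168.1.202 -c "drop user slony;"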

Perfect; everything is back to zero!
