
Because the laptop was restored from an old Time Machine backup, brew and everything it had installed were still x86_64 builds, so Homebrew needed to be reinstalled.

Homebrew

The x86_64 and ARM64 builds of Homebrew install into different directories:

x86_64 install prefix: /usr/local/homebrew

ARM64 install prefix: /opt/homebrew

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

After this the machine ends up with two copies of brew.

Switching between them

To switch back and forth between x64 and arm64 conveniently, I followed the article “Mac M1 安装 Homebrew 最佳实践” and set up the following.

File ~/.brew_arm

eval "$(/opt/homebrew/bin/brew shellenv)"

File ~/.brew_intel

eval "$(/usr/local/homebrew/bin/brew shellenv)"

Add the following to .zshrc:

# homebrew
alias brew_arm='source ~/.brew_arm'
alias brew_intel="source ~/.brew_intel"

To switch:

brew_intel # switch to x86_64
brew_arm # switch to arm64
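
After sourcing one of the two files it is easy to lose track of which brew is active. A quick sanity check like the one below helps (a minimal sketch; it only relies on which and brew --prefix, and on recent Homebrew printing a Rosetta line in brew config):

$ which brew                      # /opt/homebrew/bin/brew for arm64, /usr/local/homebrew/bin/brew for x86_64
$ brew --prefix                   # same information, reported by brew itself
$ brew config | grep -i rosetta   # recent Homebrew prints a "Rosetta 2: ..." line here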

redis

After switching to arm64 I installed a fresh Redis with brew install redis; the version was 7.x.

Then, when debugging a Spring Boot application in IDEA, it would hang partway through startup. The debug output showed Redisson connecting and closing over and over, so presumably something was off with the arm64 build of Redis.

First stop Redis with brew services stop redis, then uninstall it with brew uninstall redis.

Switch to the x64 environment with brew_intel and reinstall:

$ brew install redis
$ which redis-server
/usr/local/bin/redis-server # this is the x64 version

$ brew services start redis

$ file /usr/local/bin/redis-server
/usr/local/bin/redis-server: Mach-O 64-bit executable x86_64 # confirmed x86_64

After that, debugging in IDEA worked normally again.

Installing old Node versions (v14 and below)

The Node installation part is short because, following the usual steps, nothing goes wrong. But when you try to install Node v14 or below with nvm, it will most likely fail, and at work we often depend on exactly those older LTS versions. Why does it fail? Because the old Node releases were never built for arm64, so they do not fit the M1 chip. Here are two ways to get an old Node installed anyway.

On an arm chip, nvm will refuse to install old Node versions, or complain that some dependency does not support arm64. In that case you need the x64 build of Node.

Method 1:

The idea is to launch the terminal under Rosetta 2, so the installation runs translated to x86, where it succeeds just as it would on Intel.

  • In Finder, open Applications and find iTerm.app under Utilities
  • Right-click it and choose Get Info
  • Check “Open using Rosetta”

Then reopen iTerm and run nvm install 12.20.12 again; it will work.

Once this is done you no longer need to keep iTerm running under Rosetta.

Method 2

In the terminal, run:

arch -x86_64 zsh

This command starts a shell under Rosetta 2.
You can then install the old Node version with nvm install 12.20.12.
Afterwards the installed executables can be used without Rosetta 2, i.e. the x64 Node can be used interchangeably with your other Node versions.
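
To confirm what a given Node build actually is, Node can report its own architecture; a tiny check (nothing assumed beyond node being on PATH):

$ node -p process.arch      # "x64" for the Rosetta-installed build, "arm64" for native builds
$ node -p process.version   # e.g. v12.20.12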

Installing Elasticsearch with Docker

Background:

The project needs Elasticsearch. So far I had only used single-node setups; since this is for real use, it was time to learn about clustering and to take security into account.

I had just got a MacBook Pro M2 16" (M2, ARM64) and was not entirely sure how well it handles containers and VMs, so this time I also installed and tested everything in parallel on the old MacBook Pro 2015 15" (Intel i7).

Reference: most write-ups online describe the same approach; I mostly followed the 简书 article docker-compose安装elasticsearch8.5.0集群 by “卖菇凉的小火柴丶”.

Single-node test first

Prepare the environment file .env; this same file is reused by the later test setups.

# password for the elastic account (at least six characters); do not use a purely numeric password or it will simply die on you
ELASTIC_PASSWORD=iampassword

# password for the kibana_system account (at least six characters); this account is only used for Kibana's internal setup and cannot be used to query ES; again, not purely numeric
KIBANA_PASSWORD=iampassword

# version of ES and Kibana
STACK_VERSION=7.17.9

# cluster name
CLUSTER_NAME=docker-cluster

# x-pack security setting; basic is used here; if you choose trial it expires after 30 days
LICENSE=basic
#LICENSE=trial

# port ES is mapped to on the host
ES_PORT=9200

# port Kibana is mapped to on the host
KIBANA_PORT=5601

# memory limit for the containers, adjust to your hardware (in bytes; currently 1 GB)
MEM_LIMIT=1073741824

# compose project name, used as the prefix of the container names
COMPOSE_PROJECT_NAME=es

Then prepare docker-compose.yaml:

version: '3'
services:
  es-single:
    image: elasticsearch:${STACK_VERSION}
    container_name: es-single
    volumes:
      - ./data/esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=es-single
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  kibana-single:
    depends_on:
      - es-single
    image: kibana:${STACK_VERSION}
    container_name: kibana-single
    ports:
      - ${KIBANA_PORT}:5601
    volumes:
      - ./data/kibanadata:/usr/share/kibana/data
    environment:
      - SERVERNAME=kibana-single
      - ELASTICSEARCH_HOSTS=http://es-single:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
    mem_limit: ${MEM_LIMIT}

Then start it with docker-compose up -d.
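Rather than guessing how long to wait, a small poll loop can block until the two services answer; a minimal sketch, assuming curl is available and reusing the password from .env:

until curl -sf -u elastic:iampassword http://localhost:9200 >/dev/null; do echo "waiting for es..."; sleep 2; done
until curl -s -I http://localhost:5601 | grep -q 302; do echo "waiting for kibana..."; sleep 2; done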

Wait ten-odd seconds, then check with curl -u elastic:iampassword http://localhost:9200 (a browser works too, but curl looks cooler):

{
"name" : "es-single",
"cluster_name" : "es-docker-cluster",
"cluster_uuid" : "0pIB-A9kScyLkhj6YkYSjA",
"version" : {
"number" : "7.17.9",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ef48222227ee6b9e70e502f0f0daa52435ee634d",
"build_date" : "2023-01-31T05:34:43.305517834Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

Another ten-odd seconds later, open http://localhost:5601 in a browser and the login page appears.

Showing off with curl, it looks like this:

$ curl -v  http://localhost:5601
* Trying 127.0.0.1:5601...
* Connected to localhost (127.0.0.1) port 5601 (#0)
> GET / HTTP/1.1
> Host: localhost:5601
> User-Agent: curl/7.86.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< location: /login?next=%2F
< x-content-type-options: nosniff
< referrer-policy: no-referrer-when-downgrade
< content-security-policy: script-src 'unsafe-eval' 'self'; worker-src blob: 'self'; style-src 'unsafe-inline' 'self'
< kbn-name: f382d92d1bda
< kbn-license-sig: da420c53321c02b93e5b67b614ccdf37075cab5cc99a13d97fca5727603889d0
< cache-control: private, no-cache, no-store, must-revalidate
< content-length: 0
< Date: Sat, 18 Feb 2023 04:54:46 GMT
< Connection: keep-alive
< Keep-Alive: timeout=120
<

That's the single-node version done.

Cluster version

Create a new cluster directory and copy the .env file into it.

Create a new docker-compose.yaml with the following content:

version: '3'
services:
  setup-cluster:
    image: elasticsearch:${STACK_VERSION}
    container_name: setup-cluster
    volumes:
      - ./setup-cluster.sh:/setup-cluster.sh
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - KIBANA_PASSWORD=${KIBANA_PASSWORD}
    user: "0"
    command: >
      bash /setup-cluster.sh

  es-cluster-01:
    depends_on:
      - setup-cluster
    image: elasticsearch:${STACK_VERSION}
    container_name: es-cluster-01
    volumes:
      - ./data/esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=es-cluster-01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es-cluster-01,es-cluster-02,es-cluster-03
      - discovery.seed_hosts=es-cluster-02,es-cluster-03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      # - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: curl -u elastic:${ELASTIC_PASSWORD} -s -f localhost:9200/_cat/health >/dev/null || exit 1
      interval: 30s
      timeout: 10s
      retries: 5

  es-cluster-02:
    image: elasticsearch:${STACK_VERSION}
    container_name: es-cluster-02
    depends_on:
      - es-cluster-01
    volumes:
      # - ./certs:/usr/share/elasticsearch/config/certs
      - ./data/esdata02:/usr/share/elasticsearch/data
    ports:
      - '9202:9200'
      - '9302:9300'
    environment:
      - node.name=es-cluster-02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es-cluster-01,es-cluster-02,es-cluster-03
      - discovery.seed_hosts=es-cluster-01,es-cluster-03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      # - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: curl -u elastic:${ELASTIC_PASSWORD} -s -f localhost:9200/_cat/health >/dev/null || exit 1
      interval: 30s
      timeout: 10s
      retries: 5

  es-cluster-03:
    image: elasticsearch:${STACK_VERSION}
    container_name: es-cluster-03
    depends_on:
      - es-cluster-01
    volumes:
      - ./data/esdata03:/usr/share/elasticsearch/data
    ports:
      - '9203:9200'
      - '9303:9300'
    environment:
      - node.name=es-cluster-03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es-cluster-01,es-cluster-02,es-cluster-03
      - discovery.seed_hosts=es-cluster-01,es-cluster-02
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      # - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: curl -u elastic:${ELASTIC_PASSWORD} -s -f localhost:9200/_cat/health >/dev/null || exit 1
      interval: 30s
      timeout: 10s
      retries: 5

  kibana-cluster:
    depends_on:
      es-cluster-01:
        condition: service_healthy
      es-cluster-02:
        condition: service_healthy
      es-cluster-03:
        condition: service_healthy
    image: kibana:${STACK_VERSION}
    container_name: kibana-cluster
    ports:
      - ${KIBANA_PORT}:5601
    volumes:
      - ./data/kibanadata:/usr/share/kibana/data
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=["http://es-cluster-01:9200","http://es-cluster-02:9200","http://es-cluster-03:9200"]
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120


Start it with docker-compose up -d.

A minute later, Kibana is still starting:

$ docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
es-cluster-01 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-01 About a minute ago Up About a minute (healthy) 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
es-cluster-02 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-02 About a minute ago Up About a minute (healthy) 0.0.0.0:9202->9200/tcp, 0.0.0.0:9302->9300/tcp
es-cluster-03 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-03 About a minute ago Up About a minute (healthy) 0.0.0.0:9203->9200/tcp, 0.0.0.0:9303->9300/tcp
kibana-cluster kibana:7.17.9 "/bin/tini -- /usr/l…" kibana-cluster About a minute ago Up 11 seconds (health: starting) 0.0.0.0:5601->5601/tcp
setup-cluster elasticsearch:7.17.9 "/bin/tini -- /usr/l…" setup-cluster About a minute ago Up About a minute 9200/tcp, 9300/tcp

A while later Kibana still had not come up, and es-cluster-01 had exited instead, with nothing useful in its log:

$ docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
es-cluster-02 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-02 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:9202->9200/tcp, 0.0.0.0:9302->9300/tcp
es-cluster-03 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-03 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:9203->9200/tcp, 0.0.0.0:9303->9300/tcp
kibana-cluster kibana:7.17.9 "/bin/tini -- /usr/l…" kibana-cluster 2 minutes ago Up About a minute (health: starting) 0.0.0.0:5601->5601/tcp
setup-cluster elasticsearch:7.17.9 "/bin/tini -- /usr/l…" setup-cluster 2 minutes ago Up 2 minutes 9200/tcp, 9300/tcp

So I ran docker-compose up -d again to bring es-cluster-01 back; the result:

$ docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
es-cluster-01 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-01 19 minutes ago Up 16 minutes (healthy) 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
es-cluster-03 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-03 19 minutes ago Up 19 minutes (healthy) 0.0.0.0:9203->9200/tcp, 0.0.0.0:9303->9300/tcp
kibana-cluster kibana:7.17.9 "/bin/tini -- /usr/l…" kibana-cluster 19 minutes ago Up 18 minutes (healthy) 0.0.0.0:5601->5601/tcp
setup-cluster elasticsearch:7.17.9 "/bin/tini -- /usr/l…" setup-cluster 19 minutes ago Up 19 minutes 9200/tcp, 9300/tcp

Now node 02 had exited instead, again without any error message. It felt as if only two nodes of this cluster could stay up at a time.

At this point both ES and Kibana were reachable and worked fine.

Checking the cluster with ElasticSearch Head, everything looked normal and the cluster health was green.

Running on the old laptop

On the 2015 MacBook everything starts more slowly; CPU, memory and disk are all weaker.

The first run reported node 03 as unhealthy; presumably the health-check retries ran out, so Kibana gave up:

$ docker-compose up -d
[+] Running 4/5
⠿ Container setup-cluster Started 0.9s
⠿ Container es-cluster-01 Healthy 156.1s
⠿ Container es-cluster-03 Error 155.6s
⠿ Container es-cluster-02 Healthy 156.5s
⠿ Container kibana-cluster Created 0.1s
dependency failed to start: container for service "es-cluster-03" is unhealthy

$ docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
es-cluster-01 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-01 2 minutes ago Up About a minute (health: starting) 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
es-cluster-02 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-02 2 minutes ago Up About a minute (health: starting) 0.0.0.0:9202->9200/tcp, 0.0.0.0:9302->9300/tcp
es-cluster-03 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-03 2 minutes ago Up About a minute (health: starting) 0.0.0.0:9203->9200/tcp, 0.0.0.0:9303->9300/tcp
setup-cluster elasticsearch:7.17.9 "/bin/tini -- /usr/l…" setup-cluster 2 minutes ago Up About a minute 9200/tcp, 9300/tcp


So start it again manually with docker-compose up -d:

$ docker-compose up -d
[+] Running 5/5
⠿ Container setup-cluster Running 0.0s
⠿ Container es-cluster-01 Healthy 0.6s
⠿ Container es-cluster-03 Healthy 0.6s
⠿ Container es-cluster-02 Healthy 0.6s
⠿ Container kibana-cluster Started

But this time Kibana refused to start no matter what; the logs showed:

es-cluster-02 | {"type": "server", "timestamp": "2023-02-18T06:11:25,259Z", "level": "WARN", "component": "o.e.c.r.a.DiskThresholdMonitor", "cluster.name": "docker-cluster", "node.name": "es-cluster-02", "message": "high disk watermark [90%] exceeded on [pdT2lWRmQEi04k5GYvrWuA][es-cluster-01][/usr/share/elasticsearch/data/nodes/0] free: 88.6gb[9.2%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete", "cluster.uuid": "xaadt2vISeWTK4hk8RDJeA", "node.id": "7rYuhhyeS86iyKOtUChBKw" }

Roughly, this means the disk is nearly full and shards will no longer be allocated to that node; the workaround found online is:

curl -XPUT "http://localhost:9200/_cluster/settings" \
-H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster": {
"routing": {
"allocation.disk.threshold_enabled": false
}
}
}
}'
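
To confirm the setting took effect, and to turn the threshold back on once disk space has been freed, the same settings endpoint can be queried and reset; a small sketch (the values shown just restore the default):

curl -s http://localhost:9200/_cluster/settings?pretty        # should now show allocation.disk.threshold_enabled: "false"
curl -XPUT http://localhost:9200/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.routing.allocation.disk.threshold_enabled": null}}'   # null removes the override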

Right after running the PUT above, the Kibana log started scrolling quickly. I still want to find out what Kibana does at startup that makes it so slow.

CPU usage was high at this point and the fans were roaring.

Much later, es-cluster-01 had exited, once again without any error message, and Kibana marked itself unhealthy:

$ docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
es-cluster-02 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-02 30 minutes ago Up 29 minutes (healthy) 0.0.0.0:9202->9200/tcp, 0.0.0.0:9302->9300/tcp
es-cluster-03 elasticsearch:7.17.9 "/bin/tini -- /usr/l…" es-cluster-03 30 minutes ago Up 29 minutes (healthy) 0.0.0.0:9203->9200/tcp, 0.0.0.0:9303->9300/tcp
kibana-cluster kibana:7.17.9 "/bin/tini -- /usr/l…" kibana-cluster 29 minutes ago Up 26 minutes (unhealthy) 0.0.0.0:5601->5601/tcp
setup-cluster elasticsearch:7.17.9 "/bin/tini -- /usr/l…" setup-cluster 30 minutes ago Up 29 minutes 9200/tcp, 9300/tcp

Well, the ES cluster itself seems fine, but Kibana apparently does quite a lot of work while starting. After one more restart everything was normal.

Next, the question of why the cluster would only keep two nodes up. Accessing any single node, each one looked perfectly healthy.

After a lot of confused flailing I finally noticed that Docker had only been given 1 CPU and 2.8 GB of RAM 😲. Fine. I increased the memory and the world went quiet.
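
To see what Docker has actually been allotted without opening the GUI, docker info can report it; a small check (the NCPU and MemTotal fields exist in recent Docker versions):

docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'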

Cluster version with security

Create a new cluster-ssl directory and copy the .env file into it.

Create a new docker-compose.yml; the main addition is the x-pack configuration:

version: '3'
services:
  setupssl:
    image: elasticsearch:${STACK_VERSION}
    container_name: setupssl
    volumes:
      - ./data/certs:/usr/share/elasticsearch/config/certs
      - ./setup.sh:/setup.sh
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - KIBANA_PASSWORD=${KIBANA_PASSWORD}
    user: "0"
    command: >
      bash /setup.sh
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setupssl:
        condition: service_healthy
    image: elasticsearch:${STACK_VERSION}
    container_name: es01
    volumes:
      - ./data/certs:/usr/share/elasticsearch/config/certs
      - ./data/esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: elasticsearch:${STACK_VERSION}
    container_name: es02
    volumes:
      - ./data/certs:/usr/share/elasticsearch/config/certs
      - ./data/esdata02:/usr/share/elasticsearch/data
    ports:
      - '9202:9200'
      - '9302:9300'
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: elasticsearch:${STACK_VERSION}
    container_name: es03
    volumes:
      - ./data/certs:/usr/share/elasticsearch/config/certs
      - ./data/esdata03:/usr/share/elasticsearch/data
    ports:
      - '9203:9200'
      - '9303:9300'
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: kibana:${STACK_VERSION}
    container_name: kibana
    ports:
      - ${KIBANA_PORT}:5601
    volumes:
      - ./data/certs:/usr/share/kibana/config/certs
      - ./data/kibanadata:/usr/share/kibana/data
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

setup.sh

if [ x${ELASTIC_PASSWORD} == x ]; then
  echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
  exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
  echo "Set the KIBANA_PASSWORD environment variable in the .env file";
  exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
  echo "Creating CA";
  bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
  unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
  echo "Creating certs";
  echo -ne \
  "instances:\n"\
  "  - name: es01\n"\
  "    dns:\n"\
  "      - es01\n"\
  "      - localhost\n"\
  "    ip:\n"\
  "      - 127.0.0.1\n"\
  "  - name: es02\n"\
  "    dns:\n"\
  "      - es02\n"\
  "      - localhost\n"\
  "    ip:\n"\
  "      - 127.0.0.1\n"\
  "  - name: es03\n"\
  "    dns:\n"\
  "      - es03\n"\
  "      - localhost\n"\
  "    ip:\n"\
  "      - 127.0.0.1\n"\
  > config/certs/instances.yml;
  bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
  unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";

After that it started up without problems.
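
A quick way to verify that TLS and the passwords are wired up correctly is to hit the cluster through the generated CA; a minimal sketch run from the cluster-ssl directory (paths follow the volumes defined above):

curl --cacert data/certs/ca/ca.crt -u elastic:iampassword https://localhost:9200                 # should return the usual cluster info JSON
curl --cacert data/certs/ca/ca.crt -u elastic:iampassword https://localhost:9200/_cat/nodes?v    # all three nodes should be listed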

Installing miniconda on arm64 macOS

Download miniconda (Miniforge):

chmod +x ./Miniforge3-MacOSX-arm64.sh
./Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate

conda create -n tf python==3.9
conda activate tf

Install tensorflow

By default this installs tensorflow 2.8; a specific version can also be requested, but the default is best.

conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
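
To check that the Metal plugin is actually picked up, tensorflow can list its devices; a small sanity check inside the tf environment (just python called from the shell, nothing else assumed):

python -c 'import tensorflow as tf; print(tf.__version__); print(tf.config.list_physical_devices())'   # a GPU device should appear alongside the CPU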

For a tensorflow introduction, the docs can be read here:

https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/text_classification.ipynb?hl=zh-cn

It says you can play with it in Google Colab, so let's install the pieces.

Install jupyter

conda install jupyter notebook

Add support for Google Colab:

pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0

Then, in the Connect menu at the top right of Colab, connect to the local Jupyter, and you can run Python in the page.

Judging from the URL, it runs code straight from a GitHub repo, so I cloned http://github.com/tensorflow/docs-l10n,

and can then use https://colab.research.google.com/github/wanghongxing/docs-l10n/blob/master/site/zh-cn/tutorials/keras/text_classification.ipynb?hl=zh-cn#scrollTo=6-tTFS04dChr to work through the tensorflow text-classification example. Now it runs against my own repo, which feels rather impressive.

Before running it, don't forget to install matplotlib: pip install matplotlib

I have a lot of old DVDs, discs I burned for the kids years ago. Time has passed, the DVD player was thrown out, and optical drives are nearly obsolete. The most convenient way to watch now is on a phone, so I decided to pull a copy onto a hard drive and convert it to a phone-friendly format.

Phones basically play h264- or h265-encoded MP4 files. I looked at a lot of tools; they were all paid, or the free tier only exports half the video. I did not feel like paying, so I did it myself with ffmpeg.

This approach needs a DVD drive or DVD writer; if an old disc can no longer be read you would need a standalone DVD player and go the video-capture route instead, which I will try later after buying a player. (Aside: the cheap green DVD-R discs are mostly unreadable by now; only the 清华紫光 ones held up.)

Install the software:

brew install ffmpeg

Copying

"Ripping" a disc to the hard drive is really just copying the contents of the VIDEO_TS directory on the DVD. I have a pile of discs, so I copied them one by one, naming each folder after the date the disc was burned.

One disc's VTS_01_4.VOB would not copy.

The internet says ddrescue is the tool for salvaging it.

GNU ddrescue is a data-recovery tool for disks, CD-ROMs and other digital storage media. It copies raw blocks from one device or file to another while handling read errors intelligently, extracting the still-good sectors from partially read blocks to minimise data loss. It is written in C++, is open source, and was first released in 2004.

brew install ddrescue

Locate the drive using diskutil list.

/dev/disk3 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: PRJ_20090118 *4.4 GB disk3

Unmount the disk

diskutil unmount /dev/disk3   

Start a rescue operation of the disk into an image. Make sure the location of Rescue.dmg is replaced with your desired location.

sudo /usr/local/bin/ddrescue -v -n -c 4096 /dev/disk3 Rescue.dmg Rescue.log

Note: because it hung, I force-killed it and pulled the drive's power; after plugging it back in, disk3 had become disk2. No idea why.

The command above simply died on me, so:

sudo /usr/local/bin/ddrescue -c 4096 -d -r 3 -v /dev/disk2  Rescue.dmg Rescue.log

It then reported: ddrescue: Direct disc access not available.

After a lot of searching: macOS does not support direct access, but the raw device can be used instead;

and on macOS the raw device for a disk is /dev/rdisk*.

sudo /usr/local/bin/ddrescue  -r1 -b2048  /dev/rdisk2  Rescue.dmg Rescue.log

After a whole night of this I gave up: it was far too slow, recovering only 10-odd MB of bad blocks in 12 hours.

Trying out the encoding

First, a few usage examples found online.

h264

How to convert DVD to mp4 with ffmpeg — Ko Takagi, posted 2021-04-17, updated 2022-08-08

ffmpeg -i VTS_01_1.VOB -b:v 1500k -r 30 -vcodec h264 \
-strict -2 -acodec aac -ar 44100 -f mp4 convert.mp4


ffmpeg -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB" \
-b:v 1500k -r 30 -vcodec h264 -strict -2 -acodec aac -ar 44100 -f mp4 convert.mp4


That is, a single-file conversion and a multi-file concat conversion; this author also pins the video and audio bitrates.

Give it a try:

ffmpeg -i VTS_01_1.VOB -b:v 1500k -r 30 -vcodec h264 \
-strict -2 -acodec aac -ar 44100 -f mp4 VTS_01_1-1500k.mp4


-rwxrwxrwx 1 whx staff 977M 9 21 2008 VTS_01_1.VOB
-rw-r--r-- 1 whx staff 194M 1 20 20:26 VTS_01_1-1500k.mp4

There were warnings along the way:


[mpeg @ 0x7fd5c0816400] stream 1 : no PTS found at end of file, duration not set
[ac3 @ 0x7fd5c081ca00] incomplete frame8kB time=00:16:28.91 bitrate=1628.7kbits/s dup=4945 drop=0 speed=2.92x

My guess is the files should be converted together as one input.

Still, size-wise: the VOB is 977 MB and the resulting MP4 is 194 MB.

The bitrate=1628.7kbits/s in the output should mean the overall bitrate is about 1628k.

Now try four files together:

ffmpeg -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB" \
-b:v 1500k -r 30 -vcodec h264 -strict -2 -acodec aac -ar 44100 -f mp4 all-h264-1500k.mp4

-rwxrwxrwx 1 whx staff 977M 9 21 2008 VTS_01_1.VOB
-rwxrwxrwx 1 whx staff 977M 9 22 2008 VTS_01_2.VOB
-rwxrwxrwx 1 whx staff 977M 9 22 2008 VTS_01_3.VOB
-rwxrwxrwx 1 whx staff 977M 9 22 2008 VTS_01_4.VOB
-rwxrwxrwx 1 whx staff 170M 9 22 2008 VTS_01_5.VOB
-rw-r--r-- 1 whx staff 781M 1 20 21:00 all-h264-1500k.mp4

h265

This time I want to skip the bitrate cap and only specify the codec:

ffmpeg -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB" \
-vcodec libx265 all-x265.mp4

Output #0, mp4, to 'all-x265.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: hevc (hev1 / 0x31766568), yuv420p(tv, bt470bg, top coded first (swapped)), 720x576 [SAR 16:15 DAR 4:3], q=2-31, 25 fps, 12800 tbn
Metadata:
encoder : Lavc58.134.100 libx265
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s
Metadata:
encoder : Lavc58.134.100 aac


-rw-r--r-- 1 whx staff 265M 1 20 21:38 all-x265.mp4

H265 encoding is CPU-hungry; it was painfully slow. Along the way it showed:

frame=85892 fps= 48 q=34.4 size=  231936kB time=00:57:15.47 bitrate= 553.1kbits/s speed=1.93x

The bitrate is apparently around 553k, and the final 265 MB file size is quite pleasing.

But: QuickTime Player refuses to recognise it.

Searching around: QuickTime Player and iOS no longer support MP4/MOV files with the hev1 tag.

Looking back at the output, Stream #0:0: Video: hevc (hev1 / 0x31766568) — so the output was indeed tagged hev1.

The two tags differ roughly as follows:

‘hvc1’ stores all parameter sets inside the MP4 container below the sample description boxes.
‘hev1’ stores all parameter sets in band (inside the HEVC stream).
I decided to try it, converting only one VOB so it would not take forever.

ffmpeg -i "concat:VTS_01_1.VOB" \
-vcodec libx265 -vtag hvc1 VTS_01_1-x265-hvc1.mp4

Output #0, mp4, to 'VTS_01_1-x265-hvc1.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: hevc (hvc1 / 0x31637668), yuv420p(tv, bt470bg, top coded first (swapped)), 720x576 [SAR 16:15 DAR 4:3], q=2-31, 25 fps, 12800 tbn
Metadata:
encoder : Lavc58.134.100 libx265
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 128 kb/s


-rwxrwxrwx 1 whx staff 977M 9 21 2008 VTS_01_1.VOB
-rw-r--r-- 1 whx staff 194M 1 20 20:26 VTS_01_1-1500k.mp4
-rw-r--r-- 1 whx staff 73M 1 20 22:25 VTS_01_1-x265-hvc1.mp4

The compression ratio is excellent.
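
To double-check which tag actually ended up in the container, ffprobe (bundled with ffmpeg) can report the codec and tag; a small sketch on the file just produced:

ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,codec_tag_string,bit_rate \
  -of default=noprint_wrappers=1 VTS_01_1-x265-hvc1.mp4
# expected: codec_name=hevc and codec_tag_string=hvc1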

ffmpeg -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB" \
-vcodec libx265 -vtag hvc1 all-x265.mp4


With nothing better to do, a few more experiments once that finishes:

ffmpeg -codecs |grep EV |grep H.26

DEV.L. flv1 FLV / Sorenson Spark / Sorenson H.263 (Flash Video) (decoders: flv ) (encoders: flv )
DEV.L. h261 H.261
DEV.L. h263 H.263 / H.263-1996, H.263+ / H.263-1998 / H.263 version 2
DEV.L. h263p H.263+ / H.263-1998 / H.263 version 2
DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (encoders: libx264 libx264rgb h264_videotoolbox )
DEV.L. hevc H.265 / HEVC (High Efficiency Video Coding) (encoders: libx265 hevc_videotoolbox )

The EV in the grep picks out entries that are video (V) and have an encoder (E).

Trying without all the fiddly parameters

The first h264 test used a lot of options; try h264 again with fewer of them:

ffmpeg -i "concat:VTS_01_1.VOB|VTS_01_2.VOB|VTS_01_3.VOB|VTS_01_4.VOB" \
-vcodec h264 all-h264.mp4

frame=64350 fps= 83 q=28.0 size= 443648kB time=00:42:53.82 bitrate=1412.0kbits/s speed=3.34x

-rw-r--r-- 1 whx staff 636M 1 20 22:02 all-h264.mp4

The h264 bitrate came out around 1412k and it plays back nicely.

Batch conversion

With this many discs, converting them one at a time is not the programmer way, so script it:

## find every VOB file and convert it to an h265-encoded mp4
find ./ -name '*.VOB' -exec bash -c 'ffmpeg -i $0 -vcodec libx265 -vtag hvc1 ${0/VOB/mp4}' {} \;
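
One caveat with the one-liner above: the unquoted $0 breaks on paths that contain spaces. A slightly more defensive variant of the same idea (same codec flags, just quoting added; skip it if your paths are plain):

find . -name '*.VOB' -exec bash -c 'ffmpeg -i "$0" -vcodec libx265 -vtag hvc1 "${0%.VOB}.mp4"' {} \;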


Trying the GPU

The machine is a 2015 15" MacBook Pro: 2.5 GHz quad-core Intel Core i7 with an AMD Radeon R9 M370X 2 GB. Seems worth testing the GPU encoder.

Try h264

ffmpeg -i VTS_02_1.VOB -c:v h264_videotoolbox  whx-h264-gpu.mp4

Blazingly fast, but the result is unwatchable — pretty much everything is macroblocks.

Switch to a 1M bitrate and try again:

ffmpeg -i VTS_02_1.VOB -c:v h264_videotoolbox  -b:v 1000k whx-h264-gpu-1m.mp4

Very fast, and the quality is decent.

ffmpeg -i VTS_02_1.VOB -c:v h264_videotoolbox  -b:v 500k whx-h264-gpu-500k.mp4

At 500k it is even faster, but the quality falls apart again.

At 1500k:

ffmpeg -i VTS_01_1.VOB -c:v h264_videotoolbox  -b:v 1500k whx-h264-gpu-1500k.mp4

Fast, and the quality is good.

Try h265

ffmpeg -i VTS_01_1.VOB -c:v h265_videotoolbox  -vtag hvc1  whx-x265-gpu.mp4

[hevc_videotoolbox @ 0x7f87d8058a00] Error: cannot create compression session: -12908
[hevc_videotoolbox @ 0x7f87d8058a00] Try -allow_sw 1. The hardware encoder may be busy, or not supported.
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height

ffmpeg -i VTS_02_1.VOB -c:v hevc_videotoolbox  -b:v 1000k  -vtag hvc1  whx-x265-gpu.mp4

[hevc_videotoolbox @ 0x7f78eb80f200] Error: cannot create compression session: -12908
[hevc_videotoolbox @ 0x7f78eb80f200] Try -allow_sw 1. The hardware encoder may be busy, or not supported.
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height


So GPU h265 encoding fails.

Presumably the GPU is too old to support h265 hardware encoding.

Pros and cons of the GPU encoder

Pros: very fast

Cons: files are much larger

Batch command using the GPU

The disk usage is higher but the speed wins, so I decided to use the GPU:

find ./ -name '*.VOB' -exec bash -c 'ffmpeg -i $0 -c:v h264_videotoolbox  -b:v 1500k ${0/VOB/mp4}' {} \;


Processing the video files from the camcorder

The camcorder still holds a lot of footage, all MPEG-2 encoded; to watch it conveniently on a phone, copy it to the hard drive and convert it to h265.


## find every MPG file and convert it to an h265-encoded mp4
find . -name '*.MPG' -exec bash -c 'ffmpeg -i $0 -vcodec libx265 -vtag hvc1 ${0/MPG/mp4}' {} \;


Adding a date suffix to the generated files

The MPG files copied off the camcorder have purely numeric names, so you cannot tell the year and month, but the copies keep their original creation dates. Tweak the script to put the date into the converted file name:

#!/bin/sh
set +x
convertFile(){
  # file creation date becomes the suffix of the output name
  prefix=`date -r ${0} "+%Y年%m月%d日%H点%M分%S"`
  #fname=${0/.MPG/-${prefix}.mp4}
  echo "file: $1 ${prefix}"
  ffmpeg -i $0 -vcodec libx265 -vtag hvc1 ${0/.MPG/-${prefix}.mp4}
}
export -f convertFile
find . -name '*.MPG' -exec bash -c 'convertFile ${0}' {} \;


Changing the bitrate, and setting a scaled output size plus bitrate:

ffmpeg -i 浩之宝视频2024-1-9.mp4   -r 15 -b 350k  -vcodec libx265 -vtag hvc1 浩之宝视频2024-1-9-350.mp4
ffmpeg -i 浩之宝视频2024-1-9.mp4 -vf scale=iw*.8:ih*.8 -r 15 -b 350k -vcodec libx265 -vtag hvc1 浩之宝视频2024-1-9-350k.mp4
ffmpeg -i 澎众店视频2024-1-9.mp4 -vf scale=iw*.8:ih*.8 -r 15 -b 350k -vcodec libx265 -vtag hvc1 澎众店视频2024-1-9-3501k.mp4
ffmpeg -i 澎众店视频2024-1-9.mp4 -r 15 -b 350k -vcodec libx265 -vtag hvc1 澎众店视频2024-1-9-350k.mp4




ffmpeg -i 1彭众.mp4 -r 15 -b 200k -vcodec libx265 -vtag hvc1 1彭众-350k.mp4
ffmpeg -i 2深蓝.mp4 -r 15 -b 200k -vcodec libx265 -vtag hvc1 2深蓝-350k.mp4



My application is based on ruoyi-vue-pro, and I noticed it ships a lot of files with an .http suffix. A quick search led to 日拱一兵's post “IntelliJ IDEA的这个接口调试工具真是太好用了!”, which explained that this is IntelliJ IDEA's HTTP Client, so I sat down and studied it.

Creating an HTTP client file

In IDEA, choose New File and pick HTTP Request.

(screenshot: 截屏2023-01-18 10.50.17)

Environment variables

Note: most of the examples below are taken from ruoyi-vue-pro.

Environment variables are defined in environment files, of which there are two kinds:

  1. A file named http-client.env.json (really just JSON), holding the ordinary variables used across the whole project
  2. Optionally a file named http-client.private.env.json; as the name suggests, this one holds sensitive values such as passwords and tokens. It is added to the VCS ignore file by default, and it has higher priority than the other environment files, i.e. its variables override the same variables defined elsewhere

The contents look like this:

{
"local": {
"baseUrl": "http://127.0.0.1:48080/platform/admin-api",
"token": "test1",
"adminTenentId": "1",

"appApi": "http://127.0.0.1:48080/platform/app-api",
"appToken": "test1",
"appTenentId": "1"
},
"gateway": {
"baseUrl": "http://127.0.0.1:8888/platform/admin-api",
"token": "test1",
"adminTenentId": "1",

"appApi": "http://127.0.0.1:8888/platform/app-api",
"appToken": "test1",
"appTenentId": "1"
}
}

Note: the examples below follow 日拱一兵's post.

Using a response handler script

We want every request after a successful login to carry the returned token automatically, instead of pasting it into the header each time. So write the accessToken from the response JSON straight into a global variable: start the block with '>' and wrap the JS snippet in template-style tags:


> {%
client.global.set("token", response.body.data.accessToken);

%}

Editing the HTTP request file

Model it on a real project scenario:

  1. The user logs in, usually a POST request, and gets a token on success
  2. Every later request carries that token in its headers

Click Add Request, pick the method, and start writing.

Login

### call the /login endpoint => success (no captcha)
POST {{baseUrl}}/system/auth/login
Content-Type: application/json
tenant-id: {{adminTenentId}}

{
"username": "admin",
"password": "admin123"
}

> {%
client.global.set("token", response.body.data.accessToken);
%}

Run it

http://{{baseUrl}}/system/auth/login

HTTP/1.1 200
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
trace-id:
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 18 Jan 2023 02:43:41 GMT
Keep-Alive: timeout=60
Connection: keep-alive

{
"code": 200,
"data": {
"userId": 1,
"accessToken": "85f67291fcb54d65b9dffd35098ceafb",
"refreshToken": "c14484195eec4d8ca0548e22ff6858a5",
"expiresTime": "2023-01-18 11:13:41"
},
"message": ""
}
Response file saved.
> 2023-01-18T104341.200.json

Response code: 200; Time: 138ms; Content length: 180 bytes

Then use that token in the following requests.

Fetching the permission info needs the token that was just returned:

### call the /get-permission-info endpoint => success
GET {{baseUrl}}/system/auth/get-permission-info
Authorization: Bearer {{token}}
tenant-id: {{adminTenentId}}

### call the /list-menus endpoint => success
GET {{baseUrl}}/system/auth/list-menus
Authorization: Bearer {{token}}
#Authorization: Bearer a6aa7714a2e44c95aaa8a2c5adc2a67a
tenant-id: {{adminTenentId}}

The output does not show the headers that were sent, but you can see the authentication succeeded:

http://{{baseUrl}}/system/auth/get-permission-info

HTTP/1.1 200
Vary: Origin
Vary: Access-Control-Request-Method
Vary: Access-Control-Request-Headers
trace-id:
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
Content-Type: application/json
Transfer-Encoding: chunked
Date: Wed, 18 Jan 2023 02:43:56 GMT
Keep-Alive: timeout=60
Connection: keep-alive

{
"code": 200,
"data": {
"user": {
"id": 1,
"nickname": "老王",
"avatar": "http://127.0.0.1:48080/platform/admin-api/infra/file/5/get/ef30195d8b2cd33a1d8233dfe6ea5881ca868b94b5dcf93be8cb78ba5151b8c7.jpg"
},
"roles": [
"common",
"super_admin",
"ACTUATOR"
],
"permissions": [
"",
"infra:config:create",
"bpm:task-assign-rule:create",
"system:user:query",
"system:error-code:query",

Done

That covers the basics; from here it is just a matter of writing a request for each API you need to test. The nice part is staying in IDEA the whole time, without opening Postman.

I used to manage Python versions with pyenv; recently I saw others using anaconda, so I looked into it and gave it a try.

Install anaconda

Download the GUI build https://repo.anaconda.com/archive/Anaconda3-2022.10-MacOSX-x86_64.pkg from the anaconda site https://www.anaconda.com/products/distribution#macos and install it. Afterwards the system gains an anaconda-navigator, installed under ~/opt/anaconda3. Under Environments you can create new environments, but you cannot pick an arbitrary Python version there; since I have always used python 3.7.10 for paddle, I still had to create the environment by hand.

 
conda create -n paddle232-py3710 python=3.7.10

#
# To activate this environment, use
#
# $ conda activate paddle232-py3710
#
# To deactivate an active environment, use
#
# $ conda deactivate

Once created, it also shows up in the GUI.

Then activate it on the command line:

(base) wanghongxing:~ whx$ python -V
Python 3.9.13
(base) wanghongxing:~ whx$ conda activate paddle232-py3710

(paddle232-py3710) wanghongxing:~ whx$ python -V
Python 3.7.10

As you can see, Python starts out as 3.9.13; after conda activate paddle232-py3710 it becomes 3.7.10.

Install paddle 2.3.2

(paddle232-py3710) wanghongxing:~ python -m pip install paddlepaddle==2.3.2 -i https://pypi.tuna.tsinghua.edu.cn/simple

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting paddlepaddle==2.3.2
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/ba/bf/bc6a1dd9a1126d8cd1467917d34bf12c9282152f99afc5b8cad29118eda4/paddlepaddle-2.3.2-cp37-cp37m-macosx_10_6_intel.whl (93.0 MB)
Collecting numpy>=1.13
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/32/dd/43d8b2b2ebf424f6555271a4c9f5b50dc3cc0aafa66c72b4d36863f71358/numpy-1.21.6-cp37-cp37m-macosx_10_9_x86_64.whl (16.9 MB)
Collecting decorator
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d5/50/83c593b07763e1161326b3b8c6686f0f4b0f24d5526546bee538c89837d6/decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting paddle-bfloat==0.1.7
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/c6/40/65edb5bf459317bdc808f884805784470c9b5ab38b81df23fac02e02f5b8/paddle_bfloat-0.1.7-cp37-cp37m-macosx_10_9_x86_64.whl (44 kB)
Collecting six
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d9/5a/e7c31adbe875f2abbb91bd84cf2dc52d792b5a01506781dbcf25c91daf11/six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting protobuf<=3.20.0,>=3.1.0
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/be/f0/2633123b475c9ae6e9be25351c7ba6ca3adc223d73789ca2f6f5e4686723/protobuf-3.20.0-cp37-cp37m-macosx_10_9_x86_64.whl (961 kB)
Collecting astor
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl (27 kB)
Collecting Pillow
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/91/1d/57a09a69508a27c1c6caa4197ce7fac5be5b7d736889ba1a20931ff4efca/Pillow-9.4.0-1-cp37-cp37m-macosx_10_10_x86_64.whl (3.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 14.4 MB/s eta 0:00:00
Collecting opt-einsum==3.3.0
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/bc/19/404708a7e54ad2798907210462fd950c3442ea51acc8790f3da48d2bee8b/opt_einsum-3.3.0-py3-none-any.whl (65 kB)
Collecting requests>=2.20.0
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/ca/91/6d9b8ccacd0412c08820f72cebaa4f0c0441b5cda699c90f618b6f8a1b42/requests-2.28.1-py3-none-any.whl (62 kB)
Collecting idna<4,>=2.5
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/fc/34/3030de6f1370931b9dbb4dad48f6ab1015ab1d32447850b9fc94e60097be/idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/65/0c/cc6644eaa594585e5875f46f3c83ee8762b647b51fc5b0fb253a242df2dc/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Collecting charset-normalizer<3,>=2
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/db/51/a507c856293ab05cdc1db77ff4bc1268ddd39f29e7dc4919aa497f0adbec/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Requirement already satisfied: certifi>=2017.4.17 in ./opt/anaconda3/envs/paddle232-py3710/lib/python3.7/site-packages (from requests>=2.20.0->paddlepaddle==2.3.2) (2022.12.7)
Installing collected packages: paddle-bfloat, urllib3, six, protobuf, Pillow, numpy, idna, decorator, charset-normalizer, astor, requests, opt-einsum, paddlepaddle
Successfully installed Pillow-9.4.0 astor-0.8.1 charset-normalizer-2.1.1 decorator-5.1.1 idna-3.4 numpy-1.21.6 opt-einsum-3.3.0 paddle-bfloat-0.1.7 paddlepaddle-2.3.2 protobuf-3.20.0 requests-2.28.1 six-1.16.0 urllib3-1.26.13

(paddle232-py3710) wanghongxing:~ whx$ which paddle
/Users/whx/opt/anaconda3/envs/paddle232-py3710/bin/paddle
(paddle232-py3710) wanghongxing:~ whx$ paddle version
<stdin>:3: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
PaddlePaddle 2.3.2, compiled with
with_avx: ON
with_gpu: OFF
with_mkl: OFF
with_mkldnn: OFF
with_python: ON


Install paddle


(paddle232-py3710) wanghongxing:guyuai-emotion-train whx$ pip3 install paddle
Collecting paddle
Using cached paddle-1.0.2.tar.gz (579 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py egg_info did not run successfully.
exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 36, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/6d/lxb9s6557cx2wq_2fs6s2_bm0000gr/T/pip-install-h1m6_55m/paddle_d36d8dad2117474f8fc2aaa140c6ac30/setup.py", line 3, in <module>
import paddle
File "/private/var/folders/6d/lxb9s6557cx2wq_2fs6s2_bm0000gr/T/pip-install-h1m6_55m/paddle_d36d8dad2117474f8fc2aaa140c6ac30/paddle/__init__.py", line 5, in <module>
import common, dual, tight, data, prox
ModuleNotFoundError: No module named 'common'
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Then install the missing dependency packages by hand:

pip install common
pip install dual
pip install tight
pip install data
pip install prox

(paddle232-py3710) wanghongxing:guyuai-emotion-train whx$ pip3 install paddle==1.0.2
Collecting paddle==1.0.2
Using cached paddle-1.0.2.tar.gz (579 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: paddle
Building wheel for paddle (setup.py) ... done
Created wheel for paddle: filename=paddle-1.0.2-py3-none-any.whl size=33366 sha256=63916ce0eea092ef9e690f3e992b51f32f955eabc66070be6e9feba60d672973
Stored in directory: /Users/whx/Library/Caches/pip/wheels/e2/38/0e/382f68d54c6949b370f1438aa96172ff44a8ed367134cce32e
Successfully built paddle
Installing collected packages: paddle
Successfully installed paddle-1.0.2


Even after that it still failed at runtime, so I gave up on it: rm -rf ~/opt/anaconda3, then downloaded the command-line installer instead from https://repo.anaconda.com/archive/Anaconda3-2022.10-MacOSX-x86_64.sh

bash Anaconda3-2022.10-MacOSX-x86_64.sh
conda create -n paddle232-py3710 python=3.7.10
conda activate paddle232-py3710
pip install paddle==1.0.2
python -m pip install paddlepaddle==2.3.2 -i https://pypi.tuna.tsinghua.edu.cn/simple

Now the paddle 2.3.2 environment works correctly.
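
A quick way to confirm the install is functional is paddle's built-in self-check; a minimal sketch inside the activated environment (run_check is part of paddlepaddle 2.x):

python -c 'import paddle; paddle.utils.run_check()'   # should end with "PaddlePaddle is installed successfully!"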

arch -x86_64 bash

Setting up an IPFS private network, run as a service

Master node: whx: 192.168.33.10

Worker: 192.168.33.21

Download:

wget https://github.com/ipfs/go-ipfs/releases/download/v0.7.0/go-ipfs_v0.7.0_linux-amd64.tar.gz

Unpack it: tar xzf go-ipfs_v0.7.0_linux-amd64.tar.gz

Switch to root; run the following on both nodes.

Initialize:

cd go-ipfs
sudo bash install.sh
cp /usr/local/bin/ipfs /usr/bin/ipfs
ipfs --version
/usr/local/bin/ipfs init

On the master node:

Install golang:

yum install epel-release
yum install go

Create the shared swarm key:

# go get github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
# go/bin/ipfs-swarm-key-gen > /root/.ipfs/swarm.key
# ipfs bootstrap rm all
# ipfs id
{
"ID": "12D3KooWNvmkBW5noeQLzEpSpktYSZjq69727Z9e9F1sEJtuwJEb",
"PublicKey": "CAESIMLMEqwfQn4BGZEcmH9ch+Oz93YYWqg//+i5/2dG26nI",
"Addresses": null,
"AgentVersion": "go-ipfs/0.7.0/",
"ProtocolVersion": "ipfs/0.1.0",
"Protocols": null
}

Copy swarm.key to the worker node, then:

ipfs bootstrap rm all
ipfs bootstrap add /ip4/192.168.33.10/tcp/4001/ipfs/12D3KooWNvmkBW5noeQLzEpSpktYSZjq69727Z9e9F1sEJtuwJEb
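
To confirm the two nodes actually see each other once the daemons are running, the swarm can be inspected from either side; a small check (standard go-ipfs commands, nothing extra assumed):

ipfs swarm peers       # the other node's /ip4/.../p2p/... address should be listed
ipfs bootstrap list    # shows only the peers added above, since the defaults were removed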

Set up ipfs as a service on both servers:

cd /lib/systemd/system
vi ipfs.service
[Unit]
Description=IPFS
[Service]
ExecStart=/usr/local/bin/ipfs daemon
Restart=always
User=root
Group=root
[Install]
WantedBy=multi-user.target

Enable the service:

systemctl enable ipfs.service

systemctl start ipfs.service

systemctl status ipfs.service

Test

echo "from whx "> whx.txt

# ipfs add whx.txt
added QmYkeyhAYTaWdizfFEno2EtBTRdjazcSwYnhTWt4q7L5zU whx.txt
# ipfs cat /ipfs/QmYkeyhAYTaWdizfFEno2EtBTRdjazcSwYnhTWt4q7L5zU
from whx

# echo "from test3">test.txt
# ipfs add test.txt
added QmXazQTUuAoiXEDqoCd4eF9g5okoMHWE2KMtAviS3dQ7h2 test.txt

Add a directory:

# tree test
test
├── index.html
├── sub
│   └── sub.html
└── test.html
# ipfs add -r test
added QmQV4SEvUf8UhLmf7bsjx97jtY4Pw1XBk2be2GUdyxRTMx test/index.html
added QmRxqKo5fUnpNPzvWcPnBZfkV9533bqZLBzRWFeWtkjbME test/sub/sub.html
added QmbNyHGiz83bNbUnyNjXTjr1pm8AhD5XHyhQfS6iLiwBT1 test/test.html
added QmZicCMMw7xfcbMHbZ1hNJ1n9om1jDFywdjQFXEkGwGQMW test/sub
added QmWBEkwUxHL81GScHYcKdMio7PBzJDNRHZxUpc8uRFdyQT test

# ipfs ls /ipfs/QmWBEkwUxHL81GScHYcKdMio7PBzJDNRHZxUpc8uRFdyQT/
QmQV4SEvUf8UhLmf7bsjx97jtY4Pw1XBk2be2GUdyxRTMx 20 index.html
QmZicCMMw7xfcbMHbZ1hNJ1n9om1jDFywdjQFXEkGwGQMW - sub/
QmbNyHGiz83bNbUnyNjXTjr1pm8AhD5XHyhQfS6iLiwBT1 20 test.html


# echo "aaaa" >test\sub\test.txt
# ipfs add -r test
added QmQV4SEvUf8UhLmf7bsjx97jtY4Pw1XBk2be2GUdyxRTMx test/index.html
added QmRxqKo5fUnpNPzvWcPnBZfkV9533bqZLBzRWFeWtkjbME test/sub/sub.html
added QmbxCEbzgQZmy38pew5Wy6cbfiefW8z3vGbepZuZzcgchP test/sub/test1.txt
added QmbNyHGiz83bNbUnyNjXTjr1pm8AhD5XHyhQfS6iLiwBT1 test/test.html
added QmWR5PNWprgGfnYS9t3MUZGV1YtYifWkebgrh7UB6Zi3xD test/sub
added QmeoLK9ZsQK9zroLjb7HNxhyqE37ieL66uFnLRJoWD4bxL test

# ipfs ls /ipfs/QmeoLK9ZsQK9zroLjb7HNxhyqE37ieL66uFnLRJoWD4bxL
QmQV4SEvUf8UhLmf7bsjx97jtY4Pw1XBk2be2GUdyxRTMx 20 index.html
QmWR5PNWprgGfnYS9t3MUZGV1YtYifWkebgrh7UB6Zi3xD - sub/
QmbNyHGiz83bNbUnyNjXTjr1pm8AhD5XHyhQfS6iLiwBT1 20 test.html

Note the difference between the two runs:

QmZicCMMw7xfcbMHbZ1hNJ1n9om1jDFywdjQFXEkGwGQMW -  sub/

QmWR5PNWprgGfnYS9t3MUZGV1YtYifWkebgrh7UB6Zi3xD - sub/

# ipfs ls QmZicCMMw7xfcbMHbZ1hNJ1n9om1jDFywdjQFXEkGwGQMW
QmRxqKo5fUnpNPzvWcPnBZfkV9533bqZLBzRWFeWtkjbME 21 sub.html

# ipfs ls QmWR5PNWprgGfnYS9t3MUZGV1YtYifWkebgrh7UB6Zi3xD
QmRxqKo5fUnpNPzvWcPnBZfkV9533bqZLBzRWFeWtkjbME 21 sub.html
QmbxCEbzgQZmy38pew5Wy6cbfiefW8z3vGbepZuZzcgchP 10 test1.txt

Publishing under the node name (IPNS)

# ipfs name publish QmeoLK9ZsQK9zroLjb7HNxhyqE37ieL66uFnLRJoWD4bxL
Published to k51qzi5uqu5dk53t2hi2f8dhupwl2kkxypggp4dd7svxkxirwg411zfthqn08e: /ipfs/QmeoLK9ZsQK9zroLjb7HNxhyqE37ieL66uFnLRJoWD4bxL
# ipfs cat /ipns/k51qzi5uqu5dk53t2hi2f8dhupwl2kkxypggp4dd7svxkxirwg411zfthqn08e/index.html
index.html from test

# echo "index.html version new" >test/index.html
# ipfs add -r test
added QmNZt9aFJHtggpXupw2VjsvZ1RksV7AfjhLgBSSN5q9a51 test/index.html
added QmRxqKo5fUnpNPzvWcPnBZfkV9533bqZLBzRWFeWtkjbME test/sub/sub.html
added QmbxCEbzgQZmy38pew5Wy6cbfiefW8z3vGbepZuZzcgchP test/sub/test1.txt
added QmbNyHGiz83bNbUnyNjXTjr1pm8AhD5XHyhQfS6iLiwBT1 test/test.html
added QmWR5PNWprgGfnYS9t3MUZGV1YtYifWkebgrh7UB6Zi3xD test/sub
added QmbwhfHfarbU3X9wa8ovtK7Q4fGkd8R3B8biH4kUD7CmuU test
# ipfs name publish QmbwhfHfarbU3X9wa8ovtK7Q4fGkd8R3B8biH4kUD7CmuU
Published to k51qzi5uqu5dk53t2hi2f8dhupwl2kkxypggp4dd7svxkxirwg411zfthqn08e: /ipfs/QmbwhfHfarbU3X9wa8ovtK7Q4fGkd8R3B8biH4kUD7CmuU
# ipfs cat /ipns/k51qzi5uqu5dk53t2hi2f8dhupwl2kkxypggp4dd7svxkxirwg411zfthqn08e/index.html
index.html version new

At one point the nodes could not discover each other. The reason is that the IP announced to the swarm is not fixed; with multiple NICs in a VM it ends up bound to the docker interface. The fix is to set it explicitly in .ipfs/config:

{
  "Addresses": {
    "Announce": [
      "/ip4/1.2.3.4/tcp/4001"
    ]
  }
}
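
The same change can be made without editing the file by hand, via ipfs config; a sketch (replace 1.2.3.4 with the node's real LAN address, then restart the daemon):

ipfs config --json Addresses.Announce '["/ip4/1.2.3.4/tcp/4001"]'
systemctl restart ipfs.service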

Download the webui

https://github.com/ipfs-shipyard/ipfs-webui/releases/download/v2.11.1/ipfs-webui.tar.gz

# tar zxf ipfs-webui.tar.gz
# ipfs add -r build
added QmaFK9e6DjMuqnsi9hDJoJb6E1iFxJaHHWiKUoUpVjeg7h build/static
added QmZtzPm6EgQToncp6RuHdaTtyPpWQy2gvTrXuMsYQxHV5k build
# ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://bafybeiflxftai7lnntipulvvmn3vfcs3ktig4kgzxollh276zqmchxz6cm.ipfs.localhost:8080", "http://localhost:3000", "http://127.0.0.1:5001", "https://webui.ipfs.io"]'
# ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST"]'

# curl -v http://localhost:8080/ipfs/QmZtzPm6EgQToncp6RuHdaTtyPpWQy2gvTrXuMsYQxHV5k
< Location: http://bafybeiflxftai7lnntipulvvmn3vfcs3ktig4kgzxollh276zqmchxz6cm.ipfs.localhost:8080/
echo "127.0.0.1 bafybeiflxftai7lnntipulvvmn3vfcs3ktig4kgzxollh276zqmchxz6cm.ipfs.localhost">>/etc/hosts

Then open this address:

http://bafybeiflxftai7lnntipulvvmn3vfcs3ktig4kgzxollh276zqmchxz6cm.ipfs.localhost:8080/#/welcome


ipfs key

# ipfs key list -l
k51qzi5uqu5dk53t2hi2f8dhupwl2kkxypggp4dd7svxkxirwg411zfthqn08e self

Cloud servers

# ipfs id
{
"ID": "12D3KooWDiAhxybZwdsdHrnjogpbdnKuBPDSFn883cwyDpJb5hiZ",
"PublicKey": "CAESIDnYTZXIdT2889xegXXVgnDQwTx9GuoY/FKXqOYBGU8q",
"Addresses": [
"/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWDiAhxybZwdsdHrnjogpbdnKuBPDSFn883cwyDpJb5hiZ",
"/ip4/192.168.1.16/tcp/4001/p2p/12D3KooWDiAhxybZwdsdHrnjogpbdnKuBPDSFn883cwyDpJb5hiZ",
"/ip6/::1/tcp/4001/p2p/12D3KooWDiAhxybZwdsdHrnjogpbdnKuBPDSFn883cwyDpJb5hiZ"
],
"AgentVersion": "go-ipfs/0.7.0/",
"ProtocolVersion": "ipfs/0.1.0",
"Protocols": [
"/ipfs/bitswap",
"/ipfs/bitswap/1.0.0",
"/ipfs/bitswap/1.1.0",
"/ipfs/bitswap/1.2.0",
"/ipfs/id/1.0.0",
"/ipfs/id/push/1.0.0",
"/ipfs/lan/kad/1.0.0",
"/ipfs/ping/1.0.0",
"/libp2p/autonat/1.0.0",
"/libp2p/circuit/relay/0.1.0",
"/p2p/id/delta/1.0.0",
"/x/"
]
}

Install nodes on the cloud servers:

114.115.210.207
114.115.212.160
119.3.165.66

Then, on every intranet node, add the first two servers as bootstrap peers, but not the third:

ipfs bootstrap add /ip4/114.115.210.207/tcp/4001/ipfs/12D3KooWDiAhxybZwdsdHrnjogpbdnKuBPDSFn883cwyDpJb5hiZ
ipfs bootstrap add /ip4/114.115.212.160/tcp/4001/ipfs/12D3KooWART3DbX2qT93YjPzBxyEVLqaFSoH9rm6Wt9ZZE2VwErh


After adding a file on an intranet node, without viewing it on either of the two bootstrap servers, the third server cannot see the file.

Once the file has been viewed on either of the bootstrap servers, the third one can see it.

Conclusion: files from the intranet need to be pinned on an external server before other nodes can fetch them.

TODO: add a file on a server behind NAT and study what the other nodes see.


Looking at the nodejs examples

browser-http-client-upload-file:

After starting it, clicking “view” would redirect http://localhost:8080/ipfs/Qmb3b88paN4AocnjyTNCpRqh2CsbskhvoLbE7o6BhHmD85

to

http://Qmb3b88paN4AocnjyTNCpRqh2CsbskhvoLbE7o6BhHmD85.ipfs.localhost:8080/ipfs/

After changing the config it no longer redirects:

ipfs config --json Gateway.PublicGateways '{
  "localhost:8080": {
    "UseSubdomains": false,
    "Paths": ["/ipfs", "/ipns", "/api"]
  }
}'

When the browser calls the IPFS API directly it throws an access-control-allow (CORS) error; add this config and restart ipfs:

ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "GET", "POST"]'
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Credentials '["true"]'

echo "a">test.txt
echo "b">test2.txt
echo "c">test3.txt

master:

# ipfs --api /ip4/127.0.0.1/tcp/9095 add test3.txt
added QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 test3.txt

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

# ipfs-cluster-ctl pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 | | PIN | Repl. Factor: -1 | Allocations: [everywhere] | Recursive | Metadata: no | Exp: ∞

#ipfs pin ls
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn recursive
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

Note: QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn is an empty directory object; every freshly initialized node has it.

Run pin ls on the other two servers as well:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
# ipfs pin ls
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn recursive
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

So when a file is added through the cluster, it is stored on all nodes right away.

whx:

# ipfs add test.txt
added Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 test.txt

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

# ipfs pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn recursive
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive

So a plain ipfs add only pins the file on the local node.

whx:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin add Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9
pinned Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursively
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive

Run pin ls on the other two servers:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

So after a plain ipfs add, a cluster pin add is needed to make the cluster store it.

whx:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin rm  Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9
unpinned Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

After the removal, pin ls no longer lists the file.

master:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin add Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9
pinned Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursively

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

After pin add on master, the file shows up again. (So presumably it is still in the local store, and a new pin add brings it back.)

worker:

# ipfs add test2.txt
added QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM test2.txt
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive
# ipfs pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM recursive
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn recursive
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive

whx:

# ipfs cat QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
b
# ipfs pin ls
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn recursive
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive

So a node can read files added on other nodes, but it has not pinned them.

whx:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin add QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
pinned QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM recursively
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM recursive
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive

On any one of the servers:


# ipfs --api /ip4/127.0.0.1/tcp/9095 cat QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
b
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin rm QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
unpinned QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
# ipfs --api /ip4/127.0.0.1/tcp/9095 cat QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
b
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin ls
Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9 recursive
QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7 recursive
# ipfs cat QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
b

So even after the pin is removed, the content is still readable everywhere. Now run GC on one node:

# ipfs repo gc
removed QmWLX2v5rnFeAbFyP4VPkDKLVVk91LncD6S648Ce4GgGgz
removed QmevuAKVMG3Pr6nTp2oy9MzuPhZuZMxRfFApbfzk5zafr6
removed QmV7XXZAUrBaE76Susmd4gRLzziK5QH1EigHqgma8adKf9
removed QmWgtt1onrFVgqwAuYBWQ2HNkxJtBgnDzqWRVZQzpWeDUj
removed QmWNGqp2zsw6nLr6aySK3UAQkTkoYozMNszwcz1bWmuckX
removed QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
removed QmT9PmBnECcLRCu4hYmbijCca6k7ocdagGqUFpH9gWf5Cn
removed QmaoaDnyHfZYEXjCuX59bgzA2YbgXMUKuSARdS3GHgbxoN
# ipfs cat QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
b

After GC on one node the file can still be read; now run GC on all the servers:

# ipfs repo gc
removed QmWLX2v5rnFeAbFyP4VPkDKLVVk91LncD6S648Ce4GgGgz
removed QmevuAKVMG3Pr6nTp2oy9MzuPhZuZMxRfFApbfzk5zafr6
removed QmV7XXZAUrBaE76Susmd4gRLzziK5QH1EigHqgma8adKf9
removed QmWgtt1onrFVgqwAuYBWQ2HNkxJtBgnDzqWRVZQzpWeDUj
removed QmWNGqp2zsw6nLr6aySK3UAQkTkoYozMNszwcz1bWmuckX
removed QmPphhXAJkNKktecNqsbRf3zXXdsSA4fM9DS4LMaWFmbti
removed QmR9pC5uCF3UExca8RSrCVL8eKv7nHMpATzbEQkAHpXmVM
removed QmT9PmBnECcLRCu4hYmbijCca6k7ocdagGqUFpH9gWf5Cn
removed QmaoaDnyHfZYEXjCuX59bgzA2YbgXMUKuSARdS3GHgbxoN

Once GC has run everywhere the file can no longer be read. In other words, an unpinned file stays readable until GC actually removes its blocks from every node.
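As a convenience, that last step can be scripted; this sketch assumes the three hosts used in this post are reachable over SSH under these names:

for h in whx master worker; do
  echo "== $h =="
  ssh "$h" 'ipfs repo gc'    # unpinned blocks are only truly gone once every node has run GC
done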


Testing replication_factor_min and replication_factor_max

With the defaults in ~/.ipfs-cluster/service.json, replication_factor_min and replication_factor_max are both -1,

so every node in the cluster stores a copy of each file. That is not what we want, so the next step is to experiment with controlling the replica count.

Change two settings in ~/.ipfs-cluster/service.json: replication_factor_min and replication_factor_max.

  • replication_factor_min is the minimum number of cluster nodes that must hold the file, i.e. the lower bound on the replica count; -1 means all nodes.
  • replication_factor_max is the maximum number of cluster nodes that hold the file, i.e. the upper bound on the replica count; -1 means all nodes.

For the first round they are set to 2 and 3 respectively:
jq ".cluster.replication_factor_min = 2" ${IPFS_CLUSTER_PATH}/service.json |jq ".cluster.replication_factor_max = 3" > ${IPFS_CLUSTER_PATH}/tmp.service.json
mv -f ${IPFS_CLUSTER_PATH}/tmp.service.json ${IPFS_CLUSTER_PATH}/service.json
grep replication_factor ${IPFS_CLUSTER_PATH}/service.json
systemctl restart ipfs-cluster
systemctl status ipfs-cluster


Generate three random 100 MB test files:

dd if=/dev/urandom of=test1.file count=100000 bs=1024
dd if=/dev/urandom of=test2.file count=100000 bs=1024
dd if=/dev/urandom of=test3.file count=100000 bs=1024

Add a file and check IPFS storage usage:

# ipfs --api /ip4/127.0.0.1/tcp/9095 add test1.file
added QmaVdYFJP88SzYtDayfiU4xbzCTmStmB6Ug4ihtJV7Nzgg test1.file
# du -sh /data/*
101M /data/ipfs
44K /data/ipfs-cluster

Storage usage is identical on all three servers: with replication_factor_max = 3 in a three-node cluster, every node still gets a copy.

Clean up all the stored content:

# ipfs --api /ip4/127.0.0.1/tcp/9095 pin rm QmVFxmvDaBBGBFUMPLSRL9mb4N6Xm8xEYnCuy5m85gPAaP
unpinned QmVFxmvDaBBGBFUMPLSRL9mb4N6Xm8xEYnCuy5m85gPAaP
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin rm QmavpLffUr1iY2QfhsbMoHrm9tb8rrjoPGP8erwFW2M3tX
unpinned QmavpLffUr1iY2QfhsbMoHrm9tb8rrjoPGP8erwFW2M3tX
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin rm QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7
unpinned QmetGxZTgo8tYAKQH1KLsY13MxqeVHbxYVmvzBzJAKU6Z7
# ipfs --api /ip4/127.0.0.1/tcp/9095 pin rm Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9
unpinned Qmbvkmk9LFsGneteXk3G7YLqtLVME566ho6ibaQZZVHaC9
# ipfs repo gc
# du -sh /data/*
352K /data/ipfs
48K /data/ipfs-cluster

Reset the replication factors to min = 1, max = 2:

jq ".cluster.replication_factor_min = 1" ${IPFS_CLUSTER_PATH}/service.json |jq ".cluster.replication_factor_max = 2" > ${IPFS_CLUSTER_PATH}/tmp.service.json
mv -f ${IPFS_CLUSTER_PATH}/tmp.service.json ${IPFS_CLUSTER_PATH}/service.json
grep replication_factor ${IPFS_CLUSTER_PATH}/service.json
systemctl restart ipfs-cluster
systemctl status ipfs-cluster


Add a file and check IPFS storage again:


whx: # ipfs --api /ip4/127.0.0.1/tcp/9095 add test1.file
added QmaVdYFJP88SzYtDayfiU4xbzCTmStmB6Ug4ihtJV7Nzgg test1.file

whx: # du -sh /data/*
100M /data/ipfs
68K /data/ipfs-cluster

master: # du -hs /data/*
101M /data/ipfs
68K /data/ipfs-cluster

worker: # du -hs /data/*
972K /data/ipfs
68K /data/ipfs-cluster

Add a second file:

whx: # ipfs --api /ip4/127.0.0.1/tcp/9095 add test2.file
added QmNzEZywMY6M79wZ6TWuaUU5LFfGDgiqxbrF6mxMdJnjjY test2.file
whx: # du -hs /data/*
200M /data/ipfs
68K /data/ipfs-cluster

worker : # du -hs /data/*
101M /data/ipfs
68K /data/ipfs-cluster

master: # du -hs /data/*
101M /data/ipfs
68K /data/ipfs-cluster

Conclusion: each file is stored on two nodes, matching replication_factor_max = 2.
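To see exactly which two peers were allocated a given pin, the cluster status command (used again further down) can be queried with one of the CIDs printed above, for example:

ipfs-cluster-ctl status QmaVdYFJP88SzYtDayfiU4xbzCTmStmB6Ug4ihtJV7Nzgg    # with rmax=2, two peers should report PINNED and one REMOTE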

Add a new file on master:

master: # ipfs --api /ip4/127.0.0.1/tcp/9095 add test3.file

whx: # du -hs /data/*
200M /data/ipfs
68K /data/ipfs-cluster

worker: # du -hs /data/*
200M /data/ipfs
68K /data/ipfs-cluster

master: # du -hs /data/*
200M /data/ipfs
68K /data/ipfs-cluster

At this point the three servers are level, each holding about 200 MB.

Now on worker:

dd if=/dev/urandom of=test4.file count=100000 bs=1024
dd if=/dev/urandom of=test5.file count=100000 bs=1024
dd if=/dev/urandom of=test6.file count=100000 bs=1024
ipfs --api /ip4/127.0.0.1/tcp/9095 add test4.file
ipfs --api /ip4/127.0.0.1/tcp/9095 add test5.file
ipfs --api /ip4/127.0.0.1/tcp/9095 add test6.file


When that finishes, all servers show the same storage usage:

# du -hs /data/*
399M /data/ipfs
72K /data/ipfs-cluster

Keep adding files:

dd if=/dev/urandom of=test7.file count=100000 bs=1024
dd if=/dev/urandom of=test8.file count=100000 bs=1024
ipfs --api /ip4/127.0.0.1/tcp/9095 add test7.file
ipfs --api /ip4/127.0.0.1/tcp/9095 add test8.file

whx: # du -hs /data/*
598M /data/ipfs
76K /data/ipfs-cluster
worker: # du -hs /data/*
500M /data/ipfs
76K /data/ipfs-cluster
master: # du -hs /data/*
499M /data/ipfs
76K /data/ipfs-cluster

This shows the cluster tries to keep the allocations balanced across nodes.
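The allocations behind that balancing can be listed for the whole pinset (the same command is shown with a single CID later on):

ipfs-cluster-ctl pin ls    # one line per pin, including its Repl. Factor and Allocations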


Adding with ipfs-cluster-ctl

# dd if=/dev/urandom of=test9.file count=100000 bs=1024
# ipfs-cluster-ctl add --rmin 1 --rmax 1 test9.file
added QmdTZ9vhg9ocqgPE7frdYGAZHFkF8cYM51dLWzxx9y4nSW test9.file
whx: # du -hs /data/*
598M /data/ipfs
76K /data/ipfs-cluster
worker: # du -hs /data/*
500M /data/ipfs
76K /data/ipfs-cluster
master: # du -hs /data/*
599M /data/ipfs
76K /data/ipfs-cluster

After adding a single file with --rmin 1 --rmax 1, the one replica went to master.

Continue:

# dd if=/dev/urandom of=test10.file count=100000 bs=1024
# ipfs-cluster-ctl add --rmin 1 --rmax 1 test10.file
added QmeNha17eq2MfJugthsu9KWdcb18aGKjHbP6LV5rJzXjYo test10.file
whx: # du -hs /data/*
598M /data/ipfs
76K /data/ipfs-cluster
worker: # du -hs /data/*
600M /data/ipfs
76K /data/ipfs-cluster
master: # du -hs /data/*
599M /data/ipfs
76K /data/ipfs-cluster

Now storage usage is balanced across all servers again.
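That balancing is driven by the peers' informer metrics (free disk space by default). The exact CLI subcommand should be checked against your ipfs-cluster version; the sketch below is an assumption based on the /monitor/metrics endpoint listed further down:

ipfs-cluster-ctl health metrics freespace    # show the free-space metric each peer reports, which drives allocation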


Testing other commands:

# ipfs add usage-ipfs.txt
added Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx usage-ipfs.txt
# ipfs-cluster-ctl pin add --name whx --replication 1 Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx
Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx | whx:
> whx : PINNED | 2021-05-13T06:19:34.061276362Z
> 12D3KooWDWcqJD1y6JtSByN1N8zRphLkc7UbFvhd1BeKpG9MxdPe : REMOTE | 2021-05-13T06:19:34.065641318Z
> 12D3KooWNTfChtCFYH8kHkrMQttJcFTKRXrFbg6yMFN6uGt3fpWV : REMOTE | 2021-05-13T06:19:34.065641318Z

# ipfs-cluster-ctl pin ls Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx
Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx | whx | PIN | Repl. Factor: 1--1 | Allocations: [12D3KooWAdivuLNnoka4H3hNRCjnHxeBNVhUBwQ1YR1HBmUnjLTT] | Recursive | Metadata: no | Exp: ∞

# ipfs-cluster-ctl status Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx
Qmefd5f3y8z7SDyr9SrTeGZvWYueAJaijmmfCBYxFEdwhx | whx:
> whx : PINNED | 2021-05-13T06:27:04.864319281Z
> 12D3KooWDWcqJD1y6JtSByN1N8zRphLkc7UbFvhd1BeKpG9MxdPe : REMOTE | 2021-05-13T06:27:04.864091368Z
> 12D3KooWNTfChtCFYH8kHkrMQttJcFTKRXrFbg6yMFN6uGt3fpWV : REMOTE | 2021-05-13T06:27:04.864091368Z


Testing the HTTP APIs with curl

# curl -X POST "http://127.0.0.1:4001/api/v0/pin/ls" |jq
{
"Keys": {
"QmNzEZywMY6M79wZ6TWuaUU5LFfGDgiqxbrF6mxMdJnjjY": {
"Type": "recursive"
}
}

# curl -X POST "http://127.0.0.1:9095/api/v0/pin/ls" |jq
{
"Keys": {
"QmNzEZywMY6M79wZ6TWuaUU5LFfGDgiqxbrF6mxMdJnjjY": {
"Type": "recursive"
}
}

curl -X GET "http://127.0.0.1:9094/id"
curl -X GET "http://127.0.0.1:9094/version"
curl -X GET "http://127.0.0.1:9094/peers"
curl -X GET "http://127.0.0.1:9094/pins"

curl -X POST "http://127.0.0.1:9094/add"

curl -X GET "http://127.0.0.1:9094/pins/QmQcCo8MqRjAwUMeZCcALb91aZc9NFCswPwakQbgQb5oH9"

Parameters accepted by the add method include:

replication
name
mode

Content added this way can be read back through the cluster's IPFS proxy, i.e. ipfs --api /ip4/127.0.0.1/tcp/9095 cat followed by the CID.
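A hedged sketch of calling /add with those parameters over HTTP (multipart upload; the parameter names follow the list above and should be verified against your cluster version's REST API documentation):

curl -X POST -F "file=@test1.file" "http://127.0.0.1:9094/add?name=test1&replication=1"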




As a final tip, this table provides a quick summary of methods available.

METHOD ENDPOINT COMMENT
GET /id Cluster peer information
GET /version Cluster version
GET /peers Cluster peers
DELETE /peers/{peerID} Remove a peer
POST /add Add content to the cluster
GET /allocations List of pins and their allocations (pinset)
GET /allocations/{cid} Show a single pin and its allocations (from the pinset)
GET /pins Local status of all tracked CIDs
POST /pins/sync Sync local status from IPFS
GET /pins/{cid} Local status of single CID
POST /pins/{cid} Pin a CID
POST /pins/{ipfs|ipns|ipld}/<path> Pin using an IPFS path
DELETE /pins/{cid} Unpin a CID
DELETE /pins/{ipfs|ipns|ipld}/<path> Unpin using an IPFS path
POST /pins/{cid}/sync Sync a CID
POST /pins/{cid}/recover Recover a CID
POST /pins/recover Recover all pins in the receiving Cluster peer
GET /monitor/metrics Get a list of metric types known to the peer
GET /monitor/metrics/{metric} Get a list of current metrics seen by this peer
GET /health/alerts Display a list of alerts (metric expiration events)
GET /health/graph Get connection graph
POST /ipfs/gc Perform GC in the IPFS nodes
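For the write-style endpoints in the table, the calls look like this (reusing the CID from the example above):

curl -X POST "http://127.0.0.1:9094/pins/QmQcCo8MqRjAwUMeZCcALb91aZc9NFCswPwakQbgQb5oH9"      # pin a CID
curl -X DELETE "http://127.0.0.1:9094/pins/QmQcCo8MqRjAwUMeZCcALb91aZc9NFCswPwakQbgQb5oH9"    # unpin it again
curl -X POST "http://127.0.0.1:9094/ipfs/gc"                                                  # run GC on the cluster's IPFS daemons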

openssl upgrade

Compile and install from source:

./config --prefix=/usr/local/openssl    # set the install prefix
make && make install

Then replace the system's old openssl (backing up the originals first):

mv /usr/bin/openssl /usr/bin/openssl.old
mv /usr/lib64/openssl /usr/lib64/openssl.old
mv /usr/lib64/libssl.so /usr/lib64/libssl.so.old
ln -s /usr/local/openssl/bin/openssl /usr/bin/openssl
ln -s /usr/local/openssl/include/openssl /usr/include/openssl
ln -s /usr/local/openssl/lib/libssl.so /usr/lib64/libssl.so
echo "/usr/local/openssl/lib" >> /etc/ld.so.conf
ldconfig -v

Finally, check the system's openssl version:

openssl version
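If the reported version is still the old one, the shell may be caching the old binary path or the linker may still be resolving the old libraries; a quick check (paths assume the prefix used above):

hash -r && which openssl
ldd /usr/bin/openssl | grep -Ei 'ssl|crypto'    # confirm it links against the libraries under /usr/local/openssl/lib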