
Version 3.x

yum install BackupPC

cd /usr/share/BackupPC/
chown backuppc:apache sbin/*
cd sbin

chmod u+s BackupPC_Admin

usermod -s /bin/bash backuppc

firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload

# /etc/BackupPC/config.pl
$Conf{CgiAdminUsers} = 'backuppc';
$Conf{PingPath} = '/bin/ping';


htpasswd -c /etc/BackupPC/apache.users backuppc

(password entered: zaq1XSW2)

systemctl restart backuppc
systemctl restart httpd

Version 4.x

yum --enablerepo=extras -y install epel-release
yum --enablerepo=epel-testing -y install BackupPC-XS rsync-bpc

yum groupinstall "Development Tools"


Install BackupPC-XS

# BackupPC-XS-0.62.tar.gz downloaded from https://objects.githubusercontent.com/github-production-release-asset-2e65be/
tar zxf BackupPC-XS-0.62.tar.gz
cd BackupPC-XS-0.62

perl Makefile.PL
make
make test
make install

Install the following prerequisites for BackupPC version 4.3.1:


yum -y install bzip2 httpd mod_perl par2cmdline perl-Archive-Zip perl-CGI \
perl-Compress-Raw-Zlib perl-Data-Dumper perl-Digest-MD5 perl-File-Listing \
perl-File-RsyncP perl-Net-FTP-AutoReconnect perl-Net-FTP-RetrHandle \
perl-Time-ParseDate perl-XML-RSS perl-version rrdtool samba-client

Let's set up some directories that we are going to use later.

My BackupPC server has a RAID 10 array mounted at /data/. I am going to use it to store the BackupPC data. Let's create the backuppc folder on the RAID. Note: this folder can grow to be extremely large.

mkdir -p /data/backuppc

Now let's create one more folder for BackupPC to store some web interface files.

mkdir -p /var/www/html/backuppc

And create the cgi-bin directory for BackupPC:

mkdir -p /var/www/cgi-bin/backuppc

Make note of those 2 directories. The installer will ask for them if we don’t explicitly set them below.

Create and setup the backuppc user.

Create group backuppc.

groupadd backuppc

Create backuppc user, and tell it where its home folder is. My backuppc user’s home folder is /opt/backuppc. Change this to whatever you’d like.

useradd --home-dir /opt/backuppc --create-home --shell /bin/bash --base-dir /opt/backuppc --gid backuppc backuppc


Set Permissions on the directories we created:

chown -R backuppc:backuppc /data/backuppc
chown -R backuppc:backuppc /opt/backuppc

Download the BackupPC tar.gz archive and extract it.

cd ~
wget https://github.com/backuppc/backuppc/releases/download/4.4.0/BackupPC-4.4.0.tar.gz
tar zxf BackupPC-4.4.0.tar.gz
cd BackupPC-4.4.0

Run the configure.pl script

Pay close attention to the paths in this command.

perl configure.pl --batch --cgi-dir /var/www/cgi-bin/backuppc \
--data-dir /data/backuppc --hostname backuppc --html-dir /var/www/html/backuppc \
--html-dir-url /backuppc --install-dir /opt/backuppc

Alternatively, you can execute the perl script with no arguments and it will walk you through the install step by step. This is the preferred method if you are upgrading a version 3.X install. (Not supported by this guide)

perl configure.pl
Ok, we're about to:

- install the binaries, lib and docs in /usr/local/BackupPC
- create the data directory /data/BackupPC
- optionally install the cgi-bin interface
- create/update the config.pl file /etc/BackupPC/config.pl
--> Do you want to continue? [y]?
Created /usr/local/BackupPC/bin
Created /usr/local/BackupPC/share
Created /usr/local/BackupPC/share/doc
Created /usr/local/BackupPC/share/doc/BackupPC
Created /usr/local/BackupPC/lib
Created /usr/local/BackupPC/lib/BackupPC
Created /usr/local/BackupPC/lib/BackupPC/CGI
Created /usr/local/BackupPC/lib/BackupPC/Config
Created /usr/local/BackupPC/lib/BackupPC/Lang
Created /usr/local/BackupPC/lib/BackupPC/Storage
Created /usr/local/BackupPC/lib/BackupPC/Xfer
Created /usr/local/BackupPC/lib/BackupPC/Zip
Created /usr/local/BackupPC/lib/Net
Created /usr/local/BackupPC/lib/Net/FTP
Created /data/BackupPC
Created /data/BackupPC/pool
Created /data/BackupPC/cpool
Created /data/BackupPC/pc
Created /etc/BackupPC
Created /var/log/BackupPC
Created /var/run/BackupPC
Installing binaries in /usr/local/BackupPC/bin
Installing library in /usr/local/BackupPC/lib
Making systemd and init.d scripts
Making Apache configuration file for suid-perl
Installing docs in /usr/local/BackupPC/share/doc/BackupPC
Installing cgi script BackupPC_Admin in /var/www/cgi-bin/backuppc
Installing config.pl and hosts in /etc/BackupPC



Enjoy!

Apache config

I chose the CGI route (not SCGI) and chose the directory /var/www/cgi-bin/backuppc as my CGI directory. I chose /var/www/html/backuppc as my images directory. You may need to set this path in the config if you ran the configure.pl script with no arguments.

Now we need to copy the Apache config into the apache conf.d directory.

cp httpd/BackupPC.conf /etc/httpd/conf.d/

Install the included systemd script:

cp systemd/backuppc.service /etc/systemd/system/
systemctl daemon-reload
systemctl start backuppc
systemctl enable backuppc

Create the backuppc authentication file.

htpasswd -c /etc/BackupPC/BackupPC.users backuppc


Note: this sets the username to backuppc (modify it to your liking)

Set permissions on this file.

chown backuppc:backuppc /etc/BackupPC/BackupPC.users

Edit the config.pl file in /etc/BackupPC/

vim /etc/BackupPC/config.pl

Check the image directory and image url variables.

$Conf{CgiImageDir} = '/var/www/html/backuppc';
$Conf{CgiImageDirURL} = '/backuppc';

Also add backuppc as the administrative user: $Conf{CgiAdminUsers} = 'backuppc';

At this point BackupPC cannot write to the /etc/BackupPC directory because of the SELinux context, so relabel it:

chcon -R -t httpd_sys_content_t /etc/BackupPC

Viewing backup contents

# list backups
BackupPC_ls -h bi-server -n 0 -s /data/bakdata /
BackupPC_ls -h erp-oracle -n 1 -s /backup/bakdata /

# view a file
BackupPC_zcat -h erp-oracle -n 1 -s /backup/bakdata /navy_bak221220_full.log

Viewing CPU, machine model, memory, and other information on Linux

1. System

Command                    Meaning
uname -a                   show kernel / OS / CPU information
head -n 1 /etc/issue       show OS version
cat /proc/cpuinfo          show CPU information
hostname                   show the hostname
lspci -tv                  list all PCI devices
lsusb -tv                  list all USB devices
lsmod                      list loaded kernel modules
env                        show environment variables
lspci | grep -i nvidia     show the NVIDIA graphics card

2. Resources

Command                        Meaning
free -m                        show memory and swap usage
df -h                          show usage of each partition
du -sh <dir>                   show the size of the given directory
grep MemTotal /proc/meminfo    show total memory
grep MemFree /proc/meminfo     show free memory
uptime                         show uptime, user count, and load
cat /proc/loadavg              show the system load

3. Disks and partitions

Command                    Meaning
mount | column -t          show mounted partitions
fdisk -l                   show all partitions
swapon -s                  show all swap partitions
hdparm -i /dev/hda         show disk parameters (IDE devices only)
dmesg | grep IDE           show IDE device detection at boot

4. Network

Command            Meaning
ifconfig           show properties of all network interfaces
iptables -L        show firewall rules
route -n           show the routing table
netstat -lntp      show all listening ports
netstat -antp      show all established connections
netstat -s         show network statistics

5. Processes

Command    Meaning
ps -ef     show all processes
top        show process status in real time

6. Users

Command                      Meaning
w                            show active users
id <username>                show info for the given user
last                         show the login history
cut -d: -f1 /etc/passwd      list all users
cut -d: -f1 /etc/group       list all groups
crontab -l                   show the current user's cron jobs

7. Services

Command                        Meaning
chkconfig --list               list all system services
chkconfig --list | grep on     list all enabled system services

8. Programs

Command    Meaning
rpm -qa    list all installed packages

9. CPU information (model)

  • CPU model
    cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
  • Number of physical CPUs
    cat /proc/cpuinfo | grep physical | uniq -c
  • CPU mode (32- or 64-bit)
    getconf LONG_BIT
  • CPU flags (64-bit capable if ' lm ' is present)
    cat /proc/cpuinfo | grep flags | grep ' lm ' | wc -l
  • Full CPU details
    dmidecode | grep "Processor Information"
  • Memory information
    cat /proc/meminfo
  • Current kernel information
    uname -a
  • Current distribution information
    cat /etc/issue | grep Linux
  • Machine model
    dmidecode | grep "Product Name"
  • NIC information
    dmesg | grep -i eth
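
The cpuinfo one-liners above can be wrapped into small functions. Below is a hedged sketch (the function names are my own, not standard tools); each reads /proc/cpuinfo-style text on stdin, so it can be checked against a sample before pointing it at the real file:

```shell
# Count physical CPU packages: number of distinct "physical id" values.
count_physical_cpus() {
    grep 'physical id' | awk -F: '{print $2}' | sort -u | wc -l
}

# Print the distinct CPU model name(s).
cpu_model() {
    grep 'model name' | cut -d: -f2- | sed 's/^ *//' | sort -u
}
```

On a real machine: count_physical_cpus < /proc/cpuinfo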

10. GPU commands

  • Graphics card information
    lspci | grep -i vga
  • If using an NVIDIA card
    lspci | grep -i nvidia
  • Graphics card details
    lspci -v -s 00:0f.0
  • GPU memory usage
    nvidia-smi
  • Periodically report GPU usage
    watch -n 10 nvidia-smi
  • CUDA version
    cat /usr/local/cuda/version.txt
  • cuDNN version
    cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2

Open-source object storage (OSS)

  • Minio: an open-source object storage server under the Apache License v2.0. Minio offers an Amazon S3-compatible API and can be used to build cloud backups, CDNs, and storage repositories.

  • Ceph: a distributed storage system that provides high-performance object storage and file system services on a compute cluster. Ceph is an open-source project whose RadosGW object storage server supports the Amazon S3 and OpenStack Swift APIs.

  • GlusterFS: a distributed file system that provides high-performance file storage on a compute cluster. GlusterFS uses FUSE to mount the distributed file system locally and can provide NFS- or CIFS-like file sharing.

20231110 - Kettle notes

Using repositories in Kettle

Create a database repository. Remember to add "-Dfile.encoding=UTF-8" to the startup command, otherwise opening files fails with: Invalid byte 1 of 1-byte UTF-8 sequence.

After that, newly created transformations and jobs are saved in the database.

Linux server operations

Unpack PDI and copy it to the /usr/local/data-integration directory.

Set the environment variable:

export KETTLE_HOME=/usr/local/data-integration

Copy the repositories.xml file created on Windows into the /usr/local/data-integration/.kettle directory.

List the repositories:

./kitchen.sh -listrep
List of repositories:
#1 : birepo [bi-databases] id=KettleDatabaseRepository


List all jobs:

./kitchen.sh -rep:birepo  -user:admin -pass:admin   -listjobs
2023/11/10 17:40:59 - Kitchen - Start of run.
2023/11/10 17:40:59 - RepositoriesMeta - Reading repositories XML file: /usr/local/data-integration/.kettle/repositories.xml
zgcw

List all transformations:

./pan.sh -rep:birepo  -user:admin -pass:admin   -listtrans
2023/11/10 17:43:09 - Pan - 开始运行.
2023/11/10 17:43:09 - RepositoriesMeta - Reading repositories XML file: /usr/local/data-integration/.kettle/repositories.xml
car_transfer_info
market

Run a transformation:

$ ./pan.sh -rep:birepo  -user:admin -pass:admin   -trans:market
2023/11/10 17:45:35 - Pan - 开始运行.
2023/11/10 17:45:35 - RepositoriesMeta - Reading repositories XML file: /usr/local/data-integration/.kettle/repositories.xml
2023/11/10 17:45:36 - market - 为了转换解除补丁开始 [market]
2023/11/10 17:45:36 - read_from_market.0 - Finished reading query, closing connection
2023/11/10 17:45:36 - read_from_market.0 - 完成处理 (I=443, O=0, R=0, W=443, U=0, E=0)
2023/11/10 17:45:37 - write_to_market.0 - 完成处理 (I=443, O=0, R=443, W=443, U=0, E=0)
2023/11/10 17:45:37 - Pan - 完成!
2023/11/10 17:45:37 - Pan - 开始=2023/11/10 17:45:36.601, 停止=2023/11/10 17:45:37.081
2023/11/10 17:45:37 - Pan - 0 秒后处理结束.
2023/11/10 17:45:37 - market -
2023/11/10 17:45:37 - market - 进程 read_from_market.0 成功结束, 处理了 443 行. ( - 行/秒)
2023/11/10 17:45:37 - market - 进程 write_to_market.0 成功结束, 处理了 443 行. ( - 行/秒)

Run another transformation:

$ ./pan.sh -rep:birepo  -user:admin -pass:admin   -trans:car_transfer_info
2023/11/10 17:46:18 - Pan - 开始运行.
2023/11/10 17:46:18 - RepositoriesMeta - Reading repositories XML file: /usr/local/data-integration/.kettle/repositories.xml
2023/11/10 17:46:18 - car_transfer_info - 为了转换解除补丁开始 [car_transfer_info]
2023/11/10 17:46:19 - get_lastast_time_of_car_transfer.0 - Finished reading query, closing connection
2023/11/10 17:46:19 - get_lastast_time_of_car_transfer.0 - 完成处理 (I=1, O=0, R=0, W=1, U=0, E=0)
2023/11/10 17:46:19 - read_modifed_record.0 - Finished reading query, closing connection
2023/11/10 17:46:19 - read_modifed_record.0 - 完成处理 (I=17, O=0, R=1, W=17, U=0, E=0)
2023/11/10 17:46:19 - update_car_transfer_info.0 - 完成处理 (I=17, O=15, R=17, W=17, U=2, E=0)
2023/11/10 17:46:19 - Pan - 完成!
2023/11/10 17:46:19 - Pan - 开始=2023/11/10 17:46:18.994, 停止=2023/11/10 17:46:19.542
2023/11/10 17:46:19 - Pan - 0 秒后处理结束.
2023/11/10 17:46:19 - car_transfer_info -
2023/11/10 17:46:19 - car_transfer_info - 进程 get_lastast_time_of_car_transfer.0 成功结束, 处理了 1 行. ( - 行/秒)
2023/11/10 17:46:19 - car_transfer_info - 进程 read_modifed_record.0 成功结束, 处理了 17 行. ( - 行/秒)
2023/11/10 17:46:19 - car_transfer_info - 进程 update_car_transfer_info.0 成功结束, 处理了 17 行. ( - 行/秒)

Run a job:

$ ./kitchen.sh -rep:birepo  -user:admin -pass:admin   -job:zgcw
2023/11/10 17:54:09 - Kitchen - Start of run.
2023/11/10 17:54:09 - RepositoriesMeta - Reading repositories XML file: /usr/local/data-integration/.kettle/repositories.xml
2023/11/10 17:54:10 - zgcw - 开始执行任务
2023/11/10 17:54:11 - zgcw - 开始项[market]
2023/11/10 17:54:11 - market - Using run configuration [Pentaho local]
2023/11/10 17:54:11 - market - 为了转换解除补丁开始 [market]
2023/11/10 17:54:11 - read_from_market.0 - Finished reading query, closing connection
2023/11/10 17:54:11 - read_from_market.0 - 完成处理 (I=443, O=0, R=0, W=443, U=0, E=0)
2023/11/10 17:54:11 - write_to_market.0 - 完成处理 (I=443, O=0, R=443, W=443, U=0, E=0)
2023/11/10 17:54:11 - zgcw - 开始项[car_transfer_info]
2023/11/10 17:54:11 - car_transfer_info - Using run configuration [Pentaho local]
2023/11/10 17:54:11 - car_transfer_info - 为了转换解除补丁开始 [car_transfer_info]
2023/11/10 17:54:11 - get_lastast_time_of_car_transfer.0 - Finished reading query, closing connection
2023/11/10 17:54:11 - get_lastast_time_of_car_transfer.0 - 完成处理 (I=1, O=0, R=0, W=1, U=0, E=0)
2023/11/10 17:54:12 - read_modifed_record.0 - Finished reading query, closing connection
2023/11/10 17:54:12 - read_modifed_record.0 - 完成处理 (I=10, O=0, R=1, W=10, U=0, E=0)
2023/11/10 17:54:12 - update_car_transfer_info.0 - 完成处理 (I=10, O=9, R=10, W=10, U=1, E=0)
2023/11/10 17:54:12 - zgcw - 开始项[成功]
2023/11/10 17:54:12 - zgcw - 完成作业项[成功] (结果=[true])
2023/11/10 17:54:12 - zgcw - 完成作业项[car_transfer_info] (结果=[true])
2023/11/10 17:54:12 - zgcw - 完成作业项[market] (结果=[true])
2023/11/10 17:54:12 - zgcw - 任务执行完毕
2023/11/10 17:54:12 - Kitchen - Finished!
2023/11/10 17:54:12 - Kitchen - Start=2023/11/10 17:54:10.967, Stop=2023/11/10 17:54:12.271
2023/11/10 17:54:12 - Kitchen - Processing ended after 1 seconds.

Run a job with a parameter:

./kitchen.sh -rep:erp_kettle  -user:admin -pass:admin -parameter:start_date=2023-12-1 -job:guanbao
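
Once a job runs cleanly from the shell like this, it is usually scheduled with cron. Below is a hedged sketch of a wrapper (the function, log path, and crontab line are my own suggestions, not part of PDI): kitchen.sh returns a non-zero exit code on failure, so the wrapper logs the outcome and propagates that code for monitoring.

```shell
# Run a command, append its output and a status line to a log file,
# and return the command's exit code.
run_job() {
    cmd=$1
    log=$2
    if $cmd >>"$log" 2>&1; then
        echo "$(date '+%F %T') OK: $cmd" >>"$log"
    else
        rc=$?
        echo "$(date '+%F %T') FAILED (rc=$rc): $cmd" >>"$log"
        return "$rc"
    fi
}

# Hypothetical crontab entry running the job nightly at 02:00:
# 0 2 * * * cd /usr/local/data-integration && ./kitchen.sh -rep:birepo -user:admin -pass:admin -job:zgcw >> /var/log/kettle-zgcw.log 2>&1
```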

Build the first report: daily market transaction statistics

Create the table transfer_daily_report:

CREATE TABLE `transfer_daily_report` (
`market_id` int(10) DEFAULT NULL COMMENT '市场编号',
`day` DATE DEFAULT NULL COMMENT '天',
`vehicle_type` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '车辆类型',
`amount` varchar(30) COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '金额',
`cnt` int DEFAULT NULL COMMENT '数量',
`market` varchar(200) COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '市场名称',
`province` varchar(100) COLLATE utf8mb4_general_ci DEFAULT NULL COMMENT '省',
KEY `p` (`market_id`,`day`,`vehicle_type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci COMMENT='日统计(每个市场、每天、分类型统计)';

Create the full-rebuild job

Step one: clear the table. Create an Execute SQL task:

delete  from transfer_daily_report;

Create a query that fetches all transactions:

SELECT date(  handle_time) as day,
SUM(vehicle_price) as amount , count(*) as cnt,
i.market_id,m.name as market, m.province_name as province,
i.vehicle_type
FROM zg_vehicle_transfer_info i inner join market m on m.id = i.market_id
where i.handle_time is not null
GROUP BY i.market_id,day ,i.vehicle_type;

Add a script step that derives the vehicle category:

// Script here: map vehicle_type keywords to a coarse category (客车/货车/摩托车)
var vehicle_category = "";
if(vehicle_type.indexOf("客车")>=0){
vehicle_category="客车"
}
else if(vehicle_type.indexOf("轿车")>=0){
vehicle_category="客车"
}
else if(vehicle_type.indexOf("小型汽")>=0){
vehicle_category="客车"
}
else if(vehicle_type.indexOf("小型轿")>=0){
vehicle_category="客车"
}
else if(vehicle_type.indexOf("面包车")>=0){
vehicle_category="客车"
}
else if(vehicle_type.indexOf("小型越野")>=0){
vehicle_category="客车"
}

else if(vehicle_type.indexOf("货车")>=0){
vehicle_category="货车"
}
else if(vehicle_type.indexOf("载货")>=0){
vehicle_category="货车"
}
else if(vehicle_type.indexOf("重型")>=0){
vehicle_category="货车"
}
else if(vehicle_type.indexOf("货运")>=0){
vehicle_category="货车"
}
else if(vehicle_type.indexOf("摩托车")>=0){
vehicle_category="摩托车"
}
else if(vehicle_type.indexOf("二轮")>=0){
vehicle_category="摩托车"
}
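
As a quick way to sanity-check the rules outside Kettle, here is the same keyword-to-category mapping sketched as a shell function (my own helper, not part of PDI):

```shell
# Map a vehicle_type string to a coarse category, mirroring the
# JavaScript step above: passenger car (客车), truck (货车), motorcycle (摩托车).
vehicle_category() {
    case "$1" in
        *客车*|*轿车*|*小型汽*|*小型轿*|*面包车*|*小型越野*) echo "客车" ;;
        *货车*|*载货*|*重型*|*货运*) echo "货车" ;;
        *摩托车*|*二轮*) echo "摩托车" ;;
        *) echo "" ;;
    esac
}
```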

Finally, add a Table Output step.

Create the daily-update job

Add a query for the last 10 days of transactions:

SELECT date(handle_time) as day,
SUM(vehicle_price) as amount , count(*) as cnt,
i.market_id,m.name as market, m.province_name as province,
i.vehicle_type
FROM zg_vehicle_transfer_info i inner join market m on m.id = i.market_id
where i.handle_time is not null and i.handle_time > date_sub( now(),interval 10 day)
GROUP BY i.market_id,day ,i.vehicle_type;

Add the vehicle-category script step (same as above).

Finally, add an Insert/Update step.

It must match on market, date, and vehicle type when updating.

make oracle on vagrant box

https://geraldonit.com/2018/06/11/creating-an-oracle-database-vagrant-box/

Following that guide, create two VMs from the oraclelinux/7 box: 192.168.56.10 and 192.168.56.11.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

# Box metadata location and box name
BOX_URL = "https://oracle.github.io/vagrant-projects/boxes"
BOX_NAME = "oraclelinux/7"

# UI object for printing information
ui = Vagrant::UI::Prefixed.new(Vagrant::UI::Colored.new, "vagrant")

# Define constants
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Use vagrant-env plugin if available
  if Vagrant.has_plugin?("vagrant-env")
    config.env.load(".env.local", ".env") # enable the plugin
  end

  # VM name
  VM_NAME = default_s('VM_NAME', 'oracle11g-vagrant-whx')

  # Memory for the VM (in MB, 2048 MB = 2 GB)
  VM_MEMORY = default_i('VM_MEMORY', 4096)

  # VM time zone
  # If not specified, will be set to match host time zone (if possible)
  VM_SYSTEM_TIMEZONE = default_s('VM_SYSTEM_TIMEZONE', host_tz)

  # Listener port
  VM_LISTENER_PORT = default_i('VM_LISTENER_PORT', 1521)
  VM_LISTENER_HOST_PORT = default_i('VM_LISTENER_HOST_PORT', 1521)

  # Oracle Database password for SYS and SYSTEM accounts
  # If left blank, the password will be generated automatically
  VM_ORACLE_PWD = default_s('VM_ORACLE_PWD', '')
end

# Convenience methods
def default_s(key, default)
  ENV[key] && ! ENV[key].empty? ? ENV[key] : default
end

def default_i(key, default)
  default_s(key, default).to_i
end

def host_tz
  # get host time zone for setting VM time zone
  # if host time zone isn't an integer hour offset from GMT, fall back to UTC
  offset_sec = Time.now.gmt_offset
  if (offset_sec % (60 * 60)) == 0
    offset_hr = ((offset_sec / 60) / 60)
    timezone_suffix = offset_hr >= 0 ? "-#{offset_hr.to_s}" : "+#{(-offset_hr).to_s}"
    'Etc/GMT' + timezone_suffix
  else
    'UTC'
  end
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = BOX_NAME
  config.vm.box_url = "#{BOX_URL}/#{BOX_NAME}.json"
  config.vm.define VM_NAME

  # Provider-specific configuration
  config.vm.provider "virtualbox" do |v|
    v.memory = VM_MEMORY
    v.name = VM_NAME
  end
  config.vm.provider :libvirt do |v|
    v.memory = VM_MEMORY
  end

  config.vm.network :private_network, ip: "192.168.56.10"

  # add proxy configuration from host env - optional
  if Vagrant.has_plugin?("vagrant-proxyconf")
    ui.info "Getting Proxy Configuration from Host..."
    has_proxy = false
    ["http_proxy", "HTTP_PROXY"].each do |proxy_var|
      if proxy = ENV[proxy_var]
        ui.info "HTTP proxy: " + proxy
        config.proxy.http = proxy
        has_proxy = true
        break
      end
    end

    ["https_proxy", "HTTPS_PROXY"].each do |proxy_var|
      if proxy = ENV[proxy_var]
        ui.info "HTTPS proxy: " + proxy
        config.proxy.https = proxy
        has_proxy = true
        break
      end
    end

    if has_proxy
      # Only consider no_proxy if we have proxies defined.
      no_proxy = ""
      ["no_proxy", "NO_PROXY"].each do |proxy_var|
        if ENV[proxy_var]
          no_proxy = ENV[proxy_var]
          ui.info "No proxy: " + no_proxy
          no_proxy += ","
          break
        end
      end
      config.proxy.no_proxy = no_proxy + "localhost,127.0.0.1"
    end
  else
    ["http_proxy", "HTTP_PROXY", "https_proxy", "HTTPS_PROXY"].each do |proxy_var|
      if ENV[proxy_var]
        ui.warn 'To enable proxies in your VM, install the vagrant-proxyconf plugin'
        break
      end
    end
  end

  # VM hostname
  config.vm.hostname = VM_NAME

  # Oracle port forwarding
  config.vm.network "forwarded_port", guest: VM_LISTENER_PORT, host: VM_LISTENER_HOST_PORT

  # Provision everything on the first run
  config.vm.provision "shell", path: "scripts/install.sh", env: {
    "SYSTEM_TIMEZONE" => VM_SYSTEM_TIMEZONE,
    "LISTENER_PORT" => VM_LISTENER_PORT,
    "ORACLE_PWD" => VM_ORACLE_PWD
  }

end
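
The host_tz method above encodes a gotcha worth calling out: POSIX Etc/GMT zone names invert the sign, so a host at UTC+8 must be given "Etc/GMT-8". A hedged shell sketch of the same mapping (the function name is mine):

```shell
# Convert a GMT offset in seconds to an Etc/GMT zone name, falling back
# to UTC for offsets that are not a whole number of hours; note the
# inverted sign in Etc/GMT names (UTC+8 becomes Etc/GMT-8).
etc_gmt_zone() {
    offset_sec=$1
    if [ $((offset_sec % 3600)) -ne 0 ]; then
        echo "UTC"
        return 0
    fi
    offset_hr=$((offset_sec / 3600))
    if [ "$offset_hr" -ge 0 ]; then
        echo "Etc/GMT-$offset_hr"
    else
        echo "Etc/GMT+$((-offset_hr))"
    fi
}
```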

Install dependencies:

yum install binutils  compat-libstdc++-33  compat-libstdc++-33.i686  gcc   gcc-c++  glibc  glibc.i686  glibc-devel  glibc-devel.i686  ksh  libgcc   libgcc.i686  libstdc++ libstdc++.i686 libstdc++-devel  libstdc++-devel.i686  libaio  libaio.i686  libaio-devel  libaio-devel.i686  libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686  libXau  libXau.i686 libxcb  libxcb.i686   libXi  libXi.i686  make  sysstat  unixODBC unixODBC-devel  zlib-devel  elfutils-libelf-devel -y

yum install -y xorg-x11-xauth xorg-x11-font-utils xorg-x11-fonts-* xorg-x11-fonts-Type1


Edit /etc/ssh/sshd_config:

X11Forwarding yes
X11UseLocalhost no
X11DisplayOffset 10

Exit the terminal and restart sshd so the change takes effect.

Create a script for opening an X-forwarded SSH session into the VM:

#!/bin/sh
PORT=$(vagrant ssh-config | grep Port | grep -o '[0-9]\+')
IDFILE=$(vagrant ssh-config | grep IdentityFile | awk '{print $2}')
ssh -Y \
  -o UserKnownHostsFile=/dev/null \
  -o StrictHostKeyChecking=no \
  -o "XAuthLocation=/opt/X11/bin/xauth" \
  -i "$IDFILE" \
  -p "$PORT" \
  vagrant@localhost \
  "$@"




Start a terminal in XQuartz and run:

./whx.sh
echo $DISPLAY

cd /data/database
./runInstaller

Run as root:

/home/vagrant/app/oraInventory/orainstRoot.sh
/home/vagrant/app/vagrant/product/11.2.0/dbhome_1/root.sh

Master settings:

SID: whx

Password: zaq1XSW2

export ORACLE_SID=whx
export ORACLE_BASE=/home/vagrant/app/vagrant
export ORACLE_HOME=/home/vagrant/app/vagrant/product/11.2.0/dbhome_1

PATH=$PATH:$ORACLE_HOME/bin

Slave settings:

SID: orcl

Password: zaq1XSW2

export ORACLE_SID=orcl
export ORACLE_BASE=/home/vagrant/app/vagrant
export ORACLE_HOME=/home/vagrant/app/vagrant/product/11.2.0/dbhome_1

PATH=$PATH:$ORACLE_HOME/bin
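
The exports above are easy to get wrong on one of the two boxes; here is a small hedged helper (my own, not an Oracle tool) to confirm the required variables are set before running dbstart or sqlplus:

```shell
# Fail with a list of missing variables if any required Oracle env var
# is unset or empty; print "ok" otherwise.
check_oracle_env() {
    missing=""
    for v in ORACLE_SID ORACLE_BASE ORACLE_HOME; do
        eval "val=\${$v}"
        [ -n "$val" ] || missing="$missing $v"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
        return 1
    fi
    echo "ok"
}
```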

Start Oracle:

dbstart  $ORACLE_HOME

Now let's try master-slave replication.

https://www.cnblogs.com/hooly/p/8178570.html

On both master and slave, run:

sqlplus / as sysdba
alter system set aq_tm_processes=2 scope=both;       -- enable time monitoring of queue messages
alter system set global_names=true scope=both;       -- set global_names to true
alter system set undo_retention=3600 scope=both;     -- set the undo retention (default is 900)
alter system set streams_pool_size=25M scope=spfile; -- not needed if the SGA is auto-tuned

Put the master database (whx) into archivelog mode (as sysdba; can be run in sqlplus).

Check whether archiving is already enabled; if it is, skip the archiving setup below.

SQL> archive log list;

Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 6
Current log sequence 8

Enable archiving:

shutdown immediate;
startup mount;
alter database archivelog;
alter system set LOG_ARCHIVE_DEST_1='LOCATION=/home/vagrant/app/archive'; -- set the archive directory (afterwards, check the directory exists; if not, create it manually and rerun this statement)
alter database open;
alter system switch logfile; -- check that an archive file appears in the directory (if the database is not open, run "alter database open;" first)

Create the Streams admin user on both master and slave (run the following on both databases):

sqlplus / as sysdba;
create tablespace tbs_stream datafile '/home/vagrant/app/vagrant/oradata/whx/tbs_stream01.dbf' size 2000m autoextend on maxsize unlimited segment space management auto; -- dedicated Streams tablespace for the master
execute dbms_logmnr_d.set_tablespace('tbs_stream'); -- move the LogMiner dictionary out of the system tablespace so it cannot fill it up
create user strmadmin identified by strmadmin default tablespace tbs_stream temporary tablespace temp; -- create the user
grant dba to strmadmin; -- simply grant dba

sqlplus / as sysdba;
create tablespace tbs_stream datafile '/home/vagrant/app/vagrant/oradata/orcl/tbs_stream01.dbf' size 2000m autoextend on maxsize unlimited segment space management auto; -- dedicated Streams tablespace for the slave
execute dbms_logmnr_d.set_tablespace('tbs_stream'); -- move the LogMiner dictionary out of the system tablespace so it cannot fill it up
create user strmadmin identified by strmadmin default tablespace tbs_stream temporary tablespace temp; -- create the user
grant dba to strmadmin; -- simply grant dba


On the master, create a database link to the slave:

create public database link orcl connect to strmadmin IDENTIFIED BY strmadmin
using '(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.11)(PORT = 1521))
)
(CONNECT_DATA =
(SID= orcl)
)
)';

On the slave, create a database link to the master:

create public database link whx connect to strmadmin IDENTIFIED BY strmadmin
using '(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.10)(PORT = 1521))
)
(CONNECT_DATA =
(SID = whx)
)
)';

select sysdate from dual@whx;  -- verify the link works

Create the stream queue on the master (can be run in PL/SQL, logged in as strmadmin):

connect strmadmin/strmadmin  
begin
dbms_streams_adm.set_up_queue(queue_table => 'whx_queue_table',queue_name => 'whx_queue');
end;
/

Output: PL/SQL procedure successfully completed.

Create the stream queue on the slave:

connect strmadmin/strmadmin  -- log in to the slave database as strmadmin
begin
dbms_streams_adm.set_up_queue(queue_table => 'orcl_queue_table',queue_name => 'orcl_queue');
end;
/

Output: PL/SQL procedure successfully completed.

Create the capture process on the master:

connect strmadmin/strmadmin

begin
dbms_streams_adm.add_schema_rules(
schema_name => 'strmadmin',
streams_type => 'capture',
streams_name => 'capture_whx',
queue_name => 'strmadmin.whx_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => null,
inclusion_rule => true);
end;
/

Output: PL/SQL procedure successfully completed.

Instantiate the strmadmin schema on the slave (the two file paths must be identical):

exp strmadmin/strmadmin@whx file='/home/vagrant/crm.dmp' object_consistent=y rows=y
exp strmadmin/strmadmin@orcl file='/home/vagrant/crm.dmp' object_consistent=y rows=y

This failed:

EXP-00056: ORACLE error 12154 encountered
ORA-12154: TNS:could not resolve the connect identifier specified
EXP-00000: Export terminated unsuccessfully

Add a WHX entry to /home/vagrant/app/vagrant/product/11.2.0/dbhome_1/network/admin/tnsnames.ora:


ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle11g-vagrant-slave)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

WHX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.10)(PORT = 1521))
    (CONNECT_DATA =
      (SID = whx)
    )
  )

After that, the export succeeds.

Import the dump into the slave database:

imp strmadmin/strmadmin@orcl file='/home/vagrant/crm.dmp' ignore=y commit=y streams_instantiation=y full=y


Import: Release 11.2.0.1.0 - Production on Tue Oct 31 23:08:12 2023

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing STRMADMIN's objects into STRMADMIN
. . importing table "AQ$_WHX_QUEUE_TABLE_C" 0 rows imported
. . importing table "AQ$_WHX_QUEUE_TABLE_G" 0 rows imported
. . importing table "AQ$_WHX_QUEUE_TABLE_H"
Note: table contains ROWID column, values may be obsolete 0 rows imported
. . importing table "AQ$_WHX_QUEUE_TABLE_I"
Note: table contains ROWID column, values may be obsolete 0 rows imported
. . importing table "AQ$_WHX_QUEUE_TABLE_L" 0 rows imported
. . importing table "AQ$_WHX_QUEUE_TABLE_S" 1 rows imported
. . importing table "AQ$_WHX_QUEUE_TABLE_T" 0 rows imported
. . importing table "WHX_QUEUE_TABLE" 0 rows imported
Import terminated successfully without warnings.

Create the propagation process on the master:

connect strmadmin/strmadmin
begin
dbms_streams_adm.add_schema_propagation_rules(
schema_name => 'strmadmin',
streams_name => 'whx_to_orcl',
source_queue_name => 'strmadmin.whx_queue',
destination_queue_name => 'strmadmin.orcl_queue@orcl',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'whx',
inclusion_rule => true);
end;
/

Note: this statement may raise an error; if it does, ignore it and continue with the following steps.

Set the propagation schedule latency to 0 so LCRs propagate in real time (latency is in seconds):

begin
dbms_aqadm.alter_propagation_schedule(
queue_name => 'whx_queue',
destination => 'orcl',
latency => 0);
end;
/

Create the apply process on the slave:

connect strmadmin/strmadmin
begin
dbms_streams_adm.add_schema_rules(
schema_name => 'strmadmin',
streams_type => 'apply',
streams_name => 'apply_orcl',
queue_name => 'strmadmin.orcl_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'whx',
inclusion_rule => true);
end;
/

Start Streams

Start the apply process on the slave:

connect strmadmin/strmadmin
begin
dbms_apply_adm.start_apply(
apply_name => 'apply_orcl');
end;
/

Start the capture process on the master:

begin
dbms_capture_adm.start_capture(
capture_name => 'capture_whx');
end;
/

Now you can test: in the whx schema, insert and delete rows in any test table, add tables, and alter table structures, then verify the changes replicate.

Oracle notes

Create an oracle user and log in as oracle

Add to .bash_profile:

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=wghl09; export ORACLE_HOSTNAME
ORACLE_UNQNAME=server; export ORACLE_UNQNAME
ORACLE_BASE=/data/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/app/product/11.2.0/db_1; export ORACLE_HOME
ORACLE_SID=hwdb; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

Install dependencies:


yum install binutils compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel zlib-devel elfutils-libelf-devel -y

yum install -y xorg-x11-xauth xorg-x11-font-utils xorg-x11-fonts-* xorg-x11-fonts-Type1


Edit /etc/ssh/sshd_config:

X11Forwarding yes
X11UseLocalhost no
X11DisplayOffset 10

Start a terminal in XQuartz and run:

ssh -Y oracle@**** -o "XAuthLocation=/opt/X11/bin/xauth"
echo $DISPLAY
192.168.1.19:10.0
cd /data/database
./runInstaller

Start Oracle manually:

# lsnrctl start
# dbstart $ORACLE_HOME
# sqlplus / as sysdba
startup

Installing Oracle with docker-compose

Use the https://hub.docker.com/r/oracleinanutshell/oracle-xe-11g image.

Create docker-compose.yml:

version: '3'

services:
  oracle-db:
    image: oracleinanutshell/oracle-xe-11g:latest
    ports:
      - 1521:1521
      - 5500:5500
    environment:
      - ORACLE_ALLOW_REMOTE=true

Pull the image:

docker-compose pull

Then start it:

docker-compose up -d

Copy /u01 to ./data:

docker cp dck_oracle-db_1:/u01 ./data

Then edit docker-compose.yml:

version: '3'

services:
  oracle-db:
    image: oracleinanutshell/oracle-xe-11g:latest
    ports:
      - 1521:1521
      - 5500:5500
    environment:
      - ORACLE_ALLOW_REMOTE=true
    volumes:
      - ./data:/u01

Now the data is persisted on the local disk.

docker-compose exec oracle-db bash

sqlplus

user:system

pwd: oracle

By default, password verification is disabled (passwords never expire).
Connect to the database with the following settings:

hostname: localhost
port: 49161
sid: xe
username: system
password: oracle

Password for SYS & SYSTEM

oracle

Latest version

https://container-registry.oracle.com/

Direct downloads of the various versions: https://www.oracle.com/cn/database/technologies/oracle-database-software-downloads.html#db_free

To pull the Docker image, you must first agree to Oracle's terms at https://container-registry.oracle.com/ords/f?p=113:1:102342682778790:::1:P1_BUSINESS_AREA:3&cs=3T4nkcPYyj6ewF0qib0dU82LAmedE6CKr9q8rmOAgxLycqz9EVlXaoK41xRcvBalvRR4e6uVOdKMXmQIvBkPMdw

docker login container-registry.oracle.com

user: wanghongxing@gmail.com

pwd: *Y4r6?C@~c7jX!h

Tags


Tag OS/Architecture Pull Command Last Updated
19.3.0.0 linux/amd64 docker pull container-registry.oracle.com/database/enterprise:19.3.0.0 5 weeks ago
latest linux/amd64 docker pull container-registry.oracle.com/database/enterprise:latest 5 weeks ago
21.3.0.0 linux/amd64 docker pull container-registry.oracle.com/database/enterprise:21.3.0.0 5 weeks ago
19.19.0.0 linux/arm64 docker pull container-registry.oracle.com/database/enterprise:19.19.0.0 2 months ago
12.2.0.1 linux/amd64 docker pull container-registry.oracle.com/database/enterprise:12.2.0.1 6.1 years ago
12.2.0.1-slim linux/amd64 docker pull container-registry.oracle.com/database/enterprise:12.2.0.1-slim 6.1 years ago
12.1.0.2 linux/amd64 docker pull container-registry.oracle.com/database/enterprise:12.1.0.2 6.2 years ago

Vagrant CentOS 7 disk expansion

Add to the Vagrantfile (this uses the vagrant-disksize plugin):

config.disksize.size = "200GB"

vagrant reload

Install the tools:

vagrant ssh

yum install cloud-utils-growpart gdisk -y

# growpart /dev/sda 1
CHANGED: partition=1 start=2048 old: size=83884032 end=83886080 new: size=419428319 end=419430367

# xfs_growfs /
meta-data=/dev/sda1 isize=512 agcount=4, agsize=2621376 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=10485504, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=5119, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 10485504 to 52428539
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 8.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/sda1 200G 3.6G 197G 2% /
tmpfs 379M 0 379M 0% /run/user/1000

docker build multi platform

Images built on an ARM MacBook are arm64 and will not run on the (amd64) server, so we need to build server-compatible images.

The following command builds an image the server can run:

docker   build --platform linux/amd64 .

But that image will not run locally. The following builds a multi-arch image:

docker buildx build --platform linux/amd64,linux/arm64 .

But it fails with:

ERROR: multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")

After some digging, it turns out you need to create a docker-container builder first:


docker buildx create --name mycustombuilder --driver docker-container --bootstrap
docker buildx use mycustombuilder



docker buildx build --platform linux/arm64,linux/amd64 \
--build-arg VCS_REF=`git rev-parse --short HEAD` \
--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
--build-arg CANAL_COMPONENT_VERSION="1.1.7" \
--build-arg CANAL_COMPONENT_NAME="canal-adapter" \
--build-arg CANAL_DOWNLOAD_NAME="canal.adapter" \
-t wanghongxing/canal-adapter:v1.1.7 . --push

Then it works.
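
To keep one build script working on both machines, here is a hedged helper (my own, not a docker feature) that maps `uname -m` output to a buildx platform string:

```shell
# Map a machine architecture string (as printed by `uname -m`) to the
# corresponding docker --platform value.
buildx_platform() {
    case "$1" in
        arm64|aarch64) echo "linux/arm64" ;;
        x86_64|amd64)  echo "linux/amd64" ;;
        *) echo "unsupported arch: $1" >&2; return 1 ;;
    esac
}

# Hypothetical usage:
#   docker buildx build --platform "$(buildx_platform "$(uname -m)")" -t myimage .
```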