Version: 5.0

RocketMQ Connect in Action 2

PostgreSQL Source (CDC) -> RocketMQ Connect -> MySQL Sink (JDBC)

Preparation

Start RocketMQ

  1. Linux/Unix/Mac
  2. 64-bit JDK 1.8+
  3. Maven 3.2.x+
  4. Start RocketMQ (a start-up sketch follows the tip below);

Tip: where ${ROCKETMQ_HOME} points to

For the bin-release.zip package: /rocketmq-all-4.9.4-bin-release

For the source-release.zip package: /rocketmq-all-4.9.4-source-release/distribution
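
Item 4 only says to start RocketMQ; a minimal single-node sketch, assuming you are inside ${ROCKETMQ_HOME} and use the default quick-start layout:

cd ${ROCKETMQ_HOME}
# start the NameServer; progress is logged to ~/logs/rocketmqlogs/namesrv.log
nohup sh bin/mqnamesrv &
# start a Broker registered to the local NameServer; see ~/logs/rocketmqlogs/broker.log
nohup sh bin/mqbroker -n localhost:9876 &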

Start Connect

Build the Connector plugins

Debezium RocketMQ Connector

$ cd rocketmq-connect/connectors/rocketmq-connect-debezium/
$ mvn clean package -Dmaven.test.skip=true

Copy the compiled Debezium PostgreSQL RocketMQ Connector package into the runtime plugin directory. The commands are as follows:

mkdir -p /usr/local/connector-plugins
cp rocketmq-connect-debezium-postgresql/target/rocketmq-connect-debezium-postgresql-0.0.1-SNAPSHOT-jar-with-dependencies.jar /usr/local/connector-plugins

JDBC Connector

Build the JDBC Connector and copy the compiled package into the runtime plugin directory. The commands are as follows:

$ cd rocketmq-connect/connectors/rocketmq-connect-jdbc/
$ mvn clean package -Dmaven.test.skip=true
cp target/rocketmq-connect-jdbc-0.0.1-SNAPSHOT-jar-with-dependencies.jar /usr/local/connector-plugins
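
A quick sanity check that both plugin jars ended up in the directory that pluginPaths will point to:

ls -lh /usr/local/connector-plugins
# expected: rocketmq-connect-debezium-postgresql-0.0.1-SNAPSHOT-jar-with-dependencies.jar
#           rocketmq-connect-jdbc-0.0.1-SNAPSHOT-jar-with-dependencies.jar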

Start the Connect runtime

cd rocketmq-connect

mvn -Prelease-connect -DskipTests clean install -U

Modify the connect-standalone.conf configuration; the key settings are as follows:

$ cd distribution/target/rocketmq-connect-0.0.1-SNAPSHOT/rocketmq-connect-0.0.1-SNAPSHOT
$ vim conf/connect-standalone.conf
workerId=standalone-worker
storePathRootDir=/tmp/storeRoot

## Http port for user to access REST API
httpPort=8082

# Rocketmq namesrvAddr
namesrvAddr=localhost:9876

# RocketMQ acl
aclEnable=false
accessKey=rocketmq
secretKey=12345678

autoCreateGroupEnable=false
clusterName="DefaultCluster"

# Core configuration, configure the plugin directory of the previously compiled debezium package here
# Source or sink connector jar file dir,The default value is rocketmq-connect-sample
pluginPaths=/usr/local/connector-plugins
Then start the runtime in standalone mode:

cd distribution/target/rocketmq-connect-0.0.1-SNAPSHOT/rocketmq-connect-0.0.1-SNAPSHOT

sh bin/connect-standalone.sh -c conf/connect-standalone.conf &
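
To confirm the standalone worker is up, check that the REST port configured above is listening and watch the runtime log (the log path is an assumption; adjust it to your environment):

# httpPort from connect-standalone.conf
ss -lnt | grep 8082
# runtime log location is an assumption, adjust if needed
tail -n 50 ~/logs/rocketmqconnect/connect_runtime.log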

Postgres image

Use Debezium's Postgres Docker image to set up the Postgres database:

# starting a pg instance
docker run -d --name postgres -p 5432:5432 -e POSTGRES_USER=start_data_engineer -e POSTGRES_PASSWORD=password debezium/postgres:14

# bash into postgres instance
docker exec -ti postgres /bin/bash

Postgres info

Port: 5432

Account: start_data_engineer/password

Source table to sync: bank.holding

Target table (in MySQL): bank1.holding

MySQL image

Use Debezium's MySQL Docker image to set up the MySQL database:

docker run -it --rm --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=debezium -e MYSQL_USER=mysqluser -e MYSQL_PASSWORD=mysqlpw quay.io/debezium/example-mysql:1.9

MySQL info

Port: 3306

Account: root/debezium
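
The target database and table below are created through the MySQL client shipped in the image; a sketch based on the docker run command above:

docker exec -it mysql mysql -uroot -pdebezium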

Test data

Log in to the Postgres database with the start_data_engineer/password account.
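
A sketch of logging in through the container's psql client (local socket connections in this image usually do not prompt for the password):

docker exec -it postgres psql -U start_data_engineer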

Source table: bank.holding

CREATE SCHEMA bank;
SET search_path TO bank,public;
CREATE TABLE bank.holding (
holding_id int,
user_id int,
holding_stock varchar(8),
holding_quantity int,
datetime_created timestamp,
datetime_updated timestamp,
primary key(holding_id)
);
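-- REPLICA IDENTITY FULL makes UPDATE/DELETE events carry the complete old row, which the CDC connector relies on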
ALTER TABLE bank.holding replica identity FULL;
insert into bank.holding values (1000, 1, 'VFIAX', 10, now(), now());
\q

Log in again and apply some further changes to generate change events:
insert into bank.holding values (1001, 2, 'SP500', 1, now(), now());
insert into bank.holding values (1003, 3, 'SP500', 1, now(), now());
update bank.holding set holding_quantity = 300 where holding_id=1000;

Target table: bank1.holding

create database bank1;
use bank1;
CREATE TABLE holding (
holding_id int,
user_id int,
holding_stock varchar(8),
holding_quantity int,
datetime_created bigint,
datetime_updated bigint,
primary key(holding_id)
);

Start the Connectors

Start the Debezium source connector

Source table to sync: bank.holding. Purpose: parse the Postgres write-ahead log (logical decoding), wrap each change into a generic ConnectRecord, and send it to the RocketMQ topic.

curl -X POST -H "Content-Type: application/json" http://127.0.0.1:8082/connectors/postgres-connector -d  '{
"connector.class": "org.apache.rocketmq.connect.debezium.postgres.DebeziumPostgresConnector",
"max.task": "1",
"connect.topicname": "debezium-postgres-source-01",
"kafka.transforms": "Unwrap",
"kafka.transforms.Unwrap.delete.handling.mode": "none",
"kafka.transforms.Unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"kafka.transforms.Unwrap.add.headers": "op,source.db,source.table",
"database.history.skip.unparseable.ddl": true,
"database.server.name": "bankserver1",
"database.port": 5432,
"database.hostname": "database ip",
"database.connectionTimeZone": "UTC",
"database.user": "start_data_engineer",
"database.dbname": "start_data_engineer",
"database.password": "password",
"table.whitelist": "bank.holding",
"key.converter": "org.apache.rocketmq.connect.runtime.converter.record.json.JsonConverter",
"value.converter": "org.apache.rocketmq.connect.runtime.converter.record.json.JsonConverter"
}'
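
Once the source connector is running, change events should start arriving on the topic configured in connect.topicname. A hedged check from the RocketMQ side (run from ${ROCKETMQ_HOME}; mqadmin sub-commands can vary between versions):

sh bin/mqadmin topicList -n localhost:9876 | grep debezium-postgres-source-01
sh bin/mqadmin consumeMessage -n localhost:9876 -t debezium-postgres-source-01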

Start the JDBC sink connector

Purpose: consume data from the topic and write it to the target table via JDBC.

curl -X POST -H "Content-Type: application/json" http://127.0.0.1:8082/connectors/jdbcmysqlsinktest201 -d '{
"connector.class": "org.apache.rocketmq.connect.jdbc.connector.JdbcSinkConnector",
"max.task": "2",
"connect.topicnames": "debezium-postgres-source-01",
"connection.url": "jdbc:mysql://database ip:3306/bank1",
"connection.user": "root",
"connection.password": "debezium",
"pk.fields": "holding_id",
"table.name.from.header": "true",
"pk.mode": "record_key",
"insert.mode": "UPSERT",
"db.timezone": "UTC",
"table.types": "TABLE",
"errors.deadletterqueue.topic.name": "dlq-topic",
"errors.log.enable": "true",
"errors.tolerance": "ALL",
"delete.enabled": "true",
"key.converter": "org.apache.rocketmq.connect.runtime.converter.record.json.JsonConverter",
"value.converter": "org.apache.rocketmq.connect.runtime.converter.record.json.JsonConverter"
}'
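
After the sink connector is created, the rows inserted earlier should be upserted into MySQL. A quick check against the target table:

docker exec -it mysql mysql -uroot -pdebezium -e "select * from bank1.holding;"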

After the two connector tasks above have been created, log in to the Postgres database with the start_data_engineer/password account.

Any insert, update, or delete on the source table bank.holding will be synchronized to the target table bank1.holding.
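
As a small end-to-end check (holding_id 1004 is just an example value), apply one more change on the Postgres side and confirm it shows up in MySQL shortly afterwards:

docker exec -it postgres psql -U start_data_engineer -c "insert into bank.holding values (1004, 4, 'VTSAX', 5, now(), now());"
docker exec -it mysql mysql -uroot -pdebezium -e "select * from bank1.holding where holding_id = 1004;"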