[With Step-by-Step Guide] Incremental Data Synchronization from OceanBase to TiDB
Author: Billmay 表妹 | Original source: https://tidb.net/blog/c7445005
Background
This walkthrough uses OceanBase Binlog Server + Canal + Canal Adapter to synchronize incremental data from OceanBase (OB) to TiDB. The core workflow covers deployment, configuration, service startup, and sync verification, as follows.
Setting Up the OceanBase Binlog Server
Prerequisites
Before deploying the Binlog Server (i.e. obbinlog), make sure the following conditions are met:
- The OceanBase cluster has obconfig_url configured: log into the OceanBase cluster and check. If it is not configured, manually install obconfigserver and set it. For details, see: Deploying obconfigserver from the command line.
- ODP (OBProxy) is deployed and version-compatible: the Binlog service relies on ODP to provide connection support, and the ODP and OceanBase database versions must be within the supported range. See: Release Notes.
- Network connectivity: make sure the Binlog Server can reach the OceanBase instance's SQL/RPC ports and the metadata database port, and that ODP can reach the binlog_service_ip.
Step 1: Installation
Community edition installation (using yum as an example)
After installation, the default path is /home/ds/oblogproxy.
Note: enterprise edition users need to contact OceanBase technical support for the installation package. For details, see: Binlog Service Introduction.
Manual extraction deployment (optional)
Alternatively, download the RPM package and extract it to a directory of your choice with rpm2cpio.
Step 2: Initialize and start the node
The metadata tables need to be initialized on first startup; subsequent nodes do not need to repeat the initialization.
After startup, query the node status with the following command:
SHOW NODES;
For details, see: Node Management
How an OceanBase Tenant Subscribes to the Binlog Server
Step: Create a Binlog task
First, confirm the tenant information:
-- View the cluster name
SHOW PARAMETERS LIKE 'cluster';
-- Get the config_url
SHOW PARAMETERS LIKE 'obconfig_url';
Then execute the CREATE BINLOG command on the Binlog Server, for example:
CREATE BINLOG INSTANCE binlog1 FOR `demo`.`obmysql` CLUSTER_URL='http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo';
Parameter descriptions:
- ${cluster_name}: the actual cluster name
- ${tenant_name}: the tenant name
- ${config_url}: the value obtained via SHOW PARAMETERS LIKE 'obconfig_url'
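The parameter substitution above can be sketched in a few lines. This is only an illustration of how the placeholders map into the CREATE BINLOG statement; the cluster name, tenant name, and config_url values below are the example values from this article, not real endpoints.

```python
# Hypothetical values; substitute the output of the SHOW PARAMETERS queries.
cluster_name = "demo"
tenant_name = "obmysql"
config_url = "http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo"

# Assemble the CREATE BINLOG statement shown in the example above.
create_binlog_sql = (
    f"CREATE BINLOG INSTANCE binlog1 "
    f"FOR `{cluster_name}`.`{tenant_name}` "
    f"CLUSTER_URL='{config_url}';"
)
print(create_binlog_sql)
```

Running this prints the same statement as the example, which you would then execute on the Binlog Server.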
Reference: Creating a Binlog Instance
How to Check Whether the OceanBase Instance Is Generating Binlogs Normally
Method 1: Check the logs
Check the obbinlog runtime log, usually located at:
/home/ds/oblogproxy/log/logproxy.log
Search for key error or status messages, for example whether there are log entries showing that clog was pulled successfully.
If a resource-shortage error appears, such as:
[error] selection_strategy.cpp(519): [ResourcesFilter] The resource threshold of node ... does not meet requirements
check whether CPU, memory, or disk usage has exceeded the thresholds.
For details, see: Troubleshooting Guide
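The log check above can be automated with a small script. This is a minimal sketch, not part of obbinlog itself: it scans logproxy-style log text for [error] lines and flags the ResourcesFilter message that indicates a node failed the resource check. The sample log lines are illustrative, not real output.

```python
# Illustrative sample; in practice read /home/ds/oblogproxy/log/logproxy.log.
SAMPLE_LOG = """\
[info] binlog_server.cpp(101): pull clog succeed, checkpoint advanced
[error] selection_strategy.cpp(519): [ResourcesFilter] The resource threshold of node 10.0.0.1 does not meet requirements
"""

def find_resource_errors(log_text):
    """Return all [error] lines, plus the subset that are ResourcesFilter hits."""
    errors, resource_errors = [], []
    for line in log_text.splitlines():
        if "[error]" in line:
            errors.append(line)
            if "ResourcesFilter" in line:
                resource_errors.append(line)
    return errors, resource_errors

errors, resource_errors = find_resource_errors(SAMPLE_LOG)
print(f"{len(errors)} error line(s), {len(resource_errors)} resource-threshold hit(s)")
```

A ResourcesFilter hit is the cue to check CPU, memory, and disk usage on that node.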
Method 2: Monitoring and diagnostic tools
You can use the obdiag tool for one-click diagnostics, collecting status information about the cluster and the Binlog service.
How to Enter the OceanBase Binlog Server Installation Directory and Its run Subdirectory and Inspect Their Files
Default installation path
The default installation path for the community edition is:
/home/ds/oblogproxy
Enter the run directory and list its files
cd /home/ds/oblogproxy/run
ls -la
Common subdirectories and files include:
- bin/: executables, such as the logproxy main process
- conf/: configuration file directory
- log/: log files, especially logproxy.log
- run/: runtime artifacts such as PID files and socket files
- lib/: dependency libraries
You can check the currently running process with:
ps -ef | grep logproxy
Additional Notes
Unsupported scenarios: OceanBase's Binlog service is not yet suitable for primary/standby setups, incremental restore, and similar scenarios. See: Binlog Service Introduction.
Version compatibility: different obbinlog versions support different OceanBase versions. If your version is not within the supported range, you can manually install the matching obcdc dependency. See: obbinlog V4.3.2.
Summary
It is recommended to use OCP or the obd tool for visual management and automated deployment to improve operational efficiency.
For more details, refer to the official documentation.
Install ZooKeeper
Kafka itself depends on ZooKeeper, and the Kafka distribution can start ZooKeeper directly:
wget https://archive.apache.org/dist/kafka/3.9.0/kafka_2.13-3.9.0.tgz
tar zxvf kafka_2.13-3.9.0.tgz
cd kafka_2.13-3.9.0
bin/zookeeper-server-start.sh config/zookeeper.properties
Install Java
yum -y install java
java --version
Sample output:
#openjdk 11.0.21 2023-10-17
#OpenJDK Runtime Environment Bisheng (build 11.0.21+9)
#OpenJDK 64-Bit Server VM Bisheng (build 11.0.21+9, mixed mode, sharing)
Install Canal
Install canal.deployer-1.1.8.tar.gz and canal.adapter-1.1.8.tar.gz:
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.deployer-1.1.8.tar.gz
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.adapter-1.1.8.tar.gz
Modify the deployer configuration
Two configuration files need to be modified: canal.properties and instance.properties.
Configure the canal.properties file
vi /root/canal-for-ob-1.1.8/conf/canal.properties
canal.properties configuration file:
#################################################
#########       common argument       ###########
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd =
# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd =
# admin auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

canal.zkServers = 127.0.0.1:2181          <--- set the ZooKeeper address
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
canal.serverMode = tcp                    <--- set to tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
# memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
# memory store RingBuffer used memory unit size, default 1kb
canal.instance.memory.buffer.memunit = 1024
# memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

# detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
## concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
## disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hour (15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########        destinations        ############
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to 'true' means that when binlog pos not found, skip to latest.
# WARN: pls keep 'false' in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml
#canal.instance.global.spring.xml = classpath:spring/ob-default-instance.xml

##################################################
#########         MQ Properties       ############
##################################################
# aliyun ak/sk, support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid=

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########            Kafka            ############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf

# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username=\"alice\" \\npassword=\"alice-secret\";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT

##################################################
#########           RocketMQ          ############
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =

##################################################
#########           RabbitMQ          ############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.queue =
rabbitmq.routingKey =
rabbitmq.deliveryMode =

##################################################
#########            Pulsar           ############
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
Configure the instance.properties file
vi /root/canal-for-ob-1.1.8/conf/example/instance.properties
Configuration parameters; pay attention to the required parameters:
#################################################
## mysql serverId, v1.0.26+ will autoGen
canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# position info
canal.instance.master.address=10.10.10.101:2883      <--- address of obproxy
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# multi stream for polardbx
canal.instance.multi.stream.on=false

# ssl
#canal.instance.master.sslMode=DISABLED
#canal.instance.master.tlsVersions=
#canal.instance.master.trustCertificateKeyStoreType=
#canal.instance.master.trustCertificateKeyStoreUrl=
#canal.instance.master.trustCertificateKeyStorePassword=
#canal.instance.master.clientCertificateKeyStoreType=
#canal.instance.master.clientCertificateKeyStoreUrl=
#canal.instance.master.clientCertificateKeyStorePassword=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=

# username/password
canal.instance.dbUsername=root@ob_user1#ob_test1     <--- OceanBase user
canal.instance.dbPassword=PassworD123                <--- OceanBase password
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter (format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter (format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,topic2:mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.enableDynamicQueuePartition=false
#canal.mq.partitionsNum=3
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################
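The table filters above are easy to get wrong, so here is a sketch of how they behave. canal.instance.filter.regex holds regexes matched against the "schema.table" name, and the black regex excludes matches; note that in a .properties file each backslash is doubled, so mysql\\.slave_.* on disk is the regex mysql\.slave_.* at runtime. This Python approximation of the matching logic is for illustration only (Canal itself uses Java regexes):

```python
import re

FILTER_REGEX = r".*\..*"          # runtime form of canal.instance.filter.regex
BLACK_REGEX = r"mysql\.slave_.*"  # runtime form of canal.instance.filter.black.regex

def table_allowed(schema, table):
    """Approximate Canal's filter: blacklist wins, then the whitelist regex."""
    name = f"{schema}.{table}"
    if re.fullmatch(BLACK_REGEX, name):
        return False
    return re.fullmatch(FILTER_REGEX, name) is not None

print(table_allowed("db1", "t1"))            # expected: True
print(table_allowed("mysql", "slave_relay")) # expected: False
```

With the defaults shown, every schema.table is captured except MySQL's internal slave_* tables.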
Start the Canal server
sh /root/canal-for-ob-1.1.8/bin/startup.sh
The startup log should show no errors; if any appear, investigate and resolve them.
2025-12-11 17:18:50.995 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2025-12-11 17:18:51.001 [main] INFO com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2025-12-11 17:18:51.008 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2025-12-11 17:18:51.089 [main] INFO com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[172.17.0.1(172.17.0.1):11111]
2025-12-11 17:18:52.038 [main] INFO com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now ......
Modify the Canal Adapter configuration
vi /root/canal-for-adapter-ob-1.1.8/conf/application.yml
Canal Adapter configuration file:
server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp #tcp kafka rocketMQ rabbitMQ
  flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1000
  retries: -1
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    canal.tcp.server.host: 127.0.0.1:11111 # <--- address of the canal server
    canal.tcp.zookeeper.hosts:
    canal.tcp.batch.size: 500
    canal.tcp.username:
    canal.tcp.password:
    # kafka consumer
    # rocketMQ consumer
    # rabbitMQ consumer
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://xx.xxx.xx.203:2883/db1?useUnicode=true # <-- source (OceanBase) address
      username: root@ob_user1#ob_test1 # <-- OceanBase username
      password: PassworD123 # <-- OceanBase password
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: rdb
        key: mysql1 # <--- remember this key; it is referenced in a later config file
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://xx.xxx.xxx.247:4000/db1?useUnicode=true # <-- target (TiDB) address
          jdbc.username: tidb_test1
          jdbc.password: PassworD123
Modify mytest_user.yml to configure the sync subscription
vi /root/canal-for-adapter-ob-1.1.8/conf/rdb/mytest_user.yml
mytest_user.yml configuration parameters:
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1 # <--- must match the key defined earlier
concurrent: true
dbMapping:
  mirrorDb: true
  database: db1
Start canal-adapter
sh /root/canal-for-adapter-ob-1.1.8/bin/startup.sh
The log shows no errors:
2025-12-11 15:30:28.800 [SpringApplicationShutdownHook] INFO ru.yandex.clickhouse.ClickHouseDriver - Driver registered
2025-12-11 15:30:29.885 [SpringApplicationShutdownHook] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## stop the canal client adapters
2025-12-11 15:30:29.886 [pool-9-thread-1] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example is waiting for adapters' worker thread die!
2025-12-11 15:30:29.961 [pool-9-thread-1] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example adapters worker thread dead!
2025-12-11 15:30:30.158 [pool-9-thread-1] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closing ...
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closed
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example all adapters destroyed!
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - All canal adapters destroyed
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closing ...
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closed
2025-12-11 15:30:30.163 [SpringApplicationShutdownHook] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## canal client adapters are down.
2025-12-11 17:26:01.842 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Starting CanalAdapterApplication using Java xx.0.21 on tidbxxx.xxx.xxx.xxx.net with PID 3965171 (/root/canal-for-adapter-ob-1.1.8/lib/client-adapter.launcher-1.1.8.jar started by root in /root/canal-for-adapter-ob-1.1.8/bin)
2025-12-11 17:26:01.847 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - No active profile set, falling back to 1 default profile: "default"
2025-12-11 17:26:02.300 [main] INFO org.springframework.cloud.context.scope.GenericScope - BeanFactory id=d4f2b56b-aacd-327d-9217-5ce4cfc37805
2025-12-11 17:26:02.480 [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8081 (http)
2025-12-11 17:26:02.487 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8081"]
2025-12-11 17:26:02.487 [main] INFO org.apache.catalina.core.StandardService - Starting service [Tomcat]
2025-12-11 17:26:02.487 [main] INFO org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.75]
2025-12-11 17:26:02.570 [main] INFO o.a.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2025-12-11 17:26:02.570 [main] INFO o.s.b.w.s.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 692 ms
2025-12-11 17:26:02.806 [main] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-1} inited
2025-12-11 17:26:03.104 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8081"]
2025-12-11 17:26:03.115 [main] INFO o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8081 (http) with context path ''
2025-12-11 17:26:03.118 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## syncSwitch refreshed.
2025-12-11 17:26:03.118 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## start the canal client adapters.
2025-12-11 17:26:03.119 [main] INFO c.a.otter.canal.client.adapter.support.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.166 [main] INFO c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Start loading rdb mapping config ...
2025-12-11 17:26:03.174 [main] INFO c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Rdb mapping config loaded
2025-12-11 17:26:03.198 [main] INFO com.alibaba.druid.pool.DruidDataSource - {dataSource-2} inited
2025-12-11 17:26:03.202 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: rdb succeed
2025-12-11 17:26:03.207 [main] INFO c.alibaba.otter.canal.connector.core.spi.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.221 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Start adapter for canal-client mq topic: example-g1 succeed
2025-12-11 17:26:03.222 [main] INFO c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## the canal client adapters are running now ......
2025-12-11 17:26:03.222 [Thread-3] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - =============> Start to connect destination: example <=============
2025-12-11 17:26:03.228 [main] INFO c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Started CanalAdapterApplication in 1.697 seconds (JVM running for 2.164)
2025-12-11 17:26:03.354 [Thread-3] INFO c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - =============> Subscribe destination: example succeed <=============
Verify OceanBase Incremental Sync
Insert data in OceanBase to verify incremental sync
mysql> select version();
+------------------------------+
| version()                    |
+------------------------------+
| 5.7.25-OceanBase_CE-v4.3.5.4 |
+------------------------------+
1 row in set (0.00 sec)

mysql> use db1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------+
| Tables_in_db1 |
+---------------+
| t1            |
+---------------+
1 row in set (0.00 sec)

mysql> desc t1;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int(11)     | NO   | PRI | NULL    |       |
| col1  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)

mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
+----+------+
3 rows in set (0.00 sec)

mysql> insert into \c
mysql> insert into t1 (id,col1) values (4,'ddd');
Query OK, 1 row affected (0.01 sec)

mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
|  4 | ddd  |
+----+------+
4 rows in set (0.00 sec)

Data synced to TiDB:

mysql> select version();
+--------------------+
| version()          |
+--------------------+
| 8.0.11-TiDB-v7.5.5 |
+--------------------+
1 row in set (0.00 sec)

mysql> use db1;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
|  4 | ddd  |
+----+------+
4 rows in set (0.00 sec)
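The eyeball comparison above can be scripted for larger tables. This is a minimal sketch under stated assumptions: the two row lists below mirror the query output shown above, and in practice you would fetch them from each database with a MySQL client library rather than hard-coding them.

```python
# Rows as returned by `select * from t1` on each side (hard-coded here for illustration).
source_rows = [(1, "ccc"), (2, "ccc"), (3, "ccc"), (4, "ddd")]  # OceanBase
target_rows = [(1, "ccc"), (2, "ccc"), (3, "ccc"), (4, "ddd")]  # TiDB

def diff_rows(src, dst):
    """Return rows missing from the target and rows unexpected in the target."""
    src_set, dst_set = set(src), set(dst)
    return sorted(src_set - dst_set), sorted(dst_set - src_set)

missing, extra = diff_rows(source_rows, target_rows)
print("in sync" if not missing and not extra else f"missing={missing} extra={extra}")
```

An empty diff in both directions is a quick sanity check that the incremental sync caught up; for large tables, comparing per-chunk checksums is cheaper than comparing full result sets.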
Notes
- Version compatibility: make sure the obbinlog, OB cluster, ODP, and Canal versions match.
- Log monitoring: regularly check logproxy.log and the Canal Server/Adapter logs to catch resource shortages (CPU/memory/disk) or connection issues promptly.
- Operational efficiency: consider OCP or the obd tool for visual management and automated deployment.
Summary
As representative domestic distributed databases, TiDB and OceanBase have both become preferred tools for operations DBAs on the strength of their respective technical characteristics. In recent years, more and more OceanBase users have chosen TiDB as a downstream database, a trend that reflects differences between the two in features, ecosystem, and fit with user needs. The core motivations for OceanBase users choosing TiDB downstream include simplifying the technology stack and reducing operations cost, TiDB's business friendliness and developer fit, cross-city replication and stability requirements, and an active community with long-term momentum.
As enterprises pay more attention to technical flexibility, operational efficiency, and long-term cost, TiDB, with its compatibility, scalability, and ecosystem advantages, is becoming the preferred downstream database for OceanBase users who want to broaden their technology stack and reduce lock-in risk. This trend not only reflects the diversified demands of the distributed database market but also demonstrates TiDB's overall competitiveness in complex scenarios.
Copyright notice: this article is an original work by InfoQ author 【TiDB 社区干货传送门】.
Original link: 【http://xie.infoq.cn/article/b34bb279f8c5bfb07299e9082】. Please contact the author before reprinting.
TiDB 社区干货传送门
TiDB community site: https://tidb.net/ | Joined 2021-12-15
TiDB 社区干货传送门 is a column organized by the TiDB community evangelist committee to publish high-quality TiDB community content, aiming to deepen exchange and learning among TiDBers and build a friendly, mutually supportive, co-created TiDB community. https://tidb.net/






