
Solution: DM relay processing unit error

  • July 11, 2022

Author: 代晓磊 _Mars. Original source: https://tidb.net/blog/ab917b52


The DM problem we hit involves both the relay and syncer units. When an upstream MySQL binlog file exceeds 4 GB, both relay and syncer are affected: the problem originates upstream and blocks relay replication, so you have to fix relay replication first and only then fix the syncer.


Symptoms:


(1) Received an email alert.


Log in and check the DM error:


query-status mba_app_repl_online1


The syncer unit reported an error:


{
"Type": "UnknownError",
"msg": "[code=10001:class=database:scope=downstream:level=high] database driver error: bad connection\ngithub.com/pingcap/dm/pkg/terror.(*Error).Delegate\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/pkg/terror/terror.go:267\ngithub.com/pingcap/dm/pkg/terror.DBErrorAdaptArgs\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/pkg/terror/adapter.go:39\ngithub.com/pingcap/dm/pkg/terror.DBErrorAdapt\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/pkg/terror/adapter.go:46\ngithub.com/pingcap/dm/pkg/conn.(*BaseDB).GetBaseConn\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/pkg/conn/basedb.go:92\ngithub.com/pingcap/dm/syncer.createConns.func1\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/db.go:299\ngithub.com/pingcap/dm/syncer.(*DBConn).resetConn\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/db.go:157\ngithub.com/pingcap/dm/syncer.(*Syncer).resetDBs\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/syncer.go:513\ngithub.com/pingcap/dm/syncer.(*Syncer).Resume\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/syncer/syncer.go:2277\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337",
"error": null
}


The relay unit (binlog pulling) reported an error: handle a potential duplicate event


{
"Type": "UnknownError",
"msg": "[code=30029:class=relay-unit:scope=internal:level=high] handle a potential duplicate event &{Timestamp:1583906704 EventType:DeleteRowsEventV2 ServerID:2084430 EventSize:8132 LogPos:11943 Flags:0} in 4723-binlog.002270: check event &{Timestamp:1583906704 EventType:DeleteRowsEventV2 ServerID:2084430 EventSize:8132 LogPos:11943 Flags:0} whether duplicate in /data/dm/relay_log/3793c524-c319-11e8-95ba-6c92bf2f9975.000001/4723-binlog.002270: event from 3811 in /data/dm/relay_log/3793c524-c319-11e8-95ba-6c92bf2f9975.000001/4723-binlog.002270 diff from passed-in event &{Timestamp:1583906704 EventType:DeleteRowsEventV2 ServerID:2084430 EventSize:8132 LogPos:11943 Flags:0}\ngithub.com/pingcap/dm/pkg/terror.(*Error).Generatef\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/pkg/terror/terror.go:237\ngithub.com/pingcap/dm/relay/writer.checkIsDuplicateEvent\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/writer/file_util.go:157\ngithub.com/pingcap/dm/relay/writer.(*FileWriter).handleDuplicateEventsExist\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/writer/file.go:369\ngithub.com/pingcap/dm/relay/writer.(*FileWriter).handlePotentialHoleOrDuplicate\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/writer/file.go:312\ngithub.com/pingcap/dm/relay/writer.(*FileWriter).handleEventDefault\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/writer/file.go:287\ngithub.com/pingcap/dm/relay/writer.(*FileWriter).WriteEvent\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/writer/file.go:130\ngithub.com/pingcap/dm/relay.(*Relay).handleEvents\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/relay.go:434\ngithub.com/pingcap/dm/relay.(*Relay).process\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/relay.go:292\ngithub.com/pingcap/dm/relay.(*Relay).Process\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/relay/relay.go:191\ngithub.com/pingcap/dm/dm/worker.(*realRelayHolder).run\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/relay.go:164\ngithub.com/pingcap/dm/dm/worker.(*realRelayHolder).Start.func1\n\t/home/jenkins/agent/workspace/build_dm_master/go/src/github.com/pingcap/dm/dm/worker/relay.go:140\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1337"
}


(2) Investigation showed that the upstream MySQL had a binlog file larger than 4 GB. In DM's relay-log directory, the corresponding relay log had stopped growing at exactly 4 GB: newer binlogs were no longer being pulled, and SQL replay had stopped as well. The upstream binlog file was in fact more than 6 GB.


Using the 4723-binlog.002270 file named in the error, check its size on the upstream MySQL:


show binary logs;


| Log_name           | File_size  |
| 4723-binlog.002268 | 1074943453 |
| 4723-binlog.002269 | 1074017204 |
| 4723-binlog.002270 | 6977972415 |
| 4723-binlog.002271 | 1081537286 |
| 4723-binlog.002272 | 1073806137 |


(3) Solution:


We found the official fix:


https://pingcap.com/docs-cn/stable/reference/tools/data-migration/troubleshoot/error-handling/


First, fix the relay log replication problem.


While DM is pulling relay logs and doing incremental replication, these two errors can occur if it runs into an upstream binlog file larger than 4 GB.


The reason is that when DM writes the relay log, it validates each event against the binlog position and file size, and it also persists the replicated binlog position as a checkpoint. However, MySQL officially stores the binlog position as a uint32, so for the portion of a binlog file beyond 4 GB the position offset overflows, which produces the errors above.
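To make the overflow concrete, here is a minimal Go sketch (not DM code; the value 6977972415 is simply the size of the offending 4723-binlog.002270 from the SHOW BINARY LOGS output above):

package main

import "fmt"

func main() {
	// Byte size of the offending file, taken from the SHOW BINARY LOGS output above.
	var fileSize uint64 = 6977972415

	// MySQL stores the binlog position in a uint32, so offsets past 4 GB wrap around mod 2^32.
	wrapped := uint32(fileSize)

	fmt.Println(fileSize)             // 6977972415 -- the real offset inside the file
	fmt.Println(wrapped)              // 2683005119 -- the only value a uint32 position can hold
	fmt.Println(fileSize % (1 << 32)) // 2683005119 -- the same wrap-around, written as a modulo
}

Any event located past the 4 GB boundary therefore reports a position that already "exists" earlier in the file, which is exactly why the relay writer complains about a potential duplicate event.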


For the relay processing unit, you can recover manually with the following steps:


1. On the upstream, confirm that the binlog file involved in the error is larger than 4 GB.


2. Stop the DM-worker.


ansible-playbook stop.yml --tags=dm-worker -l dm-worker1


Note: do not assume that stop-task stops the worker. The dm-worker process itself must be stopped; only when ps -ef | grep 'dm-worker' shows no process on the host is it really down.
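For example, the check from the note above, run on the DM-worker host:

ps -ef | grep 'dm-worker' | grep -v grep   # must print nothing before you touch the relay log files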


3. Copy the corresponding binlog file from the upstream into the relay log directory to act as the relay log file (back up the original relay log file before overwriting it).
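A sketch of this step. The relay log path is the one from the error message above; the upstream host name mysql-upstream and its binlog directory /data/mysql are placeholders for your actual environment:

RELAY_DIR=/data/dm/relay_log/3793c524-c319-11e8-95ba-6c92bf2f9975.000001   # path taken from the relay error above

# Back up the truncated relay log first
cp $RELAY_DIR/4723-binlog.002270 $RELAY_DIR/4723-binlog.002270.bak

# Copy the complete >4 GB binlog from the upstream MySQL host over the truncated relay log
# (mysql-upstream and /data/mysql are placeholders)
scp mysql-upstream:/data/mysql/4723-binlog.002270 $RELAY_DIR/4723-binlog.002270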


4. Update the relay.meta file in the relay log directory so that DM starts pulling from the next binlog (back up the file before modifying it).


For example: if the error shows binlog-name = "mysql-bin.004451" and binlog-pos = 2453, update them to binlog-name = "mysql-bin.004452" and binlog-pos = 4.


Note: if DM is configured with GTID, then besides binlog-name and binlog-pos you also need to update binlog-gtid. You can obtain the GTID from the upstream MySQL binlog like this:


MySQL [(none)]> show binlog events in '4723-binlog.002271' from 4 limit 4;

+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------+
| Log_name           | Pos | Event_type     | Server_id | End_log_pos | Info                                                                      |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------+
| 4723-binlog.002271 | 4   | Format_desc    | 20849203  | 120         | Server ver: 5.6.36-log, Binlog ver: 4                                     |
| 4723-binlog.002271 | 120 | Previous_gtids | 20849203  | 191         | f0e914ef-54cf-11e7-813d-6c92bf2fa791:1-138218058                          |
| 4723-binlog.002271 | 191 | Gtid           | 2084430   | 239         | SET @@SESSION.GTID_NEXT= 'f0e914ef-54cf-11e7-813d-6c92bf2fa791:138218059' |
| 4723-binlog.002271 | 239 | Query          | 2084430   | 302         | BEGIN                                                                     |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------+

4 rows in set (0.03 sec)
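Putting the pieces together for this incident, the edited relay.meta would look roughly like the sketch below. The binlog-gtid value is taken from the Previous_gtids row in the output above; the exact key set can differ between DM versions, so compare with your existing relay.meta before editing and keep the backup:

binlog-name = "4723-binlog.002271"
binlog-pos  = 4
binlog-gtid = "f0e914ef-54cf-11e7-813d-6c92bf2fa791:1-138218058"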


5. Restart the DM-worker.


ansible-playbook start.yml --tags=dm-worker -l dm-worker1


(4) This fixes the relay log pulling, but the syncer unit can still fail afterwards.


For the binlog replication (syncer) processing unit, recover manually with the following steps:


1. On the upstream, confirm that the binlog file involved in the error is larger than 4 GB.


2. Stop the replication task with stop-task.


3. In the downstream dm_meta database, update binlog_name in the global checkpoint and in each table's checkpoint to the binlog file that caused the error, and update binlog_pos to a valid position that has already been replicated, for example 4.


For example: if the failing task is named dm_test, its source-id is replica-1, and the binlog file at the time of the error is mysql-bin|000001.004451, then run UPDATE dm_test_syncer_checkpoint SET binlog_name='mysql-bin|000001.004451', binlog_pos = 4 WHERE id='replica-1';


Note: you can find the current values by logging in to the dm_meta database and selecting from the corresponding task's syncer checkpoint table.
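For example (the table name follows the {task-name}_syncer_checkpoint convention used above):

SELECT id, binlog_name, binlog_pos FROM dm_meta.mba_app_repl_online1_syncer_checkpoint;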


update mba_app_repl_online1_syncer_checkpoint set binlog_name='4723-binlog|000001.002270', binlog_pos=978592437;


4. In the task configuration, set safe-mode: true in the syncers section so that execution is re-entrant.
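A sketch of the relevant fragment of the task configuration file; the group name "global" and the source-id are placeholders, and the group is the one your mysql-instances entry references via syncer-config-name. Keep the syncer options you already have and only add safe-mode:

syncers:
  global:
    safe-mode: true

mysql-instances:
  - source-id: "replica-1"          # your actual source-id
    syncer-config-name: "global"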


5. Start the replication task again with start-task.


Watch the task with query-status; once the relay log file that caused the error has been fully replicated, restore safe-mode to its original value and restart the task.

