How to Recover Data Mistakenly Written to a Slave Back to the Master
- 2024-02-26, Fujian
Background
In a GreatSQL master-slave replication setup, a misdirected write can occasionally land on the slave instead of the master, leaving the two servers inconsistent and disrupting synchronization. Can the data written to the slave be propagated back to the master?
Test environment
Replication topology:
greatsql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.137.179
                  Master_User: root
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 157
               Relay_Log_File: oracle_dts-relay-bin.000002
                Relay_Log_Pos: 367
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
Table data
Master
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
+--------+------------+----------+
5 rows in set (0.00 sec)

greatsql> insert into dept select 70,'IT','CTU';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

greatsql> commit;
Query OK, 0 rows affected (0.00 sec)
Slave
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
+--------+------------+----------+
6 rows in set (0.00 sec)
Data written on the master replicates to the slave as expected.
Writing data on the slave
greatsql> insert into dept select 80,'IT','SZ';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

greatsql> insert into dept select 90,'SALES','SZ';
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0
Slave data
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
|     80 | IT         | SZ       |
|     90 | SALES      | SZ       |
+--------+------------+----------+
8 rows in set (0.00 sec)
Master data
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
+--------+------------+----------+
6 rows in set (0.01 sec)
At this point the rows written on the slave (DEPTNO 80 and 90) do not appear on the master.
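Before recovering, it helps to pin down exactly which rows exist only on the slave. One way is to dump the key column from both servers and diff the lists. A minimal sketch with the two DEPTNO lists hard-coded for illustration (in practice each list would come from something like `mysql -N -e "select DEPTNO from scott.dept"` against the respective server):

```shell
# DEPTNO values from the master and the slave (hard-coded sample data)
printf '%s\n' 10 20 30 40 60 70 | sort > /tmp/master_deptno.txt
printf '%s\n' 10 20 30 40 60 70 80 90 | sort > /tmp/slave_deptno.txt

# comm -13 prints lines unique to the second file: rows only on the slave
comm -13 /tmp/master_deptno.txt /tmp/slave_deptno.txt
```

This prints 80 and 90, matching the two rows inserted on the slave above.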
Parse the slave's binary log
$ mysqlbinlog -vv --base64-output=decode-rows binlog.000002 > b002.sql
BEGIN
/*!*/;
# at 354
#240221 16:10:25 server id 18001  end_log_pos 416 CRC32 0xcc81584b      Table_map: `scott`.`dept` mapped to number 101
# has_generated_invisible_primary_key=0
# at 416
#240221 16:10:25 server id 18001  end_log_pos 462 CRC32 0x5149e38a      Write_rows: table id 101 flags: STMT_END_F
### INSERT INTO `scott`.`dept`
### SET
###   @1=80 /* INT meta=0 nullable=0 is_null=0 */
###   @2='IT' /* VARSTRING(56) meta=56 nullable=1 is_null=0 */
###   @3='SZ' /* VARSTRING(52) meta=52 nullable=1 is_null=0 */
# at 462
#240221 16:10:25 server id 18001  end_log_pos 493 CRC32 0xab795e4a      Xid = 34
The rows written on the slave landed in binlog.000002. Since every binlog event records the server id of the server where it originated, we can grep a binlog file for the slave's server id (18001 here) to check whether it contains events written directly on the slave.
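The grep check above can be run against the decoded dump. A minimal sketch using a two-line sample of the decoded output (the sample file path is made up; in practice you would grep the b002.sql produced earlier):

```shell
# Two sample event headers as produced by mysqlbinlog (abridged)
cat > /tmp/b002_sample.sql <<'EOF'
#240221 16:10:25 server id 18001  end_log_pos 416 CRC32 0xcc81584b      Table_map: `scott`.`dept` mapped to number 101
#240221 16:10:25 server id 18001  end_log_pos 462 CRC32 0x5149e38a      Write_rows: table id 101 flags: STMT_END_F
EOF

# Count events stamped with the slave's server id (18001)
grep -c 'server id 18001' /tmp/b002_sample.sql
```

A non-zero count means the file contains transactions that originated on the slave.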
Copy the slave's binlog to the master
$ scp binlog.000002 192.168.137.179:/tmp/
Warning: Permanently added '192.168.137.179' (ECDSA) to the list of known hosts.
root@192.168.137.179's password:
binlog.000002                                  100%  836     1.1MB/s   00:00
Apply the slave's binary log
Replay the slave's binlog on the master:
$ mysqlbinlog binlog.000002|mysql -uroot -p -h127.1 -P3308
While the master applies the slave's binlog, the slave's own binary log list remains unchanged:
greatsql> show binary logs;
+---------------+-----------+-----------+
| Log_name      | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000001 |       498 | No        |
| binlog.000002 |       836 | No        |
| binlog.000003 |       237 | No        |
+---------------+-----------+-----------+
3 rows in set (0.00 sec)
The replication link remains healthy:
greatsql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.137.179
                  Master_User: root
                  Master_Port: 3308
                Connect_Retry: 60
              Master_Log_File: binlog.000001
          Read_Master_Log_Pos: 1059
               Relay_Log_File: oracle_dts-relay-bin.000002
                Relay_Log_Pos: 1269
        Relay_Master_Log_File: binlog.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
Notice that although the master applied the binlog generated on the slave, the slave did not re-apply those events when they came back through replication. By default, the replication I/O (receiver) thread does not write binary log events to the relay log if they carry the replica's own server ID (an optimization that also saves disk space). Without that filtering, the slave would re-insert its own rows, hit primary key conflicts, and replication would break.
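The effect of that server-ID check can be illustrated with a toy filter. This is only a conceptual sketch of the skip rule, not how the receiver thread is actually implemented; the event lines and the second server id (17001) are made up:

```shell
LOCAL_SERVER_ID=18001   # the slave's own server_id

# Two fake event headers: one originated locally, one on the master
cat > /tmp/incoming_events.txt <<'EOF'
#240221 16:10:25 server id 18001  Write_rows: table dept
#240221 16:12:40 server id 17001  Write_rows: table emp
EOF

# Keep only events NOT stamped with our own server id,
# mirroring the default receiver-thread behavior described above
grep -v "server id ${LOCAL_SERVER_ID}" /tmp/incoming_events.txt
```

Only the event from server id 17001 survives the filter; the event the slave itself originated is discarded instead of being replayed.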
Check the master's data
greatsql> select * from dept;
+--------+------------+----------+
| DEPTNO | DNAME      | LOC      |
+--------+------------+----------+
|     10 | ACCOUNTING | NEW YORK |
|     20 | RESEARCH   | DALLAS   |
|     30 | SALES      | CHICAGO  |
|     40 | OPERATIONS | BOSTON   |
|     60 | it         | 成都     |
|     70 | IT         | CTU      |
|     80 | IT         | SZ       |
|     90 | SALES      | SZ       |
+--------+------------+----------+
8 rows in set (0.00 sec)
In subsequent tests, data written on the master continued to replicate to the slave normally.
Copyright notice: this is an original article by InfoQ author GreatSQL.
Original link: http://xie.infoq.cn/article/68700a5dcdec922633621335e. Please contact the author before reprinting.