Background · Analysis · Testing · Implementation · After index optimization · Optimizing the large-table delete into small-batch deletes · Summary

I had just joined a new company when I ran into this!
Background
The XX instance (one primary, one replica) was triggering an SLA alert in xxx every day in the early morning hours. The alert means there is a certain amount of replication lag: if a primary/replica switchover happened at that moment, it would take a long time to complete, because the replica must first catch up on the lag to guarantee that primary and replica data are consistent. The XX instance also had the highest number of slow queries (any SQL statement running longer than 1s is logged), and the XX application runs a job every night that deletes data older than one month.
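For context, the lag behind this kind of SLA alert can be observed directly on the replica with a standard command (the xxx alerting system itself is not shown in this article); a minimal check:

show slave status\G
-- the Seconds_Behind_Master field reports how far, in seconds, the replica is behind the primary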
Analysis

Use the pt-query-digest tool to analyze the last week of mysql-slow.log:
pt-query-digest --since=148h mysql-slow.log | less
Part 1 of the results: over the last week, the recorded slow queries spent a total of 25403s executing; the slowest SQL took 266s, the average slow SQL took 5s, and the average number of rows scanned was 17.66 million.
Part 2 of the results: select arrival_record had by far the most slow-query entries, more than 40,000, with an average response time of 4s; delete arrival_record was recorded 6 times, with an average response time of 258s.
The select xxx_record statements

The slow select arrival_record statements all look like the following; the where clause always uses the same columns, only the parameter values differ:

select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G

The select arrival_record statements scanned at most 56 million rows in MySQL and 1.72 million rows on average; the inference is that the long execution times are caused by the large number of rows scanned.
Check the execution plan:
explain select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: arrival_record
   partitions: NULL
         type: ref
possible_keys: IXFK_arrival_record
          key: IXFK_arrival_record
      key_len: 8
          ref: const
         rows: 32261320
     filtered: 3.70
        Extra: Using index condition; Using where
1 row in set, 1 warning (0.00 sec)
The index IXFK_arrival_record is used, but the estimated number of rows to scan is still huge, more than 30 million (the key_len of 8 suggests only the leading product_id column of the index is actually being matched).
show index from arrival_record;
+----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table          | Non_unique | Key_name            | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| arrival_record |          0 | PRIMARY             |            1 | id           | A         |   107990720 |     NULL | NULL   |      | BTREE      |         |               |
| arrival_record |          1 | IXFK_arrival_record |            1 | product_id   | A         |        1344 |     NULL | NULL   |      | BTREE      |         |               |
| arrival_record |          1 | IXFK_arrival_record |            2 | station_no   | A         |       22161 |     NULL | NULL   | YES  | BTREE      |         |               |
| arrival_record |          1 | IXFK_arrival_record |            3 | sequence     | A         |    77233384 |     NULL | NULL   |      | BTREE      |         |               |
| arrival_record |          1 | IXFK_arrival_record |            4 | receive_time | A         |    65854652 |     NULL | NULL   | YES  | BTREE      |         |               |
| arrival_record |          1 | IXFK_arrival_record |            5 | arrival_time | A         |    73861904 |     NULL | NULL   | YES  | BTREE      |         |               |
+----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

show create table arrival_record;
..........
  `arrival_spend_ms` bigint(20) DEFAULT NULL,
  `total_spend_ms` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `IXFK_arrival_record` (`product_id`,`station_no`,`sequence`,`receive_time`,`arrival_time`) USING BTREE,
  CONSTRAINT `FK_arrival_record_product` FOREIGN KEY (`product_id`) REFERENCES `product` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=614538979 DEFAULT CHARSET=utf8 COLLATE=utf8_bin |

The table has over 100 million rows and only this one composite index. The product_id column has very low cardinality, so its selectivity is poor.

The incoming filter where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0 contains no station_no, so it cannot use the product_id, station_no, sequence, receive_time columns of the composite index IXFK_arrival_record together. By the leftmost-prefix rule, select arrival_record uses only the first column of the composite index, product_id, and since that column's selectivity is very poor, many rows are scanned and execution takes a long time.

receive_time has high cardinality and good selectivity; build a standalone index on that column and the select arrival_record SQL will use it.
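The cardinality figures above come from sampled index statistics; a direct way to sanity-check the selectivity argument is to compute it (a sketch; this scans the whole table, so on 100M+ rows run it only against the test copy):

select count(distinct product_id) / count(*)   as product_id_selectivity,
       count(distinct receive_time) / count(*) as receive_time_selectivity
from arrival_record;
-- a value near 0 means poor selectivity (product_id: only ~1344 distinct values);
-- a value near 1 means good selectivity (receive_time)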
At this point we know the select arrival_record statements recorded in the slow log filter on product_id, receive_time, and receive_spend_ms. But is the table also being queried through other columns? Time for the trusty tcpdump to make its entrance.
Capture the select statements hitting this table with tcpdump for a while:
tcpdump -i bond0 -s 0 -l -w - dst port 3316 | strings | grep select | egrep -i 'arrival_record' > /tmp/select_arri.log
Extract the where condition that follows from in each select statement:
IFS_OLD=$IFS
IFS=$'\n'
for i in `cat /tmp/select_arri.log`; do echo ${i#*from}; done | less
IFS=$IFS_OLD

 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=17 and arrivalrec0_.station_no='56742'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S7100'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4631'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S9466'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4205'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4105'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4506'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4617'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S8356'
 arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S8356'

The where conditions of the selects against this table contain the product_id, station_no, and sequence columns, which can use the first three columns of the composite index IXFK_arrival_record.
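To double-check that conclusion, one of the captured filters can be fed to explain (a sketch built from the first captured row above, run against the old index layout):

explain select * from arrival_record arrivalrec0_
where arrivalrec0_.product_id=17
  and arrivalrec0_.station_no='56742'
  and arrivalrec0_.sequence='2019-03-27 08:40';
-- equality on product_id, station_no and sequence matches the leading
-- (product_id, station_no, sequence) prefix of IXFK_arrival_record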
To sum up, the optimization is: drop the composite index IXFK_arrival_record, create a new composite index idx_sequence_station_no_product_id, and create a standalone index indx_receive_time.
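Expressed as DDL, the change looks like the following sketch (the index names and column order here follow the implementation script shown later in this article; the foreign key has to be dropped before IXFK_arrival_record can go, as explained in the Testing section):

alter table arrival_record drop foreign key FK_arrival_record_product;
alter table arrival_record
  drop index IXFK_arrival_record,
  add index idx_product_id_sequence_station_no(product_id, sequence, station_no),
  add index idx_receive_time(receive_time);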
The delete xxx_record statement

[Figure: slow-log statistics for delete arrival_record]

This delete scans 110 million rows on average and takes 262s on average to execute. The delete statement looks like this; each recorded slow query has a different parameter value:
delete from arrival_record where receive_time < STR_TO_DATE('2019-02-23', '%Y-%m-%d')\G
Its execution plan:
explain select * from arrival_record where receive_time < STR_TO_DATE('2019-02-23', '%Y-%m-%d')\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: arrival_record
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 109501508
     filtered: 33.33
        Extra: Using where
1 row in set, 1 warning (0.00 sec)

The delete statement uses no index (there is no suitable index available), so it falls back to a full table scan, hence the long execution time. The fix is the same: build a standalone index indx_receive_time(receive_time).

Testing
Copy the arrival_record table to a test instance to rehearse the index drop and rebuild. arrival_record on the XX instance:
du -sh /datas/mysql/data/3316/cq_new_cimiss/arrival_record*
12K /datas/mysql/data/3316/cq_new_cimiss/arrival_record.frm
48G /datas/mysql/data/3316/cq_new_cimiss/arrival_record.ibd

select count(*) from cq_new_cimiss.arrival_record;
+-----------+
| count(*)  |
+-----------+
| 112294946 |
+-----------+

Over 100 million rows.

SELECT table_name,
       CONCAT(FORMAT(SUM(data_length) / 1024 / 1024, 2), 'M') AS dbdata_size,
       CONCAT(FORMAT(SUM(index_length) / 1024 / 1024, 2), 'M') AS dbindex_size,
       CONCAT(FORMAT(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2), 'G') AS `table_size(G)`,
       AVG_ROW_LENGTH, table_rows, update_time
FROM information_schema.tables
WHERE table_schema = 'cq_new_cimiss' AND table_name = 'arrival_record';

+----------------+-------------+--------------+---------------+----------------+------------+---------------------+
| table_name     | dbdata_size | dbindex_size | table_size(G) | AVG_ROW_LENGTH | table_rows | update_time         |
+----------------+-------------+--------------+---------------+----------------+------------+---------------------+
| arrival_record | 18,268.02M  | 13,868.05M   | 31.38G        |            175 |  109155053 | 2019-03-26 12:40:17 |
+----------------+-------------+--------------+---------------+----------------+------------+---------------------+
The table occupies 48G on disk while MySQL reports it as 31G, so there is roughly 17G of fragmentation, mostly left behind by delete operations (the rows were deleted but the space was never reclaimed). Back up and restore the table to a new instance, then drop the old composite index and add the new indexes there for testing.
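As an aside, fragmentation like that can be reclaimed by rebuilding the table; a sketch of the standard options (the article does not do this on the production table, and on a table this size the rebuild is itself a long, heavy operation):

alter table cq_new_cimiss.arrival_record engine=InnoDB;
-- or equivalently for InnoDB:
optimize table cq_new_cimiss.arrival_record;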
Parallel compressed backup with mydumper:
user=root
passwd=xxxx
socket=/datas/mysql/data/3316/mysqld.sock
db=cq_new_cimiss
table_name=arrival_record
backupdir=/datas/dump_$table_name
mkdir -p $backupdir

nohup echo `date +%T` && mydumper -u $user -p $passwd -S $socket -B $db -c -T $table_name -o $backupdir -t 32 -r 2000000 && echo `date +%T` &
The parallel compressed backup took 52s and occupies only 1.2G, versus the 48G the table takes on disk: mydumper's parallel compressed backup achieves a remarkably high compression ratio.
Started dump at: 2019-03-26 12:46:04
........
Finished dump at: 2019-03-26 12:46:56

du -sh /datas/dump_arrival_record/
1.2G /datas/dump_arrival_record/
Copy the dump data to the test node:
scp -rp /datas/dump_arrival_record root@10.230.124.19:/datas
Import the data with multiple threads:
time myloader -u root -S /datas/mysql/data/3308/mysqld.sock -P 3308 -p root -B test -d /datas/dump_arrival_record -t 32
real    126m42.885s
user    1m4.543s
sys     0m4.267s
Disk usage of the table after the logical import:
du -h -d 1 /datas/mysql/data/3308/test/arrival_record.*
12K /datas/mysql/data/3308/test/arrival_record.frm
30G /datas/mysql/data/3308/test/arrival_record.ibd

No fragmentation this time; the file size matches the table size reported by MySQL.

cp -rp /datas/mysql/data/3308 /datas
Use online DDL and the pt-osc tool respectively to time the index drop and rebuild. Drop the foreign key first: the composite index cannot be dropped while the foreign key exists, because the foreign-key column is the first column of that composite index.
nohup bash /tmp/ddl_index.sh &
2019-04-04-10:41:39 begin stop mysqld_3308
2019-04-04-10:41:41 begin rm -rf datadir and cp -rp datadir_bak
2019-04-04-10:46:53 start mysqld_3308
2019-04-04-10:46:59 online ddl begin
2019-04-04-11:20:34 onlie ddl stop
2019-04-04-11:20:34 begin stop mysqld_3308
2019-04-04-11:20:36 begin rm -rf datadir and cp -rp datadir_bak
2019-04-04-11:22:48 start mysqld_3308
2019-04-04-11:22:53 pt-osc begin
2019-04-04-12:19:15 pt-osc stop

Online DDL took 34 minutes and pt-osc took 57 minutes: online DDL needs about half the time pt-osc does.
These timings serve as the reference for doing the DDL.

Implementation
Because this is a one-primary-one-replica instance pair and the application connects through the VIP, the index drop and rebuild is done with online DDL. With replication stopped, run the DDL on the replica first (without writing binlog), perform a primary/replica switchover, then run it on the newly demoted replica (also without writing binlog).
function red_echo () {
    local what="$*"
    echo -e "$(date +%F-%T) ${what}"
}

function check_las_comm(){
    if [ "$1" != "0" ];then
        red_echo "$2"
        echo "exit 1"
        exit 1
    fi
}

red_echo "stop slave"
mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "stop slave"
check_las_comm "$?" "stop slave failed"

red_echo "online ddl begin"
mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "set sql_log_bin=0;select now() as ddl_start;ALTER TABLE $db_.\`${table_name}\` DROP FOREIGN KEY FK_arrival_record_product,drop index IXFK_arrival_record,add index idx_product_id_sequence_station_no(product_id,sequence,station_no),add index idx_receive_time(receive_time);select now() as ddl_stop" >> ${log_file} 2>&1
red_echo "onlie ddl stop"

red_echo "add foreign key"
mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "set sql_log_bin=0;ALTER TABLE $db_.${table_name} ADD CONSTRAINT _FK_${table_name}_product FOREIGN KEY (product_id) REFERENCES cq_new_cimiss.product (id) ON DELETE NO ACTION ON UPDATE NO ACTION;" >> ${log_file} 2>&1
check_las_comm "$?" "add foreign key error"
red_echo "add foreign key stop"

red_echo "start slave"
mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e "start slave"
check_las_comm "$?" "start slave failed"
Execution time:
2019-04-08-11:17:36 stop slave
mysql: [Warning] Using a password on the command line interface can be insecure.
ddl_start
2019-04-08 11:17:36
ddl_stop
2019-04-08 11:45:13
2019-04-08-11:45:13 onlie ddl stop
2019-04-08-11:45:13 add foreign key
mysql: [Warning] Using a password on the command line interface can be insecure.
2019-04-08-12:33:48 add foreign key stop
2019-04-08-12:33:48 start slave

The online DDL itself took about 28 minutes, and re-adding the foreign key took another 48 minutes.
Check the execution plans of the delete and select statements again:
explain select count(*) from arrival_record where receive_time < STR_TO_DATE('2019-03-10', '%Y-%m-%d')\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: arrival_record
   partitions: NULL
         type: range
possible_keys: idx_receive_time
          key: idx_receive_time
      key_len: 6
          ref: NULL
         rows: 7540948
     filtered: 100.00
        Extra: Using where; Using index

explain select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: arrival_record
   partitions: NULL
         type: range
possible_keys: idx_product_id_sequence_station_no,idx_receive_time
          key: idx_receive_time
      key_len: 6
          ref: NULL
         rows: 291448
     filtered: 16.66
        Extra: Using index condition; Using where

Both now use the idx_receive_time index, and the number of rows scanned has dropped dramatically.
After index optimization
The delete still took 77s:
delete from arrival_record where receive_time < STR_TO_DATE('2019-03-10', '%Y-%m-%d')\G

[Figure: slow-log entry for the delete]

The delete uses the receive_time index yet still takes 77s to remove more than 3 million rows.

Optimizing the large-table delete into small-batch deletes
The application side has been optimized to delete 10 minutes of data at a time; each execution now takes about 1s, and the SLA replication-lag alert no longer shows up in xxx.

Another approach is to delete 20,000 rows at a time, walking the table in primary-key order:
# Get the maximum primary-key id satisfying the time condition, then scan in
# primary-key order and delete the data in small batches.
# First run the following once:
SELECT MAX(id) INTO @need_delete_max_id FROM arrival_record WHERE receive_time < '2019-03-01';
DELETE FROM arrival_record WHERE id < @need_delete_max_id LIMIT 20000;
select ROW_COUNT();  # returns 20000

# Each small-batch delete is followed by ROW_COUNT(), which returns the number of rows deleted.
# The program checks ROW_COUNT(): if it is non-zero, it runs the loop below again;
# if it is 0, all matching rows are gone and the loop exits. The delete is then complete.
DELETE FROM arrival_record WHERE id < @need_delete_max_id LIMIT 20000;
select ROW_COUNT();
# the program sleeps 0.5s between iterations
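The same loop can also live entirely inside MySQL as a stored procedure; a minimal sketch (the procedure name is made up here, and the article itself drove the loop from an external program):

DELIMITER //
CREATE PROCEDURE batch_delete_arrival_record()
BEGIN
  DECLARE max_id BIGINT;
  DECLARE affected INT DEFAULT 1;
  -- upper bound of the rows to purge, fetched once
  SELECT MAX(id) INTO max_id FROM arrival_record WHERE receive_time < '2019-03-01';
  WHILE affected > 0 DO
    DELETE FROM arrival_record WHERE id < max_id LIMIT 20000;
    SET affected = ROW_COUNT();  -- rows deleted by the last batch
    DO SLEEP(0.5);               -- throttle to keep replication lag down
  END WHILE;
END //
DELIMITER ;

CALL batch_delete_arrival_record();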
Summary

When a table holds a very large amount of data, pay attention not only to the response time of queries against it but also to its maintenance cost: DDL changes take a long time, and deleting historical data is expensive.

When running DDL against a big table, consider the table's actual situation, such as how concurrently it is accessed and whether it has foreign keys, to choose an appropriate change method.

When deleting a large volume of data from a big table, delete in small batches to reduce the pressure on the primary instance and the replication lag.