Working with large data sets in MySQL

What does working with large data sets in MySQL teach you? Of course you have to learn a lot about query optimization, the art of building summary tables, and the tricks of executing queries exactly the way you want. I already wrote about the development and configuration side of the problem, so I will not go into details again.
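To give a flavor of the summary-table idea, here is a minimal sketch, assuming a hypothetical page_views fact table: a nightly batch rolls it up into a small daily table so reporting queries never have to scan the raw data.

    -- Hypothetical example: roll up a large page_views table into a daily summary
    CREATE TABLE page_views_daily (
        view_date DATE   NOT NULL,
        page_id   INT    NOT NULL,
        views     BIGINT NOT NULL,
        PRIMARY KEY (view_date, page_id)
    ) ENGINE=InnoDB;

    -- Populate it incrementally (e.g. from a nightly batch job) instead of
    -- aggregating the raw table at report time
    INSERT INTO page_views_daily (view_date, page_id, views)
    SELECT DATE(viewed_at), page_id, COUNT(*)
    FROM page_views
    WHERE viewed_at >= CURDATE() - INTERVAL 1 DAY
      AND viewed_at <  CURDATE()
    GROUP BY DATE(viewed_at), page_id;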

Two big things you've got to learn when working with large data sets in MySQL are patience and careful planning. Both relate to a single property of large data sets: they can take a hell of a lot of time to deal with. This may sound obvious if you have some large data set experience, but it is not the case for many people – I constantly run into customers assuming it will be quick to rearrange their database or even restore it from backup.

You need patience simply because things are going to take a lot of time. Think about a 500GB table, for example – ALTER TABLE may take days or even weeks depending on your storage engine and set of indexes, and batch jobs can take a similar amount of time. Binary backup and restore will be faster, but they can still take hours, especially if the database is already loaded. So when operating databases of this size you need to be patient and learn to have a bunch of tasks running in the background while you're doing something else.
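As a small illustration (the table and index names are made up), a long ALTER can simply run in one session while you keep an eye on it from another:

    -- Session 1: start the long-running operation and let it run
    ALTER TABLE events ADD INDEX idx_user (user_id);

    -- Session 2: check on it periodically instead of waiting
    SHOW FULL PROCESSLIST;
    -- For MyISAM you may see states such as "copy to tmp table" or
    -- "Repair by sorting" while the new index is being built.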

You need careful planning because if you do not plan things properly you can easily get into trouble, and also because you often can't use simple "online" solutions and have to do more complicated things instead. You typically can't simply run ALTER TABLE because the table would stay locked for too long; you would need to go through the careful process of ALTERing the table on a slave and switching roles, or use some other technique. You can't run many simple reporting queries because with MyISAM they will lock tables for a very long time, and InnoDB can end up with too many old row versions to deal with, which can slow down some queries considerably. You also need to plan how you will handle crashed MyISAM tables after a power failure, as check and repair may take long hours (this is indeed one of the big reasons to use InnoDB even if you do not care about table locks or transactions).
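The slave-then-switch technique mentioned above looks roughly like this. It is only a sketch with made-up names; a real switchover involves more steps (stopping writes, checking replication lag, repointing the application):

    -- On the slave: run the slow ALTER there first, while the master keeps serving
    ALTER TABLE orders ADD COLUMN status TINYINT NOT NULL DEFAULT 0;

    -- Before switching roles, make sure the slave has caught up
    SHOW SLAVE STATUS\G    -- look at Seconds_Behind_Master

    -- Then promote the slave to master, point the application at it, and run
    -- the same ALTER on the old master (now a slave) at your leisure.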

Besides these various traps and gotchas, you simply need to plan carefully how you're going to alter your database, because it takes a lot of time and may require waiting for a maintenance window or bringing the site down. If you have a tiny 1GB table you can pretty much use a trial-and-error approach even in production – find some bad queries, fix them by adding indexes, and come back to fix some more. For large data sets this does not work, and you really need a playground with a smaller data set to experiment with different schema designs and index structures. This, however, creates a challenge: the results you got on the small data set may not apply to the large one, so you need to re-test your "final design" again with the full set.
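A minimal sketch of such a playground, assuming a hypothetical events table and an arbitrary 1% sample:

    -- Build a smaller copy to experiment on (names and sampling rate are made up)
    CREATE TABLE events_sample LIKE events;
    INSERT INTO events_sample SELECT * FROM events WHERE RAND() < 0.01;

    -- Try a candidate index and check the plan...
    ALTER TABLE events_sample ADD INDEX idx_user_created (user_id, created_at);
    EXPLAIN SELECT COUNT(*) FROM events_sample
    WHERE user_id = 42 AND created_at > '2007-01-01';

    -- ...but re-run EXPLAIN (and timings) against the full table before calling
    -- the design final: optimizer choices can change with data size.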

One thing I often find people get wrong is assuming data management operations will scale proportionally to the database size. Say it takes 30 minutes to alter a 10GB table, so it should take 5 hours to alter a 100GB one. It can be close to that if you're lucky, but it can be much, much slower if you're not. Many operations require a certain part of the table to fit in memory for decent performance – typically some portion of the index BTREE (even MyISAM, which builds "normal" indexes by sort, builds primary key and unique indexes using the key cache). If it does not fit, performance may drop by an order of magnitude.
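A rough way to sanity-check this, assuming a hypothetical bigdb schema, is to compare index sizes against the relevant memory settings:

    -- Per-table data and index sizes, largest indexes first
    SELECT table_name,
           data_length  / 1024 / 1024 / 1024 AS data_gb,
           index_length / 1024 / 1024 / 1024 AS index_gb
    FROM information_schema.tables
    WHERE table_schema = 'bigdb'
    ORDER BY index_length DESC;

    SHOW VARIABLES LIKE 'key_buffer_size';          -- MyISAM key cache
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- InnoDB buffer pool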

This is actually one of the reasons I try to keep data in smaller tables whenever possible. But this is something I’ve written about in another article.

Reference:

http://www.mysqlperformanceblog.com/2007/07/05/working-with-large-data-sets-in-mysql/
