In my last blog post I provided eight tips for surviving a live database migration to Amazon Relational Database Service (RDS). Today, I’m going to give you five ways (plus a bonus) to ensure your database migration is a failure. In other words, these are five common mistakes you should avoid if you don’t want to make your life miserable while doing a live database migration to Amazon RDS.
Mistake #1: Use STATEMENT replication instead of ROW
STATEMENT replication isn’t bad, per se. It’s proven with MySQL, and it works. I don’t use it, though, because not all DML replicates safely under STATEMENT; non-deterministic statements can land differently on the replica. I use ROW. Despite the extra overhead, it gives you A=B for everything. If you know your data well enough to be certain no unsafe statements are ever issued, then use STATEMENT. However, if you are even the slightest bit unsure, use ROW. Personally, I use ROW regardless.
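If you want to sanity check the source before you start, something like the sketch below does the trick. The hostname and admin user are placeholders, and keep in mind that on RDS itself you would flip binlog_format through the DB parameter group rather than SET GLOBAL:

```
# Check what the source is doing today, then switch it to ROW-based logging.
# Placeholder host/user. Existing sessions keep their old binlog_format;
# only new connections pick up the change.
mysql -h source-db.example.com -u admin -p -e "SHOW VARIABLES LIKE 'binlog_format';"
mysql -h source-db.example.com -u admin -p -e "SET GLOBAL binlog_format = 'ROW';"
```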
Mistake #2: Go crazy with writes
Masters are multithreaded, but a replica applies replicated events in a single thread. If you actively write on your master while migrating to Aurora, you’ll pile extra work onto that one thread. In my experience, I haven’t seen a single replication thread exceed 2,000 IOPS. If your master is writing at 4,000, your only choice is to pray for a drop in traffic so the replica can catch up.
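If you can’t avoid writing during the copy, at least keep an eye on how far behind the target is drifting. A rough sketch (the hostname is a placeholder, and I’m assuming credentials live in ~/.my.cnf so the loop doesn’t prompt every minute):

```
# Print replica lag once a minute so you know whether the target is
# catching up or falling further behind. Placeholder hostname.
while true; do
  mysql -h target-db.example.com -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master
  sleep 60
done
```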
Mistake #3: Migrate from a replica
Don’t do it! That replica is already chugging along trying to keep up with its master. The added overhead of pulling data off that replica to seed and replicate to your new environment will only increase its replica lag.
Mistake #4: Don’t do your homework
You’re going from 5.1 to 5.6. Did you check through the changelog to make sure there are no major changes to data structures that are going to break everything? Is your application interacting with your data tier via a deprecated method? Test the waters with a small dataset you can tinker with.
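Doing that homework can be as cheap as dumping a single schema into a scratch 5.6 instance and watching what breaks. A rough sketch, with the hostnames, user, and schema name all as placeholders:

```
# Dump one schema (including its routines and triggers) from the 5.1 source...
mysqldump -h old-51-host.example.com -u admin -p --single-transaction \
  --routines --triggers mydb > mydb_test.sql
# ...and load it into a throwaway 5.6 instance to shake out incompatibilities.
# Assumes an empty mydb schema already exists on the test host.
mysql -h test-56-host.example.com -u admin -p mydb < mydb_test.sql
```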
Mistake #5: Don’t plan for error handling
If you’re writing your own import procedures and you have a batch script or Python script doing the dump for you, be sure to include error handling. The worst thing in the world is doing a database migration and finding out it’s been erroring out while you were away. Always give yourself some kind of catch mechanism. If the script exits non-zero, use ssmtp (for example) to send yourself an email letting you know things got borked while you were asleep.
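Here’s roughly what I mean, as a sketch rather than a drop-in script. The host, file path, email address, and the $DUMP_PW variable are all placeholders, and it assumes ssmtp (or any other MTA) is already configured on the box:

```
#!/bin/bash
# Run the dump; if anything in the pipeline exits non-zero, email yourself.
set -o pipefail

if ! mysqldump -h source-db.example.com -u admin -p"$DUMP_PW" \
       --single-transaction --all-databases \
     | gzip > /backups/migration_dump.sql.gz; then
  printf 'Subject: migration dump FAILED on %s\n\nCheck the dump logs.\n' "$(hostname)" \
    | ssmtp you@example.com
  exit 1
fi
```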
Bonus: Assume that Aurora is good for everything
Aurora is ideal for many types of applications, but it’s not for everyone. In some cases, I’ve found that running MySQL on an EC2 r3.8xlarge outperforms the largest Aurora instance. And you can’t always take Aurora’s metrics at face value: the IOPS reflect the API through which all queries to the database are made, not the underlying file I/O. It’s also worth noting that you don’t get SUPER privileges on Aurora. For some people that can be a showstopper.
Aurora is perfect for an application like WordPress that’s already developed and ships with its own predefined database schema. If you have a very particular build or application, there may be situations where Aurora isn’t a good fit, especially anything with really high concurrent writes. Finally, take a close look to make sure your stored procedures, triggers, and routines will migrate properly.
Hopefully reading these mistakes will save you from making them yourself. If you’re interested in learning more about migrating to Amazon RDS, watch my recorded presentation, Living with Database Migrations. I provide more advice for managing a successful database migration to Amazon RDS and what to do once you get there.