Best Practice AWS Architecture for Magento
- Published 6th Nov 2016
- Last edited 10th Jun 2017
Recently I had the opportunity of migrating a world-renowned Australian women’s fashion brand to Amazon Web Services (AWS). After a bit of trial and error, I settled on the following setup, which I consider the best-practice setup on AWS for Magento 1 and 2:
- 3 x AWS EC2 instances running Ubuntu 16.04 across different availability zones for the front end
- 1 x EC2 with Elastic IP acting as the admin instance
- Auto-scaling with scale up action set for above 50% CPU usage, scale down below 30%
- Media directory mounted via AWS EFS
- Amazon Aurora for the database
- Elastic Load Balancer
- CloudFlare handling DNS and extra caching
This setup runs super smoothly. It’s a Magento 1 Enterprise site – the previous hosting had Varnish on it, but this runs better even without Varnish. I’ve set up sessions to be stored in the database.
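For reference, database-backed sessions in Magento 1 are a one-line change in `app/etc/local.xml` (Magento 2 keeps the equivalent setting in `app/etc/env.php`). A minimal sketch:

```xml
<!-- app/etc/local.xml (Magento 1), inside the <global> node -->
<session_save><![CDATA[db]]></session_save>
```

In Magento 2 the same effect comes from `'session' => ['save' => 'db']` in `env.php`, though Redis is the more common choice there.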
AWS EFS really works a treat for the media directory. I don’t think you need to bother any more with extensions that offload media to an S3 bucket, or with third-party FUSE solutions like s3fs.
Mounting the EFS share via NFS is super simple. All you need to do is add something like the following to the root crontab:
@reboot cd /home/ubuntu/magento2 && mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).fs-123456789.efs.us-west-2.amazonaws.com:/ media
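An alternative to the `@reboot` cron entry is an `/etc/fstab` line, which survives manual `mount -a` runs as well. A sketch assuming the same (placeholder) filesystem ID, region, and paths as above:

```shell
# /etc/fstab entry -- _netdev delays the mount until networking is up.
# Note: fstab can't resolve the availability zone via curl the way the cron
# entry does; newer EFS DNS names of the form fs-<id>.efs.<region>.amazonaws.com
# may resolve to the mount target in the instance's own AZ automatically.
fs-123456789.efs.us-west-2.amazonaws.com:/  /home/ubuntu/magento2/media  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev  0  0
```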
Once you have your EC2 instance set up the way you want, with the SSL certs etc., create the AMI and set up your auto-scaling. I love how it scales up and down seamlessly every day during peak and off-peak times, which tend to follow a similar daily pattern.
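The scaling policies can be sketched with the AWS CLI. The group name, alarm name, and policy ARN below are placeholders; the thresholds match the setup above (scale up above 50% CPU, with a mirror-image alarm at 30% for scaling down):

```shell
# Simple-scaling policy: add one instance when triggered, 5-minute cooldown.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name magento-frontend \
  --policy-name scale-up \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1 \
  --cooldown 300

# CloudWatch alarm that fires the policy when average CPU > 50% for 10 minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name magento-cpu-high \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=magento-frontend \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 50 --comparison-operator GreaterThanThreshold \
  --alarm-actions <scale-up-policy-arn>
```

The scale-down side is the same pair with `--scaling-adjustment -1`, a threshold of 30, and `LessThanThreshold`.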
Deployment happens via Git automatically for any commits pushed to the master branch. After a few attempts with CodeDeploy, I figured all I needed was a cronjob running every 2 minutes doing a git pull, followed by a post-merge Git hook that clears the cache and full-page cache on each Magento instance.
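The cron-plus-hook approach boils down to two small fragments (paths, branch names, and cache commands are illustrative; the Magento 2 CLI is shown, Magento 1 would clear `var/cache` and `var/full_page_cache` instead):

```shell
# 1) Crontab entry on each instance: pull master every 2 minutes.
#    --ff-only stops a locally diverged checkout from silently merging.
*/2 * * * * cd /home/ubuntu/magento2 && git pull --ff-only origin master

# 2) .git/hooks/post-merge (chmod +x) -- Git runs this only when the
#    pull actually brought in changes, so caches aren't flushed needlessly.
#!/bin/sh
cd "$(git rev-parse --show-toplevel)" || exit 1
php bin/magento cache:flush
```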
Loader.io has once again come in handy for load testing. I haven’t tried hitting the servers with sudden DDoS-like traffic, but under realistic heavy-traffic conditions the setup held up without breaking a sweat. I’ve configured it to scale between 3 and 12 instances (m3.large), but it hasn’t needed more than 5 yet. It would be interesting to really push this test, though unfortunately the system is now live, and it’ll be harder to run a test like that unless there is additional funding for it.
At the end of the day, the client is happy, and I’m happy that the site went live with minimal interruptions. It’s now a truly fault-tolerant system with much better security and theoretically near-unlimited scalability. The client no longer has to deal with the frequent server outages of the old hosting.
By the way, did you hear that most Shopify stores were down during the massive worldwide DDoS attack that took out a major DNS provider? All my clients on AWS were happily still selling. Not that AWS has never gone down, but could this be the ultimate argument against a SaaS eCommerce model?