Moving to the Cloud, Part 1
February 18, 2015
Should your business move to the cloud? If you’re interested in this question — and understand its implications — read this article.
But if this isn’t your area of expertise, pass the article along to your IT person. Here at Paulsen, we can help advise you on digital issues so you don’t need to understand it all yourself.
This two-part article evaluates the benefits of using cloud-based services to meet your business' digital needs.
In Part 1, we explore what a cloud-based system could look like as a hosting infrastructure. In Part 2, we'll look at case studies that may spark ideas about how you can leverage cloud-based services.
Cloud-based services are becoming a natural part of business infrastructure and operation. As businesses continue to refine their best practices, it’s smart to remember the adage that any tool is only as good as the process that supports it, and that includes your digital infrastructure. When was the last time you evaluated your hosting platform? Are you using the technology that can pave the way for success in the next three, five or 10 years?
While you may have a great relationship with a local hosting provider (and there are many instances where this works well), we have found that the benefits of moving a hosting infrastructure to the cloud can be tremendous. These benefits include cutting code deployments to mere minutes (five or fewer in many cases), load balancing traffic and auto-scaling servers to match demand, completing server and website upgrades in 10 minutes or less, and creating secure backup solutions.
One cloud-hosting provider we have found to be reliable is Amazon's AWS platform, especially if you run a Unix/Linux-based architecture. Microsoft's Azure platform is an equally good fit for Windows hosting, and Google is now getting into the cloud-hosting game as well.
The first major benefit is AWS's automated deployment of your websites. You can store your code in a repository service and configure your infrastructure to pull from that repository on each deployment. Instead of manually logging into a server and pulling down the latest code build, the pull happens automatically when you deploy. This capability lays the foundation for the second major benefit: creating a distributed server environment.
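To make the repository-driven flow concrete, here is a minimal sketch of what a deploy hook might do. The repository URL, release directory and web root are all placeholders, and a real setup would use AWS's deployment tooling rather than a hand-rolled script.

```python
import subprocess

def build_deploy_commands(repo_url, release_dir):
    """Return the shell commands a deploy hook would run to pull the latest
    build from the code repository. (A sketch; all paths are placeholders.)"""
    return [
        ["git", "clone", "--depth", "1", repo_url, release_dir],
        # Point the web root at the new release in one step.
        ["ln", "-sfn", release_dir, "/var/www/current"],
    ]

def deploy(repo_url, release_dir, runner=subprocess.check_call):
    # The runner is injectable so the flow can be exercised without git.
    for cmd in build_deploy_commands(repo_url, release_dir):
        runner(cmd)
```

The point is that no human logs into the server: the same two steps run identically on every box that comes online.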
Moving from a partially distributed, non-virtualized server environment to one distributed at every major component (or server) in a virtualized manner may take some initial legwork. However, once you establish a process, you'll be able to either use a service Amazon provides called OpsWorks or deploy your own automation software, such as Chef or Puppet. Amazon describes OpsWorks as "… an application management service that makes it easy to deploy and operate applications of all shapes and sizes," a description that applies equally well to Chef and Puppet.
These services allow you to write a list of instructions for each component or server and then create a new website, or deploy an existing one, out to your environment. Combining the first and second steps is where the third primary benefit really begins to shine: the auto-scaling of your servers.
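The "list of instructions per component" idea can be illustrated with a toy model. This is not real OpsWorks or Chef syntax (those use their own recipe formats); the component names and steps are invented purely to show the shape of the idea.

```python
# A toy model of per-component instruction lists (not real OpsWorks or
# Chef syntax): each component maps to the ordered steps needed to build it.
COMPONENT_INSTRUCTIONS = {
    "web": ["install nginx", "install app runtime", "pull code from repository"],
    "worker": ["install app runtime", "pull code from repository", "start queue worker"],
}

def build_component(name, execute=print):
    """'Build' a component by running its instruction list in order.
    Returns the number of steps executed."""
    steps = COMPONENT_INSTRUCTIONS[name]
    for step in steps:
        execute(step)
    return len(steps)
```

Because every server of a given type is built from the same instruction list, the tenth web server comes out identical to the first.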
We recommend that you set up a load balancer, such as Amazon’s Elastic Load Balancer (ELB), that detects the traffic load of all incoming website visitors (or requests) and then directs traffic accordingly.
One additional benefit of ELB is that if any of your components (or servers) becomes overloaded, it can call your OpsWorks service for the instructions to build a specific component (or server), as well as the code to load onto it. Within 15-20 minutes, your environment can have an additional server up and running in a totally seamless process, without end users ever knowing they're now hitting your website or app on a different server.
This not only comes in handy when there's an increase in visitor traffic, but also allows you to respond accordingly if a hacker takes a crack at bringing down your server environment. You can have your environment set up so that it boots up a new server to handle the additional traffic load until the hacker either stops the attack or you intervene and block it.
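The two halves of this behavior, spreading requests across servers and deciding when to add one, can be sketched in a few lines. The 75 percent CPU threshold is an invented example; in practice these policies are configured in ELB and AWS auto-scaling, not hand-rolled.

```python
def should_scale_out(cpu_loads, threshold=75.0):
    """Scale out when the fleet's average CPU crosses a threshold.
    (Metric and threshold are illustrative; real AWS auto-scaling
    policies are configured in the platform, not coded by hand.)"""
    return sum(cpu_loads) / len(cpu_loads) > threshold

def route_round_robin(servers):
    """Endlessly cycle through servers, one request at a time,
    the simplest form of load balancing."""
    i = 0
    while True:
        yield servers[i % len(servers)]
        i += 1
```

When a new server comes online, it simply joins the rotation; visitors never notice which box answered them.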
This arrangement helps with auto-scaling; system and server upgrades; security and hot fixes; website and app testing and upgrades; environment testing and duplication; failure and redundancy planning; and much more.
However, in order to use these benefits without impacting your IT budget, you may have to consider moving away from a heavily IT-friendly server architecture to a DevOps-friendly one. With this new infrastructure, you may end up using five to six different services and platforms from Amazon, or any other cloud provider, for that matter.
It used to be that having one server run a database environment and another run your web server was the best practice for any web-based architecture. Now, using a distributed environment allows you to adapt and adjust as your needs change and business grows.
In Amazon’s AWS platform, we recommend you use the following services:
The Elastic Load Balancer is your first point of entry for any of your domain names, such as http://paulsen.ag/. Combine it with a micro EC2 instance that redirects requests for your bare (apex) domain to your www subdomain (we refer to it as the Apex Domain Controller). It simply redirects http://paulsen.ag/ to http://www.paulsen.ag/. We recommend redirecting all your bare (non-www) domains to your www domain for a number of reasons. This article does a great job outlining the details.
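The entire job of that micro redirect instance fits in a few lines. Here is a minimal WSGI sketch of what it might run; the domain names are just examples, and a production setup would more likely use the web server's own redirect rules.

```python
def apex_redirect_app(environ, start_response):
    """Minimal WSGI app: permanently redirect bare-domain requests to the
    www host, preserving the requested path. (A sketch of what an 'Apex
    Domain Controller' instance might run; domains are examples.)"""
    host = environ.get("HTTP_HOST", "")
    path = environ.get("PATH_INFO", "/")
    location = "http://www." + host + path
    start_response("301 Moved Permanently", [("Location", location)])
    return [b""]
```

A request for http://paulsen.ag/about would be answered with a 301 pointing at http://www.paulsen.ag/about.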
Once your web server and code deployment are mapped out with one of the IT automation services outlined above, you’ll need a database service such as RDS to store your data. Amazon does a great job outlining the purpose of their RDS infrastructure.
"Amazon Relational Database Service (Amazon RDS) is a web service that makes it easy to set up, operate and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database management tasks, freeing you up to focus on your applications and business."
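One practical consequence of pulling the database out of the web server: every server, including ones auto-scaling just spun up, finds the shared RDS endpoint through configuration rather than having a database baked into its image. A small sketch (the environment-variable names are illustrative, not an AWS convention):

```python
import os

def database_url(env=os.environ):
    """Assemble a connection URL from environment variables so every web
    server, including freshly auto-scaled ones, points at the same RDS
    instance. (Variable names are illustrative.)"""
    return "postgresql://{user}:{pw}@{host}:{port}/{name}".format(
        user=env["DB_USER"], pw=env["DB_PASSWORD"],
        host=env["DB_HOST"], port=env.get("DB_PORT", "5432"),
        name=env["DB_NAME"],
    )
```

Changing the database (say, resizing it or failing over) then means updating one configuration value, not rebuilding servers.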
Along with abstracting the database out, you'll want to do the same for your user-uploaded assets, such as images, video files, audio files, documents and other types of electronic media. This is where integrating with S3 comes in handy. Any time a user uploads a document to your site, it's stored in the S3 service, reducing the load on your web servers.
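A small but important detail in that flow is how uploads are named in S3 so they never collide and never carry a client-supplied path. Here is one possible naming scheme; it is an assumption for illustration (the actual upload would go through an S3 client library), not a prescribed convention.

```python
import posixpath
import uuid

def s3_object_key(filename, prefix="uploads"):
    """Build a collision-resistant S3 key for a user upload. (The naming
    scheme is illustrative; the upload itself would be performed with an
    S3 client library.)"""
    safe_name = posixpath.basename(filename)  # strip any client-sent path
    return "{}/{}-{}".format(prefix, uuid.uuid4().hex, safe_name)
```

Two users uploading files both named report.pdf end up with distinct keys, and a filename like ../evil/report.pdf can't escape the uploads prefix.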
S3 and RDS not only reduce the amount of memory and processing power on any one particular server, but also keep your web server clutter-free. If ever you need to spin up another virtual server, you need only concern yourself with code and the configuration of the server, instead of also waiting for the database to download and the images to be copied.
There are many other services you can use to create an even more distributed environment, such as abstracting out your site caching or offloading your website search. In either case, Amazon’s AWS and Microsoft’s Azure platform are excellent options.
In Part 2 of this article we’ll take a look at how some well-known companies use the benefits of the cloud. We’ll also encourage you to think of ways to expand what you currently offer visitors. In the meantime, we’ll be here to help. Just email me: firstname.lastname@example.org