Saturday, January 16, 2016

Amazon Web Services over a cup of coffee by Lou Person

The other day, with about an hour and a half to invest over a cup of coffee (during non-selling hours, of course!), I decided to noodle around with Amazon Web Services (AWS).  I consider myself an early adopter and evangelist of Cloud technology (having built, marketed and sold www.theflexoffice.com and www.brightstack.com).  I am also a "Microsoft guy", having aligned myself with Microsoft Azure because my company is a Microsoft Gold partner.  The Amazon brand never resonated with me as a Cloud Services provider; I had viewed them as a book retailer, or the company that rings your doorbell with your order almost (seemingly) minutes after you hit submit on one-click checkout.  In 1.5 hours and a cup of coffee, my entire perception changed.

Rewinding a little bit, I had drinks last week with our CEO and the CIO of one of our larger clients.  The CIO's organization is also one of AWS' largest clients; in fact, there is a case study about them on AWS' site.  His passion for AWS was contagious and really piqued my interest.  I am a Sales Professional, but I have a degree in Computer Science and was an IT Auditor, Consultant, Developer and Engineer earlier in my career.  In order for me to believe in something, I need to know firsthand how it works.  This approach helps me position technology better to my clients and understand practical use cases.  So I came up with a pretty common scenario on the way home from our meeting.

The scenario I came up with is a somewhat complex application, chosen to see what is involved in building it on AWS.  It was amazing what I was able to accomplish in 1.5 hours FOR FREE.  That's right, FOR FREE.  AWS has an EC2 (Elastic Compute Cloud) free tier that includes 750 service hours per month at no charge.  Granted, I added a service, RDS (Amazon Relational Database Service), which may incur charges, but whatever I wind up paying over 3 years will be far less than had I purchased the hardware, software, disk, virtualization layer, firewall, bandwidth and consulting to set things up.  It also only took, in case I forgot to mention, 1.5 hours to complete.  Purchasing and deploying in my own datacenter would have taken over 20 hours and spanned at least a week.

My test scenario consisted of a publicly accessible web server running an application that connects to a database server behind a firewall, reachable only by the web server over a single port, 1433.  The web server is only accessible to the world over port 80 and to administrators over RDP.

Literally, it only took 1.5 hours.  I'd even say less than that, but I'll stay conservative.

The first thing I did was create an AWS account.  This was pretty easy, as I already have Amazon Prime, which I was able to leverage for AWS.  I did have to link a credit card to keep on file; I didn't mind doing so because I don't expect any charges in the first year, and it does provide AWS with another layer of security.  Since I share my Prime account with my family, I'm going to change this eventually to avoid any accidents.  I looked at Identity and Access Management (IAM) to secure user access; I'll do that at another time.  I was happy to see that it included multi-factor authentication as an option.  (Note: After this article was originally published, I added IAM as described by clicking here.)

Next, I chose an EC2 plan that fell under the "Free Tier".  Still learning AWS, I perceive this to be analogous to the "Host Layer" in more traditional designs.

Once provisioned (it took less than 2 minutes), I was able to select a server image.  I think there were 22 Windows and Linux options to choose from, ranging from OS versions (2003, 2008, SUSE, Red Hat, Amazon Linux, etc.) to OS/application bundles (such as SQL Server).  I first chose an image with 2008 and SQL Server, but once I spun it up, I realized it was not included in the Free Tier, so I spun it down and spun up a 2008 server that was included in the Free Tier.  All this spinning and provisioning took less than 3 minutes.
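
For anyone who would rather script that step than click through the console, here is a rough sketch of the same launch using boto3, the AWS SDK for Python.  I did all of this in the console, so treat this as an illustration only; the AMI ID, key pair name and region are placeholders, not the values from my setup.

```python
# Sketch only: launching a free-tier Windows instance with boto3 instead of the console.
# The AMI ID, key pair name and region are placeholders -- look up a current
# free-tier-eligible Windows AMI for your region before trying this.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: free-tier-eligible Windows AMI
    InstanceType="t2.micro",           # free-tier instance size
    KeyName="my-keypair",              # placeholder: key pair used later for the admin password
    MinCount=1,
    MaxCount=1,
)

print("Launched", response["Instances"][0]["InstanceId"])
```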

I then spent about 10 minutes poking around the Console.  At first it was daunting, but I quickly realized it was laid out exactly the way I think and break down problems and tasks, which enabled me to learn it very quickly.  I made some mental notes on things I needed to research, and spent 5 minutes poking around the documentation to figure out how to do things that weren't obvious in the console.

My next task was to set up Security Groups.  These function almost the same way as rule-based firewalls, so it was easy for me.  I created one called WebserverSG.  I added the Windows (Web/Application) server to this group and created rules to allow all inbound traffic from anywhere to port 80.  I did this for RDP as well, knowing I'll need to go back later and lock this down further.  The AWS article "Scenario 3" laid out perfectly what I needed to do.  I tweaked it slightly, but it took me less than 5 minutes to set up.  The Console is just that easy.  Anticipating I may need a database server, I created a DatabaseSG and configured it per "Scenario 3".
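
If you prefer code to the console, the same WebserverSG / DatabaseSG setup can be sketched with boto3.  I built mine in the console following "Scenario 3"; the VPC ID below is a placeholder, and the wide-open RDP rule mirrors what I did temporarily and should be locked down later.

```python
# Sketch: the WebserverSG / DatabaseSG layout from "Scenario 3", scripted with boto3.
# The VPC ID is a placeholder; group names mirror the ones created in the console.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

web_sg = ec2.create_security_group(
    GroupName="WebserverSG",
    Description="Web/application server: HTTP from anywhere, RDP for admins",
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC ID
)["GroupId"]

db_sg = ec2.create_security_group(
    GroupName="DatabaseSG",
    Description="Database server: SQL Server traffic from the web tier only",
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC ID
)["GroupId"]

# Port 80 and RDP open to the world for now (to be locked down later, as noted above).
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# Port 1433 on the database tier, reachable only from members of WebserverSG.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 1433, "ToPort": 1433,
         "UserIdGroupPairs": [{"GroupId": web_sg}]},
    ],
)
```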

I went to Network Solutions, my registrar and public DNS provider, and in the advanced DNS settings, for the "www" host field on my domain, I added the IP address provided in the AWS Console.  (By the way, it turns out AWS offers registrar and public DNS services.  I will probably keep mine with Network Solutions for redundancy purposes.)  Knowing that it would take some time to propagate, I tested my new server by IP.  Ugh, it didn't serve up a page.  My first roadblock, but it lasted less than 30 seconds, because I quickly realized that AWS locks down the servers to such an extent that expected Windows components may not be installed.  In this case, IIS was not installed.

A quick scan of the AWS documentation instructed me to download the Microsoft Web Platform Installer.  This gave me everything I needed, and I could pick and choose from so many FREE components in the installer, including IIS, Drupal, .NET, SQL Server 2008 Express, Visual Studio, WordPress, Joomla, SugarCRM, frameworks, tools, databases, etc.  About 100 FREE packages!!  I installed the minimum of what I thought was required, but was judicious because I wasn't sure how much processing I was going to get in the "FREE" tier, and I wanted to be careful with the resources used by whatever was installed on the server until I could chart actual usage.  This process took less than 5 minutes, and once done, I tested again by IP and the default IIS page was served up!  By the way, I also installed SQL Server Management Studio anticipating needing it later; it was a simple check from within the installer, so I figured why not.
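
A quick, low-tech way to repeat that test from a script, if you like: fetch the server's public IP over port 80 and see whether the default IIS page comes back.  The IP address here is a placeholder for the one shown in the EC2 console.

```python
# Sketch: check that the new instance answers on port 80 (the default IIS page).
# The IP address is a placeholder for the public IP shown in the EC2 console.
import urllib.request

SERVER_IP = "203.0.113.10"   # placeholder public IP

with urllib.request.urlopen("http://" + SERVER_IP + "/", timeout=10) as resp:
    print(resp.status)        # expect 200 once IIS is installed
    print(resp.read(200))     # first bytes of the default IIS page
```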

I thought I hit my next roadblock when it came to the database.  As mentioned above, I was a bit concerned about installing Microsoft SQL Server on this server, as I wasn't yet sure how it would perform under stress.  Since I am on the "FREE" tier, I thought it best to distribute the database server.  In other words, installing the free version of SQL Server (up to 10GB) as part of the Microsoft Web Platform Installer wasn't an option, because I was being frugal with the resources in the free tier and had no previous performance benchmark I could trust.  I went back to the Web Services page in AWS and found the RDS section.  I simply clicked "Launch a database instance".  I was able to choose from the following database servers:
  • Amazon Aurora (not where Wayne and Garth live, but what appears to be an Amazon-branded database server)
  • MySQL (the perennial open-source database)
  • MariaDB
  • PostgreSQL
  • Oracle
  • Microsoft SQL Server (came up last, but the list was sorted alphabetically and it was technically listed as "SQL Server")
Being a "Microsoft Guy", I choose Microsoft SQL Server because I had the most familiarity with it (having worked with MySQL and Oracle as well in the past).  It took about 2 minutes to spin up the database server and I had a database server within 5 minutes of pursuing one.  I did make a mistake in this step which caused me some pain (see below).  I left the new database server in the default security group, I did not put it into security group I properly configured (pending testing) above.

In the back of my mind, I started thinking about how the web server would connect to this server, most likely via ODBC.  If I could connect to the database server using the SQL Server Management Studio installed earlier, I figured I would be in good shape.  With the coffee gone and the kids bouncing off the walls to go to our local diner, it did not work!!  I was starting to get perplexed.  I quickly pulled myself together and realized that the database server was not in the DatabaseSG previously configured, but rather in the default security group, which I was not using and had not configured to talk to the WebserverSG.  I made the change and tested again.  Still didn't work.  No coffee and hyper kids.  I re-read Scenario 3, and either through my own typo or a misunderstanding, I thought the instructions said to append the port number to the address of the server.  I removed the port number from the server name and was able to connect directly to the database server running on RDS through SQL Server Management Studio.
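
The same connection test can be done from code with pyodbc, which is handy once an application actually needs to talk to the database.  This is only a sketch: the RDS endpoint, database name, credentials and installed ODBC driver name are placeholders, and, per the lesson above, the endpoint goes in the SERVER field on its own, without the port appended to the name.

```python
# Sketch: connecting to the RDS SQL Server instance over ODBC with pyodbc
# (pip install pyodbc).  Endpoint, credentials and driver name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};"   # or whichever SQL Server ODBC driver is installed
    "SERVER=coffee-demo-db.abcdefghij.us-east-1.rds.amazonaws.com;"  # placeholder endpoint
    "DATABASE=master;"
    "UID=admin;PWD=change-me-please"   # placeholders
)

row = conn.cursor().execute("SELECT @@VERSION").fetchone()
print(row[0])   # prints the SQL Server version string if the connection works
```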

I was at the 1.5-hour mark and had two servers, protected from the Internet, serving web pages, integrated with a database server ready for whatever application is deployed on it.  I built a "Hello World" application that can be scaled and enhanced.  Not bad for 1.5 hours of non-selling time!!

What amazes me even more is how I would have done this not more than a year or so ago.  Think of the people hours, the calendar days and weeks, and the cost of hardware, software and consulting that would have gone into this had I done it "old school".  Here is a brief compare and contrast; at some point this will link to a nicely formatted table.

The first area to review is the procurement process and how it varies between the traditional approach (purchase and own) and AWS.  I estimate that hardware and software alone, excluding the cost of labor to assemble everything, is about $30,000.  This isn't too far-fetched, considering I would need a pretty beefy host server with local or shared storage, memory, redundant CPUs, redundant fans and power supplies, network switches and a firewall.  I'd also need the VMware virtualization layer, Microsoft Server licenses for the two instances and SQL Server.  On top of all that, I'd have the annual hardware and software warranty renewals.  I'd need a place to host the equipment, a traditional IT closet, with power, cooling and rack space.  The offset with AWS is essentially free in the first year.  I would estimate it would take 7-8 years to come close to $30,000 in AWS service fees, and within that time, chances are I'd be replacing the purchased equipment as it ages out.

Before I even order the equipment, I need to review the configuration with tech support from my distributor to make sure all the parts are compatible, get a PO from my procurement department and wait for an order confirmation.  It would take a few days for all the equipment to arrive, assuming it is in stock, over multiple shipments in multiple boxes.  We'd need to inventory it all in, match the packing slips to the order and reconcile all the deliveries.  It would take at least a business week to have everything assembled and ready to go.  Then it needs to go through a burn-in and testing phase to make sure everything is working correctly and as expected.  There are many tasks to get right: the server and its components, setting up the storage, network ports, firewall and virtualization layer, installing and configuring Windows, and updating patches and firmware on all devices.  On the other hand, with AWS, as mentioned above, this all took 1.5 hours and worked as expected right from the start (aka Jump Street).

Chances are, in the old-school model, I am over-procuring and over-provisioning.  Why buy a server that won't have enough capacity to scale beyond a short period of time?  The old school of thought would be to load it up with processing resources (memory, CPU, disk, etc.) so as not to have to worry in the future.  Chances are, in the future, I may not need those resources anyway, so I may be wasting my money.  With AWS, I consume (pay for) what I need when I need it.  That is the EC2 "elastic compute" model: I can subscribe to exactly what I need now and simply add more when needed (or automatically) later.
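
Just to put rough numbers on that comparison: backing the implied monthly AWS spend out of my own estimates above (about $30,000 of hardware and software, recovered over an assumed 7-8 years) gives something in the neighborhood of $330 a month.  These are my ballpark figures, not AWS pricing, and the 7.5-year midpoint is an assumption.

```python
# Rough arithmetic only, using the ballpark figures from the paragraph above.
hardware_and_software = 30000     # estimated up-front "old school" cost (USD)
breakeven_years = 7.5             # assumed midpoint of the 7-8 year estimate

implied_monthly_aws_spend = hardware_and_software / (breakeven_years * 12)
print(round(implied_monthly_aws_spend))   # ~333 USD per month
```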

In the old model, it would take expertise across a number of disciplines and manufacturer technologies.  One or more engineers would be involved with the project and would need expertise in Microsoft Server, VMware (virtualization), database administration (DBA), and network and security technologies.  A project manager would need to coordinate the scheduling and resourcing.  With AWS, this all goes away.  AWS EC2 comes preconfigured, which covers the virtualization, hardware, network and security layers.  The server image spun up on EC2 replaces the need to configure the operating system layer, and RDS replaces the need for a DBA.

I spent time, after my initial setup, researching AWS IAM (Identity and Access Management) and AWS Directory Service.  AWS provides three versions of Directory Service that are, for the most part, compatible with Microsoft Active Directory.  The management tools are free, and I expect very low monthly usage charges for using AWS IAM and Directory Service.  I will add my servers to a domain later, create users through IAM, and add them to security groups through Directory Service to better control and manage security.  Previously, I would have had to provision an Active Directory environment, set it up, make sure it was functioning as expected, back it up and manage it on an ongoing basis.  Someone would have to decipher cryptic messages in an event log and apply various fixes and patches.  The deployment, configuration and management of Directory Service is done by AWS; that's one less layer to worry about.  I just need to focus on creating users and groups and setting permissions correctly.  What a relief.
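
When I do circle back to IAM, the user-and-group work can also be scripted.  Here is a minimal boto3 sketch of the kind of thing I have in mind; the group, user and managed policy are placeholders I picked for illustration, and MFA enrollment happens separately.

```python
# Sketch: creating an IAM group and user and attaching a managed policy with boto3.
# Group name, user name and policy are placeholders chosen for illustration.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="WebAdmins")                # placeholder group
iam.attach_group_policy(
    GroupName="WebAdmins",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",  # example AWS managed policy
)

iam.create_user(UserName="lou")                        # placeholder user
iam.add_user_to_group(GroupName="WebAdmins", UserName="lou")
```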

Bandwidth to an IT closet, office or datacenter is a relatively expensive and finite resource in a traditional environment.  Bandwidth also requires redundancy (more than one connection), often from multiple providers, over different technologies, and hopefully through diverse paths into the building.  And as above, chances are it is dramatically over-provisioned relative to actual usage; in other words, in the old model, I may be paying for more than I need.  And if I do run over, it could take days or weeks to add more bandwidth, assuming it is available.  With AWS, as with all resources, I subscribe to just what I need, and if I go over, it is readily available and I pay a nominal increase in my monthly fee.  In the AWS documentation, I read that I can cap charges, monitor fees, alert on overages and view usage in real time, so I am not surprised by additional charges and can manage on a real-time basis.
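
As one concrete example of the alerting the documentation describes, a billing alarm can be set up through CloudWatch.  A boto3 sketch follows; billing metrics have to be enabled on the account and live in the us-east-1 region, and the $50 threshold and SNS topic ARN are placeholders.

```python
# Sketch: a CloudWatch billing alarm that notifies when estimated charges pass $50.
# Requires billing alerts to be enabled on the account; the SNS topic ARN is a placeholder.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # billing metrics live here

cloudwatch.put_metric_alarm(
    AlarmName="monthly-charges-over-50-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                       # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,                     # placeholder dollar threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```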

My vast experience in IT has taught me a few things.  One of them is that a good backup can get you out of any jam.  End users understand if performance degrades (as long as it gets improved in a reasonable time frame) and can even tolerate some downtime for good reason (but not much).  What cannot be tolerated is data loss.  Furthermore, CXOs often talk in terms of Disaster Recovery and Business Continuity.  I've learned something else over the years: Disaster Recovery gets you fired, Business Continuity gets you promoted.  If you have to "recover" from a disaster, it most likely means systems went down during the disaster.  During Hurricane Sandy, our Cloud, FlexOffice, had 100% uptime for our customers.  I know what it takes.  In most use cases (I hesitate to say all, because nothing is absolute), I would trust my infrastructure to AWS for maximum Business Continuity over what can be done via internal management and hosting.  Backups are also crucial, especially if a user accidentally deletes a file or gets hit with a virus.  After I spent my 1.5 hours on this initial setup, I did go back and research backups with AWS.  It is impressive.  In the old model, I'd have to manage a backup system (such as tape, or server images to a local appliance with offsite storage of the current image, plus the associated backup software) as well as the performance and successful execution of the backup jobs.

AWS makes the process so much more reliable and efficient; it will reclaim many hours on a monthly basis, since backup is simply included as a function of AWS.  Also, looking at Recovery Point Objective (RPO), meaning how far back I can go via a backup, and Recovery Time Objective (RTO), meaning how long it would take to recover, AWS is the clear winner.  RTO is almost instant through the console, and RPO goes back as far as I'm willing to consume storage on AWS for backups.
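
To make that a little more concrete, here is a boto3 sketch of the sort of snapshot-based backup I found in my research: a point-in-time snapshot of the web server's EBS volume and a manual snapshot of the RDS instance.  The volume and instance identifiers are placeholders, and RDS automated backups with a retention window are a separate setting on the instance itself.

```python
# Sketch: snapshot-style backups for the two servers (identifiers are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

# Point-in-time snapshot of the web server's EBS volume.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                # placeholder volume ID
    Description="Backup of the web server volume",
)

# Manual snapshot of the SQL Server RDS instance (automated backups and their
# retention window are configured on the instance separately).
rds.create_db_snapshot(
    DBSnapshotIdentifier="coffee-demo-db-manual-1",  # placeholder snapshot name
    DBInstanceIdentifier="coffee-demo-db",           # placeholder instance name
)
```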

One of the main arguments against AWS is security.  I have noticed over the past few years that CXOs are coming to understand that AWS is secure, and there is less and less resistance to moving to AWS due to security concerns.  The trend will continue in this direction; after all, I doubt a CXO can become any more concerned about security than they are now, so it really can only move in one direction.  I am very paranoid when it comes to security, but after drilling through the console and security options, the security inherent in AWS is AT WORST equal to the "old school" method.  There is a certain degree of trust with AWS that is comparable to the physical security over an IT closet in an office.  That's a wash for me.  In order to obtain the Administrator password, I had to upload a key file.  That is more secure than the way passwords are handled "old school": kept in the Administrator's head, stored in a password database, or texted per the "no passwords in email" rule.  The firewall is built into AWS and is a native component of the AWS stack; the offset in the old model is a third-party hardware firewall appliance sitting between the Internet and my servers, which is no longer required with AWS.  The AWS server image is completely locked down.  I haven't fully gotten my head around other security measures, such as patching and anti-virus, in the AWS world.

I accomplished all this over a cup of coffee on a Saturday morning, but I can't stop thinking about AWS.  Ironically, as I was walking from a customer meeting to a networking meeting last Thursday, someone from one of my other networking groups called (this is in the "you can't make this stuff up" category).  He asked if I could help him with a solution so all his users, spread out all over the country, could access a database application at the same time.  His strategy is to be "born in the cloud" and not have large offices.  He does not want to use his capital for depreciating assets such as desktops.  Sunday's cup of coffee is going to be all about Amazon WorkSpaces!

Post by Lou Person

