Sunday, January 24, 2016

Amazon Web Services (AWS) Simple Email Services - Lou Person

Sales, IT and business professionals such as myself are very conscious when it comes to security.  A security breach can cause irreparable brand damage and financial losses.  It can also kill credibility and trust with users.  One of the easiest and most sinister exploits is phishing, or email server hijacking.  In the same category is spam, which at best is really annoying and at worst can be a front door for a virus or malware.  The vast majority of applications today, especially mobile, rely on email in one form or another for one or more of the following use cases: 

  • New account creation confirmation
  • Password reset
  • Notifications, such as receipts, an action the app wants you to take or account changes
  • Marketing or offers
There are a number of ways to send email from an application.  The other developers I am working with are not as sales- or business-facing as I am, so I didn't get the sense that they were overly motivated to research another solution to increase security, protect the brand and minimize financial risk.  So I volunteered to attack the problem for the team.  I did a great deal of research to find a solution that was secure, contained audit controls, provided reporting, was a managed service (to limit risk) and was very cost effective.  I looked at a number of solutions, some I was familiar with, others that were new to me.  I'm going to compare SMTP as part of IIS and AWS SES.

The first thing I did to evaluate environments was create a sample application to test sending email.  Here is the code:



<?php 
$action = $_REQUEST['action']; 

if ($action == "")    /* display the contact form */ 
    { 
    ?> 
    <form action="" method="POST" enctype="multipart/form-data"> 
    <input type="hidden" name="action" value="submit"> 
    Your name:<br> 
    <input name="name" type="text" value="" size="30"/><br> 
    Email to:<br> 
    <input name="emailto" type="text" value="" size="30"/><br> 
    Email from:<br> 
    <input name="emailfrom" type="text" value="" size="30"/><br> 
    Your message:<br> 
    <textarea name="message" rows="7" cols="30"></textarea><br> 
    <input type="submit" value="Send email"/> 
    </form> 
    <?php 
    } 
else                /* send the submitted data */ 
    { 
    $name = $_REQUEST['name']; 
    $emailto = $_REQUEST['emailto']; 
    $emailfrom = $_REQUEST['emailfrom']; 
    $message = $_REQUEST['message']; 
    if (($name == "") || ($emailto == "") || ($emailfrom == "") || ($message == "")) 
        { 
        echo "All fields are required, please fill <a href=\"\">the form</a> again."; 
        } 
    else 
        { 
        /* build the From and Return-path headers, then hand the message to PHP's mail() */ 
        $from = "From: $name <$emailfrom>\r\nReturn-path: $emailfrom"; 
        $subject = "Message sent using your contact form"; 
        mail($emailto, $subject, $message, $from); 
        echo "Email sent!"; 
        } 
    } 
?> 

I then set up SMTP within my Windows server instance.  SMTP was a component of IIS through IIS 6.0.  The Web Platform Installer I discussed in an earlier post deploys IIS 8.0.  I am guessing Microsoft stopped shipping an SMTP server by default due to the same security issues I am concerned with.  I was able to install the IIS 6.0 SMTP server by enabling it through Server Manager as a new feature.  It took a little while to get working because I had to add SMTP rules to the Webserver security group and configure the server appropriately for AWS.  I was able to test the setup by using the sample PHP application above.  I modified the php.ini file to instruct PHP to use the mail server within the Windows instance as follows:

[mail function]
smtp = localhost
smtp_port = 25

Everything worked great!  Too great, in fact!  I was able to send email to ANYONE from ANYONE.  I noticed that emails arriving at my Gmail account were flagged as "phishing" and test emails to other accounts went immediately to spam.  I had no way to control, monitor or audit what was going on.  In a real world scenario, where this could be used to email hundreds of thousands of people a day, this could be a disaster!  For internal or intranet-based applications, it is probably an easy solution, but not for a public facing mobile application.

I then dug into AWS SES.  Referencing the description here: 
"Building a large-scale email solution is often a complex and costly challenge for a business. You must deal with infrastructure challenges such as email server management, network configuration, and IP address reputation. Additionally, many third-party email solutions require contract and price negotiations, as well as significant up-front costs. Amazon SES eliminates these challenges and enables you to benefit from the years of experience and sophisticated email infrastructure Amazon.com has built to serve its own large-scale customer base."

I had to go through a number of verifications to set it up, which was fine by me.  First, from the SES console, I had to add a TXT record to my public DNS through Network Solutions.  Once this was added, and it had about an hour to propagate, the SES console indicated that the domain was "verified".  Then, each email address I wanted to use as the "From" address had to be verified through SES: I would receive an email with a link from AWS, and clicking on the link allowed sending from that address.  Next, in order to send email through SES, I had to generate a username and password to embed in the application code.  The username is 20 characters long and the password 40, mixed randomly with uppercase, lowercase, and symbols.  I was not sure at this point how to test, so I found the following sample application using Visual Studio (it was actually very easy to configure):
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-using-smtp-net.html
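
If I had wanted to test from PHP at this point (before going the sendmail-for-Windows route described further down), an SMTP-authenticating library such as PHPMailer could use the same credentials.  Here is a rough sketch only; the endpoint, credentials and addresses are placeholders, and your region's SES SMTP endpoint will differ:

<?php 
// Rough sketch: sending through the SES SMTP interface with PHPMailer.
// The host, port, credentials and addresses are placeholders, not real values.
use PHPMailer\PHPMailer\PHPMailer;

require 'vendor/autoload.php';   // assumes PHPMailer is installed via Composer

$mail = new PHPMailer(true);     // throw exceptions on failure
try {
    $mail->isSMTP();
    $mail->Host       = 'email-smtp.us-east-1.amazonaws.com'; // your region's SES SMTP endpoint
    $mail->Port       = 587;                                  // the SES port I had to open later
    $mail->SMTPAuth   = true;
    $mail->SMTPSecure = 'tls';                                // STARTTLS on 587
    $mail->Username   = 'SES_SMTP_USERNAME';                  // placeholder 20-character SES username
    $mail->Password   = 'SES_SMTP_PASSWORD';                  // placeholder 40-character SES password

    $mail->setFrom('verified-sender@example.com', 'My App');  // must be an SES-verified address/domain
    $mail->addAddress('recipient@example.com');
    $mail->Subject = 'Test message through SES';
    $mail->Body    = 'Hello from SES.';
    $mail->send();
    echo 'Email sent!';
} catch (Exception $e) {
    echo 'Send failed: ' . $mail->ErrorInfo;
}
?>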

I was able to get the code to execute on the first try, after changing the username, password, SMTP host address and port.  But nothing was happening; it appeared as if the connection was never being made.  I looked at the security groups and realized I needed to allow not only the standard SMTP port, 25, but also the port used by SES, 587.  I was now able to send the email, and the application confirmed it was sent, but it never arrived.

I didn't realize, until I contacted support via chat at 12:30 AM Sunday morning, that I could only work in a "sandbox", meaning I could only send email to and from verified email addresses.  I had to go through another security measure and request to have production access granted.  I submitted the form, and by the next morning, I awoke to an email congratulating me for moving out of the sandbox:

"Congratulations! After reviewing your case, we have increased your sending quota to 50,000 messages per day and your maximum send rate to 14 messages per second in AWS Region. Your account has also been moved out of the sandbox, so you no longer need to verify recipient addresses."

There are a number of other security protocols built in.  For example, I have to comply with Amazon's Acceptable Use Policy found here.  If there is a high rate of bounces in a given period, the service is suspended.  I can also view the following statistics in real time through the SES console, which has nice charts and graphs of the data points: Successful Delivery Attempts, Rejected Messages, Bounces and Complaints.  Most importantly, I can "shut things down" immediately if something goes crazy.  This is worth any nominal charges for the brand protection and financial loss avoidance it provides.
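
Those same data points are also available programmatically.  As a rough sketch (assuming the AWS SDK for PHP is installed and credentials are configured; the region below is a placeholder), something like this pulls the recent sending statistics:

<?php 
// Sketch: pulling SES sending statistics with the AWS SDK for PHP.
// The region is a placeholder; credentials are assumed to come from the environment.
use Aws\Ses\SesClient;

require 'vendor/autoload.php';

$ses = new SesClient([
    'version' => 'latest',
    'region'  => 'us-east-1',   // placeholder region
]);

$result = $ses->getSendStatistics();
foreach ($result['SendDataPoints'] as $point) {
    echo $point['Timestamp']->format('c') . ': '
        . $point['DeliveryAttempts'] . ' attempts, '
        . $point['Bounces'] . ' bounces, '
        . $point['Complaints'] . ' complaints, '
        . $point['Rejects'] . " rejects\n";
}
?>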

I did have one challenge that took some figuring out.  When I set up SES against the PHP application above, I received authentication errors.  In the php.ini file, I first simply changed the host name to the SES host provided in the console.  Since the C# application I ran in Visual Studio was working, I was sure things were set up correctly, so the issue had to be at the application level.  It took a little while to realize that PHP's default mail() function was the limitation: it does not pass a username and password through to the SMTP server, hence the authentication error.  This wasn't Unix, after all, and there isn't a sendmail service built into Windows.  Oh, wait, why not install sendmail for Windows?  A quick search for "sendmail for Windows" took me to sendmail.org and I was able to download sendmail for Windows.  I had to modify the php.ini file as follows, basically telling PHP to call out to sendmail:

[mail function]
sendmail_path = c:\sendmail\sendmail.exe
;smtp = localhost
;smtp_port = 25

Then, I had to modify sendmail.ini to include the hostname of the SES SMTP endpoint, the port, and the SES-generated username and password.  
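
For reference, the relevant sendmail.ini entries looked roughly like the following.  The hostname and credentials here are placeholders, and the exact key names may differ slightly depending on which sendmail-for-Windows build is used:

[sendmail]
; placeholder values: use your region's SES SMTP endpoint and your SES-generated credentials
smtp_server=email-smtp.us-east-1.amazonaws.com
smtp_port=587
auth_username=SES_SMTP_USERNAME
auth_password=SES_SMTP_PASSWORD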

All of these configurations took about 15 minutes once I figured out what the problem was and came up with a solution.

I circled back with the team and simply told them that SMTP outbound email is working and that they could only use a verified outbound email address.  It really is a black box to the developers, but as a Sales and Business Professional, I have tremendous peace of mind that the application carries much less risk and has much greater security controls, metrics and reporting in place because of SES.

Post by Lou Person

Saturday, January 23, 2016

Amazon Web Service Part 2 - Lou Person

This is a follow-up to my AWS experience so far.  I've learned a great deal since the first 1.5 hours and have more to learn.  I figured today was a good day to dig in because we are snowed in by Blizzard Jonas.

I was having some configuration issues that I'm sure I could have resolved on my own, but I wanted to cut down the cycle.  If I could save myself an hour or two, I figured it was worth close to $200 of my time.  My free subscription to AWS included the "Basic" plan, and I'm sure it would have been fine, but it did not include any SLAs, chat or phone support.  Developer support seemed reasonable at $49/month, but was limited to local business hours and email contact only, and since I am learning AWS during non-selling hours, I needed 24/7 support that I could communicate with easily (which for me meant chat and phone).  I chose Business for $100/month because it provided 24/7 access via chat, phone and email.  I have used it a few times and it is fantastic.  The engineers I work with solved my issues very quickly and never said "sorry, not covered".  They always provide me with more knowledge above and beyond the specific issue I called in about.  The issues I bring to support are the complicated ones; otherwise I would have been able to solve them on my own.  I can call as many times as I need, and the support engineers are experts with the technology and have outstanding communication skills.  They are also super nice.  They help me with complex issues such as network routing, Windows server configuration, DNS, Active Directory, VPN, etc.  For $100/month, I feel as if I have the world's best engineers on my team (on par with the team at brightstack CIS). 

While talking with the engineers, they pointed out that opening up RDP over port 3389 to the world is not secure.  This took me to the Marketplace.  From the AWS help topics: "AWS Marketplace is an online store that helps customers find, buy, and immediately start using the software and services they need to build products and run their businesses.  AWS Marketplace complements programs like the Amazon Partner Network and is another example of AWS's commitment to growing a strong ecosystem of software and solution partners. Visitors to the marketplace can use AWS Marketplace's 1-Click deployment to quickly launch pre-configured software and pay only for what they use, by the hour or month.  AWS handles billing and payments, and software charges appear on customers' AWS bill".  I chose an OpenVPN access server, which I spun up in a few minutes.  I then followed the directions to create an encryption key, connect via SSH (PuTTY) and complete the configuration.  I installed the VPN client on my local computer and had others on my team do the same.  Connecting was very easy!  However, accessing the server over RDP took some configuring, and that is where support was so helpful.  The Marketplace automatically created a new security group with inbound ports open to allow access to the VPN server.  I then had to allow inbound access to the WebserverSG from the OpenvpnSG.  Finally, the support engineer helped me, over the phone via a screen share, realize that we couldn't use a host name because local DNS (on my computer) did not have the host names for the Amazon servers.  So, for now, we'll just live with accessing services by IP address over the VPN. 

The next service I leveraged was AWS Directory Service.  I had the choice of Microsoft AD (recommended for workloads requiring up to 50,000 users) and Simple AD (recommended for workloads up to 5,000 users).  There was also an option for an AD Connector to create a hybrid cloud by integrating with an on-premises Active Directory.  I chose Simple AD.  I tested it by installing AD Users and Computers on the web server, but couldn't connect to the new domain.  I knew it should be easy, but I was having too much trouble and my troubleshooting was going nowhere.  I value my time at more than $100/month (see above), and this issue prompted me to sign up for the Business support plan.  The engineer who assisted, over chat, pointed out that I needed to allow inbound Active Directory traffic between the directory security group created when I set up the directory and the subnet for my entire VPC.  That did the trick: I was able to connect using AD Users and Computers.  I added my web server Windows AMI to the domain and was now able to create domain users and security groups.  Eventually we'll use it to set up group policies.

Next, I needed to add additional storage for the web server because the volume that came with the AMI (the C: drive) was filling up, and it was needed for the operating system and supporting files.  The services I could choose from included:
  • S3 - Simple Storage Service, used to store any amount of data.  I see this more as file storage for low latency applications, such as file sharing, software distribution and large content such as videos.
  • CloudFront - This is storage for a content delivery network. 
  • Import/Export with Snowball - This allows IT administrators to send "seed" images of data to AWS to be imported into the AMI using a secure appliance.  Seems smart if there are terabytes of data to upload with bandwidth limitations at the local site. 
  • Glacier - This is a low-cost archival storage service primarily used for backups and archives.
  • Storage Gateway - This is used to store local copies of files stored at Amazon for local access on the same network where the user resides.  Meaning, the user would pull down a copy of the file closest to where they reside, without having to traverse the public Internet.  Changes are synchronized through the gateway between the local image and AWS.
  • Elastic File System - This is the option I would like to implement for the new volume, but it is only in preview.  AWS EFS is a file storage service for Amazon Elastic Compute Cloud (EC2) instances. 
I wound up adding an Elastic Block Store volume by clicking on the Elastic Block Store section of the console.  I had to make sure to provision it in the same availability zone as my web server AMI.  Once created, I simply clicked on it and attached it to the instance of my web server (I had to make sure to select the right instance, which I didn't do correctly at first and couldn't figure out why the volume wasn't appearing in the instance itself).  I then used the disk management tools to format it, allocate it, create a simple partition and assign it a drive letter.  The business benefits are tremendous.  I can add storage as needed, pretty much on the fly, without having to reboot the server, attach a SAN or power it down to install hard drives.  I am excited, however, to learn more about EFS once accepted into the preview!
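
The console made this point-and-click, but the same create-and-attach flow can also be scripted.  A rough sketch with the AWS SDK for PHP (the region, availability zone, size, instance ID and device name are all placeholders):

<?php 
// Sketch: create an EBS volume and attach it to an instance.
// Region, availability zone, size, instance ID and device name are placeholders.
use Aws\Ec2\Ec2Client;

require 'vendor/autoload.php';

$ec2 = new Ec2Client(['version' => 'latest', 'region' => 'us-east-1']);

// The volume must be created in the same availability zone as the instance.
$volume = $ec2->createVolume([
    'AvailabilityZone' => 'us-east-1a',
    'Size'             => 100,     // GiB
    'VolumeType'       => 'gp2',
]);

// Wait until the volume is available, then attach it to the web server instance.
$ec2->waitUntil('VolumeAvailable', ['VolumeIds' => [$volume['VolumeId']]]);
$ec2->attachVolume([
    'VolumeId'   => $volume['VolumeId'],
    'InstanceId' => 'i-0123456789abcdef0',   // placeholder instance ID
    'Device'     => 'xvdf',
]);
?>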

Now, on to Identity and Access Management (IAM).  IAM is described here as: "AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization)."  I think the name is funny, in a good way.  IAM as in "I am" for an identity offering, very clever.  "I am able to access resources because I am authenticated."  The first thing I did was add MFA (Multi-Factor Authentication) by linking my Google Authenticator to my AWS account.  I opened the console in a web browser, scanned a QR code with my phone, and now have an AWS MFA verification code in Authenticator.  This is a very important benefit because I've always said "username and password are not enough".  MFA adds another factor of authentication on top of username and password.  Once an IAM user is authenticated to the console, either through the root Amazon account or another IAM account, the user is then prompted for an authentication code.  This code is generated by the Google Authenticator application on the user's phone.  The code is time-based: the app and AWS each derive the same value from the secret in the QR code and the current time, so no direct synchronization between Google and Amazon is required.  Below is how Google Authenticator looks on my phone:


I created users and groups and assigned roles to the groups.  Finally, I set a password policy, and I now have multi-factor authentication set up for login to my AWS console!  As soon as I set the policy, the AWS console forced a logout and I logged back in using the new IAM-integrated root account with multi-factor authentication.  I am also able to log in with this account on my Samsung Galaxy to manage my AWS console from mobile.  I will eventually add other IAM users and grant granular access to my AWS console.  The business benefit of IAM is that I can delegate control of the console to multiple members of my team and only give them access to what they need.  Since it uses multi-factor authentication, I don't have to worry as much about security controls or others mistakenly granting access to unauthorized users.  The best part about it?  It's FREE with my AWS plan.
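
To demystify that verification code a bit: it is a standard time-based one-time password (TOTP).  The phone app and AWS each derive the same six-digit code from the secret embedded in the QR code plus the current 30-second time window, which is why no direct link between Google and Amazon is needed.  The sketch below is purely illustrative of the idea (it is not Amazon's or Google's code, and the secret is a placeholder):

<?php 
// Illustrative TOTP sketch (RFC 6238).  The base32 secret below is a placeholder.

function base32Decode($b32) {
    $alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567';
    $bits = '';
    foreach (str_split(strtoupper($b32)) as $c) {
        $pos = strpos($alphabet, $c);
        if ($pos === false) { continue; }               // skip padding and spaces
        $bits .= str_pad(decbin($pos), 5, '0', STR_PAD_LEFT);
    }
    $bytes = '';
    foreach (str_split($bits, 8) as $chunk) {
        if (strlen($chunk) === 8) { $bytes .= chr(bindec($chunk)); }
    }
    return $bytes;
}

function totpCode($base32Secret, $timeStep = 30, $digits = 6) {
    $counter    = (int) floor(time() / $timeStep);      // current 30-second window
    $binCounter = pack('N2', 0, $counter);              // 8-byte big-endian counter
    $hash   = hash_hmac('sha1', $binCounter, base32Decode($base32Secret), true);
    $offset = ord(substr($hash, -1)) & 0x0F;            // dynamic truncation
    $value  = ((ord($hash[$offset]) & 0x7F) << 24)
            | (ord($hash[$offset + 1]) << 16)
            | (ord($hash[$offset + 2]) << 8)
            |  ord($hash[$offset + 3]);
    return str_pad($value % pow(10, $digits), $digits, '0', STR_PAD_LEFT);
}

echo totpCode('JBSWY3DPEHPK3PXP');   // prints a 6-digit code for the current time window
?>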

My next task was to set up monitoring and alerting for the system.  This was pretty easy to do using Amazon CloudWatch.  This service is FREE (excessive alarm notifications may incur additional charges) and the business benefit is tremendous.  Being able to easily monitor and alarm on key performance metrics ensures a well run operation.  Monitoring reduces incidents and highlights areas which are low on resources before they run out of capacity.  Quoting from here: "Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health. You can use these insights to react and keep your application running smoothly."  It was pretty easy to set up and very fulfilling to see the key metrics on the dashboard.  I also feel comfortable that I'm not missing key events (such as disk storage running low) because I can set alarms based on various metrics and thresholds, and then email the team when an alarm is triggered.  While creating the alarms, I could see previous usage history in chart form, which made it easy to determine the alarm thresholds. 
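
As a concrete example, here is roughly what creating one of those alarms looks like through the AWS SDK for PHP instead of the console.  The region, instance ID, threshold and SNS topic ARN are placeholders:

<?php 
// Sketch: a CloudWatch alarm on EC2 CPU utilization that notifies an SNS topic.
// The region, instance ID, threshold and topic ARN are placeholders.
use Aws\CloudWatch\CloudWatchClient;

require 'vendor/autoload.php';

$cw = new CloudWatchClient(['version' => 'latest', 'region' => 'us-east-1']);

$cw->putMetricAlarm([
    'AlarmName'          => 'webserver-high-cpu',
    'Namespace'          => 'AWS/EC2',
    'MetricName'         => 'CPUUtilization',
    'Dimensions'         => [['Name' => 'InstanceId', 'Value' => 'i-0123456789abcdef0']],
    'Statistic'          => 'Average',
    'Period'             => 300,                 // 5-minute periods
    'EvaluationPeriods'  => 2,                   // sustained for 10 minutes
    'Threshold'          => 80.0,                // percent CPU
    'ComparisonOperator' => 'GreaterThanThreshold',
    'AlarmActions'       => ['arn:aws:sns:us-east-1:123456789012:alert-the-team'],
]);
?>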

After my initial experience with Amazon, I figure I am about 4 hours into the project so far.  The support has been world class, I have a fully integrated Active Directory with multi-factor authentication, as much storage as I need when I need it, and great tools to manage the environment.  Things have come a very long way since we built our first Cloud applications back in 1994!

Saturday, January 16, 2016

Amazon Web Services over a cup of coffee by Lou Person

The other day, with about an hour and a half to invest over a cup of coffee (of course, during non-selling hours!), I decided to noodle around with Amazon Web Services (AWS).  I consider myself an early adopter and evangelist of Cloud technology (having built, marketed and sold www.theflexoffice.com and www.brightstack.com).  I am also a "Microsoft guy", having aligned myself with Microsoft Azure because my company is a Microsoft Gold partner.  The Amazon brand never resonated with me as a Cloud Services provider; I had viewed them as a book retailer, or the company that rings your doorbell with your order almost (seemingly) minutes after you hit submit on one-click checkout.  In 1.5 hours and a cup of coffee my entire perception changed.

Rewinding a little bit, I had drinks last week with our CEO and the CIO of one of our larger clients.  The CIO's organization is also one of AWS' largest clients.  In fact, there is a case study about them on AWS' site.  His passion for AWS was very contagious and really piqued my interest.  I am a Sales Professional, but I have a degree in Computer Science and was an IT Auditor, Consultant, Developer and Engineer earlier in my career.  In order for me to believe in something, I need to know first hand how it works.  This approach helps me position technology better to my clients and understand practical use cases.  So I came up with a pretty common scenario on the way home from our meeting. 

The scenario I came up with is a somewhat complex application, to see what is involved in building it on AWS.  It was amazing what I was able to accomplish in 1.5 hours FOR FREE.  That's right, FOR FREE.  AWS has an EC2 (Elastic Compute Cloud) free tier that includes 750 service hours at no charge.  Granted, I added a service, such as RDS (Amazon Relational Database Service), which may incur charges, but whatever I wind up paying over 3 years will be far less than had I purchased the hardware, software, disk, virtualization layer, firewall, bandwidth and consulting to set things up.  It also only took, in case I forgot to mention, 1.5 hours to complete.  Purchasing and deploying in my datacenter would have taken over 20 hours and spanned at least a week.

My test scenario consisted of a publicly accessible web server running an application connecting to a database server behind a firewall that is accessible only to the web server over a single port, 1433.  The web server is only accessible to the world over port 80 and to administrators over RDP. 

Literally, it only took 1.5 hours.  I'd even say less than that, but I'll stay conservative.

The first thing I did was create an AWS account.  This was pretty easy as I already have Amazon Prime, which I was able to leverage for AWS.  I did have to link a credit card to keep on file; I didn't mind doing so because I don't expect any charges in the first year and it does provide AWS with another layer of security.  Since I share my Prime account with my family, I'm going to change this eventually to avoid any accidents.  I looked at Identity and Access Management (IAM) to secure user access; I'll do that at another time.  I was happy to see that it included multi-factor authentication as an option.  (Note: After this article was originally published, I added IAM as described by clicking here.)

Next, I chose an EC2 plan that fell under the "Free Tier".  Still learning AWS, I perceive this to be analogous to the "host layer" in more traditional designs.

Once provisioned (it took less than 2 minutes), I was able to select a server image.  I think there were 22 Windows and Linux options to choose from, ranging from OS versions (2003, 2008, SUSE, Red Hat, Amazon's own flavor, etc.) to included OS/application bundles (such as SQL Server).  I first chose an image with 2008 and SQL Server, but once I spun it up, I realized it was not included in the Free Tier, so I spun it down and spun up a 2008 server that was included in the Free Tier.  All this spinning and provisioning took less than 3 minutes.

I then spent about 10 minutes poking around the Console.  At first, it was very daunting, but very quickly I realized it was laid out exactly how I think and break down problems and tasks.  This enabled me to learn it very quickly.  I made some mental notes on things I needed to research.  I spent 5 minutes poking around the documentation to figure out how to do things that weren't obvious in the console.

My next task was to set up Security Groups.  These function almost the same way as rule-based firewalls, so it was easy for me.  I created one called WebserverSG.  I added the Windows (Web/Application) server to this group and created rules to allow all inbound traffic from anywhere to port 80.  I did this for RDP as well, knowing I'll need to go back later and lock this down further.  This article, "Scenario 3", laid out perfectly what I needed to do.  I tweaked it slightly, but it took me less than 5 minutes to set up.  The Console is just that easy.  Anticipating I may need a database server, I created a DatabaseSG and configured it per "Scenario 3".
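
For anyone scripting the same thing, the "Scenario 3" rules map onto the EC2 API fairly directly.  A rough sketch with the AWS SDK for PHP, where the region and security group IDs are placeholders: HTTP open to the world on the WebserverSG, and SQL Server traffic on the DatabaseSG allowed only from the WebserverSG.

<?php 
// Sketch: security group rules roughly matching "Scenario 3".
// The region and security group IDs below are placeholders.
use Aws\Ec2\Ec2Client;

require 'vendor/autoload.php';

$ec2 = new Ec2Client(['version' => 'latest', 'region' => 'us-east-1']);

// WebserverSG: HTTP from anywhere (RDP gets locked down separately later).
$ec2->authorizeSecurityGroupIngress([
    'GroupId'       => 'sg-0aaa111122223333a',   // placeholder WebserverSG ID
    'IpPermissions' => [[
        'IpProtocol' => 'tcp', 'FromPort' => 80, 'ToPort' => 80,
        'IpRanges'   => [['CidrIp' => '0.0.0.0/0']],
    ]],
]);

// DatabaseSG: SQL Server (1433) only from instances in the WebserverSG.
$ec2->authorizeSecurityGroupIngress([
    'GroupId'       => 'sg-0bbb444455556666b',   // placeholder DatabaseSG ID
    'IpPermissions' => [[
        'IpProtocol'       => 'tcp', 'FromPort' => 1433, 'ToPort' => 1433,
        'UserIdGroupPairs' => [['GroupId' => 'sg-0aaa111122223333a']],
    ]],
]);
?>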

I went to Network Solutions, my registrar and public DNS provider, and in the advanced DNS settings, for the "www" host record for my domain, I added the IP address provided in the AWS Console.  (By the way, it turns out AWS offers registrar and public DNS services.  I will probably keep mine with Network Solutions for redundancy purposes.)  Knowing that it would take some time to propagate, I tested my new server by IP.  Ugh, it didn't serve up a page.  My first road block, but it lasted less than 30 seconds, because I quickly realized that AWS locks down the servers to such an extent that expected Windows components may not be installed.  In this case, IIS was not installed.  A quick scan of the AWS documentation instructed me to download the Microsoft Web Platform Installer.  This gave me everything I needed, and I could pick and choose from so many FREE components in the installer, including IIS, Drupal, .NET, SQL Server 2008 Express, Visual Studio, WordPress, Joomla, SugarCRM, frameworks, tools, databases, etc.  About 100 FREE packages!!  I installed the minimum of what I thought was required, but was judicious because I wasn't sure how much processing I was going to get in the "FREE" tier and wanted to be careful with the resources used by what was installed on the server until I could chart actual usage.  This process took less than 5 minutes, and once done, I tested again by IP and the default IIS page was served up!  By the way, I also installed SQL Server Management Console anticipating needing it later; it was a simple check from within the installer, so I figured why not.

I thought I hit my next road block when it came to the database.  As mentioned above, I was a bit concerned about installing Microsoft SQL Server on this server as I wasn't yet sure how it would perform under stress.  Since I am on the "FREE" tier, I thought it best to distribute the database server.  In other words, installing the free version of SQL Server (up to 10GB) as part of the Microsoft Web Platform Installer wasn't an option, because I was being frugal with the resources in the free tier and had no previous performance benchmark I could trust.  I went back to the Web Services page in AWS and found the RDS section.  I simply clicked "Launch a database instance".  I was able to choose from the following database servers:
  • Amazon Aurora (not where Wayne and Garth live, but what appears to be an Amazon branded database server).
  • MySQL (the perennial open-source database)
  • MariaDB
  • PostgreSQL
  • Oracle
  • Microsoft SQL Server (came up last, but the list was sorted alphabetically and this was technically listed as "SQL Server")
Being a "Microsoft Guy", I choose Microsoft SQL Server because I had the most familiarity with it (having worked with MySQL and Oracle as well in the past).  It took about 2 minutes to spin up the database server and I had a database server within 5 minutes of pursuing one.  I did make a mistake in this step which caused me some pain (see below).  I left the new database server in the default security group, I did not put it into security group I properly configured (pending testing) above.

In the back of my mind, I started thinking about how the web server would connect to this server, most likely via ODBC.  If I could connect to the database server using the SQL Server Management Console installed earlier, I figured I would be in good shape.  With the coffee gone and the kids bouncing off the walls to go to our local diner, it did not work!!  I was starting to get perplexed.  I quickly pulled myself together and realized that the database server was not in the DatabaseSG previously configured, but rather in the default security group, which I was not using and had not configured to talk to the WebserverSG.  I made the change and tested again.  Still didn't work.  No coffee and hyper kids.  I re-read Scenario 3, and either through my own typo or a misunderstanding, I thought the instructions said to append the port number to the address of the server.  I removed the port number from the server name and was able to connect directly to the database server running on RDS through SQL Server Management Console. 
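
Once the security group was fixed, the application-level connection is straightforward.  Here is a sketch of what the web server's connection to the RDS endpoint might look like in PHP, assuming the Microsoft SQL Server drivers for PHP (pdo_sqlsrv) are installed; the endpoint, database name and credentials are placeholders:

<?php 
// Sketch: connecting from the web server to the RDS SQL Server instance over 1433.
// The endpoint, database name and credentials are placeholders.
$endpoint = 'mydbinstance.abcdefgh1234.us-east-1.rds.amazonaws.com';
$dsn      = "sqlsrv:Server=$endpoint,1433;Database=mydatabase";   // host,port syntax

try {
    $db = new PDO($dsn, 'dbuser', 'dbpassword');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $row = $db->query('SELECT @@VERSION AS version')->fetch(PDO::FETCH_ASSOC);
    echo 'Connected: ' . $row['version'];
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage();
}
?>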




I was at the 1.5 hour mark and had two servers, protected from the Internet, serving web pages, integrated with a database server ready for whatever application is deployed on it.  I built a "Hello World" application that can be scaled and enhanced.  Not bad for 1.5 hours of non selling time!!

What amazes me even more is how I would have done this not more than a year or so ago: the amount of people hours, calendar days/weeks and cost of hardware, software and consulting that would have gone into this had I done it "old school".  Here is a brief compare and contrast, and at some point this will link to a nicely formatted table.

The first area to review is the procurement process and how it varies between the traditional approach (purchase and own) and AWS.  I estimate that hardware and software alone, excluding the cost of labor to assemble everything, is about $30,000.  This isn't too far-fetched, considering I would need a pretty beefy host server with local or shared storage, memory, redundant CPUs, redundant fans and power supplies, network switches and a firewall.  I'd also need the VMware virtualization layer, Microsoft Server for the two instances and SQL Server.  On top of all that, I'd have the annual hardware and software warranty renewals.  I'd need a place to host the equipment, a traditional IT closet, with power and cooling and rack space for the equipment.  The offset with AWS is essentially free in the first year.  I would estimate it would take 7-8 years to come close to $30,000 in AWS service fees, and within that time, chances are, I'd be replacing the purchased equipment as it ages out.  Before I even order the equipment, I need to review the configuration with tech support from my distributor to make sure all the parts are compatible, get a PO from my procurement department and wait for an order confirmation.  It would take a few days for all the equipment to arrive, assuming it is in stock, over multiple shipments in multiple boxes.  We'd need to inventory it all in, match the packing slips to the order and reconcile all the deliveries.  It would take at least a business week to have everything assembled and ready to go.  Then it needs to go through a burn-in and testing phase to make sure everything is working correctly and as expected.  There are many tasks to get right: the server and its components, setting up the storage, network ports, firewall and virtualization layer, installing and configuring Windows, updating patches and firmware on all devices.  On the other hand, with AWS, as mentioned above, this all took 1.5 hours and worked as expected right from the start (aka Jump Street).  Chances are, in the old school model, I am over procuring and over provisioning.  Why buy a server that won't have enough capacity to scale beyond a short period of time?  The old school of thought would be to load it up with processing resources (memory, CPU, disk, etc.) so as not to have to worry in the future.  Chances are, in the future, I may not need those resources anyway, so I may be wasting my money.  With AWS, I consume (pay for) what I need when I need it.  That is the EC2 "Elastic Compute" model.  I can subscribe to exactly what I need now and simply add more when needed (or automatically) later. 

In the old model, it would take expertise across a number of disciplines and manufacturer technologies.  One or more engineers would be involved with the project and would need expertise with Microsoft Server, VMWare (Virtualization), Database Administration (DBA), Network and Security technologies.  A project manager would need to coordinate the scheduling and resourcing.  With AWS, this all goes away.  AWS EC2 comes preconfigured, which covers the Virtualization Layer, Hardware, Network and Security layers.  The server image spun up on EC2 replaces the need to configure the Operating System layer and RDS replaces the need for a DBA.

I spent time, after my initial setup, researching AWS IAM (Identity and Access Management) and AWS Directory Service.  AWS provides three versions of Directory Service that are, for the most part, compatible with Microsoft Active Directory.  The management tools are free and I expect very low monthly usage charges for using AWS IAM and Directory Service.  I will add my servers to a domain later, create users through IAM and add users to security groups through Directory Service to better control and manage security.  Previously, I would have had to provision an Active Directory environment, set it up, make sure it was functioning as expected, back it up and manage it on an ongoing basis.  Someone would have to decipher cryptic messages in an event log and apply various fixes and patches.  The deployment, configuration and management of Directory Service is done by AWS; that's one less layer to worry about.  I just need to focus on creating users and groups and setting permissions correctly.  What a relief.

Bandwidth to an IT closet, office or datacenter is a relatively expensive and finite resource in a traditional environment.  Bandwidth also requires redundancy (more than one connection), and often multiple providers, over different technologies, hopefully through diverse paths into the building. And as above, chances are, it is dramatically over provisioned relative to actual usage.  In other words, in the old model, I may be paying for more than I need.  And if I do run over, it could take days or weeks to add more bandwidth, assuming it is available.  With AWS, as with all resources, I subscribe to just what I need, and if I go over, it is readily available and I pay a nominal increase in my monthly fee.  In the AWS documentation, I read that I can cap all charges, monitor fees, alert on overages and view usage in real time so I am not surprised by additional charges and I can manage on a real time basis.

My vast experience in IT has taught me a few things.  One of which is that a good backup can get you out of any jam.  End users understand if performance degrades (as long as it gets improved in a reasonable time frame) and can even tolerate some down time for good reason (but not much).  But what cannot be tolerated is data loss.  Furthermore, CXOs often talk in terms of Disaster Recovery and Business Continuity.  I've learned something else over the years.  Disaster Recovery gets you fired, Business Continuity gets you promoted.  If you have to "recover" from a disaster, it most likely means systems went down during a disaster.  During Hurricane Sandy, our Cloud, FlexOffice, had 100% uptime for our customers.  I know what it takes.  In most use cases (I hesitate saying all because nothing is absolute), I would trust my infrastructure to AWS for maximum Business Continuity over what can be done via internal management and hosting.  Backups are also crucial especially if a user accidentally deletes a file or gets hit with a  virus.  After I spent my 1.5 hours on this initial setup, I did go back and research backups with AWS.  It is impressive.  In the old model, I'd have to manage a backup system (such as tape, server images to a local appliance with offsite storage of the current image and associated backup software) as well as the performance and successful execution of the backup jobs.  

AWS makes the process so much more reliable and efficient; it will reclaim many hours on a monthly basis since backup is simply included as a function of AWS.  Looking at Recovery Point Objective (RPO), how far back I can restore from a backup, and Recovery Time Objective (RTO), how long it would take to recover, AWS is the clear winner.  RTO is almost instant through the console, and RPO goes back as far as I'm willing to consume storage on AWS for backups.

One of the main arguments against AWS is security.  I have noticed over the past few years that CXOs are coming to understand that AWS is secure, and there is less and less resistance to moving to AWS due to security concerns.  The trend will continue in this direction; after all, I doubt a CXO can become any more concerned about security than they are now, so it really can only move in one direction.  I am very paranoid when it comes to security, but after drilling through the console and security options, the security inherent in AWS is AT WORST equal to the "old school" method.  There is a certain degree of trust with AWS that is comparable to the physical security over an IT closet in an office.  That's a wash for me.  In order to obtain the Administrator password, I had to upload a key file.  That is more secure than the way passwords are stored "old school": either in the Administrator's head, in a password database, or texted per the "no passwords in email" rule.  The firewall is built into AWS and is a native component of the AWS stack.  The offset in the old model is a third party hardware firewall appliance sitting between the Internet and my servers, which is no longer required in AWS.  The AWS server image is completely locked down.  I haven't fully gotten my head around other security measures, such as patching and anti-virus, in the AWS world.

I accomplished all this over a cup of coffee on a Saturday morning.  But I can't stop thinking about AWS.  Ironically, as I was walking from a customer meeting to a networking meeting last Thursday, someone from one of my other networking groups called (this is in the "you can't make this stuff up" category).  He asked if I could help him with a solution so all his users, spread out all over the country, could access a database application at the same time.  His strategy is to be "born in the cloud" and not have large offices.  He does not want to use his capital for depreciating assets such as desktops.  Sunday's cup of coffee is going to be all about Amazon WorkSpaces!

Post by Lou Person