Amazon Web Services Assignment
ITM 380
Dr. Knapp
Danielle Gresia
Table of Contents
Introduction
Billing Alerts
Linux EC2 Instance
    Steps to Creating a Linux EC2 Instance
    Using PuTTY and PuTTYgen to Connect
    Linux Command Line
    Security Group Rules and Pinging
IAM Accounts
Trusted Advisor
Visio Diagram
Visio Functional Description
References
Introduction
Amazon Web Services, or AWS, offers a wide range of virtualized web services and applications that are relatively user-friendly. AWS has management tools, mobile services, storage services, developer tools, security services, computing services, and much more. I know a little bit about AWS from ITM 280, but I am interested in honing in on the specifics of many of the tools and services AWS has to offer.
Billing Alerts
Under my ‘daniellegresia’ account menu, clicking ‘My Billing Dashboard’ and then ‘Preferences’ on the left-hand side reveals a checkbox to receive billing alerts. From there, I clicked the blue hyperlink ‘Manage Billing Alerts’.
I had modified my Billing Alarm settings in class when it was demonstrated, but I never confirmed the email, so I modified the settings to include both the email I use for AWS and my UT email, as seen under the ‘Actions’ section.
Here is the confirmation email I received on my UT email address and I clicked the hyperlink to confirm the subscription to the Billing Alerts.
When I clicked on the link mentioned in the previous screenshot, it brought me to this page in AWS, confirming my subscription.
Since I wanted to include all of the details and the graph in this screenshot, I couldn't get the picture to be sharp. However, it shows the alarm that I made, called ‘Monthly Charges’; its state is OK with a green check, meaning it’s set up and running properly. Once you select an alarm, you can view its details, including the state of the alarm and the reason for it, the alarm’s threshold, the actions it will take if that threshold is crossed, the Namespace, the metric name, the dimensions, and some other small details. For my Monthly Charges alarm, the state changed to OK a little over a week ago when the alarm was first created, because the metric had not crossed the threshold. The threshold I set was $1, the actions to be taken were to send a message to both my UT email and the email I used to set up my AWS account, and the Namespace, metric name, and dimensions were the same as when I was modifying my billing alarm, three screenshots prior. The graph on the right shows the estimated charges on the Y-axis over time on the X-axis. The red line is my threshold and the blue data points show how much I’m being charged for AWS, which is currently $0.
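The alarm's OK/ALARM behavior boils down to a threshold comparison. This is only a minimal sketch of that logic; the real CloudWatch alarm evaluates the EstimatedCharges metric over configured periods:

```python
def alarm_state(estimated_charges: float, threshold: float = 1.0) -> str:
    """Sketch of a CloudWatch billing alarm's core rule: the alarm
    fires only when the metric exceeds the threshold ($1 in my case)."""
    return "ALARM" if estimated_charges > threshold else "OK"

# My current charges are $0, so the alarm stays in the OK state.
print(alarm_state(0.0))   # OK
print(alarm_state(2.50))  # ALARM
```

When the state flips to ALARM, the configured action runs, which for my alarm means SNS emails both addresses I subscribed.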
Linux EC2 Instance
Steps to Creating a Linux EC2 Instance
Since this portion of what I wanted to screenshot was long and thin, it also came out blurry. However, this was Step 1 of creating a Linux EC2 Instance: choosing an Amazon Machine Image (AMI). I chose the 64-bit Amazon Linux AMI.
Step 2 was choosing an Instance Type. Since I am on the Free Tier, I chose the default General Purpose type, t2.micro, which has 1 virtual CPU and 1 GB of memory and uses EBS (Elastic Block Store).
Step 3 was Configuring the Instance Details. This included the number of instances being created; the network, which was the default VPC; the subnet; the IAM role, which was set to none by default; and the shutdown behavior, a choice between ‘stop’ and ‘terminate’. I chose stop because I didn't want to terminate and delete my instance whenever I stopped running it; I would want to be able to start it up again if need be. There was also termination protection, which guards against accidental termination, and CloudWatch detailed monitoring; however, neither of those is included in the AWS Free Tier. For Tenancy, I picked shared (multi-tenancy) because, again, I am on the Free Tier and I don't have sensitive data here. There was also a ‘dedicated’ tenancy option, which organizations like banks or hospitals should use to protect their sensitive data, although dedicated tenancy gets pretty expensive. Last but not least, T2 Unlimited allows the instance to be billed at the hourly price even if it bursts above the CPU utilization baseline; it is also not included in the Free Tier.
Step 4 of creating an instance was to Add Storage. I went with the default settings: an 8 GB general purpose SSD (solid state drive) volume that is not encrypted and will be deleted on termination. The lack of encryption could be a significant security issue for sensitive data.
Step 5 was Adding Tags, which act almost like adjectives describing the instance, the volume, or both. Tags aren't a big deal for a small-scale user like me with only a few instances, but for bigger organizations using AWS with many instances, they help with managing the instances and volumes.
Step 6 of creating an EC2 Instance was to configure the security group. A security group sits on the outside of an instance and protects the whole instance. AWS security groups mainly work like stateful inspection firewalls, which monitor the state and status of connections and keep that information in a state table. With stateful inspection, decisions about what inbound traffic is allowed are made by the rules the user adds during this step, plus the expected responses to packets sent out. At the top there is an option to create a new security group, which I did; there is another option to reuse an existing security group and apply its rules. Rules are essentially holes in the firewall that allow specified traffic through. For my security group, an SSH TCP rule was already in place so that PuTTY could be used to connect to the instance, but the source was set to 0.0.0.0/0, meaning that anybody and everybody could access it. I changed it to my IP address only, and then added two ICMP rules, Traceroute and Echo Request, so that I could ping my instance later on. I also specified these rules to accept traffic only from my IP address. There is also a description field that makes the rules easier to identify.
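The difference between the default source of 0.0.0.0/0 and a single-IP /32 source can be sketched with Python's standard `ipaddress` module. The addresses below are hypothetical documentation addresses, not my real IP:

```python
import ipaddress

anyone = ipaddress.ip_network("0.0.0.0/0")        # default SSH rule source
just_me = ipaddress.ip_network("203.0.113.7/32")  # hypothetical "My IP" /32

me = ipaddress.ip_address("203.0.113.7")
stranger = ipaddress.ip_address("198.51.100.9")

# 0.0.0.0/0 matches every possible source address ...
print(me in anyone, stranger in anyone)    # True True
# ... while a /32 matches exactly one address.
print(me in just_me, stranger in just_me)  # True False
```

This is why tightening the SSH rule's source to a /32 shrinks the "hole in the firewall" from the whole Internet down to one machine.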
Step 7 is the final step in creating an instance and its purpose is just to review all the choices that were made in the previous steps before the instance is launched.
Using PuTTY and PuTTYgen to Connect the Linux EC2 Instance
When you highlight your instance and click ‘Connect’, this window pops up. Then, click the blue hyperlink ‘connect using PuTTY’ and it will direct you to an AWS page about using PuTTY.
This is the page I was directed to after clicking ‘connect with PuTTY’. At the bottom of the screenshot, under the bullet point labeled ‘Install PuTTY’, there is another hyperlink to the PuTTY download page. There is another hyperlink at the top of that page that will bring you to the Download page of PuTTY’s newest release (0.70). To connect to your instance, you have to download the 64-bit PuTTY installer and the 64-bit PuTTYgen executable file.
This was also on the AWS page with the link to the PuTTY download page. It lists the prefix to use before the host name in PuTTY for a variety of different instance types. Since I am launching a Linux AMI, I need to use the prefix “ec2-user@” and then copy and paste the Public DNS for my instance after it. The Public DNS address is given in your list of instances under the ‘Description’ tab.
I will be using the key pair I had generated before with PuTTY to connect and launch my instance (AWSGresiaKey).
Here, I loaded my .pem file of my key that was automatically saved when I made my key pair into PuTTYgen.
After loading the .pem file into PuTTYgen, I clicked ‘Save private key’ and saved the new key file as a .ppk file.
Next, I opened PuTTY and loaded the .ppk file of my key pair in the ‘Auth’ section, found on the left under ‘Connection’ and ‘SSH’. I then went back to ‘Session’, entered the prefix AWS had given me, “ec2-user@”, and pasted my Public DNS address after the ‘@’ symbol. All I had to do after that was click ‘Open’, and it connected to my instance.
Shown above is the successful connection and launch of my Linux EC2 instance.
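On Linux or macOS, the same connection could be made with OpenSSH instead of PuTTY; the host string is built exactly as AWS describes. A minimal sketch (the Public DNS below is a hypothetical placeholder, not my real instance):

```python
# Building the SSH target string AWS describes: "<prefix><Public DNS>".
def ssh_target(public_dns: str, user: str = "ec2-user") -> str:
    """Combine the AMI-specific login user with the instance's Public DNS."""
    return f"{user}@{public_dns}"

# Hypothetical Public DNS for illustration only.
print(ssh_target("ec2-203-0-113-7.compute-1.amazonaws.com"))
# The PuTTY session is roughly equivalent to the OpenSSH command:
#   ssh -i AWSGresiaKey.pem ec2-user@ec2-203-0-113-7.compute-1.amazonaws.com
```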
Linux Command Line
Here, I completed the necessary updates using the command “sudo yum update”, as suggested in the message above the line where I ran the command. It is up to the user to update and patch their server, and it’s a necessary step to avoid having your instance or data compromised.
After updating the server, I used the Linux command ‘top’, which shows the running processes along with their CPU and memory usage.
I then used the Linux command ‘pwd’, which shows your present working directory. The directory I was in was /home/ec2-user.
My last Linux command was ‘ifconfig’. This command shows the interface configuration, as its shortened name suggests. The active interfaces on this system were ‘eth0’ and ‘lo’. ‘eth0’ is the Ethernet interface and ‘lo’ is the loopback interface, which the system uses to communicate with itself.
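The loopback interface's "talking to itself" role can be demonstrated with a tiny Python sketch: a server and client on the same machine exchanging a message over 127.0.0.1, the loopback address bound to ‘lo’:

```python
import socket
import threading

# Open a listening socket on the loopback address; port 0 lets the OS pick.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def serve() -> None:
    # Accept one connection from ourselves and send a greeting.
    conn, _ = server.accept()
    conn.sendall(b"hello from lo")
    conn.close()

threading.Thread(target=serve).start()

# Connect to ourselves over loopback; traffic never leaves the machine.
client = socket.create_connection(("127.0.0.1", port))
message = client.recv(1024)
client.close()
server.close()
print(message.decode())  # hello from lo
```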
Security Group Rules and Pinging
Earlier, when I was setting up my instance, I had specified the rules in my security group to allow ICMP Echo Request, meaning I could ping my instance. If the firewall is working correctly, I should be able to successfully ping it using the private IP address given to me under the ‘Description’ tab of my selected instance.
My Private IP address was 172.31.93.106.
I used the Linux command ‘ping’ followed by my private IP address and successfully pinged my instance, meaning that my firewall and security group rules were working correctly.
I then deleted the ICMP Echo Request and the ICMP Traceroute rule from my security group for my instance, leaving only the SSH TCP rule. After deleting the ICMP Echo Request rule, if I tried to ping my instance, it wouldn't work because the firewall would block the request.
Using the Windows command prompt, I tried to ping my instance once more. As you can see, all four packet requests timed out and none were received, resulting in a 100% loss of packets. This proved that my firewall and security group rules were working correctly.
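The loss percentage ping reports at the end of a run is simple arithmetic over the sent and received counts, which can be sketched as:

```python
def packet_loss(sent: int, received: int) -> float:
    """Percentage of lost packets, as reported in ping's summary line."""
    return (sent - received) / sent * 100

# Before deleting the ICMP rules: all 4 echo replies came back.
print(packet_loss(4, 4))  # 0.0
# After deleting them: all 4 requests timed out.
print(packet_loss(4, 0))  # 100.0
```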
IAM Accounts
One of the best IAM practices is creating individual users and not giving out access to the root account. This is more useful in a bigger organization so that the Principle of Least Privilege can be incorporated, but it is still useful on a small scale.
In the IAM Dashboard, under the ‘IAM Resources’ section, there is a link for users with the number of current users the root user has made. If you click that link, it will take you to your list of individual users and you have the option of adding or deleting users.
When you click on ‘Add User’, you are brought to this page. You must enter the user name and select the access type, the type of console password, and whether a password reset will be required, in which case the user must create a new password at their next sign-in. I named my user “SecondUser” and chose AWS Management Console access, which issues a password so that the individual user can sign in to the AWS Management Console. I chose an auto-generated password, because that seemed more secure to me than a custom password, and I checked the box to require a password reset, because that too seemed more secure in the event of a breach or compromised account.
The next step is setting permissions for the individual user. There are options to add the user to an existing group, create a new group for the user, copy the permissions of an existing user, or attach existing policies directly. I had no groups, so I created one called ‘ITM380-EC2only’. The attached policy that limited the user was called “AmazonEC2ContainerRegistryPowerUser”, an existing policy that allows full access to EC2 Container Registry repositories but does not allow the user to delete any repositories or make any policy changes.
The second to last step in creating a new user was reviewing all the choices the root user made of their details and permissions.
After reviewing the new user, you click ‘Create User’ and will be directed to a screen showing that the creation of the new user was a success, as shown here.
Another best practice of IAM is creating a strong password policy. My password policy has a minimum password length of 12 characters and requires at least one uppercase letter, one lowercase letter, one number, and one non-alphanumeric character, like a symbol. It has been shown that a single long, strong password is better than changing out your password every so often, so I wanted my password policy to lead to long, strong passwords. Since frequent forced changes add little security and my other password requirements are already strong, I thought setting the password expiration period to 120 days, a longer period than usual, was a good choice.
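The policy's requirements can be sketched as a small validator (an illustration of the rules above, not AWS's actual enforcement code; the sample passwords are made up):

```python
def meets_policy(password: str) -> bool:
    """Check a password against the policy described above: at least 12
    characters, with an uppercase letter, a lowercase letter, a digit,
    and a non-alphanumeric character (a symbol)."""
    return (
        len(password) >= 12
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(not c.isalnum() for c in password)
    )

print(meets_policy("Tr0ub4dor&Rainbow"))  # True
print(meets_policy("short1!A"))           # False (fewer than 12 characters)
```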
An additional best practice of IAM use is to use Multi-Factor Authentication, or MFA. However, this should only be used for privileged users, like the root account, or people with administrative privileges in an organizational setting. Even though I am using IAM on a much smaller scale, it is still useful to enable MFA to make your account more secure.
In the IAM Dashboard, there is a drop-down option for activating Multi-Factor Authentication on your account. Under this, there is a button to activate and manage your MFA settings.
After pressing that button, a pop-up window will appear prompting you to choose which type of MFA device to activate. I chose a virtual MFA device, which required downloading the Google Authenticator app on my phone.
Once you choose the MFA device you want to activate, this screen pops up. This is the first step in activating your MFA. This screen is saying that you must have an AWS MFA-compatible application on another device, which for me was the Google Authenticator on my phone.
I scanned the above QR code with my Google Authenticator app, which then began generating 6-digit codes. I entered two consecutive codes into the appropriate boxes to activate virtual MFA. After I entered the codes, I clicked the blue button in the bottom right-hand corner and my MFA was set up.
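A virtual MFA device like Google Authenticator generates time-based one-time passwords (TOTP, RFC 6238): an HMAC-SHA1 over the number of 30-second periods elapsed, truncated to 6 digits. A minimal sketch, using the RFC's published test key rather than a real AWS seed:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, period: int = 30) -> str:
    """TOTP per RFC 6238: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short numeric code."""
    counter = struct.pack(">Q", unix_time // period)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key; at Unix time 59 the expected 6-digit code is 287082.
print(totp(b"12345678901234567890", 59))  # 287082
```

Because the code depends on the current 30-second window, AWS asks for two consecutive codes during setup to confirm the phone's clock and seed are in sync.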
In the IAM Dashboard, there are AWS’s five best practices for IAM with boxes that get checked after each practice is completed. This shows that my security status is 5 out of 5 complete.
Trusted Advisor
Trusted Advisor is an AWS resource that helps optimize your AWS environment, leading to reduced costs, increased performance, and improved security. Trusted Advisor inspects your AWS environment against best practices, identifies problems within five categories, and gives you links to fix the problems and return each service to best practice.
On my Trusted Advisor Dashboard, it shows the categories of cost optimization, performance, security, fault tolerance, and service limits, and the problems (or lack thereof) in each. Since I am not a heavy user of AWS, I only had security and service limit checks running, but as you can see, they are all green and checked off, meaning there are no problems in my environment.
Visio Diagram
(Visio diagram: both Amazon Linux EC2 instances are labeled with the private IP 172.31.93.106.)
Visio Functional Description
In my Visio diagram, there are two subnets, each containing an Amazon Linux EC2 instance: a private subnet and a public subnet. The box with the private subnet is in one regional zone, Regional Zone A; the box with the public subnet is in another, Regional Zone B. Amazon Linux EC2 instances in the private subnet cannot send traffic directly to the Internet. Instead, the instances in the private subnet have to use a Network Address Translation (NAT) gateway located in the public subnet to gain access to the Internet. Amazon Linux EC2 instances that reside in the public subnet, however, can access the Internet directly. Under each Amazon Linux EC2 instance is the IP address of my instance from the above screenshots, which is 172.31.93.106.

Both the Amazon Linux EC2 instance in the private subnet and the one in the public subnet are connected to the router in between the two regional zones. The router has a connection going to the VPC Gateway, which is a device used to connect the VPC to your Amazon Linux EC2 instance through VPC endpoints. Amazon's VPC (Virtual Private Cloud) gives you the ability to launch a plethora of AWS resources in a virtual network over which you have complete control, including IP ranges, subnets, routing tables, and network gateways. Amazon's VPC lets you connect to the Internet using Network Address Translation (NAT), to your data center, and to other VPCs, and it also lets you connect privately to Amazon Web Services' bountiful array of services through VPC endpoints, without using an Internet Gateway, NAT device, or firewall proxy. I didn't connect the Amazon VPC to anything in my diagram because it would've connected to basically everything and would've gotten even more confusing. The VPC NAT Gateway is connected to the customer gateway, which connects the customer network to the Virtual Private Cloud. Going back to the router, it has another connection leading out of it toward the Internet Gateway.
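What makes a subnet "public" or "private" in the diagram above comes down to its route table's default (0.0.0.0/0) target: an Internet Gateway versus a NAT gateway. A minimal sketch of that lookup, with hypothetical resource IDs and the default VPC CIDR:

```python
# Hypothetical route tables: the 0.0.0.0/0 target decides Internet access.
public_routes = {"172.31.0.0/16": "local", "0.0.0.0/0": "igw-0abc1234"}
private_routes = {"172.31.0.0/16": "local", "0.0.0.0/0": "nat-0def5678"}

def internet_path(route_table: dict) -> str:
    """Describe how a subnet with this route table reaches the Internet."""
    target = route_table.get("0.0.0.0/0", "")
    if target.startswith("igw-"):
        return "direct to the Internet via the Internet Gateway"
    if target.startswith("nat-"):
        return "outbound only, through the NAT gateway in the public subnet"
    return "no Internet access"

print(internet_path(public_routes))
print(internet_path(private_routes))
```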
The Internet Gateway allows your Amazon Linux EC2 instances in the VPC to connect to the Internet. The Internet Gateway can also perform Network Address Translation for instances that have public IPv4 addresses assigned to them.

In the top right-hand corner of my diagram is AWS Direct Connect. AWS Direct Connect gives you the ability to establish a private connection between AWS and a datacenter or an office. Because the connection is direct and private, AWS Direct Connect can reduce your network costs, increase your bandwidth, and provide a more stable and consistent network experience than typical Internet-based connections. AWS Direct Connect uses VLANs so that the connection can be separated into multiple virtual interfaces. Because of this, the user is able to use the same connection to access public and private resources, like objects and buckets stored in S3 and Amazon Linux EC2 instances, respectively, while Direct Connect maintains network partitioning between the public and private virtual environments. These virtual interfaces can also be changed and reconfigured at any time if your needs change.

AWS Cloud, in the top left portion of my diagram, allows users to access virtualized storage, servers, databases, and a variety of application services over the Internet. Everything in Amazon Web Services is stored in the Amazon cloud, so it should be connected to almost everything, if not everything, in my diagram, but like the Amazon VPC, it would get too busy and complicated.

In the center of my diagram is Amazon CloudFront, a content delivery network that transmits data, APIs, videos, and applications securely and quickly to viewers. Amazon CloudFront operates from edge locations so that your content is delivered with high availability, scalability, and performance anywhere in the world; it was specifically designed to sit at the edge for that exact purpose.
Amazon CloudFront can also be used to secure and speed up API calls, and it integrates with Amazon API Gateway by default. CloudFront also integrates seamlessly with AWS Shield for Layer 3 and 4 DDoS protection and with AWS WAF for Layer 7 protection. AWS Shield is a service that protects against Distributed Denial of Service attacks, which can target your websites, applications, and many hardware devices. AWS Shield is always on, detecting DDoS attacks and minimizing the downtime and latency problems they cause in applications; these attacks occur at Layers 3 and 4 of the OSI model. AWS Shield gives you a real-time look into attacks, 24/7 access to the AWS DDoS Response Team, and protection against potential spikes in charges for your EC2 instances, ELB, and CloudFront due to DDoS attacks. AWS Shield is also integrated with AWS WAF. AWS WAF, the Web Application Firewall, is a virtual firewall that protects your web applications from common web exploits, which can consume excessive resources or compromise security. AWS WAF gives the user control over which traffic they allow or block to and from their web applications. The user can create custom rules within AWS WAF to block specific attack patterns. While using AWS WAF, the user also has improved visibility into their web traffic, and AWS WAF can monitor requests that match specified filters or be used to create new rules and/or alerts.