Lab: Install and Configure the CloudWatch Logs Agent on a Running EC2 Linux Instance

With Amazon CloudWatch you can monitor your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. This lab focuses on installing the CloudWatch Logs agent to gather logs on a running EC2 Linux instance.

Step 1: Configure Your IAM Role policy for CloudWatch Logs

This JSON policy grants the instance permission to create log groups and log streams and to put log events:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": [ "*" ]
    }
  ]
}

Saved the policy with a descriptive name

Step 2: Install and Configure CloudWatch Logs on an Existing Amazon EC2 Instance

The process for installing the CloudWatch Logs agent differs depending on whether your Amazon EC2 instance is running Amazon Linux or another Linux distribution.

Selected CloudWatch monitoring for this instance

Create a role, attach the IAM policy created in Step 1, and then attach the role to this instance

Attaching IAM role to the instance

Connect to the instance

Run yum updates

Install the awslogs package from the yum repository
Start the awslogs service
Configure the awslogs service to start at boot every time the instance starts
Go to CloudWatch Logs and verify that the instance has already sent logs
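For reference, the agent's configuration lives in /etc/awslogs/awslogs.conf. A minimal sketch that ships /var/log/messages might look like the following (the group name, stream name, and timestamp format shown here are the common defaults from the AWS quick-start guide; adjust for your own log files):

```ini
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/messages]
file = /var/log/messages
log_group_name = /var/log/messages
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S
```

Each additional `[section]` tracks one more log file, so application logs can be shipped the same way.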

Additional metric filters can help surface errors within the logs

Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

Real world use cases of Cloudwatch:

CloudWatch can be used with Amazon EC2 Auto Scaling to automatically launch or terminate EC2 instances based on user-defined policies

CloudWatch can be used along with the CloudTrail service: CloudTrail writes log files to the S3 bucket specified when you configured CloudTrail, and those events can also be monitored in CloudWatch Logs.

CloudWatch can be used along with the Amazon SNS service in order to send messages when an alarm threshold has been reached on an instance.

Lab: Scale and Load Balance your Architecture

This lab walks you through using the Elastic Load Balancing (ELB) and Auto Scaling services to load balance and automatically scale your infrastructure.

Lab Scenario

Task 1: Create an AMI for Auto Scaling

In this task, we create an AMI from the existing Web Server 1. This will save the contents of the boot disk so that new instances can be launched with identical content.

Task 2: Create a Load Balancer

Create a load balancer that can balance traffic across multiple EC2 instances and Availability Zones.

Routing configures where to send requests that are sent to the load balancer. You will create a Target Group that will be used by Auto Scaling.
Auto Scaling will automatically register instances as targets later in the lab.

Task 3: Create a Launch Configuration and an Auto Scaling Group

In this task, we create a launch configuration for your Auto Scaling group. A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the AMI, the instance type, a key pair, security group and disks.

We will use the AMI created in Task 1
Selecting the AMI
Select the instance type
Enable CloudWatch detailed monitoring so that Auto Scaling can react quickly to changing utilization

Select preconfigured security groups to allow inbound HTTP and SSH traffic

This will launch EC2 instances in private subnets across both Availability Zones.
Traffic routing, target group, and health monitoring have been selected (general ELB requirements)

This will allow Auto Scaling to automatically add/remove instances, always keeping between 2 and 6 instances running.
This tells Auto Scaling to maintain an average CPU utilization across all instances at 60%. Auto Scaling will automatically add or remove capacity as required to keep the metric at, or close to, the specified target value. It adjusts to fluctuations in the metric due to a fluctuating load pattern.
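The target-tracking behavior described above can be illustrated with a little arithmetic: the new desired capacity is roughly the current capacity scaled by the ratio of the observed metric to the target, clamped to the group's minimum and maximum. This is a simplified sketch only; the real Auto Scaling algorithm also accounts for cooldowns, instance warm-up, and metric aggregation:

```python
import math

def desired_capacity(current_capacity, current_cpu, target_cpu=60,
                     min_size=2, max_size=6):
    """Rough illustration of how target tracking resizes the group.

    Simplified: the real algorithm also applies cooldowns and warm-up.
    """
    desired = math.ceil(current_capacity * current_cpu / target_cpu)
    # never go below the group minimum or above the maximum
    return max(min_size, min(max_size, desired))

print(desired_capacity(2, 90))   # 90% CPU on 2 instances -> scale out to 3
print(desired_capacity(4, 30))   # light load -> scale in toward the minimum
```

With the lab's 60% target and 2-to-6 instance bounds, sustained high CPU grows the group while idle periods shrink it back, which is exactly what the load test in Task 5 demonstrates.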
Create a tag for identification purposes

Task 4: Verify that Load Balancing is Working

In this task, we verify that Load Balancing is working correctly.

Instances created in Task 3 have now passed their status checks
Two instances can be seen in the target group
DNS name of the LB: LabELB-122504540.us-east-1.elb.amazonaws.com

You can see that the ELB has load balanced across the two instances

Task 5: Test Auto Scaling

The Auto Scaling group has a minimum of two instances and a maximum of six. Currently two instances are running because the minimum size is two and the group is not under any load. We will now increase the load to cause Auto Scaling to add additional instances.

Two alarms will be displayed. These were created automatically by the Auto Scaling group. They will automatically keep the average CPU load close to 60% while also staying within the limit of two to six instances.
Alarms refresh every 60 seconds.

After the load test, the AlarmHigh alarm fires, indicating high CPU utilization.
Auto Scaling has launched additional instances in response to the high CPU utilization of the web server instances.

Task 6: Terminate Web Server 1

The Web Server 1 instance was only used to create the AMI, so it can now safely be terminated.

Critical Thinking :

With AWS Elastic Load Balancing, you can achieve fault tolerance for any application by ensuring scalability, performance, and security. Elastic Load Balancing automatically distributes incoming application traffic across multiple targets (e.g., EC2 instances).
AWS ELB supports three types of load balancers:

  • Network Load Balancers – work at OSI Layer 4
  • Classic Load Balancers – legacy option, operate at both Layer 4 and Layer 7
  • Application Load Balancers – work at OSI Layer 7

A comparison of the AWS ELB types can be found at the link below:
https://aws.amazon.com/elasticloadbalancing/features/

AWS Load Balancer Architecture Components:

The Load Balancer is the single point of contact for clients. It distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones, which increases the availability of your application. The Listener checks for connection requests from clients, using the configured protocol/port, and forwards requests to one or more target groups. We define Rules for traffic forwarding, including target groups, conditions, and priority. The Target Group (TG) routes requests to one or more registered targets, such as EC2 instances, using the protocol/port number that we configured. A target can be registered with multiple target groups. Health checks are run on all targets registered to a TG.

AWS Auto Scaling:

Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application (up or down). Scaling policies will automatically launch or terminate instances as your application demands.

EC2 instances are grouped in Auto Scaling Groups:

  • Minimum number of EC2 instances
  • Desired number of EC2 instances
  • Maximum number of EC2 instances


Lab : Build Your DB Server and Interact With Your DB Using an App

This lab is designed to reinforce the concept of leveraging an AWS-managed database instance for solving relational database needs.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, which allows you to focus on your applications and business. Amazon RDS provides you with six familiar database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB.

Task 1: Create a Security Group for the RDS DB Instance

Create a security group to allow your web server to access your RDS DB instance. The security group will be used when you launch the database instance.

This security group will be used when launching the Amazon RDS database.

This configures the Database security group to permit inbound traffic on port 3306 from any EC2 instance that is associated with the Web Security Group.

Task 2: Create a DB Subnet Group

We have successfully created a DB subnet group across two AZs, with fault tolerance for the DB servers in mind.
This DB subnet group will be used when creating the database in the next task.

Task 3: Create an Amazon RDS DB Instance

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).

Select the compute platform to run the DB servers
Select the DB storage
Select the VPC and assign the security group
This will turn off backups, which is not normally recommended, but it will make the database deploy faster for this lab.

The database modifications are now complete.
Endpoint name: lab-db.cblsbdwyvgx9.us-east-1.rds.amazonaws.com

Task 4: Interact with Your Database

In this task we will open a web application running on your web server and configure it to use the database.

Preconfigured web server details

We try to connect to the database server with the provided details.
Successfully connected to the database from the application and retrieved data from it!
Final Lab architecture after completing this lab

Reflect and discuss what you have created in the lab on your blog:

In this lab we created an Amazon RDS instance across multiple AZs and successfully connected to it and retrieved data from it.

AWS Relational Database Service (RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS is fast and easy to administer, highly scalable, available and durable, inexpensive, and secure.

Amazon RDS is used for OLTP (online transaction processing) and comes in six flavors: Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. (DynamoDB, by contrast, is AWS's NoSQL database and is not part of RDS.)

Amazon RDS runs on virtual machines, but you cannot log in to their operating system; patching of the RDS operating system is solely Amazon's responsibility. Amazon RDS is not serverless, with Aurora Serverless being the exception.

There are two different types of backups in RDS: the first is automated backups and the second is database snapshots.

A little reading about Amazon RDS Read Replicas: they enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region, and they are used to increase the performance of read-heavy database access. Read replicas can span multiple AZs or Regions, and automated backups must be turned on before a read replica can be enabled. Almost all Amazon RDS database engines support read replicas except SQL Server.

The Amazon RDS Multi-AZ implementation is for disaster recovery only, not for increasing performance; it enables you to fail over from one AZ to another, which you can test by rebooting the RDS instance with failover.

Encryption for Amazon RDS is performed using the AWS Key Management Service (KMS). Once an RDS instance is encrypted, the data stored in its snapshots, automated backups, and read replicas is also encrypted.

Lab: Working with EBS

This lab focuses on Amazon Elastic Block Store (Amazon EBS), a key underlying storage mechanism for Amazon EC2 instances. In this lab, you will learn how to create an Amazon EBS volume, attach it to an instance, apply a file system to the volume, and then take a snapshot backup.

Task 1: Create a New EBS Volume

Navigate to Volumes.
Click Create Volume, then configure:
Volume Type: General Purpose SSD (gp2)
Size (GiB): 1 (NOTE: You may be restricted from creating large volumes.)
Availability Zone: Select the same Availability Zone as your EC2 instance.
Click Add Tag
In the Tag Editor, enter:
Key: Name
Value: My Volume

Task 2: Attach the Volume to an Instance

The new volume is now attached and ready to use

Task 3: Connect to Your Amazon EC2 Instance

Task 4: Create and Configure Your File System

Displaying the disk statistics on the instance (df -h)

Create a mount directory, then mount the volume
Add the mount entry to /etc/fstab
The output will now contain an additional line for /dev/xvdf
Create a file on the newly mounted volume
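The /etc/fstab entry added above might look like the following line (the device name /dev/xvdf matches the lab; the mount point /mnt/data-store, the ext3 file system, and the mount options are assumptions based on this lab's typical defaults):

```
/dev/xvdf  /mnt/data-store  ext3  defaults,noatime  1  2
```

With this entry in place, the volume is remounted automatically every time the instance reboots.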

Task 5: Create an Amazon EBS Snapshot

The file has been deleted.

Task 6: Restore the Amazon EBS Snapshot

Created a new mount directory and mounted the volume restored from the snapshot; the deleted file now appears again.

Reflect and discuss what you have created in the lab on your blog

In this lab we have ,

  • Created an Amazon EBS volume
  • Attached the volume to an EC2 instance
  • Created a file system on the volume
  • Added a file to the volume
  • Created a snapshot of your volume
  • Created a new volume from the snapshot
  • Attached and mounted the new volume to your EC2 instance
  • Verified that the file you created earlier was on the newly created volume

Amazon Elastic Block Store (EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same AZ. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.

For security reasons, data stored on EBS volumes may need to be encrypted; you can launch your EBS volumes as encrypted volumes. If you choose to create an encrypted EBS volume and attach it to your EC2 instance, the stored data and its snapshots are encrypted ("at rest"). With data encrypted on your EBS volumes, you also ensure security for data "in transit" between the instance and the volume.

Amazon EBS pricing depends on the following:
Volumes: total storage of all EBS volumes, charged per GB-month
Snapshots: total snapshot storage consumed in Amazon S3; EBS snapshot copying between Regions is also charged
Data Transfer: inbound is free, outbound is charged
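As a rough illustration of the three pricing dimensions above, here is a small sketch. The rates used are hypothetical placeholders, not actual AWS prices; look up the current pricing page for real numbers:

```python
def ebs_monthly_cost(volume_gb, snapshot_gb, egress_gb,
                     gb_month_rate=0.10, snapshot_rate=0.05, egress_rate=0.09):
    """Estimate a monthly EBS bill from the three pricing dimensions.

    All rates are hypothetical placeholders, not real AWS prices.
    Inbound data transfer is free, so it does not appear here.
    """
    return (volume_gb * gb_month_rate      # provisioned volume storage
            + snapshot_gb * snapshot_rate  # snapshot storage in S3
            + egress_gb * egress_rate)     # outbound data transfer

# 100 GB of volumes, 20 GB of snapshots, 50 GB of outbound transfer
print(round(ebs_monthly_cost(100, 20, 50), 2))  # prints 15.5
```

Note that you pay for the provisioned volume size, not the amount of data actually written to it.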

Lab: AWS Elastic Beanstalk

This activity provides you with an Amazon Web Services (AWS) account where an AWS Elastic Beanstalk environment has been pre-created for you. You will deploy code to it and observe the AWS resources that make up the Elastic Beanstalk environment.

Task 1: Access the Elastic Beanstalk environment

Task 2: Deploy a sample application to Elastic Beanstalk

Task 3: Explore the AWS resources that support your application

security group with port 80 open

load balancer that both instances belong to

An Auto Scaling group that runs from two to six instances, depending on the network load

Reflect and discuss what you have created in the lab on your blog

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS without having to learn about the infrastructure that runs those applications. You simply upload your application and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling and application health monitoring. Elastic Beanstalk will provision one or more AWS resources, (i.e. Amazon EC2 instances) to run your App.

To use Elastic Beanstalk, you create an app, upload an app version as a package (app.zip) to Elastic Beanstalk and then provide some information about the application. Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After your environment is launched, you can then manage your environment and deploy new application versions.

There is no additional charge for Elastic Beanstalk usage. You pay only for the underlying AWS resources that your application consumes. For example, if deploying the app with Elastic Beanstalk fires up one or several EC2 instances, you will pay for the EC2 usage only and not for Elastic Beanstalk.

Lab : AWS Lambda



 In this hands-on activity, you will create an AWS Lambda function. You will also create an Amazon CloudWatch event to trigger the function every minute. The function uses an AWS Identity and Access Management (IAM) role. This IAM role allows the function to stop an Amazon Elastic Compute Cloud (Amazon EC2) instance that is running in the Amazon Web Services (AWS) account.

Task 1: Create a Lambda function

Task 2: Configure the trigger

In this task, you will configure a scheduled event to trigger the Lambda function by setting a CloudWatch event as the event source (or trigger). The Lambda function can be configured to operate much like a cron job on a Linux server, or a scheduled task on a Microsoft Windows server. However, you do not need to have a server running to host it.

Note: A more realistic, schedule-based stopinator Lambda function would probably be triggered by using a cron expression instead of a rate expression. However, for the purposes of this activity, using a rate expression ensures that the Lambda function will be triggered soon enough that you can see the results.
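For reference, CloudWatch Events schedule expressions come in the two forms mentioned above. The cron line below is an illustrative example, not part of this activity's configuration:

```
rate(1 minute)       -- run every minute (the form used in this activity)
cron(0 8 * * ? *)    -- run at 08:00 UTC every day (fields: minutes hours day-of-month month day-of-week year)
```

A rate expression is simpler when you just need a fixed interval, while cron gives you calendar-style control.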

Task 3: Configure the Lambda function

In this task, you will paste a few lines of code to update two values in the function code. You do not need to write code to complete this task.


One of the charts shows you how many times your function has been invoked. There is also a chart that shows the error count and the success rate as a percentage.

Task 4: Verify that the Lambda function worked

The instance stopped after 1 minute

Discuss one of the following points in your blog

  • How could you modify the Lambda function that you created in the activity, to make it more real-world? 
  • Extend the activity by creating your own custom Lambda function

——————————————————————————————————————————–

I am going to log the state of an Amazon EC2 instance using CloudWatch Events. Logging the state of an EC2 instance is important for monitoring its health in a real cloud environment.

Step 1: Create an AWS Lambda Function

Create a Lambda function to log the state change events. You specify this function when you create your rule.

Using a built-in template for this
Create a basic rule to execute the Lambda function

Use this script to log instance state changes (visible in the function's console logs):

'use strict';
exports.handler = (event, context, callback) => {
    console.log('LogEC2InstanceStateChange');
    console.log('Received event:', JSON.stringify(event, null, 2));
    callback(null, 'Finished');
};

Step 2: Create a Rule

Create a rule to run your Lambda function whenever you launch an Amazon EC2 instance.

This is the final output of the created event pattern. This event pattern will trigger on the running state of the instance.
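A rule matching the running state described above would use an event pattern along these lines (a sketch based on the standard EC2 state-change event format):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running"]
  }
}
```

Dropping the "state" filter would make the rule fire on every state transition (pending, running, stopping, stopped, terminated), which is useful for a fuller audit trail.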

Step 3: Create an EC2 instance and start it

To test your rule, launch an Amazon EC2 instance. After waiting a few minutes for the instance to launch and initialize, you can verify that your Lambda function was invoked.

Wait until the instance is fully started and running

Step 4: Monitor CloudWatch metrics

In the navigation pane, choose Events > Rules, select the name of the rule that you created, and choose Show metrics for the rule.

You can see that the Lambda function was executed; the instance state has been recorded in CloudWatch.

Lab: Introduction to Amazon EC2

Task 1: Launch Your Amazon EC2 Instance

In this task, we launch an Amazon EC2 instance with termination protection

Launching new EC2 instance

Step 1: We are going to choose Amazon Linux 2 as the machine image

Step 2: Choose an Instance Type

Now we need to choose an instance type; instance types comprise varying combinations of CPU, memory, storage, and networking capacity (the hardware profile).

Step 3: Configure Instance Details

This page is used to configure the instance to suit your requirements. This includes networking and monitoring settings.

For Network, we selected Lab VPC, and for Enable termination protection we selected Protect against accidental termination.
We included a custom script that will be executed once the instance starts. The script will:
Install an Apache web server (httpd), configure the web server to automatically start on boot, activate the web server, and create a simple web page
——————————————————
#!/bin/bash
yum -y install httpd
systemctl enable httpd
systemctl start httpd
echo 'Hello From Your Web Server!' > /var/www/html/index.html

Step 4: Add Storage

Amazon EC2 stores data on a network-attached virtual disk called Elastic Block Store. This will be the boot volume of this instance.

Keep the default 8 GiB capacity

Step 5: Add Tags

Tags are like physical labels on a server; in the AWS Cloud environment they are used to identify and categorize resources.

Key: Name
Value: Web Server

Step 6: Configure Security Group

A security group is a set of firewall rules that control the traffic for your instance. On this page, you can add rules to allow specific traffic to reach your instance.

Security group name: Web Server security group
Description: Security group for my web server
In this lab, you will not log into your instance using SSH. Removing SSH access will improve the security of the instance.

Step 7: Review Instance Launch

Amazon EC2 uses public–key cryptography to encrypt and decrypt login information. To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance.

In this lab we will not log into your instance, so you do not require a key pair.

The instance got a public DNS name reachable over the internet: Instance: i-085301d58e0d679d4 (Web Server)
Public DNS: ec2-3-235-17-178.compute-1.amazonaws.com
The instance is running properly, as it displays the status running and 2/2 checks passed

Task 2: Monitor Your Instance

System status and instance status checks have passed
Displays Amazon CloudWatch metrics for your instance. Currently, there are not many metrics to display because the instance was recently launched.

System Log displays the console output of the instance, which is a valuable tool for problem diagnosis. It is especially useful for troubleshooting kernel problems and service configuration issues that could cause an instance to terminate or become unreachable before its SSH daemon can be started. If you do not see a system log, wait a few minutes for it to appear.

Installing the httpd package as per the user data specified in the previous task

The step below gets a screenshot of the instance console

This provides visibility as to the status of the instance, and allows for quicker troubleshooting.

Task 3: Update Your Security Group and Access the Web Server

When you launched the EC2 instance, you provided a script that installed a web server and created a simple web page. In this task, you will access content from the web server.

Click on “Description” tab
Copy the public IP and paste it into a local web browser. The web server is not currently accessible because the security group is not permitting inbound traffic on port 80, which is used for HTTP web requests. This demonstrates using a security group as a firewall to restrict the network traffic that is allowed in and out of an instance. To correct this, we will update the security group to permit web traffic on port 80.

Go back to the AWS console, then open Security Groups

Select Web Server security group
Currently no inbound rules have been configured
Edit and configure Type = HTTP, Source = Anywhere
Now the web server is accessible via its public IP

Task 4: Resize Your Instance: Instance Type and EBS Volume

Here we will resize the instance type based on ongoing requirements. Similarly, you can change the size of a disk.

First need to stop the running instance

The instance is now fully stopped
Now changing the instance type to t2.small, which has twice the memory of the previous type
Accessing Volumes
Go to Modify Volume
Increased the size from 8 GiB to 10 GiB

Task 5: Explore EC2 Limits

Amazon EC2 provides different resources that you can use. These resources include images, instances, volumes, and snapshots. When you create an AWS account, there are default limits on these resources on a per-region basis.

Task 6: Test Termination Protection

You can delete your instance when you no longer need it. This is referred to as terminating your instance. You cannot connect to or restart an instance after it has been terminated.

Navigate to Instances

About to terminate the instance
Termination is not allowed, since we configured termination protection in a previous task

We need to disable termination protection in order to terminate the instance

Disabling Termination Protection
Now the instance termination is in progress
The instance is now successfully terminated!

Discuss about the LAB:

Throughout the lab, we saw that the key choices when launching an instance are the machine image and the instance type. The instance's type is determined by the configuration of the underlying hardware, which can be General Purpose, Compute Optimized, Memory Optimized, etc. The hardware used decides the virtual machine's memory, storage, computing capabilities, and efficiency. AWS instance types are grouped into families with several subcategories in each family. These subcategories are based on the hardware on which they run, such as the number of virtual CPUs, memory (RAM), storage volume, and bandwidth capacity into and out of the instances. AWS instance types should be selected based on the CPU and memory needs of the workload.

EC2 – Basic Terminology

Amazon Elastic Compute Cloud (EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud.  AWS virtual compute environments are called instances. Amazon Machine Images (AMIs) are available to choose from – preconfigured templates for EC2 instances

Instance types – different configurations of CPU, memory, storage and networking capacity.

Secure login to EC2 instances with key pairs (you store private key, AWS stores the public key).

You can attach storage volumes to your EC2 instances – instance storage volumes – ephemeral storage.

Persistent storage volumes for your data are available through Elastic Block Store (EBS) – Amazon EBS Volumes

Store data in multiple locations (Regions and AZs).

You can define basic security using the AWS built-in firewall, the security group: the protocol, port, and source IPs that you permit or deny from reaching your EC2 instances.

Elastic IP address – static IPv4 public address that you can attach to your EC2 instance (i.e. for a website)

Create and attach tags (labels) to your EC2 instances.

EC2 AMI Types

When you launch an EC2 instance, you first have to select an AMI (Amazon Machine Image), which basically represents the software selection. All AMIs are categorized as either backed by Amazon EBS or backed by instance store. For AMIs with a root volume backed by EBS, the root volume is by default deleted when the instance terminates (though this delete-on-termination behavior can be disabled); for instance store volumes, data persists only while the instance is running.

EC2 pricing

There are four ways to pay for Amazon EC2 instances: On-Demand Instances, Reserved Instances, Spot Instances, and Dedicated Hosts. With On-Demand Instances, you pay for compute capacity per hour or per second, depending on which instances you run.

Amazon EC2 Spot Instances allow you to request spare Amazon EC2 computing capacity for up to 90% off the On-Demand price. Common use cases: applications that have flexible start and end times, applications that are only feasible at very low compute prices, and users with urgent needs for a large amount of additional capacity.

Amazon EC2 Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand Instance pricing. For applications that have predictable usage, Reserved Instances can provide significant savings compared to On-Demand Instances. They are best for customers who commit to using EC2 over a 1-3 year term to reduce their total computing costs.

An Amazon EC2 Dedicated Host is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, etc. They can also help you meet compliance requirements.

Lab: Build your VPC and Launch a Web Server

Lab Scenario

Task 1: Create Your VPC

In this task, you will use the VPC Wizard to create a VPC an Internet Gateway and two subnets in a single Availability Zone. An Internet gateway (IGW) is a VPC component that allows communication between instances in your VPC and the Internet.

In Task 1 we are going to configure the VPC in the first Availability Zone (AZ A)

Log into the AWS Console

In the AWS Management Console, on the Services  menu, click VPC.
Click Launch VPC Wizard

In the left navigation pane, click VPC with Public and Private Subnets (the second option).
Configured the VPC name, Availability Zone, public subnet name, private subnet name, and Elastic IP allocation ID, then created the VPC
The VPC creation process takes a little while

Task 2: Create Additional Subnets

In this task, you will create two additional subnets in a second Availability Zone. This is useful for creating resources in multiple Availability Zones to provide High Availability.

In Task 2, we are going to configure subnets in Availability Zone B
In the left navigation pane, click Subnets.

Configure the second public subnet with Name tag: Public Subnet 2, VPC: Lab VPC, the second Availability Zone, and IPv4 CIDR block: 10.0.2.0/24

Configure the second private subnet with Name tag: Private Subnet 2, VPC: Lab VPC, the second Availability Zone, and IPv4 CIDR block: 10.0.3.0/24
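As a quick sanity check, every subnet CIDR should fall inside the VPC's CIDR and no two subnets may overlap. A small sketch, assuming the Lab VPC uses 10.0.0.0/16 (this lab's usual default) and the wizard created 10.0.0.0/24 and 10.0.1.0/24 in the first AZ:

```python
import ipaddress

# Assumption: Lab VPC CIDR is 10.0.0.0/16; the first two subnets were
# created by the VPC wizard, the last two were added in this task.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network(c) for c in
           ("10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24")]

# every subnet must sit inside the VPC CIDR
assert all(s.subnet_of(vpc) for s in subnets)

# no two subnets may overlap
assert not any(a.overlaps(b)
               for i, a in enumerate(subnets)
               for b in subnets[i + 1:])

print("subnet plan is valid")
```

This kind of check is handy before creating subnets, because AWS rejects overlapping CIDRs only at creation time.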

Now configure the private subnets to route internet-bound traffic to the NAT Gateway so that resources in the private subnets can connect to the internet while still remaining private. This is done by configuring a route table.

A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in a VPC must be associated with a route table; the route table controls routing for the subnet.

Below are the steps to configure private routing table :

This is the private route table; we can see the NAT Gateway as the target for destination 0.0.0.0/0

Named it Private Route Table
Added Private Subnet 1 and Private Subnet 2 to this private route table

Below are the steps to configure public routing table :

Select the route table with Main = No and VPC = Lab VPC
Named it Public Route Table
Note that destination 0.0.0.0/0 is set to target igw-xxxxxxxx, which is the Internet Gateway. This means that internet-bound traffic will be sent straight to the internet via the Internet Gateway.
Associate the public subnets with this public route table

Task 3: Create a VPC Security Group

In this task, you will create a VPC security group, which acts as a virtual firewall. When you launch an instance, you associate one or more security groups with the instance. You can add rules to each security group that allow traffic to or from its associated instances.

In the left navigation pane, click Security Groups.

Create a security group with Security group name: Web Security Group, Description: Enable HTTP access, VPC: Lab VPC

Allow inbound HTTP requests from anywhere

Task 4: Launch a Web Server Instance

In this task, you will launch an Amazon EC2 instance into the new VPC. You will configure the instance to act as a web server.

Invoking EC2 services
Select the Amazon Linux 2 AMI
Select t2.micro as the instance type

Configure Network: Lab VPC, Subnet: Public Subnet 2 (not Private!),Auto-assign Public IP: Enable

Copy and paste this code into the User data box:

#!/bin/bash
# Install Apache Web Server and PHP
yum install -y httpd mysql php
# Download Lab files
wget https://aws-tc-largeobjects.s3.amazonaws.com/AWS-TC-AcademyACF/acf-lab3-vpc/lab-app.zip
unzip lab-app.zip -d /var/www/html/
# Turn on web server
chkconfig httpd on
service httpd start

Leave default settings

Click Add Tag, then configure: Key: Name, Value: Web Server 1
Choose Select an existing security group, then select Web Security Group.

When prompted with a warning that you will not be able to connect to the instance through port 22, click Continue

Copy the Public DNS: ec2-54-164-0-97.compute-1.amazonaws.com

Open a new web browser tab, paste the Public DNS value and press Enter.
You should see a web page displaying the AWS logo and instance meta-data values.

Lab Completed !!!

Discussion about the LAB:

Brief on the VPC components in the LAB

Looking at this LAB, the VPC components needed to make web servers publicly accessible while keeping application and database servers private are:

1) One VPC with a public subnet for web servers and a private subnet for database or application servers.

2) To route traffic between these two subnets you need a router. The implied router in AWS directs communication between subnets, and its behaviour is expressed through route tables. A route table is the gatekeeper in VPC networking, denoting how traffic flows in and out of each subnet in the VPC. The implied router does not make intelligent routing decisions on its own, so routing decisions must be configured manually in route tables.

3) To get internet access for these subnets we need an Internet Gateway. The Internet Gateway is a highly available component of the VPC that connects the VPC to the internet.

4) If a private subnet needs to access the internet (for example, to download host update patches), we need to configure a NAT gateway, a VPC service that resides in a public subnet. For this to work, configure the route table of each subnet so that internet-bound traffic points to either the Internet Gateway (public subnets) or the NAT gateway (private subnets).

5) If a private subnet needs to access AWS services such as an S3 bucket, we can set up a VPC endpoint, which lets the private subnet connect to S3 over the AWS backbone directly rather than through the Internet Gateway (saving cost on egress traffic).
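A gateway endpoint for S3 can be sketched with the CLI as below; the VPC and route-table IDs are placeholders, the service name assumes the us-east-1 region, and the command is printed rather than executed:

```shell
#!/bin/bash
# Sketch only: IDs are placeholders; the service name assumes us-east-1.
out=$(cat <<'EOF'
aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-xxxxxxxx
EOF
)
echo "$out"
```

Passing the private subnet's route table ID makes AWS add the S3 prefix-list route there automatically, so S3 traffic never touches the Internet Gateway.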

6) Security within the VPC is provided by network access control lists (NACLs) and security groups. NACLs secure inbound and outbound traffic at the subnet level, while security groups secure access at the EC2 instance level.

A little about VPC Peering and Transit VPC

In summary, VPC peering is an AWS networking feature that enables instance-to-instance connectivity between two VPCs. Suppose we are really concerned about a host vulnerability on one of the instances in the example above; then we would need to separate the DB and web subnets across two VPCs. To enable communication between those instances we would need VPC peering. However, VPC peering cannot be used for transitive network traffic, which means an instance in one VPC cannot use an Internet Gateway in another VPC to reach the internet.
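Setting up a peering connection takes three steps: request, accept, and add routes on each side for the peer's CIDR. A CLI sketch (all IDs and the 10.1.0.0/16 peer CIDR are placeholders; commands are printed for review only):

```shell
#!/bin/bash
# Sketch only: VPC, peering-connection and route-table IDs are placeholders.
out=$(cat <<'EOF'
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaaaaaa --peer-vpc-id vpc-bbbbbbbb
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-xxxxxxxx
EOF
)
echo "$out"
```

Note the last step: without a route pointing the peer's CIDR at the pcx- connection in each VPC's route table, the peering link carries no traffic.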

If you really need to enable transit traffic (typically a hub-and-spoke design), then you need to define a new VPC dedicated to transit traffic, running router software as an instance that can peer with other instances (eventually connecting to their implied route tables), with static or dynamic routing protocols installed to route traffic. This type of scenario is used in hybrid cloud environments where multiple AWS VPCs in a single region or across regions need to connect to an on-premises network. Site-to-site VPN connectivity would be involved in this type of VPC design.

Activity: AWS Security Best Practice (IAM & S3 Bucket)

IAM is an AWS service that provides user provisioning and access control capabilities for AWS users. AWS administrators can use IAM to create and manage AWS users and groups and apply granular permission rules to users and groups of users to limit access to AWS APIs and resources. To make the most of IAM, organizations should:

  • When creating IAM policies, ensure that they’re attached to groups or roles rather than individual users to minimize the risk of an individual user getting excessive and unnecessary permissions or privileges by accident.
  • Provision access to a resource using IAM roles instead of providing an individual set of credentials for access to ensure that misplaced or compromised credentials don’t lead to unauthorized access to the resource.
  • Ensure IAM users are given minimal access privileges to AWS resources that still allow them to fulfill their job responsibilities.
  • As a last line of defense against a compromised account, ensure all IAM users have multifactor authentication activated for their individual accounts, and limit the number of IAM users with administrative privileges.
  • Rotate IAM access keys regularly and standardize on a selected number of days for password expiration to ensure that data cannot be accessed with a potential lost or stolen key.
  • Enforce a strong password policy requiring minimum of 14 characters containing at least one number, one upper case letter, and one symbol. Apply a password reset policy that prevents users from using a password they may have used in their last 24 password resets.
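The password rules above can be enforced account-wide with a single CLI call. In this sketch the command is printed for review, and the 90-day expiry is an illustrative value (the text only says to standardize on a selected number of days):

```shell
#!/bin/bash
# Sketch only: 90 days is an example expiry; pick your own standard.
out=$(cat <<'EOF'
aws iam update-account-password-policy --minimum-password-length 14 --require-numbers --require-uppercase-characters --require-symbols --password-reuse-prevention 24 --max-password-age 90
EOF
)
echo "$out"
```

The --password-reuse-prevention 24 flag implements the "last 24 password resets" rule directly.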

The AWS Account Root User: When you first create an Amazon Web Services (AWS) account, you begin with a single sign-in identity that has complete access to all AWS services and resources in the account. It is strongly recommended that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks. To view the tasks that require you to sign in as the root user, see AWS Tasks That Require Root User.

An Amazon S3 bucket is a public cloud storage resource available in Amazon Web Services’ (AWS) Simple Storage Service (S3). The following best practices for Amazon S3 can help prevent security incidents.

  • Ensure that your Amazon S3 buckets use the correct policies and are not publicly accessible unless required by the business.
  • Implement least-privilege access: grant only the permissions that are required to perform a task. Implementing least-privilege access is fundamental in reducing security risk and the impact that could result from errors or malicious intent.
  • Monitor your S3 resources
  • You can use HTTPS (TLS) to help prevent potential attackers from eavesdropping on or manipulating network traffic using person-in-the-middle or similar attacks.
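The HTTPS-only point can be enforced with a bucket policy that denies any request made over plain HTTP, using the aws:SecureTransport condition key. A common sketch (the bucket name example-bucket is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```

Because this is an explicit Deny, it overrides any Allow elsewhere, so even an otherwise-permitted caller cannot use plain HTTP.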

Lab: Introduction to AWS IAM

Accessing the AWS Management Console

Starting LAB
AWS Management Console 

LAB Diagram

Task 1: Explore the Users and Groups

Viewing users in the IAM console

user-1 does not have any permissions.
user-1 also is not a member of any groups.

Groups in IAM dashboard

Summary page for the EC2-Support group. This group has a Managed Policy associated with it, called AmazonEC2ReadOnlyAccess.

Under Actions, click the Show Policy link.

Summary page of the S3-Support group. The S3-Support group has the AmazonS3ReadOnlyAccess policy attached.

In the Actions menu, click the Show Policy link.
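For reference, the AmazonS3ReadOnlyAccess managed policy looks roughly like the following simplified sketch (the actual AWS-managed version may include additional actions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
```

The wildcard actions cover every Get and List API call, which is why group members can browse buckets and objects but cannot modify anything.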

Summary page of the EC2-Admin group. This group is slightly different from the other two. Instead of a Managed Policy, it has an Inline Policy, which is a policy assigned to just one User or Group. Inline Policies are typically used to apply permissions for one-off situations.

Under Actions, click Show Policy to view the policy.

Task 2: Add Users to Groups

In the left navigation pane, click Groups. Click the S3-Support group, then click Add Users to Group.

Select user-1, then click Add Users.

In the Users tab you will see that user-1 has been added to the group.

Summary page of Users

Summary page of Groups

Task 3: Sign-In and Test Users

Copy the IAM users sign-in link

Paste the sign-in link into a browser and log in with the user-1 credentials

In user-1’s AWS Management Console, click S3

Since your user is part of the S3-Support Group in IAM, they have permission to view a list of Amazon S3 buckets and their contents.

In the left navigation pane, click Instances.

You cannot see any instances! Instead, it says An error occurred fetching instance data: You are not authorized to perform this operation. This is because your user has not been assigned any permissions to use Amazon EC2.


Sign user-1 out of the AWS Management Console.
Sign in with user-2.

In the navigation pane on the left, click Instances. You are now able to see an Amazon EC2 instance because you have Read Only permissions.

Attempt to Stop the instance.

You will not be able to make any changes to Amazon EC2 resources.

You will receive an error stating You are not authorized to perform this operation. This demonstrates that the policy only allows you to view information, without making changes.

In the Services, click S3.

You will receive an Access Denied error because user-2 does not have permission to use Amazon S3.
Sign in as user-3, who has been hired as your Amazon EC2 administrator.

Click on EC2 Services

As an EC2 Administrator, you should now have permissions to Stop the Amazon EC2 instance

In the Actions menu, click Instance State > Stop.

Click “Yes, Stop”.

The instance will enter the stopping state and will shut down.

Ending LAB

A panel will appear, indicating that “DELETE has been initiated…”.

Discuss the use of users, groups, roles and policies within your AWS account:

The key to understanding AWS Identity and Access Management (IAM) lies in two concepts: authentication and authorization, which together enable you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow or deny their access to AWS resources.

An IAM user is an identity created for access to AWS, for example via the AWS Management Console. An IAM group refers to a collection of IAM users; it helps simplify the assignment of permissions.

 A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.

Authentication credentials are long-term for users and groups; for roles, on the other hand, authentication credentials are temporary.
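A role is assumed at runtime, yielding temporary credentials via AWS STS. For example, a trust policy like the following standard sketch is what lets EC2 instances assume a role, as was done when attaching the instance role in the earlier lab:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The trust policy defines who may assume the role; the permissions attached to the role (such as the CloudWatch Logs policy from the earlier lab) define what the assumed role can do.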