AWS Interview Questions and Answers: Part 2

Q) What is Route 53?
Amazon Route 53 is a highly available and scalable cloud DNS web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into numeric IP addresses like 192.0.2.44 that computers use to connect to each other.
Amazon Route 53 performs three main functions:
1. Register domain names.
2. Route internet traffic to the resources for your domain.
3. Check the health of your resources.
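The name-to-address translation above can be sketched as a simple record lookup. This is a hedged illustration of the concept, not Route 53's implementation; the zone data and addresses below are hypothetical examples (RFC 5737 documentation addresses).

```python
# Minimal sketch of what DNS resolution does conceptually: map a
# (hostname, record type) pair to the addresses the DNS service answers with.
ZONE = {
    ("www.example.com", "A"): ["192.0.2.44"],
    ("api.example.com", "A"): ["192.0.2.10", "192.0.2.11"],
}

def resolve(name: str, rtype: str = "A") -> list:
    """Return the record set for (name, type), or an empty list (no such name)."""
    return ZONE.get((name.rstrip("."), rtype), [])

print(resolve("www.example.com"))      # ['192.0.2.44']
print(resolve("missing.example.com"))  # []
```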

Q) What is Elastic Load Balancing?
The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy targets and routes traffic only to healthy targets.
The load balancer serves as a single point of contact for clients. This increases the availability of your application. You can add and remove targets from your load balancer as your needs change, without disrupting the overall flow of requests to your application. Elastic Load Balancing scales your load balancer as traffic to your application changes over time. Elastic Load Balancing can scale to the vast majority of workloads automatically.
You configure your load balancer to accept incoming traffic by specifying one or more listeners. A listener is a process that checks for connection requests. It is configured with a protocol and port number for connections from clients to the load balancer and a protocol and port number for connections from the load balancer to the instances.
Elastic Load Balancing supports four types of load balancers: Application Load Balancer, Network Load Balancer, Gateway Load Balancer and Classic Load Balancer.
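The listener and target-distribution ideas above can be sketched in a few lines. This is a simplified illustration, assuming round-robin distribution (one common strategy) and hypothetical instance IDs, not the actual ELB algorithm.

```python
# A listener pairs a client-side (protocol, port) with a target-side one,
# e.g. HTTPS from clients terminated at the load balancer, plain HTTP to targets.
from itertools import cycle

listener = {"client": ("HTTPS", 443), "target": ("HTTP", 80)}

targets = ["i-0aaa", "i-0bbb", "i-0ccc"]   # hypothetical healthy instance IDs
rr = cycle(targets)                         # simple round-robin over targets

def route_request() -> str:
    """Pick the next target for an incoming request."""
    return next(rr)

assignments = [route_request() for _ in range(6)]
print(assignments)  # each of the 3 targets receives 2 of the 6 requests
```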


Q) What is Connection Draining?
In AWS, when you enable connection draining on a load balancer, any back-end instances that you deregister will complete any requests that are in progress before deregistration.
Likewise, if any back-end instance fails a health check, then the load balancer stops sending requests to the unhealthy instance but will allow existing requests to complete.
Connection Draining is also integrated with Auto Scaling, making it even easier to manage the capacity behind your load balancer. When Connection Draining is enabled, Auto Scaling will wait for outstanding requests to complete before terminating instances.
When you enable connection draining, you can specify a maximum time for the load balancer to keep connections alive before reporting the instance as de-registered. The maximum timeout value can be set between 1 and 3,600 seconds (the default is 300 seconds). When the maximum time limit is reached, the load balancer forcibly closes connections to the de-registering instance.
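The timeout rule above can be sketched as a single check: an in-flight connection on a deregistering instance is allowed to continue until the draining timeout elapses, after which it is forcibly closed. This is a conceptual sketch, not ELB's implementation.

```python
DEFAULT_DRAIN_TIMEOUT = 300  # seconds (ELB default; configurable 1-3600)

def connection_allowed(seconds_since_deregister: float,
                       timeout: int = DEFAULT_DRAIN_TIMEOUT) -> bool:
    """True while existing connections may keep running on a draining instance."""
    return seconds_since_deregister < timeout

print(connection_allowed(120))                  # True  - still within default timeout
print(connection_allowed(301))                  # False - past default, force close
print(connection_allowed(3000, timeout=3600))   # True  - larger custom timeout
```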

Q) What is a Sticky session?
Sticky sessions allow the load balancer to bind a client's session to a specific backend EC2 instance using cookies. If a client makes a request to the ELB, the response includes a cookie and the request is routed to a specific backend server. All future requests from that client are routed to the same backend server.
This is useful when your application is stateful and requires specific client requests to be routed to the same backend server each time.

Q) Cross-zone load balancing?
Each load balancer node distributes traffic across the registered targets in its enabled Availability Zones.
With cross-zone load balancing enabled, each node distributes traffic evenly across the registered instances in all enabled Availability Zones.
If cross-zone load balancing is disabled, each node distributes traffic only across the registered instances in its own Availability Zone.
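The difference is easiest to see with uneven zones. The sketch below computes each target's share of total traffic under both settings; the zone layout is a hypothetical example.

```python
# One target in zone a, three in zone b: without cross-zone, zone a's lone
# target absorbs that zone's entire 50% share; with cross-zone, all four
# targets get an equal 25%.
zones = {"us-east-1a": ["i-a1"], "us-east-1b": ["i-b1", "i-b2", "i-b3"]}

def per_target_share(zones: dict, cross_zone: bool) -> dict:
    shares = {}
    if cross_zone:
        targets = [t for ts in zones.values() for t in ts]
        for t in targets:
            shares[t] = 1 / len(targets)           # even across all targets
    else:
        zone_share = 1 / len(zones)                # traffic split evenly per zone
        for ts in zones.values():
            for t in ts:
                shares[t] = zone_share / len(ts)   # then split within the zone
    return shares

print(per_target_share(zones, cross_zone=False))   # i-a1: 50%, each i-b*: ~16.7%
print(per_target_share(zones, cross_zone=True))    # every target: 25%
```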


Q) What is Internet Gateway?
An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet.

Q) What is a NAT Gateway?
A NAT gateway enables instances in a private subnet to connect to the Internet (for example, for software updates) or to other AWS services, while preventing the Internet from initiating connections with those instances. A NAT device forwards traffic from the instances in the private subnet to the Internet or other AWS services, and then sends the response back to the instances. When traffic goes to the Internet, the source IPv4 address is replaced with the NAT device's address; similarly, when the response traffic comes back to those instances, the NAT device translates the address back to the instances' private IPv4 addresses.
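The address translation described above can be sketched with a small mapping table. This is a conceptual simulation of source NAT, not the NAT gateway implementation; all addresses are RFC 1918/5737 example values.

```python
NAT_PUBLIC_IP = "203.0.113.5"   # hypothetical public address of the NAT device

class Nat:
    def __init__(self):
        self.table = {}          # public_port -> (private_ip, private_port)
        self.next_port = 1024

    def outbound(self, private_ip, private_port):
        """Rewrite an outbound packet's source and remember the mapping."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return NAT_PUBLIC_IP, public_port       # what the Internet sees

    def inbound(self, public_port):
        """Translate a reply back, or drop (None) if no mapping exists -
        which is why the Internet cannot initiate connections inward."""
        return self.table.get(public_port)

nat = Nat()
src = nat.outbound("10.0.1.25", 44321)
print(src)                  # ('203.0.113.5', 1024)
print(nat.inbound(src[1]))  # ('10.0.1.25', 44321) - reply reaches the instance
print(nat.inbound(9999))    # None - unsolicited inbound traffic is refused
```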

Q) Auto Scaling?
Auto Scaling is a service that allows you to scale Amazon EC2 capacity automatically, scaling out or scaling in according to criteria that you define. Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application.

Q) Auto Scaling Components
Auto Scaling has the following three components:
1. Launch Configuration or Launch template.
2. Auto Scaling Group.
3. Scaling plan (optional).

1. Launch Configuration/Template.
A launch configuration is the template that Auto Scaling uses to launch new instances. It is composed of the configuration name, Amazon Machine Image (AMI), Amazon EC2 instance type, security group, and instance key pair.
Each Auto Scaling group can have only one launch configuration at a time.

2. Auto Scaling Group.
An Auto Scaling group is a collection of Amazon EC2 instances managed by the Auto Scaling service. Each Auto Scaling group contains configuration options that control when Auto Scaling should launch new instances and terminate existing instances.
An auto-scaling group must contain a name and a minimum and a maximum number of instances that can be in the group. You can optionally specify the desired capacity, which is the number of instances that the group must have at all times. If you do not specify the desired capacity, then the default desired capacity is the minimum number of instances that you specify.
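The capacity rules above can be sketched as a small helper: desired capacity defaults to the minimum when unspecified and is always kept within [min, max]. This is a conceptual sketch, not AWS's implementation.

```python
def effective_capacity(min_size: int, max_size: int, desired=None) -> int:
    """Resolve an Auto Scaling group's instance count from its settings."""
    if desired is None:
        desired = min_size                      # default when unspecified
    return max(min_size, min(desired, max_size))  # clamp to [min, max]

print(effective_capacity(2, 10))              # 2  - defaults to the minimum
print(effective_capacity(2, 10, desired=5))   # 5  - within bounds
print(effective_capacity(2, 10, desired=50))  # 10 - clamped to the maximum
```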

3. Scaling Plan.
It is a set of instructions that tells Auto Scaling whether to scale out, launching new EC2 instances referenced in the associated launch configuration, or to scale in and terminate instances.

Amazon EC2 auto-scaling provides several ways for you to scale the auto scaling group:
1. Maintain current instance levels at all times
2. Manual scaling
3. Scale based on a schedule
4. Scale based on demand
    Types of Scaling policies:
    i. Target tracking Scaling
    ii. Step scaling
    iii. Simple scaling
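The idea behind target tracking can be sketched as a proportional adjustment: scale the instance count so the per-instance metric (for example, average CPU) moves toward the target value. This mirrors the concept only, not AWS's exact algorithm.

```python
import math

def target_tracking(current_instances: int, metric: float, target: float) -> int:
    """Return a new desired instance count that would bring the average
    metric back to the target, rounding up to stay conservative."""
    return math.ceil(current_instances * metric / target)

print(target_tracking(4, metric=80.0, target=50.0))  # 7 - CPU too high, scale out
print(target_tracking(4, metric=20.0, target=50.0))  # 2 - CPU low, scale in
```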

Q) VPC Endpoint.
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink, without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
Instances in your VPC do not require public IP addresses to communicate with resources in the service.
Traffic between your VPC and the other service does not leave the AWS network.

Interface Endpoint:
An interface endpoint is an elastic network interface (ENI) with a private IP address. The ENI acts as the entry point for traffic destined for a particular service.
Gateway Endpoint:
A gateway endpoint is a gateway that you specify as a target for a route in your route table, for traffic destined to supported services (Amazon S3 and DynamoDB).

Q) What is AWS Direct Connect?
AWS Direct Connect enables you to securely connect your AWS environment to your on-premises data center or office location over a standard 1 Gbps or 10 Gbps Ethernet fiber-optic connection. AWS Direct Connect offers a dedicated, high-speed, low-latency connection that bypasses internet service providers in your network path.

Q) Amazon Elastic File System (Amazon EFS).
Amazon Elastic File System (EFS) provides simple, scalable file storage in the cloud for use with Amazon EC2 or on-premises servers.
An Amazon EFS file system can be mounted on Amazon EC2 instances or on on-premises servers through an AWS Direct Connect connection.
Amazon EFS has a simple web services interface that allows you to create and configure file systems quickly and easily. The EFS service manages all the file storage infrastructure for you.
EFS allows you to avoid the complexity of deploying, patching, and maintaining a file system configuration.
Amazon EFS service is designed to be highly scalable, highly available, and highly durable.
An Amazon EFS file system stores data and metadata across multiple Availability Zones in a region.
Amazon EFS supports NFS version 4 (NFSv4.0 and NFSv4.1).

Q) Hosted Zone in Route53?
An Amazon Route 53 hosted zone is a collection of records for a specified domain.
You create a hosted zone for your domain and then create records to tell DNS how you want traffic to be routed for that domain.
Basically, a hosted zone is a container that holds information about how you want to route traffic for your domain and its subdomains.
There are two types of hosted zones:
1. Public hosted zones
Public hosted zones contain records that specify how you want to route traffic on the internet.
2. Private hosted zones
Private hosted zones contain records that specify how you want to route traffic in an Amazon VPC.

Q) What is a Sticky Session?
When a client is load balanced to a particular target, every subsequent request from that client goes to the same target. This binding between client and target is called a session, i.e. a sticky session.

Q) What is the Idle Timeout?
When a client connects to an Application Load Balancer listener, it establishes a TCP connection. HTTP and HTTPS requests and responses traverse this connection.
When no traffic is going over this connection, the connection is idle but remains open.
The idle timeout controls how long a TCP connection can remain idle before the load balancer closes it.
The idle timeout applies to the TCP connection between the client and the load balancer.
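The idle-timeout rule can be sketched as a single comparison. This is a conceptual illustration; 60 seconds is the Application Load Balancer's default idle timeout.

```python
IDLE_TIMEOUT = 60  # seconds (ALB default; configurable)

def should_close(now: float, last_activity: float,
                 idle_timeout: float = IDLE_TIMEOUT) -> bool:
    """True once the connection has been idle longer than the timeout."""
    return (now - last_activity) > idle_timeout

print(should_close(now=100.0, last_activity=70.0))   # False - only 30 s idle
print(should_close(now=200.0, last_activity=100.0))  # True  - 100 s idle
```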

Q) What is the Keep-Alive Interval?
The keep-alive interval deals with the connection between the load balancer and back-end targets. The keep-alive setting controls how long the web server maintains an idle TCP connection with the load balancer.

Q) AWS DNS Routing Policy?
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to DNS queries.
1. Simple routing policy: for a single resource that performs a given function, for example, a web server or an Elastic Load Balancer.
2. Failover routing policy: Configure two resources in active-passive failover mode. If the active resource is healthy, 100% of the traffic goes to that resource. If active is unhealthy, traffic is routed to the passive resource.
3. Geolocation routing policy: Route traffic based on where the requester is located.
4. Geoproximity routing policy: If you have resources in multiple regions, you can route traffic to the nearest location, and optionally, shift traffic from resources in one location to another.
5. Latency routing policy: If you have resources in multiple regions, you can route traffic to the region that provides the best latency.
6. Multivalue answer routing policy: Route 53 responds with up to eight healthy records selected at random.
7. Weighted routing policy: Route traffic to multiple resources in proportions that you specify.
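The weighted policy (item 7) is easy to demonstrate: each record receives traffic in proportion to its weight, which is exactly what `random.choices` implements. The record names and weights below are hypothetical.

```python
import random

records = {"server-a": 3, "server-b": 1}   # intended 75% / 25% traffic split

def pick(records: dict) -> str:
    """Choose one record, weighted by its configured weight."""
    names, weights = zip(*records.items())
    return random.choices(names, weights=weights, k=1)[0]

random.seed(0)                              # fixed seed for a repeatable demo
sample = [pick(records) for _ in range(10000)]
print(sample.count("server-a") / len(sample))  # close to 0.75
```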

Q) What are VPC Flow Logs?
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon S3.
You can create a flow log for a VPC, a subnet, or a network interface. Flow logs help in troubleshooting network connectivity issues and monitoring traffic in your VPC.

Q) Data Consistency model for Amazon S3.
1. Amazon S3 provides read-after-write consistency for PUTs of new objects in your Amazon S3 bucket.
2. Amazon S3 historically provided eventual consistency for overwrite PUTs (updates) and DELETEs of objects. Note that since December 2020, Amazon S3 delivers strong read-after-write consistency for all PUT and DELETE operations.

Q) Glacier Data retrieval process.
i) Expedited: 1-5 minutes; quickly access your data.
ii) Standard: 3-5 hours; access your archives within several hours.
iii) Bulk: 5-12 hours; retrieve large amounts of data, even petabytes, at the lowest cost.

Q) What is VPC Peering?
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses.
Instances in either VPC can communicate with each other as if they were within the same network.
You can create a peering connection between your own VPCs or with a VPC in another AWS account. The VPCs can also be in different regions (known as an inter-region VPC peering connection).

Q) What is versioning?
Versioning allows you to keep multiple copies of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.
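What versioning buys you can be sketched with a dictionary of version lists: each PUT to the same key stacks a new version instead of replacing the old one, so earlier versions stay retrievable. This is a conceptual sketch, not the S3 API (real S3 uses opaque version IDs rather than indexes).

```python
from collections import defaultdict

bucket = defaultdict(list)       # key -> list of versions, oldest first

def put(key: str, body: bytes) -> int:
    """Store a new version and return its (simplified) version id."""
    bucket[key].append(body)
    return len(bucket[key]) - 1

def get(key: str, version=None) -> bytes:
    """Return the latest version, or a specific older one."""
    versions = bucket[key]
    return versions[-1] if version is None else versions[version]

put("report.txt", b"v1")
put("report.txt", b"v2 (overwrite)")
print(get("report.txt"))             # b'v2 (overwrite)' - the latest version
print(get("report.txt", version=0))  # b'v1' - the overwritten version survives
```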

Q) What is bucket policy?
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions to the bucket and the objects in it.

Q) What is Elastic IP? When it will not incur any charges?
An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An elastic IP address is allocated to your AWS account and is yours until you release it.
An Elastic IP address doesn’t incur charges as long as all the following conditions are true:
1.   The Elastic IP address is associated with an EC2 instance.
2.   The instance associated with the Elastic IP address is running.
3.   The instance has only one Elastic IP address attached to it.
4.   The Elastic IP address is associated with an attached network interface, such as a Network Load Balancer or NAT gateway.
(Note: as of February 2024, AWS charges for all public IPv4 addresses, including in-use Elastic IP addresses.)


Q) What is Warm-up time?
The warm-up value for instances controls the time until a newly launched instance can contribute to the CloudWatch metrics of the Auto Scaling group. When the warm-up time has expired, the instance is considered part of the Auto Scaling group and receives traffic.

Go for Part 1, 3, and 4 of AWS Interview Question and Answer Series

Part 1: AWS Interview QnA Part 1

Part 3: AWS Interview QnA Part 3

Part 4: AWS Interview QnA Part 4
