GCP: Service Accounts with the gcloud CLI

### Creating and Managing Service Accounts Using the gcloud Command Line ###

Ref Link: creating-managing-service-accounts

## Create Service Account ##

#gcloud iam service-accounts create cmd-svc-accnt --description="Command Line Service Account" --display-name="Command Line Account" --project my-playground
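Optionally, you can verify the new account by describing it:

#gcloud iam service-accounts describe cmd-svc-accnt@my-playground.iam.gserviceaccount.com --project my-playground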


## List The Service Accounts ##

#gcloud iam service-accounts list --project my-playground


## Update the Service Account ##

#gcloud iam service-accounts update cmd-svc-accnt@my-playground.iam.gserviceaccount.com --description="Command Line Service Account" --display-name="Change Command Line Account" --project my-playground


## Disable and Enable Service Accounts ##

#gcloud iam service-accounts disable cmd-svc-accnt@my-playground.iam.gserviceaccount.com --project my-playground

#gcloud iam service-accounts enable cmd-svc-accnt@my-playground.iam.gserviceaccount.com --project my-playground


## Grant Service account an IAM Role ##

#gcloud projects add-iam-policy-binding my-playground --member="serviceAccount:cmd-svc-accnt@my-playground.iam.gserviceaccount.com" --role="roles/editor"
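One way to confirm the binding is to filter the project's IAM policy for the service account (an optional check, using gcloud's filter and format flags):

#gcloud projects get-iam-policy my-playground --flatten="bindings[].members" --filter="bindings.members:cmd-svc-accnt@my-playground.iam.gserviceaccount.com" --format="table(bindings.role)"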


## Delete the Service Account ##

#gcloud iam service-accounts delete cmd-svc-accnt@my-playground.iam.gserviceaccount.com --project my-playground


Ref Link: creating-managing-service-account-keys

## Create service account keys ##

#gcloud iam service-accounts keys create key-file.json --iam-account=cmd-svc-accnt@my-playground.iam.gserviceaccount.com
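The downloaded key file can then be used to authenticate gcloud itself, for example:

#gcloud auth activate-service-account --key-file=key-file.json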


## List service account keys ##

#gcloud iam service-accounts keys list --iam-account cmd-svc-accnt@my-playground.iam.gserviceaccount.com


## Delete service account keys ##

#gcloud iam service-accounts keys delete 96b194863bedad164f9d3001a128f70fd469d0a1 --iam-account cmd-svc-accnt@my-playground.iam.gserviceaccount.com

AWS IAM Policy Examples

1. IAM Policy that allows performing actions on EC2 and ELB from a specific region.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEC2AndELBActionFromSpecificRegion",
            "Effect": "Allow",
            "Action": [
                "ec2:*",
                "elasticloadbalancing:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "ap-south-1"
                }
            }
        }
    ]
}
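Once the policy document is saved to a file, it can be created with the AWS CLI; the policy and file names below are just examples:

#aws iam create-policy --policy-name AllowEC2AndELBFromSpecificRegion --policy-document file://ec2-elb-region-policy.json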




AWS CloudFormation Exercise 6: VPC creation

Exercise 6: CloudFormation Template for VPC

In this exercise, we are going to configure a VPC using a CloudFormation template written in YAML format.

We are going to create a VPC in the Mumbai Region (ap-south-1), with 4 subnets spread across two Availability Zones, ap-south-1a and ap-south-1b. Two of them are public and the remaining two are private subnets. Two route tables will be created, one for the public subnets and another for the private subnets. An Internet Gateway and a NAT Gateway will be created, and route entries will be added to the respective route tables.

To get the stack, click on the link: https://github.com/pranavdhopey

and save it to the server, say under the /opt directory.

1.   Log in to the AWS Management Console. AWS Console

2.   On the Management Console, click on “CloudFormation” under the “Management and Governance” section.

3.   Now you will land on the CloudFormation console. In the CloudFormation console, click on the “Create stack” button.

Now follow the steps below to create a stack for this exercise.

Step 1: Specify template

In this section, choose the “Template is ready” option and select “Upload a template file”. Now choose the file from your personal computer where it is saved and upload it. Then click on Next.

Step 2: Specify stack details

Now specify a “Stack name”, e.g. “TestVPC” for this exercise. Then provide values for the parameters needed to create the VPC stack; here we are giving the parameter values below.

1. VpcCIDR: 192.168.0.0/16 (value to be replaced)

2. PublicSubnet1CIDR: 192.168.0.0/24 (value to be replaced)

3. PublicSubnet2CIDR: 192.168.1.0/24 (value to be replaced)

4. PrivateSubnet1CIDR: 192.168.11.0/24 (value to be replaced)

5. PrivateSubnet2CIDR: 192.168.12.0/24 (value to be replaced)

6. EnvironmentName: Dev/Test/Prod

Step 3: Configure stack options

On the “Configure stack options” page, leave all settings at their defaults and click on Next.

Step 4: Review Stack

In this step, review all the settings that you have filled in and then click “Create stack”.

After some time the stack will be created, and you can view and access the resources created by the CloudFormation stack.

Click below to get started

Create Stack

 

We can also create the stack using the AWS CLI.

AWS CLI commands for creating the stack:

Note: Replace the template file name and path accordingly.

1.   To validate the CloudFormation template

#aws cloudformation validate-template --template-body file:///<path-to-file>/CFNVPCConfigurationStack.yml

 

2.   To create the stack

#aws cloudformation create-stack --stack-name TestVPC --template-body file:///<path-to-file>/CFNVPCConfigurationStack.yml --parameters file:///<path-to-file>/parameters.json

Here the parameters are passed in a “parameters.json” file to avoid cluttering the command line. A sample is given below.
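For example, matching the parameter values from Step 2 above, the file could look like this (an illustrative sketch; the keys must match the parameter names declared in the template):

[
    {"ParameterKey": "EnvironmentName", "ParameterValue": "Dev"},
    {"ParameterKey": "VpcCIDR", "ParameterValue": "192.168.0.0/16"},
    {"ParameterKey": "PublicSubnet1CIDR", "ParameterValue": "192.168.0.0/24"},
    {"ParameterKey": "PublicSubnet2CIDR", "ParameterValue": "192.168.1.0/24"},
    {"ParameterKey": "PrivateSubnet1CIDR", "ParameterValue": "192.168.11.0/24"},
    {"ParameterKey": "PrivateSubnet2CIDR", "ParameterValue": "192.168.12.0/24"}
]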




3.   To describe the stack

#aws cloudformation describe-stacks --stack-name TestVPC


4.   To view the stack events

#aws cloudformation describe-stack-events --stack-name TestVPC


5.   To delete the stack

#aws cloudformation delete-stack --stack-name TestVPC


This completes VPC creation using a CloudFormation template with various parameters.

 

AWS Interview Questions and Answers: Part 4


Q) What is CodeCommit?
CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit eliminates the need for you to manage your own source control system or worry about scaling its infrastructure. 
With CodeCommit you can store code, binaries, images, libraries, and more. You can transfer your files to and from AWS CodeCommit using HTTPS or SSH, as you prefer. It encrypts your code in transit and at rest using AWS KMS.
AWS CodeCommit uses AWS Identity and Access Management to control and monitor who can access your data as well as how, when, and where they can access it.
AWS CodeCommit stores your repository data in Amazon S3 and Amazon DynamoDB. Your encrypted data is redundantly stored across multiple facilities. This architecture increases the availability and durability of your repository data.
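For example, once Git credentials for CodeCommit are set up, a repository is cloned like any other Git remote (the region and repository name below are placeholders):

#git clone https://git-codecommit.ap-south-1.amazonaws.com/v1/repos/MyDemoRepo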

Q) What is CodeBuild?
AWS CodeBuild is a fully managed build service that compiles source code, runs unit tests, and produces artifacts that are ready to deploy.
With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue.
It provides prepackaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more. You can also customize build environments in CodeBuild to use your own build tools.
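For example, once a build project exists, a build can be started from the CLI (the project name is a placeholder):

#aws codebuild start-build --project-name my-demo-project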

Q) What is CodeDeploy?
CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.

Q) What is CodePipeline?
AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software. You can quickly model and configure the different stages of a software release process. CodePipeline automates the steps required to release your software changes continuously. 

Q) What is the AWS Storage Gateway?
AWS Storage Gateway is a hybrid cloud storage service that connects an on-premises software appliance with cloud-based storage to provide seamless integration with data security features between your on-premises IT environment and the AWS storage infrastructure.
AWS Storage Gateway offers file-based, volume-based, and tape-based storage solutions:

File Gateway: A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and a virtual software appliance.
A file gateway provides an on-premises virtual file server that enables you to store and retrieve files as objects in Amazon S3. You can store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB).
The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine (VM) running on the VMware ESXi, Microsoft Hyper-V, or Linux Kernel-based Virtual Machine (KVM) hypervisor.
With a file gateway, you can do the following:
· You can store and retrieve files directly using the NFS version 3 or 4.1 protocol.
· You can store and retrieve files directly using the SMB protocol, versions 2 and 3.
· You can access your data directly in Amazon S3 from any AWS Cloud application or service.
· You can manage your Amazon S3 data using lifecycle policies, cross-region replication, and versioning. You can think of a file gateway as a file system mount on S3.
File Gateway supports S3 Standard, S3 Standard-IA, and S3 One Zone-IA.

Volume Gateway: A volume gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers.
The volume gateway is deployed into your on-premises environment as a VM running on VMware ESXi, KVM, or Microsoft Hyper-V hypervisor.
The gateway supports the following volume configurations:
· Cached volumes: You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally.
· Stored volumes: If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3.

Tape Gateway: A tape gateway provides cloud-backed virtual tape storage. The tape gateway is deployed into your on-premises environment as a VM running on VMware ESXi, KVM, or Microsoft Hyper-V hypervisor.
With a tape gateway, you can cost-effectively and durably archive backup data in GLACIER or DEEP_ARCHIVE. A tape gateway provides a virtual tape infrastructure that scales seamlessly with your business needs and eliminates the operational burden of provisioning, scaling and maintaining a physical tape infrastructure.
 
Ref Link:  https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

Q) What is AWS CloudFront?
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
AWS CloudFront is a web service that speeds up the distribution of static and dynamic web content such as .html, .css, .js files, and images to users. CloudFront delivers your content through a worldwide network of data centers called edge locations.

Q) What is Amazon S3 Transfer Acceleration?
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
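Transfer Acceleration is enabled per bucket. For example, from the CLI (the bucket name is a placeholder):

#aws s3api put-bucket-accelerate-configuration --bucket my-example-bucket --accelerate-configuration Status=Enabled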

Q) What is AWS Cloud Map?
AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and it maintains the updated location of these dynamically changing resources. This increases your application availability because your web service always discovers the most up-to-date locations of its resources.
Cloud Map allows you to register any application resources, such as databases, queues, microservices, and other cloud resources, with custom names. Cloud Map then constantly checks the health of resources to make sure the location is up-to-date. The application can then query the registry for the location of the resources needed based on the application version and deployment environment.

Q) RTO: Recovery Time Objective
RTO states how much downtime an application experiences before there is a measurable business loss. It is the maximum time your company is willing to wait for the recovery to finish in case of an outage. 

Q) RPO: Recovery Point Objective
RPO refers to the maximum acceptable amount of data loss an application can undergo before causing measurable harm to the business. It is the maximum amount of data loss your company is willing to accept as measured in time. 

Q) What is Placement Group?
A placement group determines how instances are placed on the underlying hardware.
There are three types of placement groups:
i) Cluster placement group
ii) Spread placement group
iii) Partition placement group
 
i) Cluster Placement Group
It is a logical grouping of instances within a single Availability Zone.
It is great for low-latency and high-throughput communication.
 
ii) Spread Placement Group
It is a group of instances placed on distinct racks, with each rack having its own network and power source.
It can span multiple AZs but is restricted to 7 instances per AZ.
It is recommended for a small number of critical instances that should be kept separate from each other.

iii) Partition Placement Group
It is used to spread instances across logical partitions in an AZ. Each partition has its own set of racks, with its own network and power source.
You can have a maximum of 7 partitions per AZ, and the group can span multiple AZs in the same Region.
Partition placement groups are used for large distributed and replicated workloads such as Hadoop, Cassandra, and Kafka.
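As a quick sketch, a placement group is created first and then referenced when launching instances (the group name, AMI ID, and instance type below are placeholders):

#aws ec2 create-placement-group --group-name my-cluster-pg --strategy cluster

#aws ec2 run-instances --image-id ami-xxxxxxxxxx --instance-type c5.large --count 2 --placement GroupName=my-cluster-pg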

Q) What are CloudFormation intrinsic functions?
AWS CloudFormation provides several built-in functions that help you manage your stacks.

1. Fn::Base64
2. Fn::Cidr
3. Condition functions
4. Fn::FindInMap
5. Fn::GetAtt
6. Fn::GetAZs
7. Fn::ImportValue
8. Fn::Join
9. Fn::Select
10. Fn::Split
11. Fn::Sub
12. Fn::Transform
13. Ref

Q) What is a Parameter in CloudFormation template?
Parameters enable us to provide custom values to our template each time we create or update the stack. We can have a maximum of 60 parameters in a CloudFormation template.
Each parameter must be given a logical name which must be alphanumeric and unique among all the logical names within a template.
Each parameter must be assigned a parameter type that is supported by AWS CloudFormation.
Each parameter must be assigned a parameter value at runtime for AWS CloudFormation to successfully provision the stack. We can optionally specify a default value for AWS CloudFormation to use unless another value is provided.
Parameters must be declared and referenced within the same template. We can reference it from the Resources and Outputs section of the template.

Q) What is Mapping in the CloudFormation template?
The Mappings section matches a key to a corresponding set of named values.

Q) What are Pseudo parameters?
Pseudo parameters are parameters that are predefined by AWS CloudFormation. We do not need to declare them in a template.
We can use them the same way we use Parameters as an argument for the Ref function.
Some pseudo parameters are:
AWS::AccountId
AWS::NotificationARNs
AWS::NoValue
AWS::Partition
AWS::Region
AWS::StackId
AWS::StackName
AWS::URLSuffix

Q) What is Metadata in the CloudFormation template?
Metadata provides details about the CloudFormation template. Three CloudFormation-specific metadata keys are listed below:
1.   AWS::CloudFormation::Designer
2.   AWS::CloudFormation::Interface
3.   AWS::CloudFormation::Init

Q) CloudFormation Helper Scripts?
AWS CloudFormation provides Python helper scripts that we can use to install software and start services on Amazon EC2 instances that we create as part of the stack.
1.   cfn-init
2.   cfn-signal
3.   cfn-get-metadata
4.   cfn-hup
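As an illustrative sketch, these scripts are typically invoked from an instance's user data, for example on Amazon Linux (the stack name, logical resource ID, and region below are placeholders):

#/opt/aws/bin/cfn-init -v --stack MyStack --resource WebServerInstance --region ap-south-1

#/opt/aws/bin/cfn-signal -e $? --stack MyStack --resource WebServerInstance --region ap-south-1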

Q) What is AWS Organizations?
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.
You can group your accounts into organizational units (OUs).
You can use service control policies (SCPs) to specify the maximum permissions for member accounts in the organization. In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access.
Service control policies (SCPs) can be attached to member accounts or OUs.
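For illustration, a minimal SCP (a sketch, not a production policy) that prevents member accounts from leaving the organization could look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeaveOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*"
        }
    ]
}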

Q) AWS Organizations components?
An AWS Organization has the following components:
1.   Root: It is the parent container of all accounts and organizational units (OUs) in the organization.
2.   Organizational Units (OUs): An OU is a container for other OUs and accounts.
3.   Accounts: An organization has a primary account called the Master account (Management account). All the other accounts in an organization are called Member accounts.


Go for Parts 1, 2, and 3 of the AWS Interview Questions and Answers series.

Deploy CloudFormation stack using Jenkins

In this post, we are going to deploy a CloudFormation stack using a Jenkins Freestyle project.

The CloudFormation stack we are going to use is available here: EC2Stack

Download the stack file and save it in your S3 bucket, as we are using the S3 object URL as the template URL in the CloudFormation CLI.

Follow the below steps:

Step 1: Go to the Jenkins dashboard and click on “New Item”. On the screen that appears, enter the item name, say “CFNStackDeploy”, select “Freestyle project”, and then click “OK”.



Step 2: Under the “General” section, select the checkbox “This project is parameterized” and then add a “String Parameter”. Provide a Name, Default Value, and Description; here we have given the Name as “StackName”, the Value as “ExampleStack”, and the Description as “Provide Stack Name”.

We are going to reference this parameter in the shell command that executes the stack creation under the Build section.


Step 3: Under the “Build Environment” section, select the checkbox “Use secret text(s) or file(s)” and add “Secret text”. Provide the variable names and select the credential values as shown in the image.

Click on this link to learn how to save AWS credentials in Jenkins.


Step 4: Under the “Build” section, add a build step “Execute shell” and copy the command given below. Then click “Apply” and “Save”.

#aws cloudformation create-stack --stack-name ${StackName} --template-url https://<bucket-name>.s3.ap-south-1.amazonaws.com/CFNEC2TemplateBasic.yml --parameters ParameterKey=MyKeyName,ParameterValue=Mumbai ParameterKey=SecurityGroupIds,ParameterValue=sg-xxxxxxxxxx ParameterKey=MyAvailabilityZone,ParameterValue=ap-south-1a ParameterKey=MySubnetId,ParameterValue=subnet-xxxxxxxxxx --region ap-south-1

Modify the above command as per your configuration: change the S3 object URL, key name, security group ID, Availability Zone, subnet ID, and the Region to deploy in.
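Optionally, you can append a wait command in the same Execute shell step so the Jenkins build succeeds only once the stack reaches CREATE_COMPLETE (a sketch, assuming the same region):

#aws cloudformation wait stack-create-complete --stack-name ${StackName} --region ap-south-1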

Step 5: Now go to the project and click on “Build with Parameters”, provide a stack name of your choice, and then click “Build”.


Go to the Console Output to see the build status.


Also, verify on the CloudFormation dashboard that the stack is deployed successfully, and verify that the instance has launched.




This is how we have configured our Jenkins Freestyle job using parameters, secret text credentials, and a shell command.


Managing AWS User Credentials in Jenkins

How to store AWS Access Key and Secret Access Key in Jenkins.

There are multiple ways you can save AWS “Access Key ID” and “Secret Access Key” in Jenkins.  I have given two ways you can save these credentials and invoke them while running Jenkins jobs.

Way 1: Using Secret Text

Step 1: On the Jenkins dashboard, go to “Manage Jenkins” and then to “Manage Credentials”. Now click on the “global” section under Domain, shown below.


Step 2: Click on “Add Credentials”.


Step 3: Under the “Kind” drop-down box, select “Secret text”, keep the scope as Global, and copy-paste the Access Key ID that you got from the AWS IAM console while creating the user. Fill in the ID and Description fields and click “OK”.

Step 4: Follow similar steps to save the “Secret Access Key” and click “OK”.



Way 2: Using the Amazon EC2 Plugin

For this, you need the Amazon EC2 plugin installed on your Jenkins machine. Follow Step 1 and Step 2 as above.

Step 3: Under the “Kind” drop-down box, select “AWS Credentials”, keep the scope as Global, and copy-paste the “Access Key ID” and “Secret Access Key” that you got from the AWS IAM console while creating the user. Fill in the ID and Description fields and click “OK”.


Finally, you have saved AWS credentials in two ways so they can be accessed from Jenkins jobs.


AWS CloudFormation Exercise 5: Network Load Balancer creation

Exercise 5: CloudFormation Template for Network Load Balancer

In this exercise, we are going to create a Network Load Balancer using a CloudFormation template written in YAML format. For this exercise, we need to keep a few things ready:
1.   VPC (Default or Custom)
2.   Public Subnets
We are going to create the load balancer in the Mumbai Region (ap-south-1). We have configured a VPC with public subnets (already present in the case of the default VPC), and the security group is configured with ports 80 and 443 inbound. We need to select subnets in each Availability Zone so that the load balancer can route traffic to those subnets.

To get the stack, click on the link: Without SSL or With SSL, and save it to the server, say under the /opt directory.

1.   Log in to the AWS Management Console. AWS Console

2.   On the Management Console, click on “CloudFormation” under the “Management and Governance” section.

3.   Now you will land on the CloudFormation console. In the CloudFormation console, click on the “Create stack” button.

Now follow the steps below to create a stack for this exercise.

Step 1: Specify template

In this section, choose the “Template is ready” option and select “Upload a template file”. Now choose the file from your personal computer where it is saved and upload it. Then click on Next.

Step 2: Specify stack details

Now specify a “Stack name”, e.g. “NLBStack” for this exercise. Then provide values for the parameters needed to create the Network Load Balancer stack; here we are giving the parameter values below.

1. VPCId: vpc-xxxxxxxxxx (value to be replaced)

2. MySubnetId: subnet-xxxxxxxxxx, subnet-xxxxxxxxxx, subnet-xxxxxxxxxx (values to be replaced)

Step 3: Configure stack options

On the “Configure stack options” page, leave all settings at their defaults and click on Next.

Step 4: Review Stack

In this step, review all the settings that you have filled in and then click “Create stack”.

After some time the stack will be created, and you can view and access the resources created by the CloudFormation stack.

Click below to get started

Create Stack

 

We can also create the stack using the AWS CLI.

AWS CLI commands for creating the stack:

Note: Replace the template file name and path accordingly.

1.   To validate the CloudFormation template

#aws cloudformation validate-template --template-body file:///<path-to-file>/CFNCreateNLBwithOutput.yml


2.   To create the stack

#aws cloudformation create-stack --stack-name NLBStack --template-body file:///<path-to-file>/CFNCreateNLBwithOutput.yml --parameters ParameterKey=VPCId,ParameterValue=vpc-xxxxxxxxxx ParameterKey=MySubnetId,ParameterValue=subnet-xxxxxxxxxx\\,subnet-xxxxxxxxxx\\,subnet-xxxxxxxxxx


3.   To describe the stack

#aws cloudformation describe-stacks --stack-name NLBStack

4.   To view the stack events
#aws cloudformation describe-stack-events --stack-name NLBStack

5.   To delete the stack

#aws cloudformation delete-stack --stack-name NLBStack


This completes internet-facing Network Load Balancer creation using a CloudFormation template with various parameters.

AWS CloudFormation Exercise 4: Application Load Balancer creation

Exercise 4: CloudFormation Template for Application Load Balancer

In this exercise, we are going to create an Application Load Balancer using a CloudFormation template written in YAML format. For this exercise, we need to keep a few things ready:
1.   VPC (Default or Custom)
2.   Public Subnets
3.   Security Groups
We are going to create the load balancer in the Mumbai Region (ap-south-1). We have configured a VPC with public subnets (already present in the case of the default VPC), and the security group is configured with ports 80 and 443 inbound. We need to select subnets in each Availability Zone so that the load balancer can route traffic to those subnets.

To get the stack, click on the link: Without SSL or With SSL, and save it to the server, say under the /opt directory.

1.   Log in to the AWS Management Console. AWS Console

2. On the Management Console, click on “CloudFormation” under the “Management and Governance” section.

3. Now you will land on the CloudFormation console. In the CloudFormation console, click on the “Create stack” button.

Now follow the steps below to create a stack for this exercise.

Step 1: Specify template

In this section, choose the “Template is ready” option and select “Upload a template file”. Now choose the file from your personal computer where it is saved and upload it. Then click on Next.

Step 2: Specify stack details

Now specify a “Stack name”, e.g. “ALBStack” for this exercise. Then provide values for the parameters needed to create the Application Load Balancer stack; here we are giving the parameter values below.

1. MyELBSecurityGroups: sg-xxxxxxxxxx (value to be replaced)

2. VPCId: vpc-xxxxxxxxxx (value to be replaced)

3. MySubnetId: subnet-xxxxxxxxxx, subnet-xxxxxxxxxx, subnet-xxxxxxxxxx (values to be replaced)

Step 3: Configure stack options

On the “Configure stack options” page, leave all settings at their defaults and click on Next.

Step 4: Review Stack

In this step, review all the settings that you have filled in and then click “Create stack”.

After some time the stack will be created, and you can view and access the resources created by the CloudFormation stack.

Click below to get started

Create Stack

 

We can also create the stack using the AWS CLI.

AWS CLI commands for creating the stack:

Note: Replace the template file name and path accordingly.

1.   To validate the CloudFormation template

#aws cloudformation validate-template --template-body file:///<path-to-file>/CFNCreateALBwithOutput.yml


2.   To create the stack

#aws cloudformation create-stack --stack-name ALBStack --template-body file:///<path-to-file>/CFNCreateALBwithOutput.yml --parameters ParameterKey=MyELBSecurityGroup,ParameterValue=sg-09f534dac79a40ce2 ParameterKey=VPCId,ParameterValue=vpc-xxxxxxxxxx ParameterKey=MySubnetId,ParameterValue=subnet-03a896945c3e5eb15\\,subnet-0a8957bd1f2621bf1\\,subnet-01e4eb75db170e6f8

 

3.   To describe the stack

#aws cloudformation describe-stacks --stack-name ALBStack


4.   To view the stack events

#aws cloudformation describe-stack-events --stack-name ALBStack


5.   To delete the stack

#aws cloudformation delete-stack --stack-name ALBStack

This completes internet-facing Application Load Balancer creation using a CloudFormation template with various parameters.