AWS Interview Questions and Answers: Part 1

Q) What is Cloud Computing?
Cloud Computing is the on-demand delivery of computing power, database storage, applications, and other IT services through a cloud services platform with pay-as-you-go pricing. You can provision exactly the right type and size of computing resources you need, and you can access as many resources as you need almost instantly.
Cloud Computing is a simple way to access servers, storage, databases, and a set of application services.

Q) What is Amazon EC2?

Amazon Elastic Compute Cloud (EC2) provides scalable (resizable) computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 you can launch as many virtual servers as you need, configure security and networking, and manage storage.

Q) What is the EC2 Instance?
An EC2 instance is a virtual server in Amazon’s Elastic Compute Cloud (EC2) for running applications on Amazon Web Services (AWS) infrastructure.

Q) Features of EC2?
1. Virtual computing environments, known as instances.
2. Preconfigured templates for your instances, known as Amazon Machine Images (AMIs).
3. Various configurations of CPU, memory, storage, and networking capacity, known as instance types.
4. Secure login information for your instances using key pairs.
5. Storage volumes for temporary data that are deleted when you stop or terminate the instance, known as instance store volumes.
6. Persistent storage volumes for your data using Amazon Elastic Block Store (EBS), known as Amazon EBS volumes.
7. Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones.
8. A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, using security groups.
9. Static IP addresses for dynamic cloud computing, known as Elastic IP addresses.
10. Metadata, known as tags, that you can create and assign to your Amazon EC2 resources.
11. Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, known as Virtual Private Clouds (VPCs).

Q) What is AMI?
It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance; an instance is a copy of the AMI running as a virtual server in the AWS cloud.
An AMI includes the following:
1. A template for the root volume of the instance (an operating system, an application server, and applications).
2. Launch permissions that control which AWS accounts can use the AMI to launch instances.
3. A block device mapping that specifies the volumes attached to the instance when it is launched.

Q) Types of AMI?
You can select an AMI to use based on the following characteristics:
1. Regions and availability zones.
2. Operating Systems
3. Architecture (32-bit or 64-bit)
4. Launch permission     
5. Storage for root device

Q) What is an instance type?
When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance.
Each instance type offers different compute, memory, and storage capabilities; instance types are grouped into instance families based on these capabilities.

Q) Types of EC2 Instances?

1. General Purpose

2. Compute Optimized

3. Memory Optimized

4. Storage Optimized

5. Accelerated Computing

Q) What is VPC?

Amazon Virtual Private Cloud (VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you have defined.
A Virtual Private Cloud (VPC) is a virtual network dedicated to your AWS account.
It is logically isolated from other virtual networks in the AWS cloud. You can launch AWS resources such as Amazon EC2 instances into your VPC.
You can configure your VPC: select its IP address range, create subnets, and configure route tables, network gateways, and security settings.
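A sketch of the addressing side of this configuration, carving a VPC CIDR block into per-Availability-Zone subnets with Python's stdlib ipaddress module (the CIDR ranges here are arbitrary examples, not AWS defaults):

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # the VPC's IP address range
subnets = list(vpc.subnets(new_prefix=24))  # carve it into /24 subnets

print(subnets[0])    # 10.0.0.0/24 -> could back a subnet in one AZ
print(subnets[1])    # 10.0.1.0/24 -> could back a subnet in another AZ
print(len(subnets))  # 256 non-overlapping /24 subnets fit in a /16
```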

Q) Difference between Default and Non-Default VPC in AWS Cloud?
The primary difference between a default and a non-default VPC is that the default VPC is created for you by AWS when you create a new account, whereas any VPC created by you is a non-default VPC.
When AWS creates a default VPC, it applies the following pre-configured settings:
i. Create a default subnet in each Availability Zone.
ii. Create an internet gateway and connect it to your default VPC.
iii. Create the main route table and route all internet traffic from the default VPC through the internet gateway.
iv. Create a default security group and associate it with your default VPC.
v. Create a default network ACL and associate it with your default VPC.
vi. Associate the default DHCP option set with your default VPC.
Instances that you launch into default subnets receive both a private and a public IP address, as well as both private and public DNS hostnames.
Instances that you launch into non-default subnets do not receive a public IP address or a public DNS hostname.

Q) What is Security group?
A security group acts as a virtual firewall for your instance, controlling inbound and outbound traffic. When you launch an instance in a VPC, you can assign the instance to up to five security groups.
Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC can be assigned to a different set of security groups.
If you don’t specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.
For each security group, you add rules that control the inbound traffic to instances and a separate set of rules that control the outbound traffic.

Q) Basic Characteristics of Security group for VPC.
i. You can create up to 500 security groups per VPC, add up to 50 inbound and 50 outbound rules to each security group, and associate up to 5 security groups per network interface.
ii. You can specify allow rules, but not deny rules.
iii. You can specify separate rules for inbound and outbound traffic.
iv. By default, no inbound traffic is allowed until you add inbound rules to the security groups.
v. By default, an outbound rule allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only.
vi. Security groups are stateful: responses to allowed inbound traffic are allowed to flow outbound regardless of outbound rules, and vice versa.
vii. Instances associated with security groups can’t talk to each other unless you add rules allowing it.
viii. Security groups are associated with network interfaces. After you launch an instance, you can change the security groups associated with the instance, which changes the security groups associated with the primary network interface (eth0).
You can also change the security group associated with any other network interface.
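The evaluation model described above (allow-only rules, no ordering, deny by default) can be sketched in plain Python; this is a hypothetical simulation, not the AWS API, and the Rule fields and CIDR ranges are invented for illustration:

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    protocol: str   # "tcp", "udp", or "-1" for all protocols
    from_port: int
    to_port: int
    cidr: str       # source IP range for an inbound rule

def allowed(rules, protocol, port, src_ip):
    # Security groups are allow-only: any matching rule permits the traffic;
    # there are no deny rules and rule order does not matter.
    for r in rules:
        if (r.protocol in ("-1", protocol)
                and r.from_port <= port <= r.to_port
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(r.cidr)):
            return True
    return False    # default: inbound traffic is denied until a rule allows it

inbound = [Rule("tcp", 22, 22, "203.0.113.0/24")]
print(allowed(inbound, "tcp", 22, "203.0.113.10"))  # True: SSH from the range
print(allowed(inbound, "tcp", 80, "203.0.113.10"))  # False: no rule for port 80
```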

Q) What is Network Access Control List (NACL)?
Network Access Control List (NACL) is an optional layer of security for your VPC that acts as a firewall to control traffic in and out of one or more subnets.
Default VPC comes with modifiable default network ACL, by default it allows all inbound and outbound traffic (IPv4/IPv6).
You can create a custom network ACL and associate it with the subnet. By default, custom ACL denies all the inbound and outbound traffic until you add the rules.
Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.
You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. 
A network ACL contains a numbered list of rules, which are evaluated in order starting with the lowest numbered rule, to determine whether traffic is allowed in or out of any subnet associated with the network ACL.
A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
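The numbered, first-match-wins evaluation contrasts with security groups and can be sketched as a small simulation (rule numbers, ports, and CIDR ranges below are invented examples):

```python
import ipaddress

# (rule number, action, protocol, port, CIDR) - a hypothetical inbound NACL
rules = [
    (100, "allow", "tcp", 80, "0.0.0.0/0"),
    (200, "deny",  "tcp", 80, "198.51.100.0/24"),
]

def evaluate(rules, protocol, port, src_ip):
    # Rules are checked from the lowest number upward; the FIRST match wins.
    for num, action, proto, p, cidr in sorted(rules):
        if (proto == protocol and p == port
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr)):
            return action
    return "deny"   # the implicit final "*" rule denies unmatched traffic

# Rule 100 matches first, so the later deny rule 200 is never reached:
print(evaluate(rules, "tcp", 80, "198.51.100.7"))  # allow
print(evaluate(rules, "udp", 53, "198.51.100.7"))  # deny (no matching rule)
```

Note how rule ordering matters here, unlike security groups where any matching allow rule is sufficient.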


Q) What is S3?
Amazon S3 is storage for the internet. The Simple Storage Service offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low cost. It is designed to make web-scale computing easier for developers.
Amazon S3 provides a web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web.
Using this web service, developers can easily build applications that make use of internet storage.

Q) Amazon EBS Volume?
Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with Amazon EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance in the same Availability Zone.
Amazon EBS is recommended when data must be quickly accessible and requires long-term persistence. EBS volumes are particularly well suited for use as the primary storage for file systems, databases, or any applications that require fine-grained updates and access to raw, unformatted, block-level storage.
Amazon EBS is well suited both to database-style applications that rely on random reads and writes and to throughput-intensive applications that perform long, continuous reads and writes.
Amazon EBS provides the following volume type:
1. General Purpose SSD (gp2, gp3) Volume size: 1 GiB to 16 TiB
2. Provisioned IOPS SSD (io1, io2, io2 Block Express) Volume size: 4 GiB to 16 TiB
3. Throughput Optimized HDD (st1) Volume size: 125 GiB to 16 TiB
4. Cold HDD (sc1) Volume size: 125 GiB to 16 TiB
5. Magnetic (standard) Volume size: 1 GiB to 1 TiB

Q) Instance store volume?
An Instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, cache, scratch data, and other temporary content or for the data that is replicated across a fleet of instances, such as a load-balanced pool of web servers.
The data on an instance store volume persists only during the life of the associated Amazon EC2 instance; if you stop or terminate an instance, any data on instance store volumes is lost.

Q) What is IaaS?
Infrastructure as a Service (IaaS), also called Cloud Infrastructure Services, provides computing infrastructure including servers, storage, and networking, as well as network services (e.g. firewalls).
An IaaS provider offers these cloud servers and their associated resources via a dashboard and/or an API. IaaS clients have direct access to their servers and storage, just as they would with traditional servers, but gain access to a much higher order of scalability. Users of IaaS can outsource and build a “virtual data center” in the cloud, with access to many of the same technologies and resource capabilities of a traditional data center, without having to invest in capacity planning or its physical maintenance and management.

Examples: Amazon EC2, Windows Azure, Rackspace, Google Compute Engine.

Q) What is PaaS?
Platform as a Service (PaaS), or Cloud Platform Services, provides the platform on which software can be developed and deployed. It provides you with computing platforms that typically include an operating system, a programming language execution environment, a database, a web server, etc.

Examples: AWS Elastic Beanstalk, Google App Engine, Apache Stratos.

Q) What is SaaS?
Cloud Application Services, or Software as a Service (SaaS), is the most popular and best-known form of cloud service for consumers. SaaS moves the task of managing software and its deployment to third-party services.
In the SaaS model, you are provided with access to application software, often referred to as on-demand software.
The use of SaaS applications tends to reduce the cost of software ownership by removing the need for technical staff to install, manage, and upgrade software, as well as reducing the cost of licensing software.

Examples: Google Apps, Netflix, WebEx, GoToMeeting, Dropbox, and Microsoft Office 365.

Q) Regions and Availability zones?
Amazon EC2 is hosted in multiple locations worldwide. These locations are composed of Regions and Availability Zones. Each Region is a separate geographic area, and each Region has multiple, isolated locations known as Availability Zones. Amazon EC2 gives you the ability to place resources, such as instances and data, in multiple locations.
Each Region is completely independent. Each Availability Zone is isolated, but the Availability Zones in a Region are connected through low-latency links.
Amazon EC2 resources are either global, tied to a region, or tied to an Availability Zone.

Availability zone:
Availability Zones are effectively distinct data centers located within a Region. Each Availability Zone is completely independent of the others and resides in a different area within the same Region, providing a level of business continuity in the event of a disaster.
All Availability Zones within the same Region are linked by extremely low-latency links, providing high-availability features for many AWS services, such as S3 and RDS, to communicate with each other.

Q) What is Edge Location?
A site that CloudFront uses to cache copies of your content for faster delivery to users at any location.
Edge locations are used in conjunction with the AWS CloudFront service which is a global Content Delivery Network service. Edge locations are deployed across the world in multiple locations to reduce the latency for the traffic served over the CDN and as a result, are usually located in highly populated areas.

Q) What is shared instance?
i. Shared instances are Amazon EC2 instances that run on hardware that is not dedicated to a single AWS account, i.e. instances from different AWS accounts may share the same physical host.
ii. When an instance is stopped and started, the underlying hardware (i.e. the host) may change.

Q) What is dedicated instance?
i. Dedicated Instances are Amazon EC2 instances that run in a Virtual Private Cloud (VPC) on hardware that is dedicated to a single customer.
ii. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts.
iii. Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances.
iv. When an instance is stopped and started, the underlying hardware (i.e. the host) may change.

Q) What is a Dedicated Host?
i. An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. You can use Dedicated Hosts to launch Amazon EC2 instances on physical servers that are dedicated for your use.
ii. Dedicated Hosts give you additional visibility and control over how instances are placed on a physical server.
iii. In case of stop and start of instances, the underlying hardware will not change.

Q) How ENI is attached to an Instance?

i. Hot attach: an ENI can be attached to an instance while it is running.
ii. Warm attach: an ENI can be attached to an instance while it is stopped.
iii. Cold attach: an ENI can be attached to an instance while it is being launched.

Go to Parts 2, 3, and 4 of the AWS Interview Questions and Answers series

Part 2 : AWS Interview QnA Part 2

Part 3 : AWS Interview QnA Part 3

Part 4 : AWS Interview QnA Part 4

General Linux Interview Questions

Q) What is Run Level?
A run level is one of the modes that a UNIX-based operating system can run in. Each run level has a certain number of services stopped or started, giving the user control over the behavior of the machine.
There are seven run levels in total, numbered 0 to 6:
i. Run level 0: Halt the system
ii. Run level 1: Single-user mode (for administrative tasks)
iii. Run level 2: Multi-user mode, without NFS (Network File System)
iv. Run level 3: Multi-user mode with networking and a command-line interface
v. Run level 4: Not used
vi. Run level 5: Multi-user mode with networking and X Window (GUI)
vii. Run level 6: Reboot the system

Q) What is the difference between SSH and Telnet?
SSH, known as Secure Shell, is a network protocol used to securely log in to remote systems. It is the most common way to access remote Linux or UNIX-like systems over the internet.
SSH runs on port 22 by default; however, this can easily be changed.
SSH is a very secure protocol because it sends information in an encrypted format, which provides confidentiality and security of data over an unsecured network such as the internet.
Once the data is encrypted by SSH, it is very difficult to decrypt and read, so passwords also remain secure while travelling over the public network.
SSH can use public-key cryptography for authenticating users accessing the server, which is a great practice that provides strong security.

Telnet is the joint abbreviation of Telecommunication and Network, and it is the network protocol best known on the UNIX platform.
Telnet uses port 23 and was designed for local area networks.
Telnet is not a secure communication protocol because it does not use any security mechanism and transfers data over the network/internet in plain-text format, including passwords, so anyone can sniff the packets to obtain that sensitive information.
There are no authentication policies or data encryption techniques used in Telnet, which causes a huge security threat; that is why Telnet is no longer used for accessing network devices and servers over the public network.

Q) Boot Process.
1. Power On/Restart:

When you power on or restart your computer, power is supplied to the SMPS.
One of the main components of the computer is the SMPS (Switch Mode Power Supply). The primary objective of the SMPS is to supply the required voltage levels to the devices attached to the machine, such as the motherboard, hard drives, keyboard, mouse, CD/DVD-ROM, etc.
The most intelligent device in the computer is the processor (CPU); when supplied with power, it starts running a sequence of operations stored in its memory. The first instruction it runs is to pass control to the BIOS.


2. BIOS

BIOS stands for Basic Input-Output System. The most important job of the BIOS during the boot process is the POST (Power-On Self-Test). POST is a series of tests conducted by the BIOS to check the proper functioning of all the hardware components attached to the computer.
Once the POST completes successfully, the BIOS checks the CMOS settings to determine the boot order.
The boot order is simply a user-defined order that tells the BIOS where to look for the operating system. The BIOS selects the first boot device for booting; the device can be a hard drive, CD-ROM, floppy drive, network interface, or other removable media such as a USB drive.
The BIOS is programmed to look at the first sector of the hard drive, which is known as the boot sector. This location is also known as the MBR, and it contains the program that helps the computer load the operating system. As soon as the BIOS finds a valid MBR, it loads the entire content of the MBR into RAM, and further execution is done by the content of the MBR.

3. MBR

MBR stands for Master Boot Record, which is located in the first sector of your hard disk. It is just 512 bytes in size and is not located inside any partition.
MBR has following three components.
a. Primary boot loader code (size: 446 bytes)
b. Partition table information (size: 64 bytes)
c. Magic number (size: 2 bytes)

a. Primary boot loader code: This code provides boot loader information and location details of actual bootloader code on the hard disk.

b. Partition table: The MBR contains 64 bytes of partition table data, which stores information such as where each partition starts and ends, the size of each partition, and the type of partition (primary, extended, etc.). Each partition entry is only 16 bytes, so a maximum of 4 primary partitions can be described.

c. Magic Number: The magic number serves as a validation check for the MBR. If the MBR gets corrupted, the magic number is used to detect that corruption.
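The 512-byte layout described above (446 + 64 + 2) can be sketched as a byte buffer; the zero-filled contents are placeholders, but the sizes and the 0x55AA boot signature are the real MBR layout:

```python
boot_code = bytes(446)           # primary boot loader code (446 bytes)
partition_table = bytes(16) * 4  # 4 partition entries x 16 bytes = 64 bytes
magic = b"\x55\xaa"              # 2-byte magic number checked by the BIOS

mbr = boot_code + partition_table + magic
print(len(mbr))                  # 512: exactly one disk sector
print(mbr[510:] == b"\x55\xaa")  # True: a valid MBR ends with the magic number
```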

The MBR cannot load the kernel directly because it is unaware of the file system concept; that would require a boot loader containing a file system driver for each supported file system. To overcome this, GRUB is used, with the details of the file system in /boot/grub/grub.conf and the required file system drivers.


GRUB (Grand Unified Boot Loader) loads the kernel in 3 stages.

GRUB stage 1:
Its primary function is to load either stage 1.5 or stage 2 boot loader.

GRUB stage 1.5:
Stage 1 can load stage 2 directly, but it is normally set up to load stage 1.5.
This is needed when the /boot partition is situated beyond the 1024th cylinder of the hard disk.
GRUB stage 1.5 is located in the first 30 KB of the hard disk, immediately after the MBR and before the first partition. This space is used to store file system drivers and modules.
This enables stage 1.5 to load stage 2 from any known location on the file system, i.e. /boot/grub.

GRUB stage 2:
This stage is responsible for loading the kernel, as configured in /boot/grub/grub.conf, along with any other modules needed.
GRUB loads the user-selected (or default) kernel into memory and passes control to the kernel. If the user does not select an OS within a defined timeout, GRUB loads the default kernel into memory and starts it.

4. Kernel

The kernel can be considered the heart of the operating system, responsible for handling all system processes. The kernel acts as a mediator between hardware and software.
The kernel is a compressed image file, basically an executable bzImage file.
The kernel verifies the hardware configuration (floppy drive, hard drive, network adapter, etc.) and configures drivers for the system.
The kernel then uncompresses the initrd image. Initrd stands for “initial ramdisk”; it is used by the kernel as a temporary root file system until the kernel has booted and the real root file system is mounted.
The initrd image also contains the necessary drivers compiled in, which help the kernel access the hard drive partitions and other hardware.
Once all modules present in the initrd image are loaded, the kernel unmounts the initrd image and mounts the root partition specified in grub.conf as read-only.


5. INIT

Once the kernel starts its operation, the first thing it does is execute the INIT process.
The init process is the root/parent process of all processes running under Linux.
As soon as the init process is executed, it looks at the /etc/inittab file to determine the default run level.
Based on the appropriate run level, scripts are executed to start or stop various processes to run the system and make it functional.
Scripts for run levels 0 to 6 are located in the subdirectories /etc/rc.d/rc0.d through /etc/rc.d/rc6.d respectively. There are also symbolic links for these directories directly under /etc, so /etc/rc0.d is linked to /etc/rc.d/rc0.d.

/etc/rc0.d/ – contains the start/kill scripts to be run in run level 0
/etc/rc1.d/ – contains the start/kill scripts to be run in run level 1
/etc/rc2.d/ – contains the start/kill scripts to be run in run level 2
/etc/rc3.d/ – contains the start/kill scripts to be run in run level 3
/etc/rc4.d/ – contains the start/kill scripts to be run in run level 4
/etc/rc5.d/ – contains the start/kill scripts to be run in run level 5
/etc/rc6.d/ – contains the start/kill scripts to be run in run level 6

Finally, INIT runs one last file: /etc/rc.local.

Q) What is inode?
i) An inode is a data structure that contains information about a file; inodes are created when the file system is created. Each file has an inode and is identified by an inode number in the file system where it resides.
ii) The inode contains all the important information about the file except its name and its actual data:
    1. The size of the file in (bytes).
    2. Physical location, i.e. pointers to the blocks storing the file contents.
    3. The file's owner and group.
    4. File access permissions (read, write, execute) for owner, group, and others.
    5. Timestamps telling when the inode was created, last modified and last accessed.
    6. A reference count telling how many hard links point to the inode.
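Most of these inode fields can be inspected with os.stat(); a small sketch using a throwaway temp file (the file name is arbitrary):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()  # create a scratch file
os.write(fd, b"hello")
os.close(fd)

st = os.stat(path)
print(st.st_ino)    # inode number identifying the file in this file system
print(st.st_size)   # size in bytes (5 here)
print(st.st_nlink)  # hard-link (reference) count: 1 for a fresh file
print(oct(st.st_mode & 0o777))  # access permission bits
os.unlink(path)
```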

Q) What is Soft Link/Symlink?
A soft link, or symlink, is a file that contains a reference to another file or directory in the form of an absolute or relative path.
In short, you can create a shortcut to a file or directory at another path.
1. A link and its target have different inode values.
2. A soft link points to the original file, so if the original file is deleted, the soft link becomes a dangling (broken) link. If you delete the soft link, nothing happens to the original file.
3. A soft link can point to a directory as well.
4. A soft link can cross file system boundaries.
5. A soft link contains the path to the original file/directory, not the actual content.
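Points 1, 2, and 5 can be verified with os.symlink on a throwaway file (paths below are arbitrary):

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "original.txt")
link = os.path.join(d, "shortcut.txt")
with open(target, "w") as f:
    f.write("data")

os.symlink(target, link)
different = os.stat(target).st_ino != os.lstat(link).st_ino
print(different)             # True: the symlink has its own inode
print(os.readlink(link))     # the link stores the target's path, not its data

os.remove(target)            # delete the original ...
print(os.path.exists(link))  # False: the symlink is now dangling
```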

Q) What is Hard link?
A hard link is effectively another name for the original file. Hard links point directly to the physical file (inode) on disk, not to a path name.
1. Hard-linked files share the same inode value.
2. Changes made through the original file or the hard-linked file are reflected in the other. If you delete the original file or the hard-linked file, nothing happens to the other.
3. Hard links can point only to files, not to directories.
4. Hard links cannot cross file systems.
5. Removing any one link just reduces the link count; it does not affect the other links.
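The same experiment for a hard link, using os.link (file names are arbitrary):

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")
with open(a, "w") as f:
    f.write("data")

os.link(a, b)                                   # create a hard link
same = os.stat(a).st_ino == os.stat(b).st_ino
print(same)                                     # True: both names share one inode
print(os.stat(a).st_nlink)                      # 2: the inode's link count

os.remove(a)                                    # removing one name ...
with open(b) as f:
    print(f.read())                             # "data": the other still works
```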

Q) What is UMASK?
UMASK stands for User file-creation MASK. It determines the default permissions assigned when a new file or directory is created on a Linux machine.
Default UMASK value for a normal user: 002
Default UMASK value for the root user: 022
Base permissions for directories: 0777
Base permissions for files: 0666
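The effective permission is the base permission with the umask bits cleared (base & ~umask); a quick check of the defaults above:

```python
def effective(base, umask):
    # New files/directories get the base permission minus the umask bits
    return base & ~umask

print(oct(effective(0o666, 0o022)))  # 0o644: files under root's umask
print(oct(effective(0o777, 0o022)))  # 0o755: directories under root's umask
print(oct(effective(0o666, 0o002)))  # 0o664: files under a normal user's umask
```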

Q) What is ulimit?
The ulimit command provides control over the resources available to the shell and/or to processes started by it.
You can limit a user to a specific range by editing /etc/security/limits.conf; system-wide settings can be updated in /etc/sysctl.conf.
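The same limits that ulimit reports can be read programmatically via Python's stdlib resource module (Unix only); RLIMIT_NOFILE is the open-files limit that `ulimit -n` shows:

```python
import resource

# Each limit has a soft value (currently enforced) and a hard value (the
# ceiling; only root can raise it)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft)
print("hard limit:", hard)
```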

Q) Types of the File system.
1. Ext2:
> Second extended file system.
> Introduced in 1993, developed by Remy Card.
> Ext2 does not have a journaling feature.
> Max Individual file size: 16 GB to 2 TB
> Overall file system size: 2 TB to 32 TB

2. Ext3:
> Third extended file system.
> Introduced in 2001, developed by Stephen Tweedie
> Ext3 has the journaling feature enabled.
> Max Individual file size: 16 GB to 2 TB
> Overall file system size: 2 TB to 32 TB

3. Ext4:
> Fourth extended file system.
> Introduced in 2008.
> It has the option to turn off journaling feature, other features like delayed allocation, multi-block allocation, fast fsck etc.
> Max Individual file size: 16 GB to 16 TB
> Overall file system size: 1 EB (1 EB = 1024 PB; 1 PB = 1024 TB)

Q) What is Journaling?
Journaling file systems provide a new level of safety to the Linux kernel. Instead of writing data directly to the storage device and then updating the inode table, a journaling file system writes file changes into a temporary file (called a journal) first. After the data is successfully written to the storage device and the inode table, the journal entry is deleted.
When the system crashes, the possibility of file system corruption is lower because of journaling.

If the system crashes or suffers a power outage before the data can be written to the storage device, the journaling file system simply reads through the journal file and processes any uncommitted data left over.

Q) TCP and UDP Difference.
1. TCP stands for Transmission Control Protocol.
2. It is a connection-oriented protocol.
3. The TCP header size is 20 bytes.
4. TCP is reliable but slower at transferring data.
5. TCP guarantees delivery of data.
6. The order of data at the receiving end is the same as at the sending end.
7. TCP does error checking and error recovery.

1. UDP stands for User Datagram Protocol.
2. It is a connectionless protocol.
3. The UDP header size is 8 bytes.
4. UDP is not reliable, but is faster at transferring data.
5. UDP does not guarantee delivery of data.
6. UDP does not provide any ordering of data.
7. UDP does error checking, but no error recovery.
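UDP's connectionless model can be seen with two local sockets: no connection is set up, a datagram is simply sent to an address (the loopback address and OS-chosen port below are just for the demonstration):

```python
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # SOCK_DGRAM = UDP
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
recv.settimeout(2)
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"ping", addr)           # no handshake: just fire a datagram

data, peer = recv.recvfrom(1024)
print(data)                          # b'ping'
send.close()
recv.close()
```

A TCP equivalent would first require connect()/accept() to establish the connection, which is exactly the "connection-oriented" difference listed above.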

Q) Raid Levels?
RAID stands for Redundant Array of Independent (or Inexpensive) Disks. RAID is a way of combining several independent and relatively small disks into a single large storage volume. The disks included in the array are called array members. Disks can be combined into an array in different ways, known as RAID levels.

1. RAID 0 (Striping)
In RAID 0, data is split into blocks that are written across all the drives in the array. RAID 0 provides high performance, i.e. high read and write speeds.
It utilizes the full storage capacity of all drives.
RAID 0 does not provide fault tolerance: if one of the disks fails, all data in the RAID 0 array is lost.
A minimum of 2 disks is needed to create a RAID 0 (striping) array.

2. RAID 1 (Mirroring)
Data is stored twice, by writing it to both the data drive and a mirror drive. If a drive fails, the controller uses either the data drive or the mirror drive to recover data and continues operation.
The effective storage capacity is only half of the total drive capacity, because all data is written twice.
If a drive fails, the data does not have to be rebuilt; it just has to be copied to the replacement drive.
A minimum of 2 disks is needed to create a RAID 1 (mirroring) array.

3. RAID 5 (Distributed parity)
RAID 5 is the most common secure RAID level. Data blocks are striped across the drives, and on one drive a parity checksum of all the block data is written. The parity data is not written to a fixed drive; it is spread across all drives.
A RAID 5 array can withstand a single drive failure.
If one of the drives fails, the parity information is used to rebuild the data.
A minimum of 3 disks is needed to create a RAID 5 (distributed parity) array.
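The rebuild works because the parity block is the XOR of the data blocks, so any single missing block can be recomputed by XOR-ing the survivors; a 3-disk sketch with toy byte strings:

```python
def xor(a, b):
    # Byte-wise XOR of two equal-length blocks
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"\x01\x02", b"\x10\x20"  # data blocks on disks 0 and 1
parity = xor(d0, d1)               # parity block written to disk 2

# Disk 1 fails: rebuild its block from the surviving disk and the parity
rebuilt = xor(d0, parity)
print(rebuilt == d1)               # True: the lost block is recovered
```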

4. RAID 6 (Striping with double parity)
RAID 6 is like RAID 5 only, but the parity data are written to two drives.
RAID 6 can withstand 2 drive failure simultaneously.
If two drives fail, you still have access to all data, even while the failed drives are being replaced. So RAID 6 is more secure than RAID 5.
We need minimum 4 Drives to create RAID 6 

5. RAID 01 (Mirror of Stripes)
Raid 01 or Raid 0+1 is called “Mirror of Stripes”.
Within the group, the data is striped. Across the group, the data is mirrored.

6. RAID 10 (Stripe of Mirrors)
Raid 10 or Raid 1+0 is called “Stripe of Mirror”.
Within the group, the data is mirrored. Across the group, the data is striped.

Reference Link: thegeekstuff.

Q) Hot Spare?
A hot spare is an extra drive added to a disk array to increase fault tolerance.
If you have a hot spare in your RAID disk array and one of the disks in the array fails, the RAID controller will automatically start rebuilding data onto the hot spare drive.

Q) What is NIC/Network bonding?
Network bonding is a Linux kernel feature that allows you to aggregate two or more network interfaces into a single virtual network interface, which may increase bandwidth and provides redundancy for the NIC.

This is a great way to achieve redundant links, fault tolerance, or load balancing in production systems.
mode=0 (Balance Round Robin)
mode=1 (Active backup)
mode=2 (Balance XOR)
mode=3 (Broadcast)
mode=4 (802.3ad)
mode=5 (Balance-TLB)
mode=6 (Balance-ALB)

Q) What is LVM?
LVM is the Logical Volume Manager provided by the Linux kernel. Its main purpose is to allow storage devices to be aggregated and subdivided. This is done by:
Formatting each storage device as an LVM ‘physical volume’,
Aggregating the physical volumes to form one or more storage pools called ‘volume groups’, then
Creating virtual block devices called ‘logical volumes’ within those volume groups.
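The three steps above map directly onto the LVM command-line tools; a minimal sketch, assuming two spare disks /dev/sdb and /dev/sdc (the device names, volume names, and size are illustrative, and these commands require root):

```shell
# 1. Format each storage device as an LVM physical volume
pvcreate /dev/sdb /dev/sdc

# 2. Aggregate the physical volumes into a storage pool (volume group)
vgcreate data_vg /dev/sdb /dev/sdc

# 3. Carve a virtual block device (logical volume) out of the pool
lvcreate --name data_lv --size 10G data_vg

# The resulting block device can then be formatted and mounted as usual
mkfs.ext4 /dev/data_vg/data_lv
```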

Q) What is Zombie process?
A zombie process, or defunct process, is a process that has completed execution (via the exit system call) but still has an entry in the process table: it is a process in the "terminated" state.
A process is removed from the process table when it has completed and its parent process has read the completed process's exit status using the wait() system call. If a parent process fails to call wait() for whatever reason, its child process is left in the process table, becoming a zombie.
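On Linux this can be demonstrated from Python (a sketch that requires a Linux /proc file system): the child exits immediately, but until the parent calls wait() it remains in the process table as a zombie:

```python
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)          # child terminates immediately
else:
    time.sleep(0.5)      # parent has NOT called wait() yet
    # /proc/<pid>/stat shows process state 'Z' (zombie) for the dead child
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().rsplit(")", 1)[1].split()[0]
    print("child state before wait():", state)   # Z
    os.waitpid(pid, 0)   # reaping removes the process-table entry
```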

Q) What is NFS?
NFS (Network File System) was developed by Sun Microsystems in the 1980s for sharing files and folders between Linux/UNIX systems. It allows a server to share local file systems over the network so that remote hosts can interact with them as if they were mounted locally. With the help of NFS, we can set up file sharing between UNIX and Linux systems in either direction.
NFS uses Remote Procedure Calls (RPC) to route requests between clients and servers.

NFS Export Options (set in /etc/exports on the server):
root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server.
no_root_squash: if this option is used, then root on the client machine will have the same level of access to the files on the system as root on the server.
all_squash: The UID and GID of exported files are mapped to the anonymous user. This is good for public directories.
sync: If sync is specified, the server waits until the request is written to disk before responding to the client.
async: If async is specified, the server responds to the client before the request is written to disk.
ro: The directory is shared read only; the client machine will not be able to write to it. This is the default.

rw: The client machine will have read and write access to the directory.

no_subtree_check: This option disables subtree checking. When a shared directory is a subdirectory of a larger file system, NFS scans every directory above it in order to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but reduces security.
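Put together, a server-side entry in /etc/exports combining several of the options above might look like this (the path and client network are arbitrary examples):

```
/srv/share   192.168.1.0/24(rw,sync,root_squash,no_subtree_check)
```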


Q) Explain "Soft Mounting" option at NFS Client?
If a file request fails, the NFS client reports an error to the process on the client machine requesting the file access. If the request cannot be satisfied (for example, because the server is down), the client gives up and quits. This is called soft mounting.

Q) Explain "Hard Mounting" option at NFS Client?
If a file request fails, the NFS client keeps retrying; the process requesting the file access will not quit until the request is satisfied. This is called hard mounting.

Q) Difference between RHEL 6/7?
Reference Link: difference-between-rhel6-rhel7
Command CheatSheet: RHEL_5_6_7_cheatsheet