AWS Interview Questions and Answers


Amazon provides cloud computing platforms to individuals, organizations, and governments through Amazon Web Services (AWS). AWS is simple, reliable, inexpensive, and works on a pay-as-you-go basis.

Amazon web services provide various products for security, storage, networking, IoT, developers, analytics, databases, management tools, and enterprise applications. These services are an asset to any organization to lower their IT cost, increase their production speed, and make it more efficient. And the best part about these services is that you only have to pay for what you use.

Being an AWS-certified professional not only gives your career a boost but also opens up various other opportunities along the line. AWS job interviews can be hard to crack. Apart from studying, you should prepare certain questions which will help you land the job you want. We highly recommend going through these 101 AWS Interview Questions and Answers, which are compiled for both beginners and experts.


  • Amazon Web Services, or AWS, is a cloud computing platform that provides products for security, storage, networking, IoT, databases, management tools, etc., to an individual or organization on a pay-as-you-go basis. The services are offered in the form of small building blocks through which various applications are created and deployed in the cloud.

  • AWS contains the following:

      Simple Email Service

      Route 53

      Simple Storage Service (S3)

      Elastic Compute Cloud (EC2)

      Elastic Block Store

      CloudWatch

  • There are 4 kinds of cloud services:

      Data as a Service (DaaS)

      Platform as a Service (PaaS)

      Software as a Service (SaaS)

      Infrastructure as a Service (IaaS)

  • Simple Email Service (SES) allows any application running on AWS to have built-in email functionality. This service allows email to be delivered easily and securely using simple API calls or SMTP.

  • AWS offers the following storage services:

      Simple Storage Service (S3)

      Glacier

      Elastic File System

      Elastic Block Storage

  • AWS regions refer to separate geographical locations, such as Mumbai (India) and California (USA). The isolated areas present inside an AWS region are known as Availability Zones. Resources can be replicated across them as per requirement.

  • An Amazon Machine Image, or AMI, is used to launch instances of the original AMI running on the cloud. It is a type of template which provides the necessary environment, containing an operating system, an application server, and other applications required to launch an instance.

  • S3 stands for Simple Storage Service. It is an Amazon web service that allows a user to store and retrieve any amount of data, at any time, from anywhere on the web. It provides developers with easy web-scale computing.
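As a small illustration of "anywhere on the web", an object in S3 is addressable by a predictable virtual-hosted-style HTTPS URL. This sketch only builds the URL string; the bucket name, region, and key are hypothetical examples.

```python
# Sketch: the virtual-hosted-style URL under which an S3 object is reachable.
# Bucket name, region, and key below are hypothetical examples.
def s3_object_url(bucket: str, region: str, key: str) -> str:
    """Return the virtual-hosted-style HTTPS URL for an S3 object."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

url = s3_object_url("example-bucket", "ap-south-1", "reports/2023/q1.csv")
print(url)  # https://example-bucket.s3.ap-south-1.amazonaws.com/reports/2023/q1.csv
```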

  • AMI contains the following:

      A template for the instance to be launched.

      Permissions for launching the instance.

      A block device mapping that decides the volumes to be attached to the instance while launching.

  • Snowball is a data transport solution that uses secure physical appliances to move terabytes of data into and out of the AWS environment.

  • As the name suggests, CloudWatch is used to watch over or monitor AWS resources such as EC2 and RDS instances, along with metrics like CPU utilization. Alarms can be triggered when the defined metrics cross their thresholds.

  • Key-pairs are a set of security credentials containing a private key and a public key. If you are connecting to an instance, the key pairs are required to prove your identity. They are a virtual machine’s login information.

  • VPC or Virtual Private Cloud is an isolated network on the cloud. It allows a user the benefit of customization. According to business or personal requirements, the network configurations can be modified. AWS resources can be launched into a virtual network using VPC.

  • Given below are the different types of instances:

      General-purpose

      Accelerated Computing

      Memory-Optimized

      Compute Optimized

      Storage Optimized

  • The networking connection between two different VPCs is known as a VPC peering connection. Route traffic between the two Virtual Private Clouds is enabled using IPv4 and IPv6 addresses. The instances of both VPCs function as part of the same network.

  • REST stands for Representational State Transfer. It is an architectural style that enables the best performance of a web service by specifying constraints to induce desired properties like scalability, performance, etc.

  • Amazon’s Simple Storage Service is a type of REST service. The REST API or the AWS SDK wrapper libraries can be used to send a request to Amazon S3.

  • Amazon S3 contains the following storage classes:

      Amazon Glacier

      Amazon S3 Standard-Infrequent Access

      Amazon S3 Standard

      Amazon S3 Reduced Redundancy Storage

  • One single AMI can be used to launch multiple instances.

  • The instance type defines the hardware of the host computer used for an instance. Different instance types have different capabilities and provide different compute and memory functions.

  • EC2, or Elastic Compute Cloud, is a web service provided by Amazon. It allows renting of virtual computers so that users can run their computer applications. It offers resizable compute capacity by allowing OS-level control. This cloud virtual machine can be run anytime, anywhere, with the option of choosing any type of hardware and applications on the machine.


  • Differences between Simple Storage Service (S3) and Elastic Compute Cloud (EC2):

      S3 is used for data storage; EC2 is used for hosting applications.

      S3 has a REST interface; EC2 is a virtual machine.

      S3 uses buckets to store data; EC2 has instances for running applications.


  • Elastic Transcoder is an easy-to-use service provided by Amazon which helps in transforming media files such as videos and images from their source format into different resolutions and formats, depending upon the target device (smartphones, laptops, tablets, etc.).

  • T2 instances are instances which provide burstable functionality. They perform at a moderate baseline and possess the capability to burst or outperform the baseline as per the workload demand.

  • A subnetwork, or subnet, is a smaller section of a larger network. The process of dividing a network into multiple sections is known as subnetting, and the logical partitions of the network produced are called subnets.

    There are 2 types of subnets: public and private. A public subnet allows the machines in it to have internet access, while a private subnet remains hidden from the internet.
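The subnetting described above can be sketched with Python's standard `ipaddress` module, carving an example VPC CIDR block into /24 subnets (the CIDR values are hypothetical):

```python
import ipaddress

# Sketch: dividing a VPC-sized CIDR block into smaller subnets, the same
# idea as creating public/private subnets inside a VPC. CIDRs are examples.
vpc = ipaddress.ip_network("10.0.0.0/16")      # the whole network
subnets = list(vpc.subnets(new_prefix=24))     # 256 possible /24 subnets
public_subnet, private_subnet = subnets[0], subnets[1]

print(public_subnet)                # 10.0.0.0/24
print(private_subnet)               # 10.0.1.0/24
print(public_subnet.num_addresses)  # 256
```

Whether a subnet is "public" or "private" is not a property of the CIDR itself; in AWS it depends on whether the subnet's route table has a route to an internet gateway.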

  • A user is allowed to have 200 subnets per Virtual Private Cloud.

  • DNS and load balancer come under the category of Infrastructure as a Service(IaaS).

  • EC2 offers the following purchasing options for instances:

      On-Demand Instance - They are prepared as per need and can be created anytime you want. They are cheaper for short durations, but not for the long term.

      Spot Instance - They are cheaper as compared to on-demand instances. The bidding model helps in the creation of these instances.

      Reserved Instance - AWS allows a user to create instances and reserve them for up to a year or more. Reserved instances come in handy when the requirements are known in advance. These instances can help save costs by a huge margin.

      Dedicated Hosts - An EC2 server dedicated to a single user is known as a dedicated host. It helps in the reduction of overall costs as it provides a Virtual Private Cloud that contains dedicated hardware.

  • A VPC can be monitored using the following:

      CloudWatch

      VPC Flow logs

  • Stopping an EC2 instance is similar to shutting down any Personal Computer normally. Stopping an instance won’t delete or remove any attached volumes and the user can start the instance again whenever required.

    If you terminate an instance, on the other hand, the instance is deleted. The attached volumes are deleted along with it (unless configured otherwise), and there is no possibility of restarting the instance even if it is needed later on.

  • NAT stands for Network Address Translation. It helps in the conservation of IP addresses. It allows instances in a private subnet to connect to the internet or other AWS services, while preventing the internet from initiating a connection with those instances.

  • Eventual Consistency - The data may not be consistent immediately, but it becomes consistent eventually. This allows faster processing of client requests, though some initial requests may read stale data. Systems not requiring real-time data prefer eventual consistency. For example, a recent post on Instagram remaining invisible to followers for a few seconds is acceptable.

    Strong Consistency - The data across all database servers is made consistent immediately. Some time may be taken to make the data consistent, after which the system starts serving requests again. This model guarantees that every response contains consistent data.
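The two models can be contrasted with a toy sketch (not AWS code): a primary copy of the data plus a replica that lags behind until replication is applied.

```python
# Toy sketch contrasting eventual and strong consistency with a primary
# copy and a lazily-updated replica. Not AWS code; keys are examples.
class ReplicatedStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = []          # writes not yet applied to the replica

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((key, value))   # replica lags behind

    def replicate(self):
        for key, value in self.pending:
            self.replica[key] = value
        self.pending.clear()

    def read_eventual(self, key):
        # Fast, but may return stale data until replication catches up.
        return self.replica.get(key)

    def read_strong(self, key):
        # Forces replication first, so the answer is never stale.
        self.replicate()
        return self.replica[key]

store = ReplicatedStore()
store.write("post", "new photo")
print(store.read_eventual("post"))  # None - replica has not caught up yet
print(store.read_strong("post"))    # new photo - replication forced first
```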

  • Security Groups are a type of Virtual Firewalls having certain rules which are used to govern the accessibility of an instance. You may want an instance to be inaccessible from a public network or allow access only to a particular network. This can be done by creating rules which define port numbers, networks, or protocols that would allow or deny access.

  • A firewall that maintains the state of defined rules is known as a stateful firewall. In stateful firewalls, only the inbound rules are required to be defined, based on which the flow of outbound traffic is automatically decided.

    While in a Stateless Firewall you have to separately define rules for both the traffics, inbound and outbound.

    For example, suppose Port 81 is set for the flow of inbound traffic. In a stateful firewall, the return traffic will automatically be allowed out on Port 81, but in a stateless firewall a separate outbound rule has to be defined.
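The rules a security group is built from can be sketched as data. The dictionary below follows the `IpPermissions` shape that AWS SDKs such as boto3 use for ingress rules; the CIDR, port, and description are hypothetical examples, and the sketch only constructs the rule locally rather than calling AWS.

```python
# Sketch: a security group ingress rule allowing inbound HTTPS (TCP 443)
# only from one network. Mirrors the IpPermissions structure used by AWS
# SDKs; CIDR and description values are hypothetical examples.
def https_ingress_rule(cidr: str) -> dict:
    return {
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": cidr, "Description": "office network only"}],
    }

rule = https_ingress_rule("203.0.113.0/24")
print(rule["FromPort"], rule["IpRanges"][0]["CidrIp"])  # 443 203.0.113.0/24
```

Because security groups support only allow rules, anything not matched by a rule like this is denied by default.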

  • A user, by default, can create buckets to a limit of 100 for each AWS account.

  • Amazon web services offer a secure content delivery network service called CloudFront. It contains a network of proxy servers distributed over the globe that caches the content. Caching ensures that the content is delivered with low latency and high speed. Static and dynamic web content including .js, web videos, image files, .css, HTML, and other bulky media can be distributed with high speed using CloudFront.

  • It is not possible to change the private IP address of an EC2 instance. A private IP address is assigned to an instance at launch and remains linked to it for its entire lifetime.

  • Following are the steps to scale an instance vertically:

      Spin up a new instance that is larger than the one presently running.

      Stop the live instance and detach its root volume, noting down the volume's unique ID.

      Attach that root volume to the new server.

      Start the new instance.

  • Customized content can be created based on a user’s geographic location using Geo-Targeting. It helps in serving relevant content to the user. For example, with the help of Geo-Targeting, users in India can see news about local events that would have no relevance for users in the US.

  • Recovery Time Objective - The maximum acceptable delay between the interruption of a service and its restoration. It translates to an acceptable time window during which the service can be unavailable.

    Recovery Point Objective - The maximum acceptable amount of time since the last data recovery point. It translates to the amount of data loss that is acceptable: whatever falls between the last recovery point and the service interruption.

  • Route 53 is a highly scalable, cost-effective Domain Name Service provided by Amazon. It translates the domain name into a numerical internet protocol address that helps developers and businesses to route their end-user to applications on the internet.

  • AWS Lambda is a compute service that helps in running code without server management. Whenever a user wants to run code, the Lambda function does so, and the user pays only for the compute time consumed while the code is running.
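A minimal Lambda function in Python is just a handler that receives an event and a context object. The sketch below can be invoked locally; the event payload and the greeting logic are hypothetical examples, not a specific AWS workload.

```python
import json

# A minimal AWS Lambda handler sketch. Lambda invokes the configured
# handler(event, context) function; the event payload here is an example.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation (the context object is unused here, so None suffices):
response = handler({"name": "AWS"}, None)
print(response["body"])  # {"message": "Hello, AWS!"}
```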

  • Amazon Route 53 does this by:

      Server distribution - Like all other Amazon services, Route 53 is a global service, and Amazon has DNS servers distributed globally. A query created anywhere around the globe reaches a DNS server local to the customer, which provides low latency.

      Dependability - Critical applications require a high level of dependability, which Route 53 provides.

      Optimal Locations - Route 53 serves requests from the data center nearest to the client. Amazon Web Services has data centers located around the globe. Depending on the requirements and the configuration chosen, data can be cached in data centers anywhere in the world, and any data center’s server can respond to a request if it has the required data. The time taken to serve a request is reduced because the nearest data center responds to the client.

  • AWS provides a feature called connection draining that allows a server to process all current requests before being removed or getting updated.

    On enabling Connection Draining, the Load Balancer allows an instance which is being removed or updated to complete all its current requests, but it does not send any new requests to it. If Connection Draining is not enabled, the outgoing instance is shut off immediately, and its pending requests fail with errors.

  • Amazon Web Services provides a feature called auto-scaling which helps in scaling capacity automatically to maintain predictable and steady performance. Auto-scaling allows the scaling of multiple resources within minutes across various services. Additional resources of other AWS services can be scaled by combining Amazon EC2 Auto Scaling with AWS Auto Scaling.

  • Yes, the Amazon Machine Image can be shared.

  • You can secure an S3 bucket in the following two ways:

      ACL (Access Control List) - The access management of resources to buckets and objects is achieved through ACL. Every bucket’s object is associated with the ACL. It defines access type and grants access to the AWS accounts. Every time a request for a resource is received, its corresponding ACL is checked for verification of the user’s access to the resource.

      By default, Amazon S3 creates an ACL while creating a bucket that grants the resource owner full control over the resource.

      Bucket Policies - Only S3 buckets are applicable for bucket policies. The actions allowed and denied are defined by bucket policies. Bucket policies are linked to the bucket, not to the S3 object. However, the permissions which the bucket policy defines are applicable to all S3 objects.

  • An object associated with a resource through which permissions are defined is known as a policy. Whenever a user makes a request, the policies are evaluated by AWS. The policy determines whether an action is permitted or denied. Policies are saved in JSON document format.

  • Six types of policies are supported by AWS:

      Identity-based policies

      Permissions boundaries

      Resource-based policies

      Session policies

      Organizations SCPs

      Access Control Lists

  • Auto-scaling has the following benefits:

      Setup Scaling Quickly - Through a single interface, it sets target utilization levels for multiple resources. The mean utilization level of various resources can be seen in the same console, i.e., there is no need to move to a different console.

      Used for making scalable decisions - The responses to changes in different resources are automated by making scaling plans. It optimizes the cost and availability of resources. According to user preference, it automatically creates scaling policies and sets the targets. Applications are kept under monitoring, and capacity can be added or removed as per requirement.

      Maintains performance automatically - Even under the conditions of unpredictable workload, the application’s performance and availability are automatically optimized through Auto Scaling. To obtain the desired performance level it monitors the user's application continuously. The resources are automatically scaled as per demand.

  • Cloud computing refers to remotely accessing the software and hardware resources. It also allows us to manipulate and configure the data. It provides services like data storage, application, and infrastructure online. Since the software need not be installed on the computer, it offers the benefit of platform independence. It offers various services at reduced cost and increased speed.

  • A bucket policy contains the following:

      Sid: A statement identifier that describes what the statement does. For example, a statement adding a canned ACL might use the Sid AddCannedAcl, and one allowing an IP address might use IPAllow.

      Effect: It determines the action which will follow after the application of the policy. Eg: allowing or denying an action.

      Principal: It is a string that determines to whom the policy applies. Setting the principal to '*' applies the policy to everyone. It is also possible to specify the policy for individual AWS accounts.

      Action: The operation that the policy allows or denies. For example, for reading object data the action would be s3:GetObject.

      Resource: The S3 bucket (or objects) to which the statement applies. The bucket’s name has to be specified in ARN format; just entering the bucket name is not allowed. For example, if ABC-bucket is the bucket name, then the resource would be "arn:aws:s3:::ABC-bucket/*".
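Putting those elements together, a minimal bucket policy looks like the sketch below. It reuses the ABC-bucket example from the text; the Sid is a hypothetical label, and since policies are stored as JSON documents, the dict is serialized at the end.

```python
import json

# Sketch: a bucket policy combining the elements described above.
# ABC-bucket follows the example in the text; the Sid is hypothetical.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",                       # the action is permitted
            "Principal": "*",                        # applies to everyone
            "Action": "s3:GetObject",                # read object data
            "Resource": "arn:aws:s3:::ABC-bucket/*", # every object in the bucket
        }
    ],
}

# Policies are stored and transmitted as JSON documents:
document = json.dumps(policy, indent=2)
print(document)
```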

  • Even distribution of traffic among multiple EC2 instances is done by the load balancer. It also ensures that the incoming traffic is highly scalable.

      It checks each instance’s health and automatically decides how to route traffic.

      A load balancer provides a hassle-free experience by routing a user’s traffic to the same virtual machine across multiple requests.

  • Some of the IAM password policy options are:

      Setting minimum password length.

      Adding at least one special character or number to the password.

      Passwords should include uppercase, lowercase and non-alphanumeric characters.

      Passwords can expire automatically, and old passwords can’t be reused.

      Account administrators can be contacted on the expiration of the password.

  • The permissions stored in JSON format are known as identity-based policies. These policies are allowed to be linked to an individual user or a group of users or roles. A user’s performable actions are determined by these policies considering the allowed resources and conditions.

    Identity-based policies can be further divided into two:

      Managed Policies: Multiple users, groups or roles can be linked through these policies. The managed policies can be of two types:

        AWS Managed Policies: AWS creates and manages these policies. For the first-time experience of using a policy, it is recommended to use AWS Managed Policies.

        Custom Managed Policies: The policies which are created by the user are called custom managed policies. Compared to AWS Managed Policies, Custom Managed Policies provide control over the policies with more precision.

      Inline Policies: These policies are also created and managed by the user. Direct encapsulation of these policies is done for a single user, group or role.

  • Of Amazon Mechanical Turk, Amazon Elastic MapReduce, Amazon DevPay, and Multi-Factor Authentication, it is Amazon DevPay that helps with billing and account management in AWS.

  • General-purpose instances can be of 3 types:

      T2 instances: These instances while sitting idle receive CPU credits and while being active they use these CPU credits. These instances don’t make consistent use of CPU but as per the demand of the workload, they can burst to a higher level.

      M4 instances: These are the latest version of the General-purpose instances. For memory management and network resources, M4 instances are the best choice. Applications with high demand for microservices make extensive use of M4 instances.

      M3 instances: M3 instances are a predecessor of the M4 instances. These are primarily used for data processing tasks.


  • Differences between a Security Group and a Network Access Control List:

      A security group is associated with an EC2 instance; a network ACL is associated with a subnet.

      A security group is stateful: changes made in inbound rules are reflected in outbound traffic automatically. A network ACL is stateless: outbound rules don't reflect changes in inbound rules.

      Security groups support only allow rules (everything else is denied by default); network ACLs support both allow and deny rules.

      The security group is the 1st layer of defense; the network ACL is the 2nd layer of defense.


  • The two types of access are:

      Console Access - A password must be created to log in to an AWS account in the case of console access.

      Programmatic access - In the case of Programmatic access, it is mandatory to make an API call by an IAM user. AWS CLI can be used to make an API call. An access key ID and secret access key must be created to use AWS CLI.

  • Amazon web services provide an easy-to-use block storage service called Elastic Block Store or EBS. It is designed to handle extensive transaction and throughput workloads. It is used with Amazon Elastic Compute Cloud (EC2) to store persistent data. Even if the EC2 instances are shut down, data remains stored on EBS volumes. With EBS dynamic scaling, volumes can be attached to or detached from any EC2 instance.

  • EBS offers the following storage options:

      General Purpose SSD (GP2): It provides optimized balance for multiple IT workloads between their performances and costs. General Purpose SSD can be used in apps, dev, test environments, virtual desktops, etc.

      Provisioned IOPS SSD (IO1): It has high-performance functionality which greatly benefits critical IT workloads. Large databases and business applications requiring a throughput of 250 MiB/s per volume make use of IO1.

      Throughput Optimized HDD (ST1): It provides an alternative at a comparatively lower cost for workloads with huge storage volume which require high-performance throughput. Log processing, applications using big data, and streaming workloads are some use cases of ST1.

      Cold HDD (SC1): It is a cheap substitute for workloads that require large volume data storage to be maintained at minimal costs. For example, less frequently accessed workloads.

  • A standard EC2 Instance store only provides temporarily available storage on physical EC2 host servers. Temporary data contents find use of EC2 instance store. These may be some cache files, buffers, or replicated files in the host servers. EBS provides us with the benefit of storing data persistently. EBS offers various storage volume options which makes the data available to the user despite the operating life of the EC2 instance.

  • There are 2 types of AMI provided by AWS:

      Instance store backed

      EBS backed

    For an instance-store-backed AMI, the root device of the EC2 instance resides on the hard drive of the virtual machine's host.

      The AMI is copied on the creation of the instance.

      This instance can’t be stopped as its root device resides on the virtual machine's hard drive. The instance can only be terminated and cannot be recovered after deleting.

      Data can be lost if the hard drive of the virtual machine fails.

      It must be left in the running state until the work is completed.

      Charges are calculated from the start of the instance till its termination.

  • Amazon provides a cloud big data platform called Elastic Map Reduce or EMR. It is an easy-to-use, cost-effective service which allows the processing of large amounts of data with the help of open-source tools like Apache Hive, Spark, HBase, etc. It automates time-consuming tasks such as tuning clusters and capacity provision which makes it easy for operating and scaling the big data environment.

  • Elastic Map-reduce has a central component known as a cluster. The collection of multiple EC2 instances makes up a cluster.

    A node is an instance in a cluster. Specific roles are attached to each node known as a node type. Various software components are installed by Amazon EMR on the node type.

  • Various node types in EMR are:

      Master node: A master node distributes tasks among the various cluster nodes by running the software components. The status of all tasks is tracked by the master node, and it also monitors the cluster’s health.

      Core node: By running the software components, a core node processes various tasks and stores the data in the Hadoop Distributed File System (HDFS). Clusters having multiple nodes will have at least one core node.

      Task node: Like the core node, the task node runs software components and processes tasks, but it does not store data in HDFS. These nodes are optional.

  • Amazon provides a fully managed messaging service called Simple Notification Service or SNS. It provides services for application-to-application as well as application-to-person communication. It provides message and notification management and delivery from any cloud platform. While using auto-scaling, Amazon SNS automatically triggers the services and sends emails about the growth of the EC2 instance. It uses the concept of parallel processing and can send to a large number of users at once.

  • There are 2 types of SNS clients:

      Publishers - They are also referred to as producers because they produce and send messages to the logical access point i.e., SNS.

      Subscribers - Subscribers are the clients who are the recipients of the notification or the message sent from the SNS. The message is received over the supported protocols such as email, SMS, SQS, etc. The subscribers can be any web servers or lambda function or email addresses.

  • Simple Notification Server offers the following benefits:

      Instantaneous delivery - Since SNS is based on push-based delivery, as soon as you publish a message SNS is pushed to deliver the message to multiple subscribers.

      Flexible - SNS is flexible in the sense that it supports various endpoint types. Multiple transport protocols like HTTP, SMS, email, etc provide support to multiple endpoint types for receiving the message.

      Inexpensive - SNS provides a cost-effective service and the billing is done on a pay-as-you-go basis, so the user only needs to pay while using the resources, and the up-front costs are not included.

      Easy to use - The simple point and click interface of the Web-based AWS Management Console of the SNS makes it easy to use.

      Simple Architecture - SNS offloads the message filtering logic for the simplification of the messaging architecture. It offloads message routing logic from the subscribers and publishers. SNS sends only the messages of interest to the subscribers instead of sending all the topic messages.
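The message filtering mentioned above can be sketched locally: a subscription's filter policy lists acceptable values per message attribute, and only matching messages are delivered. This is a simplified local sketch, not SNS API code — the attribute names and values are hypothetical, and real SNS filter policies support more operators than exact matching.

```python
# Simplified sketch of SNS message filtering: a subscription's filter
# policy maps attribute names to allowed values; a message is delivered
# only if every listed attribute matches. Values below are examples.
def matches(filter_policy: dict, attributes: dict) -> bool:
    return all(
        attributes.get(name) in allowed
        for name, allowed in filter_policy.items()
    )

filter_policy = {"event_type": ["order_placed", "order_cancelled"]}
print(matches(filter_policy, {"event_type": "order_placed"}))  # True
print(matches(filter_policy, {"event_type": "user_signup"}))   # False
```

This is why SNS can offload routing logic from publishers and subscribers: the topic applies the filter, so subscribers never see messages outside their interest.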

  • Amazon provides a fully managed messaging queue service called Simple Queue Service or SQS. It enables decoupling and scaling of serverless applications, micro, and distributed systems. SQS removes the complexity associated with the management and operation of middleware. It allows sending, receiving, and storing of messages between different software components without any loss of messages. It doesn’t even require the availability of other services. The maximum visibility timeout of an SQS queue is 12 hours.

  • SQS has 2 types of queues:

    Standard Queue

      It is the default queue type in SQS.

      The number of transactions allowed per second is unlimited.

      At-least-once delivery of a message is guaranteed, which means a message might occasionally get delivered more than once.

      It makes a best effort to deliver messages in order but does not fully guarantee ordering.

    FIFO Queue

      The FIFO Queue is complementary to the standard Queue.

      It guarantees ordered delivery i.e., they are sent and received in the same order.

      The FIFO queue allows ordered and on-time delivery of the message which remains available until it gets deleted by the user after processing.

      Duplicate values are not allowed in the Queue.

      Message groups allowing groups with multiple ordered messages in a single queue are also supported.

      FIFO queues have a transaction per second limit of 300 but possess all standard queue capabilities.
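Two of the FIFO guarantees above — strict ordering and rejection of duplicates — can be shown with a toy sketch (not SQS API code; the deduplication IDs and bodies are examples, and real SQS deduplication also involves a time window):

```python
from collections import deque

# Toy sketch of two FIFO-queue guarantees: strict first-in-first-out
# ordering, and dropping messages whose deduplication ID was seen before.
class FifoQueue:
    def __init__(self):
        self.messages = deque()
        self.seen_ids = set()

    def send(self, dedup_id: str, body: str) -> bool:
        if dedup_id in self.seen_ids:
            return False                 # duplicate: not enqueued
        self.seen_ids.add(dedup_id)
        self.messages.append(body)
        return True

    def receive(self) -> str:
        return self.messages.popleft()   # always first-in, first-out

q = FifoQueue()
q.send("m1", "first")
q.send("m2", "second")
q.send("m1", "first again")  # rejected as a duplicate of m1
print(q.receive())  # first
print(q.receive())  # second
```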



  • Differences between Simple Queue Service (SQS) and Simple Notification Service (SNS):

      SQS is pull-based: the receivers have to pull messages from the queue. SNS is push-based: it pushes messages to multiple subscribers.

      In SQS, a message is not received by multiple receivers at the same time. In SNS, all subscribers receive a message at the same time.

      SNS delivers messages immediately, while SQS messages are delivered with some latency because receivers must poll the queue.


  • Route 53 provides the following routing policies:

      Simple Routing Policy - Used for a single resource that performs a given function for the domain; for example, a web server that serves content for a website. Route 53 responds to DNS queries according to the values available in the record.

      Weighted Routing Policy - Traffic can be routed in specific proportions to different resources with the help of a weighted routing policy. For example, one server can get 80% of the traffic while the remaining 20% goes to another server. Weights can be assigned in the range 0 to 255. This policy is applicable when the same function is performed by multiple resources. For example, when multiple web servers host a website, a unique weight is given to each web server. It associates a single DNS name with multiple resources.

      Latency-based Routing Policy - This policy allows Route53 to respond to the lowest latency DNS queries. Latency-based Routing policy is used when a single domain is accessed by multiple resources. The resource with the fastest response and lowest latency is identified by Route 53.

      Failover Routing Policy - Used for active-passive failover: traffic is routed to a secondary resource when the primary resource is unhealthy.

      Geolocation Routing Policy - Routes traffic based on the geographic location of the users making the requests.
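The weighted policy's proportions follow directly from the weights: each record receives weight / (sum of all weights) of the traffic. A small sketch (server names and weights are examples):

```python
# Sketch of weighted routing proportions: each record's share of traffic
# is its weight divided by the sum of all weights. Names are examples.
def traffic_share(weights: dict) -> dict:
    total = sum(weights.values())
    return {name: weight / total for name, weight in weights.items()}

shares = traffic_share({"server-a": 80, "server-b": 20})
print(shares)  # {'server-a': 0.8, 'server-b': 0.2}
```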

  • Amazon offers a load balancing service called Elastic Load Balancer or ELB. It helps in the distribution of incoming traffic automatically to various targets including EC2 instances, lambda functions, IP addresses, etc. It handles application traffic of varying loads in single as well as multiple availability zones. It aims to make an application fault-tolerant by offering various load balancers which provide robust security, high availability, and automatic scaling.

  • ELB offers the following load balancers:

      Classic Load Balancer: It operates at request as well as connection level and allows various Amazon EC2 instances to have basic load balancing functionality. Applications built within the EC2-Classic network make use of the Classic Load Balancer.

      Application Load Balancer: Load balancing of HTTP and HTTPS traffic is best done using this load balancer. Based on the type of request, it routes the traffic to Amazon VPC targets. It makes routing decisions at the application layer.

      Network Load Balancer: It makes routing decisions at the transport layer, where extreme performance is required. It routes traffic to Amazon VPC targets and is capable of maintaining ultra-low latency even while handling millions of requests per second. It uses a flow hash algorithm to select one target from a target group.

      Gateway Load Balancer: The running, scaling and deploying of third-party virtual networking appliances are made easy through Gateway Load Balancer. It maintains transparency with the traffic source and destination while providing third-party applications with load balancing and auto-scaling. This feature of the Gateway Load Balancer makes it suitable for providing security, network analytics, and other services to third-party applications.
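The flow hash idea behind the Network Load Balancer can be illustrated with a short sketch. The target IPs and the connection 5-tuple below are made up, and a real NLB's hashing differs in detail; the point is only that every packet of one flow maps to the same target:

```python
import hashlib

def pick_target(flow, targets):
    """Map a connection's 5-tuple to one backend via a hash, so every
    packet of the same flow lands on the same target (illustrative only)."""
    key = "|".join(str(part) for part in flow)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return targets[int(digest, 16) % len(targets)]

targets = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]  # hypothetical targets
flow = ("tcp", "203.0.113.5", 40312, "198.51.100.7", 443)

# The same flow is always routed to the same target:
print(pick_target(flow, targets) == pick_target(flow, targets))  # True
```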

  • ElastiCache is a service provided by Amazon for deploying, operating, and scaling an in-memory cache in the cloud. It allows fast retrieval of data from a managed in-memory cache, which improves the overall performance of web applications by not relying entirely on slower disk-based databases. Caching provides low latency because critical pieces of data are stored in memory.
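The caching pattern described here (often called cache-aside) can be sketched with a plain dictionary standing in for the managed in-memory cache; `slow_db_lookup` is a made-up stand-in for a disk-based database query:

```python
import time

# Minimal cache-aside sketch: the dict stands in for a managed
# in-memory cache such as ElastiCache.
cache = {}

def slow_db_lookup(key):
    time.sleep(0.01)  # simulate disk-based database latency
    return f"value-for-{key}"

def get(key):
    if key in cache:             # cache hit: served from memory
        return cache[key]
    value = slow_db_lookup(key)  # cache miss: fall back to the database
    cache[key] = value           # populate the cache for next time
    return value

get("user:42")         # first call hits the "database"
print(get("user:42"))  # second call is served from the cache
```

The first lookup pays the database latency; every subsequent lookup for the same key is answered from memory, which is the performance win the answer describes.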

  • ElastiCache can be of 2 types:

      Memcached - A Memcached-compatible, in-memory key-value store service. It provides a high-performance data store that is easy to use, and can serve either as a cache or as a session store. Real-time applications commonly make use of Memcached.

      Redis - Redis is short for Remote Dictionary Server. It is an open-source, in-memory key-value data store that is fast: it can process millions of requests per second with millisecond response times. Healthcare, IoT, and gaming are a few real-time application areas that make use of Redis. It is also useful for geospatial workloads, caching, session management, etc.

  • Redis provides the following benefits:

      It stores the data in memory providing a faster response time.

      It provides support of various data structures such as strings, hashes, lists, etc.

      Offers simplicity by allowing data operations with few lines of code.

      Extensible as it has a vibrant community and is open source.

      Allows replication of data in multiple servers and provides point-in-time backups supporting persistence.

      The solutions have high availability and perform consistently with reliability.

  • The benefits of Memcached are:

      Provides faster response time as it stores data in the server’s main memory. It can support millions of operations per second.

      The simplicity of Memcached’s design makes it powerful and easy to use in developing applications.

      It is highly scalable due to its distributed and multithreaded architecture.

      It is open-source and has an active support community.


  • The differences between Memcached and Redis are:

      Data structures - Memcached does not support advanced data structures like sets, lists, etc.; Redis supports advanced data structures.

      Multithreading - Memcached supports multithreading as it can use multiple processing cores; Redis does not have a multithreaded architecture.

      Transactions - Memcached does not support transactions; Redis’s transaction support allows a group of commands to be executed together.

      Lua scripting - Memcached does not support Lua scripting; Redis provides Lua scripts for boosted performance and simpler application code.


  • Amazon provides a data warehouse service in the cloud called Redshift. It is a powerful, fully managed, fast, and reliable big data warehouse, and an economical option for data warehousing. Redshift uses Online Analytical Processing (OLAP). A cluster consists of either a single node or multiple nodes. A single node can store up to 160 GB of data. A multi-node cluster has more than one node and is further divided into a leader node and compute nodes.

  • Redshift is fast because of the following reasons:

      Columnar Data Storage - Amazon Redshift stores data in a columnar format instead of rows. The column-wise format suits data warehousing and analytical processing, where queries often aggregate over huge data sets, while the row-wise format suits loading for transaction processing. Column-based systems improve query performance because data stored sequentially on the storage media requires fewer input/output operations.

      Advanced Compression - Because the data is stored sequentially, columnar data can be compressed far more than row-format data. Compared to traditional relational data stores, Amazon Redshift achieves high compression by employing multiple compression techniques, and it requires less space as there is no need for indexes or materialized views. As data is loaded into an empty table, Amazon Redshift samples it and automatically selects the most suitable compression techniques.

      Massively Parallel Processing - Amazon Redshift distributes data automatically and spreads the query load over multiple nodes. Adding new nodes to a data warehouse is made easy through Amazon Redshift, which allows faster query performance as the data warehouse grows.
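The columnar-storage and compression points above can be illustrated in a few lines. The table data is invented, and run-length encoding stands in for the richer set of encodings Redshift actually chooses from:

```python
from itertools import groupby

# Row-oriented vs column-oriented layout of the same (made-up) table.
rows = [
    ("2024-01-01", "US", 10),
    ("2024-01-01", "US", 12),
    ("2024-01-01", "EU", 7),
    ("2024-01-02", "US", 9),
]

# Pivot the rows into columns, as a columnar store would lay them out.
columns = {name: [r[i] for r in rows]
           for i, name in enumerate(["date", "region", "amount"])}

def run_length_encode(values):
    """Compress a run of repeated values into (value, count) pairs."""
    return [(v, len(list(g))) for v, g in groupby(values)]

print(run_length_encode(columns["region"]))
# → [('US', 2), ('EU', 1), ('US', 1)]
```

Because similar values sit next to each other within a column, runs of repeats are common, which is why columnar data compresses so much better than interleaved row data.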

  • Redshift has the following features:

      It has an easy setup. The operation management and deployment of new models are also simple.

      It is the most cost-effective data warehouse as the user only has to pay for what he uses.

      It allows the user to choose the preferred node type according to his requirement.

      It is highly scalable and can be scaled according to needs.

      It provides flexibility and can be used to query the Amazon S3 data lake.

      It can be encrypted and set to use SSL which provides security.

      Compression techniques, columnar data storage, and parallel processing improve the performance time.

  • Amazon EC2 provides a feature called the Elastic IP (EIP) address, designed for dynamic cloud computing. An EIP is a public IP address that can be linked to an EC2 instance; however, the address is not tied to a particular instance but to the AWS account. An EIP can mask the failure of an instance by being unlinked from one EC2 instance and associated with another EC2 instance in the same AWS account.

  • EIP has the following characteristics:

      It does not change over time, i.e., it is a static address.

      It is first allocated to an account and then associated with an EC2 instance.

      EIP which has been disassociated remains allocated until explicitly released.

      It is used in networks with specific border groups only.

      It can only be used in a specific region.

      EIP is generated from the pool of Amazon’s IPv4 addresses.
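The lifecycle described above can be sketched as a toy model; the address and instance IDs are hypothetical, and real allocations of course go through the EC2 API rather than a Python class:

```python
# Toy model of the Elastic IP lifecycle: allocated to an account,
# associated with an instance, and still allocated after
# disassociation until explicitly released.
class ElasticIP:
    def __init__(self, address):
        self.address = address   # static: never changes once allocated
        self.allocated = True
        self.instance = None

    def associate(self, instance_id):
        self.instance = instance_id

    def disassociate(self):
        self.instance = None     # still allocated to the account

    def release(self):
        self.allocated = False

eip = ElasticIP("198.51.100.25")  # hypothetical public IPv4 address
eip.associate("i-0abc123")        # attach to a failing instance...
eip.disassociate()
eip.associate("i-0def456")        # ...then remap onto a healthy one
print(eip.instance, eip.allocated)  # i-0def456 True
```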

  • The regions all around the world in which AWS is based are known as the AWS Global Infrastructure. It is a group of high-level IT services comprising the following:

      Availability Zones

      Regions

      Edge locations

      Regional Edge Caches

  • The endpoints of AWS that are used for caching content are known as edge locations. There are currently more than 150 edge locations around the globe. Edge locations serve the CDN (Content Delivery Network) service, CloudFront. Edge locations are usually based in major cities so that content can be distributed with low latency to end users.

    For example, if a user from Sri Lanka wants to access a website then the edge location closest to Sri Lanka would receive his request where the cached data can be read.
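The Sri Lanka example boils down to a lowest-latency lookup. The edge cities and latency figures below are invented purely for illustration:

```python
# Hypothetical latencies (ms) from a user in Sri Lanka to a few
# edge locations; the request goes to whichever is closest.
edge_latency_ms = {
    "Mumbai": 35,
    "Singapore": 48,
    "Frankfurt": 160,
}

def nearest_edge(latencies):
    """Return the edge location with the lowest measured latency."""
    return min(latencies, key=latencies.get)

print(nearest_edge(edge_latency_ms))  # Mumbai
```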

  • A user may face the following issues:

      The server may not recognize the user key.

      Denial of permission

      The network connection is unexpectedly closed by the server.

      The private key is unprotected.

      The user’s browser is unable to connect to the server.

      Connection timeout.

      Cannot ping the instance

      Host key gets refused by the server.

  • Elastic Beanstalk is a service provided by Amazon which is cost-effective and easy to use. It helps in deploying, managing, and scaling web applications. Services and applications developed using Java, PHP, Python, etc. can make use of Beanstalk. A user just has to upload the code, after which Elastic Beanstalk automatically handles the operations related to its deployment, such as load balancing, health monitoring, and auto-scaling. The user is simultaneously able to control and access the AWS resources that support the application.

  • Elastic Beanstalk provides the following benefits:

      Elastic Beanstalk enables fast and easy management as well as deployment of the application.

      On increase or decrease of the application traffic, Elastic Beanstalk automatically scales up or down.

      Developers can deploy an application without any infrastructure knowledge, while still being responsible for keeping the application secure and user-friendly.

      It provides a cost-effective service; a user only pays for the AWS resources used.

      Customization of AWS services is allowed by Elastic Beanstalk. The users can configure the features they want to use for developing the application.

      On a change of platform, the application is automatically updated. AWS professionals manage the infrastructure and updates of the platform.
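The scale-up/scale-down behavior can be sketched as a simple threshold rule. The CPU thresholds and instance limits below are invented for illustration, not Beanstalk's actual defaults:

```python
# Toy version of an auto-scaling decision: add an instance under heavy
# load, remove one when idle, and stay within min/max bounds.
def desired_instances(current, avg_cpu, scale_up_at=70, scale_down_at=30,
                      min_instances=1, max_instances=4):
    if avg_cpu > scale_up_at and current < max_instances:
        return current + 1  # traffic spike: scale out
    if avg_cpu < scale_down_at and current > min_instances:
        return current - 1  # idle: scale in
    return current

print(desired_instances(2, avg_cpu=85))  # 3
print(desired_instances(2, avg_cpu=10))  # 1
```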


  • The differences between horizontal and vertical scaling are:

      Usage - Horizontal scaling is used in distributed systems; vertical scaling is used in virtualization.

      Implementation - Horizontal scaling is difficult to implement; vertical scaling is easy to implement.

      Communication - Horizontal scaling utilizes network calls; vertical scaling allows interprocess communication.

      Failure - A horizontally scaled system fails only if the whole system fails; a vertically scaled system has a single point of failure.


  • Amazon offers a service for modeling and setting up AWS resources called CloudFormation. It reduces the time spent on resource management and helps the user focus on the running applications. The user creates a template specifying all the required AWS resources, and CloudFormation provisions and configures the mentioned resources. The user does not handle the individual creation and configuration of AWS resources, as CloudFormation takes care of that. It addresses the need for standardization and replicates architectures for proper optimization and execution.

  • CloudFormation has the following benefits:

      Helps in reduction of the deployment time of the infrastructure.

      Increases the confidence in deployment models.

      Environment repair time is reduced.

      It scales up the resources for replicating the complex environments.

      Definitions between different products are reused.
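A template can be as small as a single resource. The sketch below builds one as a Python dict with a hypothetical bucket name and prints it as JSON, one of the formats CloudFormation accepts; in practice you would hand this template to CloudFormation, which provisions the resources it describes:

```python
import json

# Minimal CloudFormation-style template declaring one S3 bucket;
# the bucket name is made up for illustration.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-reports-bucket"},
        }
    },
}

print(json.dumps(template, indent=2))
```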

  • Following are the advantages of AWS’s Disaster Recovery (DR) solution:

      AWS helps companies reduce their capital expenses by offering cost-effective storage and DR solutions.

      Provides greater productivity gains and less time for setup.

      Even during seasonal fluctuations, AWS helps the companies to scale up.

      All the present data is replicated to the cloud.

      Files are retrieved at a fast pace.

  • Amazon provides a key-value and document database known as DynamoDB. It delivers single-digit-millisecond performance at any scale. It is a fully managed, multi-region database whose setup ensures durability, with built-in features like backup, security, and in-memory caching. DynamoDB processes more than 10 trillion requests per day and delivers reliable performance wherever a fast and flexible NoSQL database is required.
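DynamoDB's key-value model can be mimicked with a dictionary keyed on a partition key and sort key; the table contents below are made up, and a real application would use an AWS SDK instead of a dict:

```python
# Toy key-value "table": each item is addressed by a partition key
# and a sort key, as in DynamoDB's data model.
table = {}  # (partition_key, sort_key) -> item

def put_item(pk, sk, item):
    table[(pk, sk)] = item

def get_item(pk, sk):
    return table.get((pk, sk))

put_item("user#42", "profile", {"name": "Asha", "plan": "pro"})
print(get_item("user#42", "profile")["name"])  # Asha
```

Lookups by full key are constant-time, which mirrors why DynamoDB can keep latency low regardless of table size.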


  • The differences between DynamoDB and Redshift are:

      Use case - DynamoDB is used as a database for frequently modified data; Redshift is used for data warehouses.

      SQL - DynamoDB does not support the SQL query language; Redshift supports SQL.

      Secondary indexes - DynamoDB supports secondary indexes; Redshift supports only restricted secondary indexes.

      Server-side scripting - DynamoDB does not support server-side scripting; Redshift supports it through user-defined functions in Python.


  • Amazon S3 Glacier is a cloud storage class used for data archiving and long-term backup. It is secure, durable, and available at an extremely low cost. Glacier is designed to deliver extremely high durability and provides strong security and capabilities that help meet even stringent regulatory requirements. Amazon S3 Glacier provides three choices for accessing archives, with retrieval times ranging from a few minutes to several hours.

  • The differences between AWS and OpenStack are:

      Amazon is the owner and distributor of AWS, while OpenStack is an open-source cloud computing platform.

      Cloud computing services like IaaS, PaaS, and SaaS are offered by AWS; OpenStack, on the other hand, is itself a cloud computing platform based on IaaS.

      OpenStack being open source is free for use while AWS is paid.

      AWS makes use of templates for performing repeatable functions while OpenStack uses text files to do the same.

  • Subnets offer the advantage of efficiently utilizing networks that have a large number of hosts, by dividing one large network into smaller, more manageable networks.
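The point about subnets can be shown with Python's standard `ipaddress` module: one large /16 block split into 256 smaller /24 networks (the address block is an arbitrary private range chosen for the example):

```python
import ipaddress

# Split a large private /16 network into smaller /24 subnets.
network = ipaddress.ip_network("10.0.0.0/16")
subnets = list(network.subnets(new_prefix=24))

print(len(subnets))              # 256 subnets
print(subnets[0])                # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256 addresses per subnet
```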

  • In a DDoS (distributed denial-of-service) attack, a service is made unavailable by using multiple resources to overwhelm the target with incoming traffic. It is a malicious attempt to overload a targeted server or network with fake traffic, and is a subclass of the denial-of-service attack. A DDoS attack does not itself breach the security of your network; rather, it blocks the network for legitimate users. It can, however, be used as a mask under which other attacks are planted to breach the network’s security.

The above-mentioned AWS Interview Questions and Answers cover a wide range of questions that you may encounter while interviewing in the AWS domain. These questions and answers give a good idea about the AWS domain but it does not provide you in-depth knowledge about AWS. If you are interested in learning about AWS in detail, check out AWS Training in Chennai at FITA Academy. This course provides in-depth training in AWS which will help you in achieving expertise in AWS. Their expert mentors will guide you to become an AWS professional.

