Cloud Pass
AWS Certified Solutions Architect - Associate (SAA-C03)


1,007+ Practice Questions with AI-Verified Answers

- Free questions & answers: real exam questions
- Detailed, AI-powered explanations
- Real exam-style questions, closest to the real exam

AI-Powered

Triple AI-Verified Answers & Explanations

Every AWS Certified Solutions Architect - Associate (SAA-C03) answer is cross-verified by three leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Exam Domains

- Design Secure Architectures (Weight 30%)
- Design Resilient Architectures (Weight 26%)
- Design High-Performing Architectures (Weight 24%)
- Design Cost-Optimized Architectures (Weight 20%)

Practice Questions

Question 1

A web service runs on EC2 instances behind a load balancer, but many clients can reach it only through firewalls that allow traffic solely to allowlisted IP addresses. The service must be reachable via fixed IP addresses. What should a solutions architect recommend?

Correct. A Network Load Balancer supports static IP addresses by using Elastic IPs (or AWS-owned static IPs) per Availability Zone via subnet mappings. This directly addresses client firewall allowlisting requirements while maintaining multi-AZ high availability and managed load balancing. Clients should allowlist all NLB EIPs (one per enabled AZ) to ensure resilience during failover.

Incorrect. An Application Load Balancer cannot be assigned Elastic IP addresses. ALB is accessed via a DNS name and the underlying IPs can change over time due to scaling and maintenance. For IP allowlisting requirements, ALB alone is not suitable unless paired with another service (e.g., Global Accelerator), which is not offered in the options.

Incorrect. A Route 53 A record can point to an Elastic IP, but that would route traffic to a single public endpoint and does not provide load balancing or inherent multi-AZ resilience. You would be replacing the load balancer with a single IP target, creating a single point of failure and losing managed load balancing behavior.

Incorrect. Putting a public EC2 proxy in front of the load balancer can provide a fixed IP, but it is an anti-pattern for resilience and operations: it introduces a single point of failure (unless you build and manage multiple proxies), adds patching/maintenance burden, and can become a bottleneck. AWS-managed solutions (NLB with EIPs) are preferred for availability and security.

Question Analysis

Core Concept: This question tests how to provide fixed, allowlistable public IP addresses for a load-balanced service on AWS. The key distinction is that most AWS load balancers (especially ALB) do not provide static IPs; instead, you typically use DNS. When customers require IP-based firewall allowlisting, you need an architecture that can present stable IPs.

Why the Answer is Correct: A Network Load Balancer (NLB) can be associated with Elastic IP addresses (EIPs) for each subnet/AZ where the NLB has a node. This gives you stable, fixed public IPs that clients can allowlist, while still providing managed load balancing and high availability. NLB operates at Layer 4 (TCP/UDP/TLS), which is sufficient for many web services and is commonly used specifically for the “static IP for load balancer” requirement.

Key AWS Features:
- NLB + Elastic IPs: You can allocate EIPs and assign them to the NLB’s subnet mappings, resulting in one static IP per AZ.
- High availability: Deploy the NLB across multiple AZs; clients allowlist all EIPs.
- Preserves client IP: NLB can pass the source IP to targets (useful for logging and security controls).
- Works with TLS: NLB supports TLS listeners if needed, or you can terminate TLS on targets.

Common Misconceptions: Many assume an Application Load Balancer can use EIPs; it cannot. ALB is reached via DNS name and its underlying IPs can change. Another misconception is to “just use Route 53 to point to an EIP,” but an EIP is a single endpoint and does not provide load balancing or multi-AZ resilience by itself.

Exam Tips: When you see “clients require fixed IPs/allowlisting” + “load balancer,” think NLB with EIPs (or AWS Global Accelerator in other scenarios, but it’s not an option here). Also remember: ALB/NLB are typically addressed by DNS, but only NLB supports static IPs via EIPs. Ensure you account for one EIP per enabled AZ and instruct clients to allowlist all of them.
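The subnet-mapping mechanism described above can be sketched as the request payload you would pass to the ELBv2 CreateLoadBalancer API. This is a minimal illustration, not a deployable configuration: the subnet IDs and EIP allocation IDs are placeholders, and only two of the possible AZs are shown.

```python
# Sketch of CreateLoadBalancer parameters for an NLB with one Elastic IP
# per Availability Zone. All resource IDs below are placeholders.
nlb_params = {
    "Name": "web-service-nlb",
    "Type": "network",            # only NLBs support EIPs via subnet mappings
    "Scheme": "internet-facing",
    # One subnet mapping per enabled AZ; AllocationId pins an Elastic IP
    # to the NLB node in that AZ, giving clients a fixed address to allowlist.
    "SubnetMappings": [
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-11111111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-22222222"},
    ],
}

# Clients must allowlist every EIP (one per AZ) so traffic still flows
# during an AZ-level failover.
eip_allocations_to_allowlist = [m["AllocationId"] for m in nlb_params["SubnetMappings"]]
print(eip_allocations_to_allowlist)
```

Note that each enabled AZ contributes exactly one static address, which is why the explanation stresses allowlisting all of them.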

Question 2

A global consulting firm uses AWS Organizations to manage 25 AWS accounts across different regional offices. The headquarters IT team maintains a central Amazon S3 bucket in the management account containing sensitive client contracts and compliance documents. Following recent security audits, the firm must ensure that only users from accounts within its AWS Organization can access this critical S3 bucket, while minimizing ongoing administrative tasks and maintenance effort. Which solution achieves the access control requirements with the LEAST operational overhead?

Correct. aws:PrincipalOrgID in the S3 bucket policy restricts access to principals that belong to the specified AWS Organization ID. It automatically adapts as accounts are added to or removed from the Organization, requiring no ongoing policy edits. This is the standard, lowest-maintenance approach for Organization-scoped access to shared resources like a central S3 bucket.

Not the best choice for least overhead. aws:PrincipalOrgPaths can restrict access based on OU membership, but it introduces additional complexity and potential maintenance if accounts move between OUs or if the OU structure changes. The requirement is simply “within the Organization,” so using Organization ID is simpler and more stable than OU path-based controls.

High operational overhead. CloudTrail-driven automation to detect org membership changes and then rewrite bucket policies is complex, brittle, and unnecessary. It adds moving parts (rules, functions, permissions, error handling) and creates risk of policy drift or delayed enforcement. Native IAM condition keys already provide real-time evaluation without custom automation.

Not aligned with the requirement and increases admin work. Tagging each user (or role) and enforcing aws:PrincipalTag requires consistent tag governance across 25 accounts and ongoing lifecycle management as identities change. It also doesn’t inherently ensure the principal is in the Organization—only that it has a tag—so it’s not the most reliable or minimal-maintenance control for Organization membership.

Question Analysis

Core Concept: This question tests Amazon S3 resource-based access control using IAM policy condition keys that integrate with AWS Organizations. The goal is to restrict access to a central S3 bucket so that only principals (users/roles) that belong to accounts in a specific AWS Organization can access it, with minimal ongoing administration.

Why the Answer is Correct: Using the S3 bucket policy condition key aws:PrincipalOrgID is the most direct, low-maintenance way to enforce “only principals from my Organization” access. You specify the Organization ID (o-xxxxxxxxxx) once in the bucket policy, and S3 evaluates the caller’s principal at request time. Any account that is a member of the Organization automatically matches; any account outside the Organization is denied (when combined with an explicit Deny or when Allow statements are scoped accordingly). This eliminates the need to enumerate account IDs, manage per-account statements, or update policies as accounts join/leave.

Key AWS Features:
- S3 bucket policies (resource-based policies) can use global condition keys.
- aws:PrincipalOrgID condition key restricts access to principals that are part of a specific AWS Organization.
- Works well with centralized data lakes, shared compliance repositories, and multi-account governance patterns.
- Aligns with AWS Well-Architected Security Pillar: implement least privilege and reduce manual processes that can drift.

Common Misconceptions:
- Many assume you must list every account ID in the bucket policy. That increases operational overhead and is error-prone as accounts change.
- Some confuse OU-based controls (paths) with Organization-wide controls; OU paths can be useful but are more complex and can change with reorganizations.
- Monitoring and auto-updating policies (CloudTrail + automation) sounds robust, but it’s unnecessary when a native condition key already provides dynamic membership enforcement.

Exam Tips: When you see “only accounts in my AWS Organization” and “least operational overhead,” look for aws:PrincipalOrgID in a resource policy (S3, KMS, etc.). Prefer native policy conditions over event-driven automation unless the requirement explicitly needs custom logic beyond what IAM conditions provide.
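To make the condition-key pattern concrete, here is a minimal sketch of such a bucket policy, built as a Python dict and serialized to JSON. The bucket name and Organization ID are placeholders; a real policy would also need matching Allow statements scoped to the intended principals.

```python
import json

ORG_ID = "o-exampleorgid"          # placeholder Organization ID
BUCKET = "central-compliance-docs"  # placeholder bucket name

# Deny every S3 action on the bucket unless the calling principal
# belongs to the specified AWS Organization.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideOrg",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Because the condition is evaluated at request time, accounts joining or leaving the Organization require no policy edits, which is exactly the "least operational overhead" property the question rewards.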

Question 3

A global streaming media company operates over 250 video streaming platforms across different regions. The company needs to process approximately 25 TB of user viewing behavior and interaction data daily to optimize content recommendations and user experience. The solution must handle high-volume real-time data ingestion, provide reliable data transmission, and enable efficient analytics processing for the large-scale streaming data. What should a solutions architect recommend to ingest and process the streaming behavior data?

AWS Data Pipeline is primarily for scheduled/batch data movement and orchestration, not high-volume real-time streaming ingestion. While S3 + EMR can analyze large datasets, the ingestion layer here is the weak point: Data Pipeline does not provide the same real-time buffering, scaling, and replay semantics as Kinesis. This option also increases operational complexity for near-real-time requirements.

An Auto Scaling group of EC2 instances to capture and forward streaming events is a DIY ingestion approach that is harder to scale reliably and operate (capacity planning, backpressure handling, retries, ordering, and failure recovery). It can work, but it is not the most appropriate managed pattern for 25 TB/day of real-time telemetry. Kinesis provides these capabilities natively with less operational overhead.

Amazon CloudFront is designed to cache and deliver content with low latency; it is not intended to store or collect user interaction telemetry as a primary data ingestion mechanism. Triggering Lambda from S3 object creation implies batch/object-based ingestion rather than continuous streaming, and it can create scaling and latency challenges at very high volumes. This is an architectural mismatch for real-time behavior analytics.

Kinesis Data Streams provides scalable, durable real-time ingestion for large volumes of event data from many sources. Kinesis Data Firehose then reliably delivers the stream to an S3 data lake with automatic scaling, buffering, retries, and optional transformation. From S3, loading into Amazon Redshift supports efficient large-scale analytics. This directly matches the requirements for real-time ingestion, reliable transmission, and scalable analytics.

Question Analysis

Core Concept: This question tests designing a high-throughput, reliable streaming ingestion and analytics pipeline. The key AWS services are Amazon Kinesis Data Streams (real-time ingestion and buffering), Amazon Kinesis Data Firehose (managed delivery to storage/analytics destinations), Amazon S3 (durable data lake), and Amazon Redshift (analytics/warehouse).

Why the Answer is Correct: With ~25 TB/day across 250+ platforms, the company needs scalable, near-real-time ingestion with durable buffering and reliable delivery. Kinesis Data Streams is purpose-built for high-volume event ingestion with ordered records per shard, retention for replay, and horizontal scaling via shards (or on-demand mode). Kinesis Data Firehose then provides a fully managed, reliable delivery layer to land data into an S3 data lake with batching, retries, and optional transformation, minimizing operational overhead. From S3, data can be loaded into Redshift (e.g., COPY from S3) for large-scale behavioral analytics and recommendation feature generation.

Key AWS Features:
- Kinesis Data Streams: shard-based scaling (or on-demand), multiple consumers, enhanced fan-out, retention for reprocessing, and integration with IAM/KMS.
- Kinesis Data Firehose: automatic batching/compression/encryption, retry logic, S3 delivery, and optional Lambda-based transformation.
- S3 data lake: high durability, lifecycle policies, partitioned storage for efficient downstream queries.
- Redshift: columnar storage and MPP for fast aggregations; COPY from S3 for high-throughput loads.

Common Misconceptions: Data Pipeline and DIY EC2 ingestion can look viable, but they are not optimized for real-time streaming at this scale and add significant operational burden. CloudFront is a content delivery/cache service, not an event ingestion system; using it for behavior telemetry is an architectural mismatch.

Exam Tips: When you see “high-volume real-time ingestion” plus “reliable transmission” and “analytics,” think Kinesis (Streams for ingestion/buffering, Firehose for managed delivery) + S3 as the landing zone. Pair with Redshift/EMR/Athena depending on analytics needs; Redshift is a common choice for structured behavioral analytics at scale.
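To give the 25 TB/day figure some intuition, here is a back-of-the-envelope shard-sizing calculation, assuming provisioned-mode Kinesis Data Streams with its documented write limit of 1 MiB/s per shard and a perfectly uniform ingest rate (real traffic would need headroom for peaks, or on-demand mode).

```python
import math

DAILY_BYTES = 25 * 10**12       # 25 TB/day, decimal units
SECONDS_PER_DAY = 86_400
SHARD_WRITE_BPS = 1 * 1024**2   # 1 MiB/s write capacity per shard

# Average sustained ingest rate implied by the question's volume.
avg_bytes_per_sec = DAILY_BYTES / SECONDS_PER_DAY

# Minimum shard count with zero headroom; real deployments provision
# for peak rates, not the daily average.
min_shards = math.ceil(avg_bytes_per_sec / SHARD_WRITE_BPS)

print(f"average ingest: {avg_bytes_per_sec / 1024**2:.0f} MiB/s")
print(f"minimum shards (no headroom): {min_shards}")
```

The result (a sustained rate of roughly 276 MiB/s) shows why a DIY EC2 ingestion tier is risky at this scale: the shard math is handled for you by Kinesis scaling, while a fleet of forwarders would need equivalent capacity planning by hand.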

Question 4

A multinational healthcare organization is consolidating its IT infrastructure by moving 15 medical research applications to AWS across different AWS accounts. The organization uses AWS Organizations to centrally manage these accounts, which include separate environments for clinical trials, patient data analysis, and regulatory compliance systems. The IT security team requires a unified single sign-on (SSO) authentication system that works across all 15 AWS accounts while maintaining centralized user management through their existing on-premises Microsoft Active Directory infrastructure that contains over 2,000 medical staff accounts. Which solution will meet these requirements most effectively?

Incorrect. Although IAM Identity Center is the right service for multi-account SSO, the trust model described here is not the standard supported approach for integrating AWS Managed Microsoft AD with an on-premises self-managed AD for this use case. A one-way forest or domain trust is not the best answer when the expected architecture for IAM Identity Center with Microsoft AD relies on a two-way forest trust. Because the option specifies the wrong trust relationship, it is less effective and technically inaccurate for the scenario.

Correct. AWS IAM Identity Center is the AWS-native service for centralized workforce access across multiple AWS accounts in AWS Organizations, which directly matches the requirement for unified SSO across 15 accounts. To use identities from an existing on-premises self-managed Microsoft Active Directory, IAM Identity Center can use AWS Directory Service for Microsoft Active Directory as the identity source, with a two-way forest trust to the on-premises AD. This preserves centralized user management in the existing AD while allowing AWS account access to be administered centrally through IAM Identity Center permission sets and assignments.

Incorrect. AWS Directory Service by itself does not provide the centralized AWS account SSO experience required across all 15 AWS accounts. It can support directory-aware workloads, domain joining, and trust relationships, but it does not replace IAM Identity Center for assigning users and groups to multiple AWS accounts through AWS Organizations. The question explicitly asks for a unified SSO authentication system across accounts, which points to IAM Identity Center rather than Directory Service alone.

Incorrect. An on-premises identity provider such as AD FS can federate users into AWS, but this adds operational overhead and is not the most effective solution for centralized access across many AWS accounts. IAM Identity Center already provides native integration with AWS Organizations, centralized permission sets, and streamlined account assignments, which are key requirements here. This option also does not clearly preserve the existing Microsoft AD as the direct centralized identity source in the simplest AWS-native way.

Question Analysis

Core Concept: This question tests federated identity and centralized access management across multiple AWS accounts in AWS Organizations using AWS IAM Identity Center (formerly AWS Single Sign-On) integrated with an existing on-premises Microsoft Active Directory (AD).

Why the Answer is Correct: The most effective approach is to enable IAM Identity Center and connect it to the organization’s existing self-managed Microsoft AD while keeping user lifecycle and group management centralized in AD. This is done by using AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as the identity source for IAM Identity Center and establishing a two-way forest trust between AWS Managed Microsoft AD and the on-premises AD. IAM Identity Center requires the two-way trust so that it can read user and group information from the on-premises domain; with the trust in place, it provides SSO access to all 15 AWS accounts with consistent permission sets and assignments managed centrally.

Key AWS Features: IAM Identity Center integrates natively with AWS Organizations to provide SSO across multiple accounts, including account discovery and centralized permission set management. AWS Managed Microsoft AD provides a managed, highly available AD in AWS that supports forest trust relationships with on-premises domains, so the existing 2,000+ staff identities keep working without migration.

Common Misconceptions: A one-way trust can seem preferable because it narrows the trust boundary, but it is insufficient here: IAM Identity Center needs the two-way forest trust to look up users and groups in the on-premises domain. Using Directory Service alone does not provide unified AWS console/application SSO across multiple accounts; it’s primarily for directory-dependent workloads (e.g., Windows auth, LDAP/Kerberos needs).

Deploying an on-premises IdP such as AD FS is possible, but it adds operational overhead and is not as directly aligned with the requirement for centralized SSO across all accounts using AWS Organizations and AD-based user management. Exam Tips: For multi-account SSO with AWS Organizations, think IAM Identity Center first. If the enterprise source of identity is Microsoft AD, the common exam pattern is IAM Identity Center + AWS Managed Microsoft AD + a two-way forest trust to on-prem.
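The trust described above maps to the AWS Directory Service CreateTrust API. A minimal sketch of its request parameters, with placeholder directory ID, domain name, and secret (in practice the trust password should come from a secrets store, never source code):

```python
# Sketch of Directory Service CreateTrust parameters establishing the
# two-way forest trust between AWS Managed Microsoft AD and the
# on-premises forest. All identifiers are placeholders.
create_trust_params = {
    "DirectoryId": "d-1234567890",             # AWS Managed Microsoft AD
    "RemoteDomainName": "corp.example.com",    # on-premises AD forest
    "TrustPassword": "fetch-from-secrets-manager",  # placeholder, not a real secret
    "TrustDirection": "Two-Way",               # required for IAM Identity Center
    "TrustType": "Forest",
}
print(create_trust_params["TrustDirection"], create_trust_params["TrustType"])
```

The "Two-Way" direction is the detail the incorrect option gets wrong: a one-way trust would block IAM Identity Center from reading users and groups out of the on-premises domain.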

Question 5

A financial technology startup is building a mobile banking application and has manually created a prototype infrastructure on AWS. The infrastructure consists of an Auto Scaling group with EC2 instances, a Network Load Balancer for high-performance traffic handling, and an Amazon Aurora MySQL cluster for transaction processing. After completing security compliance validation, the company needs to rapidly deploy identical infrastructure across 3 Availability Zones for both staging and production environments in a fully automated manner to support their planned launch in 4 weeks. What should a solutions architect recommend to meet these requirements?

AWS Systems Manager Automation is best for executing operational workflows (patching, AMI baking steps, instance recovery actions) using runbooks. While it can orchestrate API calls, it is not designed to model and version an entire multi-resource architecture as a reusable, declarative blueprint. It also does not inherently provide the same repeatable, environment-parameterized stack lifecycle management that CloudFormation provides.

CloudFormation is the correct approach for fully automated, repeatable deployments of identical infrastructure across 3 AZs and across staging/production. You can define VPC/subnets per AZ, NLB subnet mappings, Auto Scaling group spanning those subnets, and Aurora DB subnet groups and cluster resources. Parameterization and separate stacks enable consistent environments with controlled differences, supporting rapid rollout and compliance reproducibility.

AWS Config primarily records configuration history, evaluates resources against rules, and reports compliance. Although Config remediation can trigger automation to correct specific noncompliant settings, it is not intended to provision an entire application stack from scratch across multiple AZs. Using Config as a deployment mechanism is a misuse and would be complex, brittle, and not aligned with standard IaC best practices.

Elastic Beanstalk is a managed application deployment service that provisions underlying resources for supported web/app platforms. It is not well-suited for a bespoke architecture requiring explicit control of a Network Load Balancer configuration, Auto Scaling details, and an Aurora cluster designed for banking transactions. Beanstalk can deploy across multiple AZs, but it abstracts infrastructure and is not the best fit for replicating a validated prototype exactly.

Question Analysis

Core Concept: This question tests Infrastructure as Code (IaC) for repeatable, automated environment provisioning across multiple Availability Zones (AZs) and multiple environments (staging and production). The primary AWS service is AWS CloudFormation (or equivalent IaC), which is the standard exam answer for rapidly deploying identical, compliant stacks.

Why the Answer is Correct: After security compliance validation, the company must reproduce the same architecture quickly and consistently. CloudFormation templates (potentially generated from the prototype as a reference) allow the startup to define the Auto Scaling group, Network Load Balancer, and Aurora MySQL cluster declaratively and deploy them in a fully automated way. CloudFormation supports multi-AZ designs by defining subnets in 3 AZs, associating the NLB with those subnets, configuring the Auto Scaling group to span them, and configuring Aurora with a DB subnet group across those AZs (and Multi-AZ/replicas as required). Separate stacks (or parameterized stacks) can be deployed for staging and production, ensuring identical topology with environment-specific parameters.

Key AWS Features: CloudFormation provides change sets, drift detection, stack policies, nested stacks/modules, and parameterization for environment differences (instance types, scaling limits, DB sizes, tags). It integrates with CI/CD (CodePipeline/CodeBuild) for automated deployments and supports secure handling of secrets via dynamic references to AWS Secrets Manager/SSM Parameter Store. This aligns with Well-Architected best practices for reliability (multi-AZ), security (repeatable controls), and operational excellence (automation).

Common Misconceptions: Systems Manager Automation can orchestrate operational runbooks, but it is not the primary tool to “capture” an existing prototype and reliably recreate full infrastructure as a productized, version-controlled deployment. AWS Config inventories and evaluates compliance; remediation is for fixing drift/noncompliance, not provisioning entire multi-tier environments. Elastic Beanstalk is an application platform abstraction and is not a natural fit for a custom NLB + Aurora banking architecture where you need explicit control of networking, scaling, and database topology.

Exam Tips: When you see “rapidly deploy identical infrastructure,” “fully automated,” “multiple AZs,” and “multiple environments,” default to IaC (CloudFormation/CDK/Terraform). CloudFormation is the canonical AWS-native answer. Look for wording like “prototype already exists” and “need repeatability and consistency” to reinforce IaC and parameterized stacks for staging vs production.
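The parameterized-stack idea can be sketched as a skeleton template. CloudFormation accepts JSON as well as YAML, so the sketch below builds the template as a Python dict; the resource properties are heavily abbreviated placeholders (a real banking stack would need launch templates, listeners, subnet groups, credentials, and so on), chosen only to show how one template serves both staging and production.

```python
import json

# Skeleton CloudFormation template (JSON form) with an Environment
# parameter so the same template deploys staging and production.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "Environment": {
            "Type": "String",
            "AllowedValues": ["staging", "production"],
        },
        # Pass three subnet IDs, one per AZ, at deploy time.
        "Subnets": {"Type": "List<AWS::EC2::Subnet::Id>"},
    },
    "Resources": {
        "AppNLB": {
            "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "Properties": {"Type": "network", "Subnets": {"Ref": "Subnets"}},
        },
        "AppASG": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": "3",          # one instance per AZ at minimum
                "MaxSize": "12",
                "VPCZoneIdentifier": {"Ref": "Subnets"},
            },
        },
        "TxnDB": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {"Engine": "aurora-mysql"},
        },
    },
}

print(json.dumps(template, indent=2)[:80])
```

Deploying the same template twice with different parameter values is what gives the "identical topology, controlled differences" property the explanation describes.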


Question 6

A company runs SQL Server on EC2 instances with daily EBS snapshots. A cleanup script accidentally deleted all of the snapshots. The company needs a safety net against accidental deletions, without retaining snapshots indefinitely and with minimal development. Which solution meets these requirements with the LEAST development effort?

Denying ec2:DeleteSnapshot in IAM can reduce accidental deletions for a specific principal, but it’s not a reliable safety net. The script could run under another role, an admin could still delete, and it doesn’t provide recovery if deletion happens. It also conflicts with the requirement to allow deletions (not keep snapshots forever) because you’d need exceptions and more policy management.

Copying snapshots to another Region is primarily a disaster recovery strategy for regional outages, not an accidental-deletion safety net. If the same automation or operator deletes snapshots in both Regions (or if permissions allow), you can still lose them. It also adds ongoing cost and operational complexity (copy schedules, monitoring, retention management) compared to a simple recycle bin rule.

EBS Snapshot Recycle Bin retention rules are designed specifically to protect against accidental snapshot deletion. With a 7-day retention rule for all snapshots, deleted snapshots remain recoverable during that window and are then permanently removed, meeting the “don’t keep forever” requirement. This is minimal development: configure the rule once and use restore when needed.

EBS snapshots are not objects you can copy directly into an S3 bucket and then choose S3 Standard-IA. Snapshots are stored and managed by the EBS snapshot service. While some AWS services can export certain data to S3, “copy snapshots to S3 Standard-IA” is not a valid native mechanism for EBS snapshot retention and recovery in this context.

Question Analysis

Core Concept: The question is testing AWS Backup/Amazon EBS data protection controls that provide resilience against accidental deletion, specifically the EBS Snapshot Recycle Bin feature (part of EBS snapshot management) and retention-based recovery. This aligns with resilient architecture principles: recoverability, controlled retention, and minimizing operational risk.

Why the Answer is Correct: A 7-day EBS Snapshot Recycle Bin retention rule creates a “safety net” so that if snapshots are accidentally deleted (by a script or user), they are retained in the Recycle Bin and can be restored during the retention window. This directly addresses the incident (accidental deletion of all snapshots) while also meeting the requirement to avoid keeping snapshots forever. It also requires minimal development: you configure a retention rule once, rather than redesigning backup workflows.

Key AWS Features: EBS Snapshot Recycle Bin lets you set retention rules (by tags or for all snapshots in the account/Region) so deleted snapshots are recoverable until the retention period expires. This is purpose-built for accidental deletions and complements (not replaces) normal snapshot lifecycle policies. It’s an account/Region-level control and is operationally simple compared to building cross-Region copy pipelines.

Common Misconceptions: Denying snapshot deletion in IAM (Option A) seems like it prevents deletion, but it’s brittle: the cleanup script might run under a different role, an admin could still delete, and it doesn’t help if deletion already occurred or if you need legitimate deletions. Cross-Region copies (Option B) improve disaster recovery, but they don’t inherently protect against accidental deletion unless you also protect the copies with separate controls; plus it adds cost and operational overhead. Copying “snapshots to S3 Standard-IA” (Option D) is not how EBS snapshots work; snapshots are managed by EBS and you can’t directly tier them to S3 storage classes.

Exam Tips: When you see “accidental deletion” + “don’t keep forever” + “least development,” look for managed retention/recovery features: EBS Snapshot Recycle Bin, AWS Backup Vault Lock (for backups), and lifecycle policies. Choose the option that directly provides recoverability after deletion with time-bound retention and minimal custom automation.
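The "configure once" nature of the solution shows up in how small the Recycle Bin rule is. A sketch of the CreateRule request for a 7-day, Region-wide snapshot rule (the description string is illustrative; omitting resource tags makes the rule apply to all snapshots in the Region):

```python
# Sketch of Recycle Bin CreateRule parameters: retain every deleted
# EBS snapshot in this Region for 7 days before permanent removal.
recycle_bin_rule = {
    "ResourceType": "EBS_SNAPSHOT",
    "RetentionPeriod": {
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    "Description": "Safety net: recover accidentally deleted snapshots within 7 days",
    # No ResourceTags -> rule matches all snapshots in the account/Region.
}
print(recycle_bin_rule["RetentionPeriod"])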

Question 7

A company maintains many business-critical Amazon Machine Images (AMIs). It needs to protect the AMIs and their snapshots against accidental deletion and to recover deleted AMIs quickly with minimal effort. Which solution meets these requirements?

Creating EBS snapshots of the AMIs and storing them in another account protects the underlying block data, but it does not preserve the AMI registration itself. To recover, the company would need to identify the correct snapshots and manually re-register a new AMI, which adds operational steps and complexity. This is less convenient than copying the AMI directly. Therefore, it does not best satisfy the requirement for easy recovery with minimal effort.

Periodically copying AMIs to another AWS account protects the complete AMI resource, including its associated EBS snapshots, in an isolated location. If the source AMI is accidentally deleted, the backup account still contains a launchable copy that can be shared back or recopied to the original account. This provides a practical recovery path with relatively low ongoing effort once automated. It also aligns with AWS best practices for protecting critical assets by using account-level isolation against accidental deletion or administrative mistakes.

A Recycle Bin retention rule can help retain deleted EBS snapshots and, in some cases, EBS-backed AMIs if configured appropriately, but it is not the strongest answer for protecting critical AMIs in a broadly recoverable way. It depends on retention rules being in place beforehand and does not provide the account-isolation benefits of a separate backup account. In many exam scenarios, Recycle Bin is better suited to snapshot or AMI undelete retention use cases rather than full backup protection of critical machine images. Because the question asks for protecting AMIs and snapshots with easy recovery, a cross-account AMI copy is the more complete solution.

AMIs are not stored by uploading them as objects into an S3 bucket for Cross-Region Replication. An AMI consists of EC2 image metadata plus references to underlying snapshots, typically EBS snapshots. AWS provides AMI copy functionality for duplicating AMIs across accounts or regions, not S3 CRR. This option is technically incorrect and reflects a misunderstanding of how AMIs are managed in AWS.

Question Analysis

Core Concept: This question is about protecting critical Amazon Machine Images (AMIs) from accidental deletion while keeping recovery simple and operational effort low. The key idea is to maintain a recoverable copy of the AMIs in a separate AWS account so that if the original AMI is deleted, the company can restore or recopy it from the backup account. Cross-account AMI copies are a common AWS protection pattern because they preserve the AMI and its associated snapshots together. Why Correct: Periodically copying AMIs to another AWS account provides a straightforward backup and recovery mechanism for the full AMI resource, not just the underlying snapshots. If an AMI is accidentally deleted in the primary account, the backup copy remains available in the secondary account and can be shared or copied back. This approach reduces the risk of permanent loss from user error in one account and is easier to operationalize than manually rebuilding AMIs from snapshots. Key Features: - AMI copy includes the associated EBS snapshots needed to launch instances from the image. - A separate AWS account provides isolation from accidental deletion in the source account. - The process can be automated on a schedule using AWS Backup, EventBridge, Lambda, or custom scripts. - Recovery is simpler because the AMI already exists as a usable image rather than requiring reconstruction. Common Misconceptions: - Recycle Bin is useful for retention of deleted resources, but it is not always the expected exam answer for comprehensive AMI protection unless the question explicitly focuses on retention rules for deleted AMIs or snapshots. It also depends on prior configuration and may not provide the same isolation benefits as a separate account. - Copying only snapshots does not preserve the AMI registration metadata, so additional steps are required to recreate the AMI. - AMIs are not uploaded to S3 buckets for Cross-Region Replication. 
Exam Tips: When a question emphasizes protecting critical AMIs from accidental deletion and enabling recovery, prefer solutions that preserve the AMI as a recoverable artifact. Cross-account AMI copies are a classic AWS backup pattern for machine images. If the question instead specifically mentions retention of deleted EBS snapshots or deleted AMIs with policy-based undelete, then Recycle Bin becomes more relevant.
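The cross-account copy described above is easy to automate. As a minimal sketch, here is the shape of the EC2 CopyImage request the backup account would issue (for example via boto3's `ec2_client.copy_image(**params)`) after the source account shares the AMI; all IDs, Region names, and the AMI name below are hypothetical.

```python
# Sketch: build parameters for EC2 CopyImage, run from the *backup* account
# after the primary account has shared the AMI. IDs/names are hypothetical.
def build_copy_image_params(source_ami_id, source_region, name, encrypted=True):
    return {
        "SourceImageId": source_ami_id,  # AMI shared from the primary account
        "SourceRegion": source_region,
        "Name": name,
        "Encrypted": encrypted,  # re-encrypt the snapshots in the backup account
    }

params = build_copy_image_params(
    "ami-0123456789abcdef0", "us-east-1", "backup-critical-app-ami")
```

Scheduling this (e.g., with EventBridge invoking a Lambda function) turns the pattern into a hands-off backup job.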

Question 8

A media production company stores large audio files ranging from 5 MB to 300 GB on on-premises NFS storage systems. The total storage capacity is 85 TB and is static with no further growth expected. The company wants to migrate all audio files to Amazon S3 as quickly as possible while minimizing network bandwidth usage during the migration process. Which solution will meet these requirements?

Uploading 85 TB via AWS CLI over the internet consumes significant WAN bandwidth and is usually slow and operationally risky (retries, long transfer windows). While multipart upload helps with large objects, it does not meet the requirement to minimize network bandwidth usage. This option is best only when bandwidth is ample and time constraints are modest.

Snowball Edge is purpose-built for large offline migrations to S3. Data is copied locally to the device (fast LAN speeds), then shipped to AWS for import, minimizing internet/WAN usage. For 85 TB, Snowball is a standard exam answer pattern: large dataset, one-time migration, and a desire to avoid saturating the network while completing the transfer quickly.

S3 File Gateway provides an NFS mount backed by S3, but it still transfers data to AWS over the network. Copying 85 TB through the gateway will consume substantial bandwidth and can take a long time depending on the internet link. File Gateway is better for ongoing hybrid workflows and caching, not for minimizing bandwidth during a bulk migration.

Direct Connect can provide higher, more consistent throughput than the public internet, but it still uses network bandwidth and typically has provisioning lead time and ongoing costs. For a one-time, static 85 TB migration with a requirement to minimize network usage, Snowball is more appropriate and usually faster to initiate than setting up Direct Connect.

Question Analysis

Core Concept: This question tests choosing the most appropriate data migration method to Amazon S3 when you must (1) migrate quickly and (2) minimize network bandwidth usage. The key AWS concept is “offline/physical data transfer” using the AWS Snow Family (Snowball Edge) versus online transfer methods (AWS CLI, Storage Gateway, Direct Connect).

Why the Answer is Correct: With 85 TB of static data and a requirement to minimize network bandwidth during migration, AWS Snowball Edge is the best fit. Snowball provides a physical appliance that you load on-premises over the local network (high throughput LAN copy), then ship back to AWS where AWS imports the data directly into Amazon S3. This approach largely avoids consuming the company’s internet/WAN bandwidth and is typically faster end-to-end than pushing 85 TB over a constrained network link.

Key AWS Features: Snowball Edge supports large-scale data transfer into S3 and is designed for offline migrations. You use the Snowball client (or S3-compatible interfaces depending on device/options) to copy data to the device. AWS handles secure transport and tamper-resistant hardware; data is encrypted end-to-end. For 85 TB, you can order one or more devices depending on usable capacity and parallelize loading to accelerate migration.

Common Misconceptions: Storage Gateway (S3 File Gateway) can present S3 as NFS, but it still uploads data over the network to AWS; it reduces application changes, not bandwidth usage. Direct Connect improves consistency and can increase throughput, but it still uses network bandwidth and requires lead time and cost. AWS CLI direct upload is simplest but is the most bandwidth-intensive and often slow for tens of TB.

Exam Tips: When you see “tens of TB+” and “minimize network bandwidth” or “limited connectivity,” default to Snowball/Snowmobile. Use Storage Gateway when you need ongoing hybrid access/caching, not for a one-time bulk migration with minimal WAN usage.
Direct Connect is for steady-state, predictable network connectivity needs, not the fastest way to start a one-time migration.
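A quick back-of-the-envelope calculation shows why pushing 85 TB over the WAN is painful. The link speeds and the 80% sustained-utilization factor below are illustrative assumptions, not figures from the question.

```python
# Rough transfer-time estimate for pushing data over a WAN link.
# Link speeds and the 0.8 utilization factor are assumptions.
def transfer_days(terabytes, link_gbps, utilization=0.8):
    bits = terabytes * 1e12 * 8                      # decimal TB -> bits
    seconds = bits / (link_gbps * 1e9 * utilization)  # sustained throughput
    return seconds / 86400

days_at_100mbps = transfer_days(85, 0.1)  # roughly three months
days_at_1gbps = transfer_days(85, 1.0)    # still over a week
```

Even on a dedicated 1 Gbps link the copy alone runs for more than a week of saturated bandwidth, which is exactly the scenario Snowball Edge avoids.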

Question 9

A financial services company is modernizing their legacy batch processing system that handles daily transaction reconciliation reports. The current system uses a central coordinator server that distributes reconciliation tasks to multiple processing workers. The workload varies significantly - during month-end periods, processing volume can increase by 300%, while weekends see minimal activity. The company needs to migrate this system to AWS with maximum resilience and automatic scaling capabilities to handle the variable workload efficiently while minimizing operational overhead. How should a solutions architect design the architecture to meet these requirements?

SQS + Auto Scaling workers is a solid decoupled pattern, but scheduled scaling is not “maximum resilience and automatic scaling” for highly variable workloads. It relies on predictions and historical patterns, which can miss unexpected spikes or changes in month-end timing/volume. It can also keep excess capacity running during low periods, increasing cost and operational tuning effort compared to queue-depth-driven scaling.

This is the best choice: SQS decouples producers from consumers and removes the coordinator bottleneck. Scaling the worker Auto Scaling group based on SQS queue depth directly matches capacity to demand, handling sudden month-end surges and scaling down on weekends automatically. Combined with visibility timeout, DLQ, and multi-AZ stateless workers, it delivers high resilience with low operational overhead.

Keeping a coordinator server preserves a single point of failure and a scaling choke point, reducing resilience. CloudTrail is for auditing API calls, not for capturing or routing job distribution events. Scaling based on coordinator CPU is an indirect metric that may not correlate with backlog or throughput, and it can lead to delayed or unstable scaling behavior during spikes.

EventBridge can route events, but it does not replace the need for durable task buffering and back-pressure the way SQS does for batch workloads. Retaining a coordinator server still introduces a single point of failure and operational overhead. CPU-based scaling on workers is less accurate than queue-depth scaling because CPU may not reflect pending work (e.g., I/O waits), causing under/over-scaling.

Question Analysis

Core Concept: This question tests decoupled, resilient batch processing on AWS using a queue-based worker pattern and event-driven scaling. The key services are Amazon SQS for durable task buffering and EC2 Auto Scaling for elastic worker fleets.

Why the Answer is Correct: Option B is the best design because it removes the single “central coordinator” as a scaling and availability bottleneck and replaces it with SQS, which provides highly available, durable message storage and natural back-pressure. Workers poll SQS and process tasks independently. Scaling the worker Auto Scaling group based on SQS queue depth (e.g., ApproximateNumberOfMessagesVisible and/or ApproximateNumberOfMessagesNotVisible) aligns capacity directly to outstanding work. This provides automatic scaling for unpredictable spikes (like month-end +300%) and scales down during low activity (weekends), minimizing operational overhead.

Key AWS Features:
- SQS standard queue: multi-AZ, highly scalable buffering; supports at-least-once delivery.
- Visibility timeout: prevents multiple workers from processing the same task concurrently; tune to max processing time.
- Dead-letter queue (DLQ): isolates poison messages after maxReceiveCount.
- EC2 Auto Scaling with target tracking/step scaling on SQS metrics (often via CloudWatch alarms): scales on backlog rather than CPU, which is more directly tied to throughput for batch jobs.
- Resilience best practices: stateless workers across multiple AZs; idempotent processing to handle occasional duplicate deliveries.

Common Misconceptions: Scheduled scaling (Option A) can look attractive because the company has known month-end peaks, but it fails for unplanned surges and can overprovision during quiet periods. CPU-based scaling (Options C/D) is indirect and can lag: CPU may be low while backlog grows (I/O bound jobs) or high due to noisy neighbors, leading to unstable scaling. CloudTrail is not a job distribution mechanism.
Exam Tips: For variable batch/worker workloads, prefer “SQS + Auto Scaling workers” and scale on queue depth/backlog, not on coordinator CPU. Look for designs that eliminate single points of failure and use managed services for decoupling and resilience (AWS Well-Architected Reliability pillar).
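The backlog-per-instance math behind queue-depth scaling can be sketched in a few lines. The per-worker throughput figure below (messages one worker clears per minute) is an assumed benchmark you would measure for your own jobs, not an AWS-provided value.

```python
import math

# Sketch of backlog-per-instance scaling: size the worker fleet from the
# SQS ApproximateNumberOfMessagesVisible metric. The 300 msgs/worker/min
# throughput is an assumed benchmark.
def desired_workers(visible_messages, msgs_per_worker_per_min=300,
                    min_size=2, max_size=50):
    needed = math.ceil(visible_messages / msgs_per_worker_per_min)
    return max(min_size, min(needed, max_size))

surge = desired_workers(12_000)  # month-end backlog -> scale out
quiet = desired_workers(50)      # weekend trickle -> floor of min_size
```

In practice a target tracking policy on a "backlog per instance" custom metric does this continuously, without the schedule-based guesswork the incorrect option relies on.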

Question 10

Users must authenticate to the AWS Management Console with an on-premises LDAP directory that is not SAML-compatible. Provide console access using the existing LDAP as the IdP. Which solution meets the requirements?

Incorrect. AWS IAM Identity Center (formerly AWS SSO) commonly integrates with external IdPs using SAML 2.0, and its native directory options do not equate to “direct LDAP federation” for console access. LDAP by itself is not a web federation protocol for AWS console sign-in. Without a SAML-capable IdP or a translation layer, users cannot authenticate to the AWS console directly with LDAP credentials.

Incorrect. Creating an IAM policy and “integrating it into LDAP” misunderstands the separation of authentication (who you are) and authorization (what you can do). LDAP can store identities and groups, but it does not natively issue AWS credentials or enforce IAM policies. AWS permissions must be enforced by IAM roles/policies in AWS, not embedded into LDAP as a substitute for federation.

Incorrect. Rotating IAM credentials whenever LDAP credentials change implies managing long-term IAM users and synchronizing passwords/keys across systems. This is operationally complex, does not provide true federated SSO, and violates best practices that recommend using temporary credentials (STS) and centralized identity management. It also increases risk because long-lived credentials are harder to control and audit.

Correct. An on-prem identity broker can authenticate users against LDAP, then request temporary credentials from AWS STS (e.g., AssumeRole) based on mapped LDAP groups/attributes. The broker can then provide AWS Management Console access by generating a federated sign-in URL using those temporary credentials. This pattern enables console access without requiring LDAP to be SAML-compatible and avoids long-term IAM user credentials.

Question Analysis

Core Concept: This question tests federated access to the AWS Management Console when the enterprise identity source is an on-premises LDAP directory that cannot speak SAML. The key AWS concepts are federation, temporary credentials, and using AWS Security Token Service (AWS STS) to obtain short-lived sessions rather than managing long-term IAM users.

Why the Answer is Correct: Because the LDAP directory is not SAML-compatible, you cannot directly use standard SAML federation to the AWS console. The established pattern is to place an identity translation layer (an identity broker) in front of LDAP. The broker authenticates the user against LDAP, then calls AWS STS (typically AssumeRole) to obtain temporary credentials for an IAM role mapped to that user/group. The broker then enables console access by generating a federated sign-in URL (using the federation endpoint) that exchanges the STS session credentials for an AWS console session. This meets the requirement: users authenticate with existing LDAP, and AWS access is granted via short-lived credentials.

Key AWS Features:
- AWS STS AssumeRole to issue temporary credentials with limited duration.
- IAM roles with trust policies that allow the broker (or a constrained AWS principal) to assume roles.
- Fine-grained authorization via IAM policies attached to roles, often mapped from LDAP groups.
- Reduced operational risk by avoiding long-term IAM user credentials and enabling centralized identity lifecycle in LDAP.

Common Misconceptions: Many assume IAM Identity Center can connect “directly” to LDAP. In practice, IAM Identity Center supports external identity providers via SAML 2.0 (and certain directory integrations), but LDAP alone is not a federation protocol for AWS console sign-in. Another misconception is trying to synchronize or rotate IAM user credentials based on LDAP changes; this is brittle, insecure, and contrary to AWS best practices.
Exam Tips: When you see “LDAP not SAML-compatible” plus “AWS Management Console access,” think “custom identity broker + STS + federated console URL.” Prefer temporary credentials and role-based access over IAM users. Also, map enterprise groups to IAM roles/policies rather than attempting to embed AWS credentials into LDAP or manage password/credential rotation across systems.
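The broker's last two steps (after AssumeRole returns temporary credentials) can be sketched as URL construction against the AWS federation endpoint. The issuer URL and credential values below are hypothetical placeholders; a real broker would HTTP GET the first URL to receive a SigninToken before building the second.

```python
import json
from urllib.parse import urlencode

FEDERATION_ENDPOINT = "https://signin.aws.amazon.com/federation"

def signin_token_request_url(creds):
    # Step 1: exchange STS temporary credentials for a SigninToken.
    session = json.dumps({"sessionId": creds["AccessKeyId"],
                          "sessionKey": creds["SecretAccessKey"],
                          "sessionToken": creds["SessionToken"]})
    return FEDERATION_ENDPOINT + "?" + urlencode(
        {"Action": "getSigninToken", "Session": session})

def console_login_url(signin_token, issuer="https://broker.example.com"):
    # Step 2: the user's browser follows this URL into the console.
    return FEDERATION_ENDPOINT + "?" + urlencode(
        {"Action": "login", "Issuer": issuer,
         "Destination": "https://console.aws.amazon.com/",
         "SigninToken": signin_token})

url = signin_token_request_url({"AccessKeyId": "ASIAEXAMPLE",
                                "SecretAccessKey": "example-secret",
                                "SessionToken": "example-token"})
login = console_login_url("SIGNIN-TOKEN-PLACEHOLDER")
```

The credentials never leave the broker; the user only ever receives the final sign-in URL, which carries a short-lived token rather than keys.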

Question 11

A healthcare consulting firm conducts remote patient consultations and stores all video consultation recordings in an Amazon S3 bucket for compliance purposes. The firm needs to extract spoken content from the video files for analysis and reporting. The company must automatically remove all patient health information (PHI) and personally identifiable information (PII) from the extracted text to maintain HIPAA compliance. The solution should process files automatically when uploaded and store sanitized transcripts separately. What should a solutions architect do to meet these requirements?

Kinesis Data Streams is designed for real-time streaming ingestion, not batch processing of video files in S3. You would still need a speech-to-text service (e.g., Transcribe) to extract spoken content. Relying on custom regex to remove PHI/PII is brittle and likely to miss sensitive data, creating HIPAA risk. This option adds complexity without meeting the core requirement reliably.

Amazon Textract extracts text from documents and images (OCR and forms/tables), not from audio tracks in video recordings. A Lambda-triggered Textract job would not transcribe speech. Even if text were present as burned-in captions, Textract would not address PHI/PII redaction automatically. This option mismatches the media type and the compliance requirement.

This is the best fit: Amazon Transcribe performs speech-to-text from audio/video stored in S3 and supports PII redaction to automatically remove sensitive identifiers from the transcript. Using S3 upload events to invoke Lambda provides automated, event-driven processing. Storing the redacted transcript in a separate S3 bucket supports data segregation, least privilege access, and simpler compliance controls.

Amazon Chime SDK meeting flows are intended for live meeting audio processing, not for offline transcription of stored S3 video files. Adding Comprehend Medical for PHI detection increases complexity and still requires a transcription step first. EventBridge can trigger workflows, but S3 event notifications are simpler here. Overall, this is an overengineered and less appropriate architecture for the stated batch-on-upload requirement.

Question Analysis

Core Concept: This question tests selecting the correct managed AWS AI service for speech-to-text from media files and ensuring HIPAA-aligned de-identification. The key services are Amazon Transcribe (speech recognition) and its built-in PII redaction, with event-driven automation using Amazon S3 + AWS Lambda.

Why the Answer is Correct: Option C directly meets all requirements: (1) automatically transcribe spoken content from uploaded video recordings, (2) automatically remove PII/PHI from the transcript, and (3) store the sanitized output separately. Amazon Transcribe supports transcription from audio tracks in video files stored in S3 and can produce redacted transcripts using its PII redaction capability. Using an S3 event notification to invoke Lambda is a standard serverless pattern to start a Transcribe job whenever a new object is uploaded. Writing redacted output to a separate S3 bucket supports compliance controls (segregation of sensitive vs. sanitized data) and simplifies downstream analytics.

Key AWS Features:
- Amazon S3 event notifications to trigger processing on object creation.
- AWS Lambda to orchestrate: start Transcribe job, pass input S3 URI, specify output bucket/prefix, and enable ContentRedaction (PII redaction).
- Amazon Transcribe PII redaction: can redact detected PII in transcripts (e.g., names, addresses, phone numbers) and output redacted text, reducing the need for custom logic.
- Security best practices: separate buckets, least-privilege IAM for Lambda/Transcribe, S3 SSE-KMS encryption, bucket policies, and access logging. This aligns with the AWS Well-Architected Security pillar (data protection, least privilege, auditability).

Common Misconceptions: A may seem flexible, but Kinesis is unnecessary for file-based batch transcription, and regex-based PHI/PII removal is error-prone and risky for compliance. B is incorrect because Textract extracts text from documents/images, not speech from video/audio. D overcomplicates the solution: Chime SDK meeting flows are for live meetings, not processing stored recordings; Comprehend Medical can detect medical entities but is not the simplest or most direct fit compared to Transcribe’s native redaction.

Exam Tips: When you see “extract spoken content” from audio/video, think Amazon Transcribe. When you see “remove PII automatically,” look for Transcribe PII redaction (or Comprehend for text classification, but only if redaction isn’t natively supported). For “process on upload,” default to S3 event notifications + Lambda orchestration, and store sanitized outputs in a separate, tightly controlled location.
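A minimal sketch of the StartTranscriptionJob request a Lambda handler might build when S3 invokes it. Parameter names follow the Amazon Transcribe API; the bucket names, language code, and job-naming scheme are hypothetical choices, and with boto3 the dict would be passed as `transcribe.start_transcription_job(**job)`.

```python
# Sketch: Transcribe job request with PII redaction, built from the S3
# event's bucket/key. Bucket names and naming scheme are hypothetical.
def build_transcription_job(bucket, key):
    return {
        "TranscriptionJobName": key.replace("/", "-"),
        "LanguageCode": "en-US",
        "Media": {"MediaFileUri": f"s3://{bucket}/{key}"},
        "OutputBucketName": "sanitized-transcripts",  # separate bucket
        "ContentRedaction": {
            "RedactionType": "PII",
            "RedactionOutput": "redacted",  # keep only the redacted transcript
        },
    }

job = build_transcription_job("consultation-recordings", "2024/visit-001.mp4")
```

Setting `RedactionOutput` to `redacted` (rather than `redacted_and_unredacted`) ensures the sanitized bucket never receives an unredacted transcript.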

Question 12

A healthcare organization runs a patient management system on Amazon EC2 instances with an Amazon RDS MySQL database. The application currently connects to the database using hardcoded credentials stored in configuration files on each EC2 instance. The organization has 15 EC2 instances across multiple Availability Zones and needs to comply with HIPAA security requirements. The security team wants to eliminate hardcoded database credentials and reduce the administrative burden of credential management while ensuring secure access to sensitive patient data. What should a solutions architect recommend to meet these security and operational requirements?

Correct. AWS Secrets Manager is designed for storing and managing secrets and supports automatic rotation for Amazon RDS MySQL using managed rotation Lambda templates. EC2 instances use IAM roles to retrieve secrets at runtime, eliminating hardcoded credentials and reducing operational overhead. Secrets are encrypted with KMS and access is auditable via CloudTrail, aligning well with HIPAA security expectations.

Incorrect. Systems Manager Parameter Store SecureString provides encrypted storage and IAM-controlled access, but it is not the best fit for automated database credential rotation at scale compared to Secrets Manager. While rotation can be engineered with custom automation, it increases operational burden and complexity. For exam scenarios involving RDS credential rotation and compliance, Secrets Manager is typically the intended service.

Incorrect. Storing encrypted credential files in S3 still relies on distributing and managing credential artifacts and does not provide a robust, integrated rotation mechanism with RDS. It also increases the risk of misconfiguration (bucket policies, object ACLs) and complicates least-privilege access patterns. S3 encryption addresses data at rest, not secret lifecycle management and rotation.

Incorrect. Encrypting EBS volumes protects credentials at rest on each instance, but credentials remain locally stored and must be rotated across 15 instances, increasing administrative burden and risk of inconsistency. A custom Lambda rotation script adds complexity and does not inherently ensure atomic rotation with RDS and immediate propagation to all instances. This is not the recommended AWS-native approach.

Question Analysis

Core Concept: This question tests secure secret storage and lifecycle management for database credentials, especially for regulated workloads (HIPAA). The key AWS service is AWS Secrets Manager, which is purpose-built to store secrets (database usernames/passwords, API keys) and rotate them automatically.

Why the Answer is Correct: AWS Secrets Manager eliminates hardcoded credentials on 15 EC2 instances by centralizing the secret and allowing the application (or a bootstrap script/sidecar) to retrieve credentials at runtime using IAM permissions. For an Amazon RDS MySQL database, Secrets Manager supports native integration and automated rotation using an AWS-managed rotation Lambda template. This reduces administrative burden (no manual password changes across instances) and improves security posture by enforcing periodic rotation and minimizing credential sprawl—important for HIPAA’s access control and audit expectations.

Key AWS Features / Best Practices:
- Automatic rotation: Configure rotation (e.g., every 30 days) with a rotation Lambda that updates the RDS password and the stored secret atomically.
- Fine-grained access control: Use IAM roles on EC2 instances to allow secretsmanager:GetSecretValue only for the specific secret; avoid distributing credentials via files.
- Encryption and auditing: Secrets are encrypted with AWS KMS and access is logged in AWS CloudTrail, supporting compliance evidence.
- Multi-AZ EC2 fleet: All instances can retrieve the same secret securely without copying files between AZs.

Common Misconceptions: Parameter Store (SecureString) can store encrypted values, but it does not provide the same first-class, managed secret rotation workflow for RDS credentials as Secrets Manager. S3/EBS encryption protects data at rest but does not solve secret distribution, rotation coordination, or least-privilege runtime retrieval.
Exam Tips: When you see “eliminate hardcoded credentials” plus “reduce administrative burden” and “rotation,” default to Secrets Manager (especially for RDS). Use Parameter Store for configuration values and simpler secrets when rotation is not a primary requirement. For compliance scenarios, also consider IAM least privilege, KMS encryption, and CloudTrail auditing as supporting controls.
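At runtime the application simply fetches and parses the secret instead of reading a config file. As a sketch, assuming boto3 and an instance-profile role, the string would come from `secretsmanager.get_secret_value(SecretId="prod/rds/mysql")["SecretString"]` (the secret ID and values below are hypothetical); the JSON keys match the shape Secrets Manager stores for RDS credentials.

```python
import json

# Sketch: parse the SecretString an app would fetch from Secrets Manager.
# Values and hostnames are hypothetical; in production the string comes
# from get_secret_value, never from a file on the instance.
def parse_rds_secret(secret_string):
    s = json.loads(secret_string)
    return {"host": s["host"], "port": s["port"],
            "user": s["username"], "password": s["password"]}

secret_string = json.dumps({
    "engine": "mysql",
    "host": "patients-db.example.rds.amazonaws.com",
    "port": 3306, "username": "appuser", "password": "rotated-by-sm"})
conn = parse_rds_secret(secret_string)
```

Because every instance fetches the secret on demand, a rotation only has to update one place, not 15 configuration files.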

Question 13

A marketing team at an e-commerce company needs to create a promotional landing page for their upcoming Black Friday sale campaign. The landing page will contain product images, promotional videos, CSS styling, interactive JavaScript elements for product carousels, and HTML content. The marketing team expects approximately 50,000 visitors during the campaign period and needs the most budget-friendly solution to host this static promotional content. The page must be accessible to customers worldwide with minimal infrastructure management overhead. Which hosting solution would be the MOST cost-effective for this promotional landing page?

Amazon ECS on EC2 is not cost-effective for a static landing page. You must pay for EC2 instances (and potentially an ALB), manage scaling policies, patch underlying hosts (unless using Fargate), and operate container infrastructure. While it can handle traffic, it adds unnecessary complexity and ongoing costs compared to S3 for static HTML/CSS/JS and media assets.

Amazon S3 static website hosting is purpose-built for serving static files (HTML, CSS, JavaScript, images, videos) with virtually no infrastructure management. It scales automatically to handle spikes (like Black Friday traffic) and is typically the lowest-cost option because you pay for storage and requests rather than always-on compute. It also integrates well with CloudFront for global low-latency delivery if needed.

A single EC2 t3.medium with Apache introduces fixed compute cost and operational overhead (OS patching, web server maintenance, monitoring). It also creates availability and scaling risks: one instance can become a bottleneck or single point of failure during traffic spikes. Even with Auto Scaling and load balancing, the solution becomes more expensive and complex than S3 for static content.

API Gateway with Lambda to dynamically render pages is unnecessary for static promotional content. This design adds complexity (templates, function code, deployments) and can increase cost due to per-request charges and execution time, especially if serving large media assets. It is better suited for dynamic, personalized, or API-driven experiences—not a static landing page with images/videos/CSS/JS.

Question Analysis

Core Concept: This question tests cost-optimized hosting for static web content with minimal operational overhead. The primary AWS capability is Amazon S3 static website hosting (often paired with Amazon CloudFront for global performance, though S3 alone satisfies the prompt).

Why the Answer is Correct: Amazon S3 static website hosting is the most cost-effective and lowest-management option for serving static assets (HTML, CSS, JavaScript, images, and videos). With ~50,000 visitors, S3 scales automatically without provisioning servers, load balancers, or container clusters. You pay primarily for storage and data transfer/requests, making it budget-friendly for a time-bound marketing campaign. There is no patching, capacity planning, or instance right-sizing—ideal for a marketing team needing minimal infrastructure management.

Key AWS Features: S3 static website hosting allows you to set an index document and error document and serve content directly over HTTP. For production-grade global delivery, a common best practice is to place CloudFront in front of the S3 bucket to reduce latency worldwide, offload traffic, and add HTTPS with ACM certificates and AWS WAF protection. Access can be controlled using CloudFront Origin Access Control (OAC) to keep the bucket private while still serving content globally. These enhancements improve performance and security without significant operational burden.

Common Misconceptions: Teams often default to EC2/ECS because they are familiar with “web servers,” but for static sites this introduces unnecessary compute cost and operational work (patching, scaling, monitoring). API Gateway + Lambda is powerful for dynamic rendering, but it is overkill and can be more expensive at scale for simple static pages.

Exam Tips: When you see “static content,” “global users,” “minimal management,” and “most cost-effective,” think S3 static website hosting (and optionally CloudFront). Reserve EC2/ECS for cases requiring server-side processing, custom runtimes, or complex application logic. For marketing landing pages, S3 is the canonical answer in cost-optimization scenarios.

References: AWS documentation on hosting a static website on Amazon S3 and CloudFront best practices for static content delivery.
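The entire "infrastructure" for such a site reduces to one configuration object. A minimal sketch, using the `WebsiteConfiguration` shape that boto3's `put_bucket_website` accepts; the bucket name and Region are hypothetical.

```python
# Sketch: S3 static website hosting configuration (the shape passed to
# s3.put_bucket_website). Bucket name and Region are hypothetical.
BUCKET = "blackfriday-landing-page"

website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

# With boto3 this would be applied as:
#   s3.put_bucket_website(Bucket=BUCKET,
#                         WebsiteConfiguration=website_configuration)
# The site is then served from the bucket's website endpoint:
endpoint = f"http://{BUCKET}.s3-website-us-east-1.amazonaws.com"
```

Compare that to an ECS cluster or an EC2 fleet: there is nothing to patch, scale, or monitor beyond the content itself.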

Question 14
(Select 2)

A healthcare organization operates 15 Linux-based research data servers across 3 different facilities. Each server contains critical patient research data with strict POSIX file permissions and symbolic links that must be preserved for compliance purposes. The organization needs to consolidate all research data into Amazon FSx for Lustre file system for high-performance computing workloads. POSIX permissions, symbolic links, and metadata must be preserved during migration. The total data size is approximately 500TB. Which solutions will meet these requirements? (Choose two.)

Incorrect. AWS DataSync does not support transferring data directly into Amazon FSx for Lustre as described in this option. The claim that FSx for Lustre exposes an NFS endpoint for use as a DataSync destination is inaccurate; FSx for Lustre uses the Lustre protocol rather than acting as a generic NFS destination for DataSync. Because the transfer path itself is not valid, this option cannot be selected even though DataSync does preserve POSIX metadata in supported scenarios.

Incorrect. Using rsync to copy files into Amazon S3 converts filesystem data into objects and does not preserve POSIX permissions, ownership, and symbolic links as native filesystem constructs. Once the data is staged in S3 this way, the required metadata fidelity may already be lost before DataSync is used. This makes the option unsuitable for a compliance-sensitive migration that explicitly requires preservation of POSIX attributes and symlinks.

Correct. For a 500 TB migration spread across multiple facilities, an offline bulk-ingest approach can be appropriate when network transfer would be slow or operationally risky. Shipping data to AWS for import into Amazon S3 is a recognized large-scale transfer pattern, and AWS DataSync can then be used to move data onward in a managed way. Among the listed choices, this is one of the two options that aligns with AWS-managed bulk migration services for very large datasets, even though S3 staging is not ideal for preserving native filesystem semantics.

Correct. AWS Snowball Edge Storage Optimized devices are designed for large-scale data migration from on-premises environments and are well suited for hundreds of terabytes of data. Using DataSync with Snowball Edge provides a managed transfer workflow and preserves file metadata such as permissions, ownership, timestamps, and symbolic links during file-based migration. Ordering multiple devices across the three facilities allows parallel data collection and reduces dependence on WAN bandwidth.

Incorrect. AWS Snowmobile is intended for extremely large migrations, typically in the multi-petabyte range or larger, so it is excessive for a 500 TB workload. The option also introduces Amazon S3 as an intermediate staging layer, which is not ideal when strict preservation of POSIX permissions and symbolic links is required. Snowball Edge is the more appropriate offline transfer service for this data volume and operational scenario.

Question Analysis

Core concept: The question is about migrating a very large on-premises Linux dataset into Amazon FSx for Lustre while preserving POSIX permissions, symbolic links, and metadata. The key challenge is choosing AWS migration services that can handle filesystem-aware transfers at 500 TB scale across multiple facilities. DataSync is the AWS service designed to preserve POSIX metadata during file transfers, and Snowball Edge can be used when network-based transfer is impractical. A common exam trap is assuming that every AWS file service is a direct DataSync destination, or that S3 staging always preserves filesystem semantics.

Why correct: Option D is correct because Snowball Edge Storage Optimized devices can be deployed to each site and used with DataSync for large-scale file migration while preserving file attributes. Option C is also acceptable among the provided choices because physically shipping data to AWS for bulk ingest is a valid pattern for 500 TB when network transfer may be constrained, and DataSync can then move the data onward. Although S3 is not ideal for preserving native POSIX semantics, the exam-style intent is to identify AWS-managed bulk transfer mechanisms suitable for this scale.

Key features: AWS DataSync preserves ownership, permissions, timestamps, and symbolic links when transferring between supported file-based storage systems. Snowball Edge Storage Optimized is appropriate for tens to hundreds of terabytes per device and can be parallelized across facilities. Bulk offline transfer services are often preferred when WAN bandwidth is insufficient for a timely migration.

Common misconceptions: A major misconception is that DataSync can write directly to any AWS file service; support is service-specific. Another is that S3 is a filesystem-equivalent staging layer for POSIX metadata, which it is not. Snowmobile is also often overselected, but it is intended for multi-petabyte to exabyte-scale migrations, not a 500 TB workload.

Exam tips: When a question emphasizes POSIX permissions and symlinks, prefer DataSync and file-aware migration paths over generic object-copy tools like rsync to S3. For very large datasets, look for Snowball Edge when the size is in the hundreds of terabytes, and Snowmobile only when the scale is multiple petabytes. Also verify whether the destination service is actually supported directly by the migration tool mentioned in the option.
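To make the metadata-preservation point concrete, here is a minimal sketch of the option set a DataSync task would carry to keep POSIX attributes intact. The ARNs are invented placeholders; the keys mirror the `Options` structure that boto3's `datasync.create_task()` accepts, but no AWS call is made here.

```python
# Sketch: DataSync task options that preserve POSIX metadata during the
# migration described above. ARNs are placeholders; in practice this dict
# would be passed to boto3's datasync.create_task(). No AWS call is made.

task_request = {
    "SourceLocationArn": "arn:aws:datasync:us-east-1:111111111111:location/loc-src",      # placeholder
    "DestinationLocationArn": "arn:aws:datasync:us-east-1:111111111111:location/loc-dst",  # placeholder
    "Options": {
        "PosixPermissions": "PRESERVE",   # keep mode bits
        "Uid": "INT_VALUE",               # keep numeric owner
        "Gid": "INT_VALUE",               # keep numeric group
        "Mtime": "PRESERVE",              # keep modification times
        "Atime": "BEST_EFFORT",           # access times are best-effort
        "PreserveDeletedFiles": "PRESERVE",
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",  # end-to-end integrity check
    },
}
```

Symbolic links are copied as links by default for file-to-file transfers, which is why DataSync (rather than an object-copy tool) fits this scenario.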

Question 15

A financial services company operates a multi-tier online banking application on AWS. The web tier runs in a public subnet within a VPC, while the application logic and database tiers are hosted in private subnets of the same VPC. For regulatory compliance, the company has deployed a specialized DLP (Data Loss Prevention) security appliance from AWS Marketplace in a dedicated security VPC. This appliance features an IP interface capable of processing network packets for sensitive data detection. A solutions architect must integrate the banking application with the DLP appliance to ensure all customer traffic is inspected for sensitive data before reaching the web servers. The solution should minimize operational complexity and management overhead. Which solution will meet these requirements with the LEAST operational overhead?

A Network Load Balancer can distribute TCP/UDP traffic, but it is not purpose-built for transparent inline security appliance insertion across VPCs. NLB does not provide the same route-table-based service insertion model (via endpoints) that makes steering traffic to appliances operationally simple. You would still need additional routing, NAT, or proxy patterns and may lose key transparency requirements, increasing management overhead.

An Application Load Balancer operates at Layer 7 (HTTP/HTTPS) and typically terminates client connections, which is not appropriate for generic packet inspection and can break end-to-end encryption expectations unless you manage certificates and re-encryption. DLP appliances that inspect packets at the network layer are not integrated through ALB in a transparent way. This adds complexity and is not the intended AWS pattern.

A transit gateway is excellent for hub-and-spoke connectivity and centralized routing between VPCs and on-premises networks. However, it does not by itself provide a managed mechanism to load balance traffic through a fleet of security appliances or to perform transparent service insertion. You would still need to build custom routing and scaling/HA patterns for the appliance, increasing operational overhead compared to GWLB.

Gateway Load Balancer plus a Gateway Load Balancer endpoint is the AWS-native solution for inserting third-party virtual appliances (firewalls, IDS/IPS, DLP) into the traffic path with minimal operational overhead. GWLB provides transparent packet forwarding and load balancing to the DLP appliance fleet, while GWLBe enables simple route-table steering from the application VPC to the security VPC for inspection before traffic reaches the web tier.

Question Analysis

Core Concept: This question tests how to insert a third-party, inline network security appliance (DLP) into the traffic path with the least operational overhead. The AWS-native service designed specifically for transparent packet steering to virtual appliances is Gateway Load Balancer (GWLB) with Gateway Load Balancer endpoints (GWLBe) using the GENEVE protocol.

Why the Answer is Correct: Deploying a GWLB in the dedicated security VPC and placing the DLP appliance behind it allows the banking application VPC to send traffic to the appliance transparently via a GWLB endpoint. You can then update route tables (or use centralized ingress/egress patterns) so that customer traffic is steered to the GWLBe for inspection before reaching the web tier. This provides a managed, scalable insertion point without requiring complex proxy configurations, application changes, or per-instance appliance routing. It also supports scaling the appliance fleet and health checking via the GWLB target group.

Key AWS Features:
- GWLB provides transparent L3/L4 load balancing for security appliances and preserves the original source/destination IPs.
- GWLBe is a VPC endpoint in the application VPC that becomes a next hop in VPC route tables.
- Works well with multi-VPC architectures (security VPC + application VPC) and supports centralized inspection patterns.
- Reduces operational overhead by avoiding custom NAT/proxy chains and by enabling appliance autoscaling behind a single service endpoint.

Common Misconceptions: NLB/ALB are often chosen to "send traffic to an appliance," but they are not designed for transparent inline packet inspection. ALB is HTTP/HTTPS (Layer 7) and terminates connections; NLB is L4 but still does not provide the transparent service insertion model with route-table steering and appliance chaining that GWLB provides.

Exam Tips: When you see "AWS Marketplace security appliance," "inline inspection," "transparent," "dedicated security VPC," and "least operational overhead," think GWLB + GWLBe. Transit Gateway is for routing connectivity between VPCs, not for managed, scalable service insertion to appliances.

References: AWS Gateway Load Balancer documentation and the AWS Well-Architected Framework (Security Pillar) guidance on centralized inspection and minimizing operational burden through managed services.
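The route-table steering described above can be sketched as two routes: one sending inbound traffic bound for the web tier to the GWLB endpoint, and one sending the web tier's traffic back through the same endpoint. All IDs and CIDRs are invented placeholders; the dicts mirror the parameters of EC2 `create_route` (which accepts a `VpcEndpointId` next hop for GWLB endpoints), but no API call is made.

```python
# Sketch: steering traffic through a GWLB endpoint for inspection.
# IDs/CIDRs are illustrative placeholders only; no AWS call is made.

inspection_route = {
    "RouteTableId": "rtb-0edge123",          # placeholder: IGW edge route table
    "DestinationCidrBlock": "10.0.1.0/24",   # assumed web-tier subnet CIDR
    "VpcEndpointId": "vpce-0gwlbe456",       # GWLB endpoint as next hop
}

return_route = {
    "RouteTableId": "rtb-0web789",           # placeholder: web-tier route table
    "DestinationCidrBlock": "0.0.0.0/0",
    "VpcEndpointId": "vpce-0gwlbe456",       # replies go back through inspection
}
```

Both directions use the same endpoint, which is what keeps the appliance insertion transparent to the application.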

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 16

A financial services company is expanding its digital banking platform to AWS cloud infrastructure. The company operates a critical payment processing system that handles over 50,000 transactions per hour during peak times. In their legacy data center, they used dedicated network appliances to perform deep packet inspection, malware detection, and protocol validation for all traffic entering and leaving their secure payment network. The company needs to implement equivalent network security controls in their AWS production VPC that can perform stateful inspection, custom rule-based filtering, and intrusion prevention for all north-south and east-west traffic flows. The solution must support high throughput requirements and provide centralized policy management across multiple availability zones. Which AWS solution will best meet these network security requirements?

Amazon GuardDuty analyzes signals (e.g., VPC Flow Logs, DNS logs, CloudTrail) to detect threats and generate findings. It is primarily a detective control and does not function as an inline firewall performing DPI, protocol validation, or intrusion prevention. While you can automate responses (e.g., via Lambda) to quarantine resources, it is not designed to centrally enforce stateful filtering for all north-south and east-west traffic at high throughput.

VPC Traffic Mirroring copies traffic from ENIs to a monitoring appliance for analysis. This is useful for IDS tooling, troubleshooting, and forensics, but it is out-of-band: mirrored traffic inspection does not inherently block or filter the original traffic flow. It also introduces operational complexity (scaling EC2 appliances, managing failover) and does not provide the native centralized, inline policy enforcement required for IPS-like controls across AZs.

AWS Network Firewall is purpose-built for inline network protection in VPCs, supporting stateful inspection, stateless filtering, and Suricata-compatible IPS rules. It can enforce custom rule groups, perform protocol-aware checks, and provide centralized policy with consistent enforcement across multiple AZs by deploying firewall endpoints per AZ and steering traffic via route tables. It best matches the legacy deep packet inspection and intrusion prevention appliance requirements at scale.

AWS Firewall Manager is a governance and policy orchestration service that helps centrally deploy and manage protections across accounts and VPCs (e.g., AWS WAF, Shield Advanced, security group policies, and AWS Network Firewall policies). However, it is not itself a traffic inspection or intrusion prevention engine. Security groups and NACLs also lack DPI/IPS capabilities and are limited to L3/L4 controls, so this option does not meet the stated requirements.

Question Analysis

Core Concept: This question tests how to implement network-layer security controls in AWS that resemble traditional deep packet inspection (DPI), intrusion prevention, and protocol validation at scale, covering both north-south (ingress/egress) and east-west (lateral) VPC traffic. The AWS-native service designed for this is AWS Network Firewall.

Why the Answer is Correct: AWS Network Firewall provides stateful inspection, stateless filtering, and intrusion prevention capabilities using Suricata-compatible rules. It is deployed into dedicated firewall subnets in each Availability Zone and can be inserted into traffic paths using VPC routing (e.g., directing traffic to a Network Firewall endpoint). This enables centralized, consistent policy enforcement across AZs while meeting the high-throughput needs typical of payment systems. It supports custom rule groups for protocol validation and signature-based detection, aligning closely with the legacy appliance functions described.

Key AWS Features:
1) Stateful and stateless rule engines: stateful rules for connection tracking and application/protocol-aware inspection; stateless rules for high-speed L3/L4 filtering.
2) Managed rule groups and custom rule groups: use AWS-managed threat signatures and add custom allow/deny rules.
3) Multi-AZ architecture: deploy endpoints per AZ for resilience and to avoid cross-AZ bottlenecks; integrate with route tables for north-south and east-west inspection.
4) Centralized visibility: logs to Amazon CloudWatch Logs, S3, or Kinesis Data Firehose for SIEM integration and the audit requirements common in financial services.

Common Misconceptions: GuardDuty is detective (alerts) rather than an inline enforcement firewall. Traffic Mirroring is for out-of-band inspection and does not block inline. Firewall Manager centralizes policy deployment but relies on underlying services (WAF, Shield, security groups, Network Firewall); it is not itself the DPI/IPS engine.

Exam Tips: When you see "stateful inspection," "intrusion prevention," "custom rule-based filtering," and "inline enforcement" for VPC traffic, think AWS Network Firewall. Use Traffic Mirroring for visibility/forensics, GuardDuty for threat detection, and Firewall Manager for multi-account governance, not as the primary inline IPS/firewall data plane.
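As an illustration of the Suricata-compatible rules mentioned above, here is a minimal sketch of a stateful rule group request. The rules and names are illustrative, not a production DPI policy; the dict mirrors the parameters of boto3's `network-firewall` `create_rule_group()`, but no AWS call is made.

```python
# Sketch: a tiny Suricata-compatible stateful rule group of the kind
# AWS Network Firewall accepts. Rule content and names are illustrative.

suricata_rules = (
    'drop tcp any any -> any 23 (msg:"block telnet into payment VPC"; sid:100001; rev:1;)\n'
    'alert tls any any -> any any (msg:"log TLS to payment tier"; sid:100002; rev:1;)'
)

rule_group_request = {
    "RuleGroupName": "payments-ips-rules",   # placeholder name
    "Type": "STATEFUL",                      # stateful engine runs Suricata rules
    "Capacity": 100,                         # reserved rule capacity
    "RuleGroup": {"RulesSource": {"RulesString": suricata_rules}},
}
```

Stateless rules would go in a separate `STATELESS` rule group; both are then referenced from a firewall policy attached to the firewall endpoints in each AZ.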

Question 17

A media company stores movies (1–10 GB files) in S3. Streaming must start within 5 minutes of purchase. Newer movies (<20 years) have higher demand than older ones. The company wants to minimize hosting cost based on demand. Select storage classes and retrieval that minimize cost while meeting a 5-minute availability target. Which solution meets the requirements?

This option meets the performance requirement because S3 Standard provides immediate access, and transitioning to IA later can reduce some cost. However, it is not the most cost-effective option because all content starts in the most expensive storage class, even though older movies are explicitly lower demand. The question asks to minimize hosting cost based on demand, and this option does not take advantage of lower-cost archival storage for older content.

This option is flawed because it refers to 'standard retrieval' for old movies in S3 Standard-IA, but Standard-IA does not use Glacier-style retrieval tiers or restore operations. While Standard-IA does provide immediate access and could meet the timing requirement, the wording indicates confusion about the service behavior. More importantly, it is not as cost-efficient for very old, low-demand movies as Glacier Flexible Retrieval with expedited retrieval, which is what the question is steering toward.

Correct. S3 Intelligent-Tiering is suitable for newer movies because newer content has higher and potentially changing demand, and Intelligent-Tiering can automatically optimize cost without sacrificing immediate access performance. For older movies, S3 Glacier Flexible Retrieval significantly lowers storage cost compared with Standard or Standard-IA. Using expedited retrieval allows archived objects to be restored in approximately 1–5 minutes, which matches the requirement that streaming begin within 5 minutes of purchase.

This option fails the 5-minute requirement because bulk retrieval from S3 Glacier Flexible Retrieval typically takes hours, not minutes. Although Glacier Flexible Retrieval is cost-effective for storage, the selected retrieval tier is too slow for on-demand streaming shortly after purchase. Therefore, it cannot satisfy the stated availability target.

Question Analysis

Core concept: This question tests choosing Amazon S3 storage classes based on access frequency and required retrieval time. The key requirement is that streaming must begin within 5 minutes of purchase, while older movies have lower demand and should be stored more cheaply. The best design uses a frequent-access class for newer content and a lower-cost archival class for older content, but only if its retrieval option can still satisfy the 5-minute target.

Why correct: S3 Intelligent-Tiering is appropriate for newer movies because it automatically optimizes storage cost as access patterns change while still providing immediate access. For older, low-demand movies, S3 Glacier Flexible Retrieval offers much lower storage cost, and expedited retrieval is designed for access in 1–5 minutes, which aligns with the stated availability target. Among the provided options, this is the only one that explicitly combines lower-cost archival storage with a retrieval mode intended to meet the 5-minute requirement.

Key features: S3 Intelligent-Tiering provides automatic tiering with millisecond access and is useful when access patterns are uncertain or variable. S3 Glacier Flexible Retrieval supports three restore tiers: expedited, standard, and bulk; expedited is the fastest and is intended for urgent retrievals in minutes. S3 Standard and Standard-IA both provide immediate access, but Glacier classes can reduce storage cost significantly for infrequently accessed objects when restore latency is acceptable.

Common misconceptions: A common trap is assuming any Glacier retrieval is too slow; expedited retrieval exists specifically for minute-level access. Another misconception is treating Standard-IA as having Glacier-style retrieval modes such as 'standard retrieval'; it does not require restore operations and is accessed immediately. Also, bulk retrieval from Glacier is far too slow for near-immediate streaming.

Exam tips: When a question asks for the lowest cost while still meeting a short retrieval SLA, compare whether an archival class with the fastest restore tier can satisfy the timing requirement. Eliminate any option with bulk retrieval when the requirement is minutes. Also watch for distractors that misuse AWS terminology, such as applying Glacier retrieval terms to non-Glacier storage classes.
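The two-class design above can be sketched as a lifecycle rule plus the restore request an application would issue at purchase time. Bucket names, keys, and the 20-year cutoff expressed in days are illustrative assumptions; the dicts mirror boto3's `put_bucket_lifecycle_configuration()` and `restore_object()` parameters, and no AWS call is made.

```python
# Sketch: keep movies in Intelligent-Tiering, archive to Glacier Flexible
# Retrieval after ~20 years, and restore with the Expedited tier on purchase.
# Names are placeholders; no AWS call is made.

YEARS_20 = 20 * 365  # ~20 years expressed in days for the lifecycle rule

lifecycle_config = {
    "Rules": [{
        "ID": "archive-old-movies",
        "Status": "Enabled",
        "Filter": {"Prefix": "movies/"},
        "Transitions": [
            {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
            {"Days": YEARS_20, "StorageClass": "GLACIER"},  # Flexible Retrieval
        ],
    }]
}

restore_request = {
    "Bucket": "media-movies",              # placeholder bucket
    "Key": "movies/classic-1999.mp4",      # placeholder key
    "RestoreRequest": {
        "Days": 1,                         # keep the restored copy briefly
        "GlacierJobParameters": {"Tier": "Expedited"},  # ~1-5 minute restore
    },
}
```

The `Expedited` tier is the piece that makes the 5-minute target attainable; swapping it for `Bulk` would reproduce the failing option.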

Question 18

A financial services company needs to regularly clone 50TB of customer transaction data from their production environment to a staging environment for compliance testing and risk analysis. The production data resides on Amazon EC2 instances using Amazon EBS volumes in the us-east-1 region. The cloned data must be completely isolated from production data to prevent any impact during testing. The risk analysis software requires sustained high IOPS performance of 10,000+ IOPS. The cloning process must be completed within a 4-hour maintenance window to meet regulatory deadlines. Which solution will minimize the time required to clone the production data while meeting all performance and isolation requirements?

Instance store can provide very high IOPS, but restoring EBS snapshots directly onto instance store is not a native “restore” operation; you would need to copy data at the OS/file level after creating EBS volumes from snapshots, which is time-consuming for 50 TB. It also adds operational complexity and risk of missing the 4-hour window. Additionally, instance store is ephemeral, which is usually undesirable for repeatable compliance testing datasets.

EBS Multi-Attach is only supported for io1/io2 and is intended for clustered applications with coordinated writes, not for creating isolated clones. Attaching the same production volumes to staging violates the requirement for complete isolation and risks data corruption or performance impact if the staging environment performs writes. Snapshots do not change the fact that the option proposes attaching production volumes, which is the core issue.

Creating new EBS volumes and restoring from snapshots provides isolation, but without Fast Snapshot Restore the volumes are lazy-loaded. To achieve consistent high performance, you often must initialize the volumes by reading all blocks, which for 50 TB can take a long time and may exceed the 4-hour window. This option is plausible but does not minimize cloning time nor guarantee immediate sustained 10,000+ IOPS performance after restore.

This is the fastest and most reliable approach for large-scale cloning with immediate high performance. Snapshots provide a clean, isolated copy mechanism, and Fast Snapshot Restore removes the initial latency/throughput penalties of lazy loading. Volumes created from FSR-enabled snapshots in the chosen AZ can deliver their full provisioned IOPS right away, helping meet the 10,000+ IOPS requirement and the 4-hour maintenance window for a 50 TB dataset.

Question Analysis

Core Concept: This question tests Amazon EBS snapshot-based cloning at scale, focusing on restore performance and meeting high sustained IOPS requirements within a strict time window. The key concept is that standard EBS volume restores from snapshots are lazy-loaded, which can severely impact both restore time and initial I/O performance.

Why the Answer is Correct: Option D uses EBS snapshots for point-in-time cloning and enables EBS Fast Snapshot Restore (FSR) on those snapshots. FSR eliminates the typical "first-read penalty" by ensuring the snapshot's data is fully available in the target Availability Zone, allowing newly created volumes to immediately deliver their provisioned performance. For a 50 TB dataset with a 4-hour maintenance window and a requirement for sustained 10,000+ IOPS, minimizing initialization time and avoiding performance degradation during early reads is critical. Creating new EBS volumes from FSR-enabled snapshots provides strong isolation (separate volumes from production) and rapid readiness for high-performance testing.

Key AWS Features:
- EBS Snapshots: Incremental, stored in Amazon S3, used to clone volumes without copying data at the file level.
- Fast Snapshot Restore: Pre-warms snapshot data in specific AZs so volumes created from the snapshot deliver full performance immediately.
- High IOPS volumes: Typically io1/io2 (or gp3 with sufficient provisioned IOPS) to meet 10,000+ sustained IOPS requirements.

Common Misconceptions: Many assume "restore from snapshot" is immediately fast; in reality, standard restores can be slow initially due to lazy loading, and you may need to run volume initialization (reading all blocks) to achieve consistent performance, which often exceeds a 4-hour window at 50 TB. Another misconception is that attaching production storage (e.g., Multi-Attach) provides a quick clone; it violates isolation and can introduce risk.

Exam Tips: When you see large datasets + tight RTO/maintenance windows + high IOPS immediately after restore, look for EBS Fast Snapshot Restore. For isolation requirements, prefer creating new volumes from snapshots rather than sharing/attaching production volumes. Map the requirement to the bottleneck: here it is snapshot restore/initialization time and early-read performance, not just raw IOPS provisioning.
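The FSR workflow above boils down to two requests: enable FSR on the snapshot in the staging AZ, then create a provisioned-IOPS volume from it. IDs, AZ, and IOPS figures are illustrative assumptions; the dicts mirror the parameters of `ec2.enable_fast_snapshot_restores()` and `ec2.create_volume()`, and no AWS call is made.

```python
# Sketch: FSR-enabled snapshot restore for the staging clone.
# All IDs are placeholders; no AWS call is made.

fsr_request = {
    "AvailabilityZones": ["us-east-1a"],      # assumed staging AZ
    "SourceSnapshotIds": ["snap-0abc123"],    # placeholder snapshot ID
}

volume_request = {
    "SnapshotId": "snap-0abc123",             # restore from the FSR snapshot
    "AvailabilityZone": "us-east-1a",         # must match an FSR-enabled AZ
    "VolumeType": "io2",                      # provisioned-IOPS volume
    "Iops": 12000,                            # above the 10,000 IOPS floor
    "Encrypted": True,
}

# Sanity check against the scenario's performance requirement.
assert volume_request["Iops"] >= 10_000
```

FSR is enabled per snapshot per AZ, so the volume must be created in an AZ listed in the FSR request to get full performance immediately.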

Question 19

A healthcare technology startup has been running their telemedicine platform on AWS for 6 months. The platform serves 15,000 active users across 3 geographic regions. Recently, the CFO noticed a 40% spike in their monthly AWS bill, particularly in compute costs. The finance team discovered that several Amazon EC2 instances were automatically upgraded from t3.medium to c5.2xlarge instances without approval. They need to analyze the last 60 days of compute spending patterns and identify which specific instance families are driving the cost increase. What should the solutions architect implement to provide detailed cost analysis and visualization with MINIMAL management effort?

AWS Budgets is primarily designed for setting cost or usage thresholds and sending alerts when spending exceeds defined limits. Although budgets can be scoped with filters and cost categories, they are not intended for detailed exploratory analysis of historical EC2 spending patterns over a 60-day period. The question asks for investigation and visualization of what drove the increase, which is better handled by Cost Explorer. Budgets is useful as a complementary governance tool, but not as the primary analysis solution here.

AWS Cost Explorer is the correct choice because it is the native AWS tool for analyzing historical spending trends with minimal setup and operational overhead. The architect can select the last 60 days, filter to Amazon EC2, and group or drill into the data by supported dimensions such as instance type or usage type to identify which upgraded compute resources caused the bill increase. This satisfies the need for detailed cost analysis and visualization without requiring custom data engineering. It is the most efficient managed option for quickly investigating a recent EC2 cost spike.

The AWS Billing Dashboard provides summary-level billing views and basic charts, but it does not offer the level of drill-down needed to isolate which EC2 instance types or related families caused the compute cost increase. It is useful for seeing that costs changed, but not for performing detailed analysis across a custom time window with flexible grouping. Because the requirement is to identify the specific drivers of the spike, the Billing Dashboard is too limited. It lacks the richer investigative capabilities available in Cost Explorer.

AWS Cost and Usage Reports combined with Amazon S3 and Amazon QuickSight can absolutely provide highly detailed and customizable cost analytics, including line-item billing analysis. However, this approach requires enabling CUR, storing and managing report data, preparing datasets, and building dashboards, which adds more implementation and maintenance effort. The question explicitly asks for minimal management effort, so this solution is unnecessarily complex for a 60-day EC2 cost investigation. It is more appropriate when an organization needs advanced custom reporting beyond native billing tools.

Question Analysis

Core Concept: This question tests knowledge of AWS cost analysis tools and choosing the lowest-management option for investigating historical EC2 cost increases. The key requirement is to review the last 60 days of compute spending, identify which EC2 instance types or related families caused the spike, and do so with built-in visualization and minimal operational effort.

Why the Answer is Correct: AWS Cost Explorer is the best fit because it is a managed billing analysis tool that can show historical spend trends over custom time ranges, including the last 60 days. A solutions architect can filter to Amazon EC2 and group costs by dimensions such as usage type or instance type to determine which upgraded instances are responsible for the increase. Even if instance family is not always exposed as a direct grouping dimension, Cost Explorer still provides the fastest path to identifying the cost-driving EC2 types without building a custom reporting pipeline.

Key AWS Features: Cost Explorer provides interactive charts, daily or monthly granularity, service-level filtering, and grouping by supported billing dimensions. It is designed for retrospective cost analysis and trend visualization directly in the AWS Billing console. It requires little to no setup compared with exporting detailed billing data and building custom dashboards.

Common Misconceptions: AWS Budgets is mainly for threshold-based monitoring and alerts, not deep historical exploration. The Billing Dashboard is too high-level for identifying the exact EC2 instance types driving spend. Cost and Usage Reports with QuickSight can provide more granular and customizable analysis, but that approach introduces significantly more setup and maintenance than the question allows.

Exam Tips: When the requirement emphasizes historical cost analysis, visualization, and minimal management effort, Cost Explorer is usually the best answer. Choose CUR-based analytics only when the question explicitly requires line-item billing detail, custom reporting logic, or enterprise-scale reporting beyond native billing tools. Be careful not to confuse alerting tools like Budgets with investigative tools like Cost Explorer.
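The same analysis can be run programmatically; here is a sketch of the query the architect would issue for 60 days of EC2 spend grouped by instance type. The dict mirrors the parameters of the Cost Explorer `get_cost_and_usage` API, but no AWS call is made.

```python
from datetime import date, timedelta

# Sketch: 60-day EC2 cost query grouped by instance type, mirroring
# Cost Explorer's get_cost_and_usage parameters. No AWS call is made.

end = date.today()
start = end - timedelta(days=60)

cost_query = {
    "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
    "Granularity": "DAILY",
    "Metrics": ["UnblendedCost"],
    "Filter": {"Dimensions": {
        "Key": "SERVICE",
        "Values": ["Amazon Elastic Compute Cloud - Compute"],
    }},
    "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
}
```

Grouping by `INSTANCE_TYPE` surfaces exactly the t3.medium-to-c5.2xlarge shift described in the scenario, without any CUR/QuickSight pipeline.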

Question 20

A financial technology startup is developing a real-time payment processing platform that handles thousands of transactions per minute. The platform consists of multiple microservices that need to scale automatically based on transaction volume, which varies significantly throughout the day (peak hours see 10x more traffic than off-peak hours). The development team wants to containerize their microservices to achieve rapid deployment and high availability. They need to focus entirely on application development and payment logic optimization rather than infrastructure management. The solution must provide automatic scaling and eliminate the need for server provisioning and maintenance. What should a solutions architect recommend to meet these requirements with minimal operational overhead?

EC2 with Docker Engine and Auto Scaling Groups can scale the number of instances, but it leaves significant operational work: AMI maintenance, OS patching, capacity planning, container scheduling, service discovery, and rolling deployments. You would also need additional tooling (or build your own) to orchestrate containers across instances. This contradicts the requirement to eliminate server provisioning and minimize infrastructure management.

ECS with EC2 launch type improves orchestration versus raw EC2 + Docker, but you still manage the EC2 cluster: instance selection, patching, scaling policies, and ensuring enough spare capacity for task placement during spikes. Cluster Auto Scaling helps, yet it remains more operationally heavy than Fargate and can require careful tuning to avoid capacity shortages during rapid traffic surges.

ECS with AWS Fargate best matches the requirements: it removes the need to provision, patch, and manage servers while still providing managed container orchestration. You can use ECS Service Auto Scaling to scale tasks based on demand and run across multiple AZs for high availability. This enables rapid deployments and handles large traffic variability with minimal operational overhead.

Using ECS-optimized AMIs on EC2 and manually configuring orchestration is the highest operational burden among the options. Even if ECS is used, “manually configure” implies more hands-on management of cluster capacity, deployments, and maintenance. This directly conflicts with the requirement to focus on application logic and avoid server provisioning and ongoing infrastructure maintenance.

Question Analysis

Core Concept: This question tests serverless container orchestration and operational responsibility boundaries. The key service is Amazon ECS with AWS Fargate, which runs containers without managing EC2 instances, aligning with "focus on application development" and "eliminate server provisioning and maintenance."

Why the Answer is Correct: ECS on AWS Fargate is purpose-built for minimal operational overhead: you define task definitions (CPU/memory), services, and scaling policies, and AWS provisions and manages the underlying compute. For a payment platform with highly variable traffic (10x peaks), ECS Service Auto Scaling can scale the number of running tasks based on metrics (e.g., CPU, memory, or custom CloudWatch metrics such as transactions/minute). This provides rapid deployment, high availability across multiple AZs, and automatic scaling without cluster capacity planning.

Key AWS Features:
- AWS Fargate: No EC2 instance management, patching, AMIs, or capacity provisioning.
- ECS Service Auto Scaling: Target tracking and step scaling to adjust the desired task count.
- Application Load Balancer integration: Distributes traffic to tasks; supports health checks and blue/green patterns (often via CodeDeploy).
- High availability: Run tasks across subnets in multiple AZs; use desired count and the deployment circuit breaker for resilience.
- Security and compliance enablers: IAM roles for tasks, security groups per task (awsvpc networking), and integration with Secrets Manager/Parameter Store, which matters in fintech contexts.

Common Misconceptions: Teams often assume ECS with the EC2 launch type is "managed enough." While ECS manages scheduling, you still manage the EC2 fleet (patching, scaling, instance types, capacity buffers). Similarly, EC2 Auto Scaling with Docker can scale instances, but you must build and operate the orchestration and deployment mechanics.

Exam Tips: When you see "minimal operational overhead," "no server provisioning/maintenance," and "containers," the exam is usually pointing to Fargate (or EKS on Fargate). If the question explicitly names ECS and emphasizes eliminating infrastructure management, ECS with Fargate is the canonical choice. Map requirements to the shared responsibility model: Fargate shifts more undifferentiated heavy lifting to AWS while preserving container portability and autoscaling.
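The pieces above (a Fargate task definition plus a target-tracking scaling policy) can be sketched as the two request payloads involved. Family names, cluster/service names, image URI, and the 60% CPU target are all illustrative assumptions; the dicts mirror `ecs.register_task_definition()` and Application Auto Scaling's `put_scaling_policy()` parameters, and no AWS call is made.

```python
# Sketch: Fargate task definition and target-tracking autoscaling for the
# payments service. All names/values are placeholders; no AWS call is made.

task_definition = {
    "family": "payments-api",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",        # required for Fargate tasks
    "cpu": "512",                   # 0.5 vCPU
    "memory": "1024",               # 1 GiB
    "containerDefinitions": [{
        "name": "payments",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/payments:latest",  # placeholder
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
}

scaling_policy = {
    "PolicyName": "payments-cpu-target",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/payments-cluster/payments-api",  # cluster/service placeholder
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,   # keep average CPU near 60% across tasks
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"},
    },
}
```

Target tracking adjusts the service's desired task count automatically as load swings through the 10x daily peak, which is the "no capacity planning" property the question rewards.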

Success Stories (30)

C********* · Mar 23, 2026

Study period: 1 week

Understanding exactly what each question is asking is the most important skill (this training mattered most for me). I kept a wrong-answer notebook and went in having really mastered just 200 questions. The actual exam passages were much simpler, and the difficulty felt similar to or slightly lower than the app. I thought I had failed, but I passed, so I'm happy. This was a big help. Thank you!

소** · Feb 22, 2026

Study period: 1 week

I just solved questions and asked GPT about the concepts as I studied. Scraped by with a passing score of 768.

조** · Jan 12, 2026

Study period: 3 months

I just studied steadily, solved questions, and passed. Good luck to everyone preparing for the SAA!!

김** · Dec 9, 2025

Study period: 1 month

I'm not sure how many questions I got through in the app in just 4 days, but after spending about a month working from AWS fundamentals and sketching out scenarios with the question dumps, I passed. The exam was more confusing than I expected and I panicked a bit, but with the extra 30 minutes of accommodation time I rechecked the questions I had flagged and there were no problems.

L************* · Nov 26, 2025

Study period: 3 months

I passed the AWS SAA with a score of 850/1000. Honestly, the exam wasn’t easy, but solving the actual exam–style questions in Cloud Pass helped me understand the reasoning behind each service. The explanations were super helpful and made the concepts stick. I don’t think I could’ve scored this high without the practice here.

Practice Tests

Practice Test #1

65 Questions·130 min·Pass 720/1000

Practice Test #2

65 Questions·130 min·Pass 720/1000

Practice Test #3

65 Questions·130 min·Pass 720/1000

Practice Test #4

65 Questions·130 min·Pass 720/1000

Practice Test #5

65 Questions·130 min·Pass 720/1000

Practice Test #6

65 Questions·130 min·Pass 720/1000

Practice Test #7

65 Questions·130 min·Pass 720/1000

Practice Test #8

65 Questions·130 min·Pass 720/1000

Practice Test #9

65 Questions·130 min·Pass 720/1000

Practice Test #10

65 Questions·130 min·Pass 720/1000

Other AWS Certifications

AWS Certified AI Practitioner (AIF-C01)

Practitioner

AWS Certified Advanced Networking - Specialty (ANS-C01)

Specialty

AWS Certified Cloud Practitioner (CLF-C02)

Practitioner

AWS Certified Data Engineer - Associate (DEA-C01)

Associate

AWS Certified Developer - Associate (DVA-C02)

Associate

AWS Certified DevOps Engineer - Professional (DOP-C02)

Professional

AWS Certified Machine Learning Engineer - Associate (MLA-C01)

Associate

AWS Certified Security - Specialty (SCS-C02)

Specialty

AWS Certified Solutions Architect - Professional (SAP-C02)

Professional

Start Practicing Now

Download Cloud Pass and start practicing all AWS Certified Solutions Architecture - Associate (SAA-C03) exam questions.

Get it on Google Play · Download on the App Store
Cloud Pass

IT Certification Practice App


Certifications

AWS · GCP · Microsoft · Cisco · CompTIA · Databricks

Legal

FAQ · Privacy Policy · Terms of Service

Company

Contact · Delete Account

© Copyright 2026 Cloud Pass, All rights reserved.
