AWS Certified Solutions Architect - Associate (SAA-C03)

Practice Test #8

Simulate the real exam experience with 65 questions and a 130-minute time limit. Practice with AI-verified answers and detailed explanations.

65 Questions · 130 Minutes · 720/1000 Passing Score


Practice Questions

Question 1

A financial services company is modernizing their legacy batch processing system that handles daily transaction reconciliation reports. The current system uses a central coordinator server that distributes reconciliation tasks to multiple processing workers. The workload varies significantly - during month-end periods, processing volume can increase by 300%, while weekends see minimal activity. The company needs to migrate this system to AWS with maximum resilience and automatic scaling capabilities to handle the variable workload efficiently while minimizing operational overhead. How should a solutions architect design the architecture to meet these requirements?

Option A (Incorrect): SQS + Auto Scaling workers is a solid decoupled pattern, but scheduled scaling does not deliver the “maximum resilience and automatic scaling” the requirements call for when workloads are highly variable. It relies on predictions and historical patterns, which can miss unexpected spikes or shifts in month-end timing and volume. It can also keep excess capacity running during low periods, increasing cost and tuning effort compared with queue-depth-driven scaling.

Option B (Correct): SQS decouples producers from consumers and removes the coordinator bottleneck. Scaling the worker Auto Scaling group on SQS queue depth directly matches capacity to demand, handling sudden month-end surges and scaling down on weekends automatically. Combined with a visibility timeout, a DLQ, and multi-AZ stateless workers, it delivers high resilience with low operational overhead.

Option C (Incorrect): Keeping a coordinator server preserves a single point of failure and a scaling choke point, reducing resilience. CloudTrail is for auditing API calls, not for capturing or routing job-distribution events. Scaling on coordinator CPU is an indirect metric that may not correlate with backlog or throughput, and it can lead to delayed or unstable scaling during spikes.

Option D (Incorrect): EventBridge can route events, but it does not replace the durable task buffering and back-pressure that SQS provides for batch workloads. Retaining a coordinator server still introduces a single point of failure and operational overhead. CPU-based scaling on workers is less accurate than queue-depth scaling because CPU may not reflect pending work (e.g., I/O waits), causing under- or over-scaling.

Question Analysis

Core Concept: This question tests decoupled, resilient batch processing on AWS using a queue-based worker pattern and event-driven scaling. The key services are Amazon SQS for durable task buffering and EC2 Auto Scaling for elastic worker fleets.

Why the Answer is Correct: Option B is the best design because it removes the single “central coordinator” as a scaling and availability bottleneck and replaces it with SQS, which provides highly available, durable message storage and natural back-pressure. Workers poll SQS and process tasks independently. Scaling the worker Auto Scaling group based on SQS queue depth (e.g., ApproximateNumberOfMessagesVisible and/or ApproximateNumberOfMessagesNotVisible) aligns capacity directly to outstanding work. This provides automatic scaling for unpredictable spikes (like month-end +300%) and scales down during low activity (weekends), minimizing operational overhead.

Key AWS Features:
- SQS standard queue: multi-AZ, highly scalable buffering; supports at-least-once delivery.
- Visibility timeout: prevents multiple workers from processing the same task concurrently; tune it to the maximum processing time.
- Dead-letter queue (DLQ): isolates poison messages after maxReceiveCount.
- EC2 Auto Scaling with target tracking/step scaling on SQS metrics (often via CloudWatch alarms): scales on backlog rather than CPU, which is more directly tied to throughput for batch jobs.
- Resilience best practices: stateless workers across multiple AZs; idempotent processing to handle occasional duplicate deliveries.

Common Misconceptions: Scheduled scaling (Option A) can look attractive because the company has known month-end peaks, but it fails for unplanned surges and can overprovision during quiet periods. CPU-based scaling (Options C/D) is indirect and can lag: CPU may be low while backlog grows (I/O-bound jobs) or high due to noisy neighbors, leading to unstable scaling. CloudTrail is not a job distribution mechanism.

Exam Tips: For variable batch/worker workloads, prefer “SQS + Auto Scaling workers” and scale on queue depth/backlog, not on coordinator CPU. Look for designs that eliminate single points of failure and use managed services for decoupling and resilience (AWS Well-Architected Reliability pillar).
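The backlog-driven scaling described above can be sketched as a small calculation, similar in spirit to CloudWatch target tracking on "backlog per instance." The function name, per-worker throughput figure, and fleet bounds below are illustrative assumptions, not values from the question:

```python
import math

def desired_worker_count(backlog, msgs_per_worker_per_cycle, min_size=1, max_size=50):
    """Compute an Auto Scaling target from SQS backlog.

    backlog: ApproximateNumberOfMessagesVisible (pending tasks).
    msgs_per_worker_per_cycle: messages one worker clears within the
    scaling evaluation window (an assumed, measured throughput figure).
    """
    if backlog <= 0:
        return min_size
    needed = math.ceil(backlog / msgs_per_worker_per_cycle)
    # Clamp to the Auto Scaling group's configured bounds.
    return max(min_size, min(max_size, needed))

# Month-end surge: 12,000 queued tasks, each worker clears ~100 per cycle.
print(desired_worker_count(12000, 100))   # scales out (capped at max_size)
# Quiet weekend: nearly empty queue.
print(desired_worker_count(3, 100))       # scales in toward min_size
```

Because capacity follows the queue rather than a schedule, an unplanned spike raises the backlog and the target immediately, which is the property scheduled scaling lacks.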

Question 2

A global gaming company operates a multiplayer online game platform on AWS. The platform handles millions of concurrent players during peak gaming hours across different time zones. The company needs a scalable, near-real-time solution to share millions of player activity events (scores, achievements, item purchases) with several internal analytics applications. Player events must be processed to mask personally identifiable information (PII) before being stored in a NoSQL database for fast query performance by leaderboard services. What should a solutions architect recommend to meet these requirements?

Option A (Incorrect): DynamoDB Streams can emit item-level changes, but DynamoDB is not the best primary ingestion bus for millions of concurrent events that must be shared broadly in near real time. Also, DynamoDB does not provide a native “rule” mechanism to remove/mask PII on write; you would need application logic or a Lambda-based pipeline. This option also couples ingestion directly to the database, reducing flexibility for multiple analytics consumers.

Option B (Incorrect): Kinesis Data Firehose is optimized for loading streaming data into storage/analytics destinations (commonly S3, Redshift, OpenSearch) with optional Lambda transformation. However, Firehose is not designed for multiple independent near-real-time consumers; it delivers to destinations rather than acting as a shared event stream. Additionally, DynamoDB is not a typical/native Firehose destination, making this architecture mismatched to the stated DynamoDB requirement.

Option C (Correct): Kinesis Data Streams provides a scalable, low-latency ingestion layer that can handle very high event volumes and supports multiple consumers reading the same stream (fan-out). Lambda can process records from the stream to mask PII before writing sanitized events into DynamoDB for fast leaderboard queries. Other analytics applications can consume directly from the same Kinesis stream (optionally using enhanced fan-out) to meet the near-real-time sharing requirement.

Option D (Incorrect): Storing batched events as S3 files and processing them with Lambda is a batch-oriented pattern that increases end-to-end latency and is less suitable for “near-real-time” requirements. It also adds operational complexity around file sizing, partitioning, and reprocessing. While S3 is excellent for durable data lakes, it is not the right primary mechanism for real-time event fan-out to multiple analytics applications.

Question Analysis

Core Concept: This question tests designing a scalable, near-real-time event ingestion and fan-out architecture using streaming services, plus secure data handling (PII masking) before persistence in a low-latency NoSQL store. The key services are Amazon Kinesis Data Streams (real-time streaming and multiple consumers), AWS Lambda (stream processing/transformation), and Amazon DynamoDB (fast key-value/NoSQL queries for leaderboards).

Why the Answer is Correct: Option C best matches all requirements: (1) ingest millions of concurrent player events with elastic throughput, (2) share the same event stream with several internal analytics applications in near real time, and (3) ensure PII is masked before storing in DynamoDB for fast leaderboard queries. Kinesis Data Streams is purpose-built for high-throughput, low-latency streaming and supports multiple independent consumers reading the same data (fan-out) without building custom distribution. Lambda can be invoked from Kinesis to transform each record (mask/remove PII) and then write sanitized events to DynamoDB.

Key AWS Features / Best Practices:
- Kinesis Data Streams: shard-based scaling for write/read throughput; retention for replay; enhanced fan-out for dedicated per-consumer throughput and low latency.
- Lambda + Kinesis integration: event source mapping with batching, retries, and checkpointing; use idempotent writes to DynamoDB to handle retries safely.
- DynamoDB: on-demand capacity or auto scaling to handle variable peak loads; partition key design for leaderboard access patterns; optional DynamoDB Streams for downstream change capture (but not required here).
- Security: mask PII in the stream processing layer before persistence; apply least-privilege IAM and consider encryption at rest (KMS) for Kinesis and DynamoDB.

Common Misconceptions: Option A seems attractive because DynamoDB Streams can publish changes, but it is not designed as the primary high-throughput ingestion and multi-consumer event bus, and “a rule in DynamoDB to remove PII upon write” is not a native DynamoDB feature. Option B mentions Firehose, but Firehose is primarily a delivery service to destinations like S3/Redshift/OpenSearch; it is not a multi-consumer near-real-time bus, and DynamoDB is not a standard Firehose destination. Option D relies on S3 batch files, which increases latency and is not near real time.

Exam Tips: When you see “millions of events,” “near-real-time,” and “several applications consume the same events,” think Kinesis Data Streams (or MSK) rather than S3 batching or DynamoDB Streams. If the requirement says “mask PII before storing,” place transformation (Lambda/Kinesis Data Analytics) before the database sink. Also verify service capabilities: DynamoDB doesn’t have write-time transformation rules, and Firehose has limited destination/consumer patterns compared to Kinesis Data Streams.
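A minimal sketch of the Lambda transformation step, assuming JSON payloads in the Kinesis records and an illustrative PII field list (the field names, masking token, and event shape beyond Kinesis's standard base64 `data` attribute are assumptions, and the DynamoDB write is omitted):

```python
import base64
import json

# Fields treated as PII in this sketch (an assumption; a real pipeline
# would drive this list from a data-classification config).
PII_FIELDS = {"player_name", "email", "ip_address"}

def mask_pii(event):
    """Lambda-style handler sketch for a Kinesis event source mapping.

    Decodes each record, masks PII fields, and returns the sanitized
    items that would then be written to DynamoDB.
    """
    sanitized = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for field in PII_FIELDS & payload.keys():
            payload[field] = "***MASKED***"
        sanitized.append(payload)
    return sanitized

# Simulated Kinesis event with one player-activity record.
raw = {"player_name": "alice", "score": 9001, "email": "a@example.com"}
event = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps(raw).encode()).decode()}}]}
print(mask_pii(event))
```

Because the masking happens in the consumer before the DynamoDB sink, the leaderboard table only ever receives sanitized items, while other analytics consumers can still read the raw stream under their own access controls.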

Question 3

A media production company operates a video editing workflow on AWS using Linux-based workstations. Currently, the company uses two Amazon EC2 instances running NFS servers to store project files, raw footage, and rendered videos. The NFS servers replicate data between each other to ensure redundancy. The creative team requires seamless access to shared storage across multiple editing workstations without changing their current workflow. The company needs a highly available and durable storage solution that maintains the current file access patterns used by the editing workstations. What should a solutions architect recommend to meet these requirements?

Incorrect: Amazon S3 provides excellent durability and availability, but it is object storage accessed via APIs, not an NFS/POSIX file system. Even with S3 access points, workstations would need different tooling (SDK/CLI) or a file gateway/mount solution, and file locking/rename semantics differ. This violates the requirement to keep the current shared file access patterns without workflow changes.

Incorrect: AWS Storage Gateway File Gateway is intended mainly for hybrid use cases where on-premises clients need NFS/SMB access with cached local performance and S3 as the backing store. Mounting a File Gateway “on EC2 NFS servers” adds complexity and does not inherently provide a clean, highly available shared file system for multiple editing workstations. It also doesn’t directly replace the need for HA shared storage in AWS like EFS does.

Correct: Amazon EFS is a managed NFS file system that multiple Linux workstations/EC2 instances can mount concurrently with minimal workflow change. EFS is a regional service that stores data across multiple AZs for high availability and durability, removing the need for EC2-based NFS replication and failover management. Using the Standard storage class fits frequently accessed media and supports elastic scaling as projects grow.

Incorrect: Amazon FSx for Lustre is optimized for high-performance parallel workloads (HPC, large-scale rendering/compute) and can integrate with S3, but it is not the simplest match for “keep NFS workflow” and general shared storage. The option’s mention of “cross-AZ replication” is misleading; FSx for Lustre HA is not typically configured as cross-AZ replication in this manner. It can be overkill and adds operational/design complexity compared to EFS.

Question Analysis

Core Concept: This question tests selecting a managed, highly available shared file system that preserves existing POSIX/NFS file access patterns for Linux workstations. The key requirement is “seamless access to shared storage” without workflow changes, plus high availability and durability.

Why the Answer is Correct: Amazon EFS is a fully managed NFS file system designed for multiple Linux clients to mount concurrently. It eliminates the operational burden and fragility of self-managed NFS servers on EC2 (patching, scaling, failover, replication logic). EFS stores data redundantly across multiple Availability Zones within a Region, providing high durability and availability by design. Because clients mount EFS over NFS (v4.x), the editing workstations can continue using the same shared-file workflow and file semantics.

Key AWS Features: EFS supports NFS mounts from many EC2 instances and workstations in a VPC, elastic capacity (no pre-provisioning), and Multi-AZ resilience as a regional service. Use mount targets in each AZ for high availability and low-latency access. Choose the EFS Standard storage class for frequently accessed media; optionally enable lifecycle management to transition colder content to EFS Infrequent Access. For security, use security groups, IAM authorization for EFS, and encryption at rest/in transit.

Common Misconceptions: S3 is highly durable but is object storage, not a POSIX/NFS file system; moving to S3 typically requires application/workflow changes (different APIs, semantics, and tooling). Storage Gateway File Gateway is primarily for hybrid/on-premises caching and does not make EC2-hosted NFS servers “highly available” by itself; it also introduces an extra layer and is not the simplest fit for in-AWS shared storage. FSx for Lustre is a high-performance parallel file system, but it is not the default choice for “keep NFS workflow” and “high availability/durability” without additional complexity; also, “cross-AZ replication” is not a standard configuration knob in the way implied.

Exam Tips: When you see “Linux shared file storage,” “NFS,” “multiple clients,” and “no workflow changes,” default to EFS. Use FSx for Lustre when the question emphasizes HPC/throughput/parallel I/O and tight integration with S3 for scratch or compute-heavy pipelines. Use S3 when object storage semantics are acceptable. Use Storage Gateway primarily for hybrid connectivity to AWS storage from on-premises environments.

Question 4

A healthcare technology startup is developing a telemedicine platform that handles sensitive patient medical records and prescription data. The platform allows patients to schedule appointments, upload medical documents, and receive electronic prescriptions from licensed physicians. The company must comply with HIPAA regulations and ensure that sensitive patient data remains protected even from system administrators and database operators. The platform needs to support real-time patient consultations with 99.9% uptime while maintaining strict data confidentiality. Which solution meets these requirements?

Option A (Incorrect): EFS encryption at rest/in transit and security groups provide strong protection against network interception and storage media exposure, but they do not ensure confidentiality from privileged users who can mount the file system or manage instances with access. Administrators with OS-level access could still read files in plaintext once mounted. Also, EFS is a file system, not ideal as the primary system of record for structured medical/prescription data requiring query/transaction semantics.

Option B (Correct): Amazon RDS for PostgreSQL is the strongest choice because healthcare records and prescription workflows are typically relational and transactional. The decisive security control is application-side encryption of sensitive fields before they are stored, with AWS KMS used for key management, so the database stores ciphertext rather than plaintext for protected PHI. This approach better satisfies the requirement to keep data confidential even from database operators and many administrators. To meet the uptime target in practice, the solution should be deployed with high-availability features such as Multi-AZ, although that detail is an architectural addition rather than something explicitly stated in the option.

Option C (Incorrect): DynamoDB SSE-KMS encrypts data at rest, but DynamoDB decrypts data for authorized reads. This typically does not meet the requirement to keep data confidential from administrators/operators with legitimate access because access to the table (or roles with read permissions) can still retrieve plaintext. IAM restrictions help, but the question explicitly requires protection even from admins, which usually implies client-side/field-level encryption rather than only server-side encryption.

Option D (Incorrect): FSx for Lustre is optimized for high-performance compute workloads (e.g., HPC, ML, media processing) and is not a typical choice for storing and querying patient records/prescriptions. Encryption and AD integration address at-rest protection and authentication, but do not inherently prevent privileged administrators from accessing plaintext once they have filesystem access. It also doesn’t directly address the need for database-like access patterns and fine-grained field confidentiality.

Question Analysis

Core Concept: This question tests “data confidentiality from privileged users” (including system administrators/DB operators) in a HIPAA-regulated workload. The key concept is that server-side encryption (at rest/in transit) protects against media loss and some infrastructure threats, but does not inherently prevent authorized administrators of the service/account from accessing plaintext. To keep data confidential even from operators, you typically need application/client-side encryption (often field-level) with strict key access controls.

Why the Answer is Correct: Option B uses Amazon RDS for PostgreSQL for a relational medical-record/prescription system and applies AWS KMS-backed client-side encryption to sensitive fields before they are stored. Because encryption happens in the application (or a trusted client), the database only ever sees ciphertext for protected columns. This design meaningfully reduces the risk that DBAs, RDS administrators, or anyone with SQL-level access can read sensitive PHI, aligning with the requirement that data remains protected even from system administrators and database operators.

Key AWS Features:
- Client-side/field-level encryption: encrypt PHI in the app layer; store ciphertext in RDS.
- AWS KMS CMKs: centralized key management, rotation, audit via AWS CloudTrail, and fine-grained key policies.
- RDS Multi-AZ: supports the 99.9% uptime requirement for the database tier (and is a common exam cue for HA).
- HIPAA alignment: use a HIPAA-eligible service (RDS is eligible) and execute a BAA with AWS; enforce least privilege via IAM and database roles.

Common Misconceptions: Many choose “server-side encryption with KMS” (like DynamoDB SSE-KMS) thinking it blocks admins. It generally does not—data is decrypted for authorized reads. Network controls (security groups) also don’t solve insider/privileged access once someone has legitimate access paths.

Exam Tips: When a question says “protect data even from administrators/operators,” look for client-side encryption, envelope encryption, field-level encryption, or patterns where the service never receives plaintext. Pair that with HA requirements (e.g., Multi-AZ) and compliance basics (BAA, logging, least privilege).
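The field-level envelope pattern can be sketched as follows. The toy SHA-256 keystream below is purely an illustration of the data flow and is NOT real cryptography; production code would use AES-GCM with a data key from KMS GenerateDataKey (e.g., via the AWS Encryption SDK). The field names and record shape are assumptions. The point is only that the database row stores ciphertext:

```python
import hashlib
import json
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- ILLUSTRATION ONLY, not a
    # vetted cipher. Real code would use AES-GCM via the AWS Encryption
    # SDK with a KMS-generated data key.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_field(plaintext: str, data_key: bytes) -> dict:
    nonce = os.urandom(12)
    pt = plaintext.encode()
    ct = bytes(a ^ b for a, b in zip(pt, _keystream(data_key, nonce, len(pt))))
    # Only ciphertext and the nonce reach the database row.
    return {"nonce": nonce.hex(), "ciphertext": ct.hex()}

def decrypt_field(record: dict, data_key: bytes) -> str:
    nonce = bytes.fromhex(record["nonce"])
    ct = bytes.fromhex(record["ciphertext"])
    return bytes(a ^ b for a, b in zip(ct, _keystream(data_key, nonce, len(ct)))).decode()

# data_key stands in for the plaintext half of a KMS data key; the
# application holds it briefly, while the DB tier never sees it.
data_key = os.urandom(32)
row = {"patient_id": "p-100", "diagnosis": encrypt_field("hypertension", data_key)}
assert "hypertension" not in json.dumps(row)   # DBAs querying the row see only ciphertext
assert decrypt_field(row["diagnosis"], data_key) == "hypertension"
```

A DBA running SQL against this table retrieves hex ciphertext; without a KMS key policy granting Decrypt, the plaintext is unrecoverable, which is the property server-side encryption alone cannot give.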

Question 5

A consulting firm uses AWS Organizations to manage multiple AWS accounts for its clients. The firm needs to audit IAM user permissions to identify over-privileged users. The solution must review all IAM permissions with minimal administrative overhead. Which solution will meet these requirements with the LEAST administrative overhead?

Incorrect: Network Access Analyzer evaluates network reachability inside Amazon VPC environments. It analyzes paths based on constructs such as route tables, security groups, network ACLs, and gateways. It does not inspect IAM identity-based permissions or determine whether an IAM user has excessive privileges. As a result, it is unrelated to the requirement to audit IAM user permissions.

Incorrect: A CloudWatch alarm can notify administrators when certain events occur, typically by using CloudTrail logs and metric filters. However, that approach only shows actions that users actually perform, not the full set of permissions they have been granted. A user can still be over-privileged even if they never exercise those permissions. This option also creates more operational overhead because alarms, filters, and event patterns must be designed and maintained.

Correct: AWS IAM Access Analyzer is the best answer among the options because it is the AWS-native service for analyzing IAM and resource policies and can work across AWS Organizations. It helps identify broad or unintended access and supports policy validation, which is useful when auditing permissions at scale. Because it is fully managed, it requires far less administrative effort than building custom cross-account review processes. Although other IAM features such as last accessed information may also help with over-privilege analysis, they are not offered as choices here.

Incorrect: Amazon Inspector is a vulnerability management service for workloads such as Amazon EC2, container images in Amazon ECR, and AWS Lambda functions. It does not review IAM user permissions to determine whether identities are over-privileged. Inspector findings focus on software vulnerabilities and exposure issues rather than IAM authorization design. Therefore it does not satisfy the requirement to audit IAM permissions across multiple accounts.

Question Analysis

Core Concept: This question tests how to audit and continuously review IAM permissions across multiple AWS accounts in an AWS Organizations environment with minimal operational effort. The key service is IAM Access Analyzer (part of IAM), which helps identify unintended or overly broad access by analyzing policies and access paths.

Why the Answer is Correct: AWS IAM Access Analyzer is designed to help you understand who has access to what, and to identify overly permissive access. For auditing over-privileged IAM users, the most relevant capability is Access Analyzer’s policy analysis and findings (including checks for broad principals, external access, and guidance for policy refinement) and its integration with AWS Organizations for multi-account visibility. It is a managed capability that reduces the need to build custom scripts, aggregate logs manually, or maintain bespoke tooling—therefore meeting the “LEAST administrative overhead” requirement.

Key AWS Features:
- Organization-wide analyzers: You can create an analyzer that covers an entire organization (or specific accounts), centralizing findings.
- Policy validation and analysis: Access Analyzer can analyze IAM policies (identity-based and resource-based) to highlight risky patterns and help refine permissions.
- Continuous monitoring: Findings update as policies change, supporting ongoing audits rather than one-time reviews.
- Security best practice alignment: Supports least privilege by identifying broad access and helping tighten policies.

Common Misconceptions: A common trap is confusing Network Access Analyzer with IAM permission auditing. Network Access Analyzer focuses on network reachability (VPC routing, security groups, NACLs) rather than IAM authorization. Another misconception is thinking CloudWatch alarms on activity equate to permission audits; activity monitoring does not reveal whether permissions are excessive. Amazon Inspector is for vulnerability management (EC2, ECR, Lambda) and does not audit IAM privilege scope.

Exam Tips: When you see “audit IAM permissions,” “over-privileged users,” and “minimal overhead,” look for managed IAM governance tools: IAM Access Analyzer (and in broader contexts, IAM Access Advisor/last accessed info, IAM Identity Center, or AWS Config rules). Distinguish between identity authorization (IAM) and network reachability (VPC/network analyzers). Choose services that natively support AWS Organizations for multi-account scale.
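As a rough illustration of what "identifying overly broad access" means in practice, a drastically simplified local check for wildcard grants in an IAM policy document might look like the sketch below. Access Analyzer performs far deeper, managed, organization-wide analysis; the function name and sample policy here are hypothetical:

```python
import json

def find_broad_statements(policy_doc: dict):
    """Flag Allow statements granting wildcard actions or resources.

    A simplified stand-in for the kind of analysis IAM Access Analyzer
    performs as a managed service across an AWS Organization.
    """
    findings = []
    statements = policy_doc.get("Statement", [])
    if isinstance(statements, dict):      # single-statement policy form
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings

policy = json.loads("""{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "ReadOnly", "Effect": "Allow",
     "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::reports/*"},
    {"Sid": "TooBroad", "Effect": "Allow",
     "Action": "s3:*", "Resource": "*"}
  ]
}""")
print(find_broad_statements(policy))  # → ['TooBroad']
```

Maintaining and running such scripts across every account is exactly the administrative overhead the question asks you to avoid, which is why the managed, Organizations-integrated service is the better answer.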


Question 6

A global media company 'WorldNews' has reporters located all over the world who need to send live broadcasts to a broadcast system hosted on AWS in the us-east-1 Region. The reporters use software on their mobile phones to send live streams through the Real-Time Messaging Protocol (RTMP). The quality of the streams is inconsistent due to network latency and congestion from various geographical locations. A solutions architect must design a solution that provides the reporters with the ability to send the highest quality streams by accelerating their TCP connections back to the broadcast system, regardless of their location. What should the solutions architect use to meet these requirements?

Correct: AWS Global Accelerator is designed to improve performance for TCP/UDP applications with global users by providing static anycast IPs and routing traffic to the nearest edge location, then across the AWS global backbone to the optimal endpoint. This reduces latency and mitigates internet congestion effects, which directly improves RTMP (TCP) stream contribution quality into us-east-1.

Incorrect: Amazon CloudFront primarily accelerates HTTP/HTTPS delivery and uses caching at edge locations to reduce origin load and latency for content distribution. While CloudFront can support some streaming delivery patterns, it is not the right fit for accelerating arbitrary TCP-based ingestion like RTMP from reporters into a regional broadcast system. The requirement is TCP acceleration, not CDN caching.

Incorrect: AWS Client VPN provides secure remote access into a VPC over VPN tunnels. It is intended for private connectivity and access control, not for optimizing performance of public, latency-sensitive media contribution traffic. Adding VPN encapsulation can add overhead and does not provide the global anycast ingress and backbone routing benefits needed to reduce congestion and improve RTMP stream quality.

Incorrect: Using EC2 instances with Elastic IP addresses can host the broadcast ingestion endpoint, but it does not inherently solve global latency and congestion. Reporters would still traverse variable public internet routes to reach us-east-1. Without a service like Global Accelerator (or a specialized global ingest architecture), there is no anycast entry point, no intelligent edge routing, and no managed acceleration across the AWS backbone.

Question Analysis

Core Concept: This question tests global network performance optimization for latency-sensitive, TCP-based ingestion (RTMP) into a single AWS Region. The key service is AWS Global Accelerator, which improves end-user to AWS application performance by routing traffic onto the AWS global network as quickly as possible.

Why the Answer is Correct: Reporters are globally distributed and pushing live RTMP streams to a broadcast system in us-east-1. RTMP typically runs over TCP, and long-haul internet paths can introduce latency, jitter, and packet loss, degrading throughput and stream quality. AWS Global Accelerator provides static anycast IP addresses that accept TCP/UDP traffic at the nearest AWS edge location, then carry it over the AWS backbone to the optimal regional endpoint (e.g., NLB/ALB/EC2/EIP). This “fast entry” to AWS plus intelligent routing around congestion directly addresses inconsistent quality caused by internet variability.

Key AWS Features: Global Accelerator uses anycast IPs, health checks, and traffic dials to route to healthy endpoints and can fail over across endpoints/Regions if designed that way. For TCP acceleration, it reduces the distance traffic travels on the public internet and leverages AWS’s managed edge network. It supports Network Load Balancers (common for TCP pass-through), Application Load Balancers, EC2 instances, and Elastic IPs as endpoints. This aligns with AWS Well-Architected Performance Efficiency principles: minimize latency and use managed global networking.

Common Misconceptions: CloudFront is often chosen for “edge performance,” but it is primarily a CDN for HTTP(S) and cacheable content; it is not designed to accelerate arbitrary TCP ingestion like RTMP to an origin. Client VPN is for private access to VPC resources, not for optimizing global media contribution workflows. Using EC2 with Elastic IPs alone does not provide global anycast ingress or backbone routing optimization; it still relies on standard internet paths.

Exam Tips: When you see “accelerate TCP/UDP connections from global users to AWS endpoints” and “regardless of location,” think AWS Global Accelerator. When you see “cache static/dynamic web content at edge,” think CloudFront. For live video contribution protocols over TCP (RTMP) needing consistent global ingress performance, Global Accelerator in front of an NLB is a common pattern.

Question 7 (Select 3)

A healthcare technology company needs to process medical imaging data using Amazon Elastic Kubernetes Service (Amazon EKS) clusters and Amazon Aurora databases. Due to strict HIPAA compliance requirements, the processing must occur within their private medical facility. The company plans to implement AWS Outposts to maintain compliance while leveraging AWS services. The solutions architect is collaborating with the company's IT infrastructure team to deploy the medical imaging solution. The team needs to understand their operational responsibilities for the AWS Outposts deployment. Which operational responsibilities belong to the company's IT infrastructure team for the AWS Outposts implementation? (Choose three.)

A. Correct. With AWS Outposts, the customer is responsible for site readiness and ongoing facility operations, including stable power and reliable network connectivity (uplinks) from the Outposts rack to the on-prem network and the AWS Region. If power or network is unstable, services on Outposts (including EKS worker nodes and Aurora on Outposts) can be impacted regardless of AWS-managed hardware.

B. Incorrect. AWS is responsible for managing the Outposts infrastructure, including the underlying virtualization layer and the physical/managed infrastructure that enables AWS services to run on Outposts. Customers manage their workloads (Kubernetes objects, database usage, IAM configuration), but they do not administer the Outposts virtualization layer or the underlying storage infrastructure as they would in a self-managed virtualization stack.

C. Correct. Physical security of the on-premises environment is the customer’s responsibility: controlling facility access, implementing badges/locks, surveillance, and ensuring only authorized personnel can access the Outposts racks. This aligns with the AWS shared responsibility model and is particularly important for HIPAA compliance, where physical safeguards are required for systems handling ePHI.

D. Incorrect. AWS is responsible for the availability and integrity of the Outposts rack infrastructure components (compute servers, networking hardware inside the rack, and power units within the rack) as part of the managed service. While the customer must supply external power and network connectivity, AWS owns and manages the Outposts hardware inventory and its operational health.

E. Incorrect. Physical hardware maintenance and component replacement for Outposts equipment is performed by AWS (break/fix, replacements, and hardware servicing). The customer should not be expected to open the rack to replace components. The customer’s role is to provide appropriate access for AWS personnel (or approved processes) and maintain the facility conditions required for safe operation.

F. Correct. Customers must provision and manage sufficient capacity for their workloads on Outposts to handle failures and maintenance. For Amazon EKS on Outposts, this means designing node group capacity headroom (e.g., N+1), using appropriate scaling policies, and planning for disruption events so the cluster can continue scheduling critical pods during hardware failures or scheduled maintenance windows.

Question Analysis

Core Concept: This question tests the shared responsibility model for AWS Outposts. Outposts extends AWS infrastructure and services on-premises, but operational ownership is split: AWS owns and operates the Outposts rack hardware and the AWS-managed service control plane components, while the customer owns the on-premises facility requirements (power, space, cooling, physical security) and must architect workloads for resilience within the available Outposts capacity.

Why the Answer is Correct: A is correct because the customer must provide and maintain adequate, stable power and network connectivity (uplinks) for the Outposts rack. Without reliable power and network, the Outpost cannot function or connect to the AWS Region for service management. C is correct because physical security of the facility (access controls, surveillance, restricted areas) is the customer’s responsibility. This is especially critical for HIPAA workloads where safeguarding systems that process ePHI includes controlling physical access to infrastructure. F is correct because customers are responsible for workload-level resilience on Outposts. For Amazon EKS on Outposts, the customer must plan capacity headroom and node group sizing to tolerate failures and maintenance events (e.g., N+1 capacity), since Outposts is a finite on-prem pool and AWS cannot automatically “burst” to local hardware that doesn’t exist.

Key AWS Features / Best Practices: Outposts requires customer-provided power, cooling, rack space, and network backhaul to the Region. AWS monitors and maintains the Outposts hardware, but customers must design Kubernetes cluster capacity, pod disruption budgets, multi-node group strategies, and scaling policies appropriate to the constrained on-prem footprint. For compliance, customers align facility controls with HIPAA administrative/physical safeguards while leveraging AWS service controls and logging.

Common Misconceptions: Many assume AWS manages everything “like a Region.” In reality, AWS manages the Outposts rack hardware and underlying infrastructure, but not the customer’s facility, nor the customer’s application capacity planning. Another trap is thinking the customer replaces failed components; AWS performs hardware break/fix for Outposts.

Exam Tips: For Outposts responsibility questions, remember: Customer = site readiness (power/network/space/cooling), physical security, and workload architecture/capacity planning. AWS = delivery/installation, hardware ownership, hardware maintenance, and management of the Outposts infrastructure layer and supported AWS service components.

Question 8

A video streaming platform operates a fleet of Amazon EC2 instances distributed across multiple Availability Zones to handle video transcoding workloads. The instances are managed by an Amazon EC2 Auto Scaling group and receive traffic through a Network Load Balancer. Performance testing shows that the video transcoding quality and processing speed are optimal when EC2 instances maintain CPU utilization between 35-45%, with 42% being the ideal target. What should a solutions architect recommend to automatically maintain the optimal CPU utilization across all instances in the Auto Scaling group?

Step scaling uses CloudWatch alarms and scaling steps (e.g., add N instances when CPU crosses a threshold). It can be configured to keep CPU within a band, but it requires multiple alarms/steps and careful tuning. It is generally less straightforward than target tracking for maintaining a single ideal set point like 42%, and it increases operational complexity and risk of oscillation.

Target tracking scaling is purpose-built to maintain a metric at a specified target value. Setting the ASG average CPU utilization target to 42% will automatically scale out/in to keep CPU near that level, with Auto Scaling managing the underlying CloudWatch alarms. This directly matches the requirement to maintain optimal CPU utilization automatically across all instances with minimal configuration.

A Lambda function that polls CloudWatch and adjusts desired capacity is a custom solution that duplicates native Auto Scaling functionality. It adds operational overhead, potential errors, and scaling lag (especially with a 5-minute interval), which can hurt performance for variable transcoding demand. Unless there is a unique metric/logic that Auto Scaling cannot support, this is not the recommended approach.

Scheduled scaling adjusts capacity based on time-based patterns (peak/off-peak). It does not react to real-time CPU utilization changes and cannot guarantee maintaining 35-45% CPU or the 42% target when demand deviates from the schedule. It can complement dynamic scaling, but alone it does not meet the requirement to automatically maintain optimal CPU utilization.

Question Analysis

Core Concept: The question tests Amazon EC2 Auto Scaling dynamic scaling, specifically the difference between step scaling and target tracking scaling. It also touches on using Amazon CloudWatch metrics (CPUUtilization) to keep a fleet at a desired operating point.

Why the Answer is Correct: A target tracking scaling policy is designed to maintain a metric at a specified target value, similar to a thermostat. Because performance testing identified an ideal CPU utilization target (42%) and an acceptable band (35-45%), target tracking is the most direct and AWS-recommended approach. Auto Scaling continuously adjusts desired capacity to keep average ASG CPUUtilization near 42% across instances, automatically scaling out when utilization rises and scaling in when it falls.

Key AWS Features: Target tracking scaling uses predefined or custom metrics; here, the predefined ASG metric Average CPU utilization is appropriate. It automatically creates and manages the underlying CloudWatch alarms, reducing operational overhead and misconfiguration risk. It also supports scale-in behavior controls (e.g., scale-in cooldown/instance warmup) to avoid thrashing, which is important for transcoding workloads that may have startup time before contributing capacity. This aligns with AWS Well-Architected Performance Efficiency principles: use managed services and automation to maintain performance targets.

Common Misconceptions: Step scaling (Option A) can approximate a range using multiple alarms and steps, but it is more complex and less precise for maintaining a single ideal target; it reacts to thresholds rather than continuously steering toward 42%. A Lambda-based controller (Option C) is a custom reinvention of built-in Auto Scaling capabilities and introduces latency, failure modes, and maintenance burden. Scheduled scaling (Option D) is useful for predictable demand patterns, but it cannot maintain a specific CPU target in real time when load is variable.

Exam Tips: When a question states a specific desired metric value (e.g., “keep CPU at 42%”), choose target tracking scaling. When it describes discrete thresholds and different scaling increments (e.g., “add 2 instances if CPU > 70%”), step scaling is more likely. Prefer native Auto Scaling policies over custom Lambda controllers unless there is a clear requirement that built-in policies cannot meet.
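The target tracking approach described above can be sketched as a request to the Auto Scaling API. This is a minimal, illustrative sketch: the group and policy names (`my-transcoding-asg`, `cpu-target-42`) are hypothetical placeholders, and in practice the request dict would be passed to the boto3 Auto Scaling client's `put_scaling_policy` call.

```python
# Sketch of a target tracking scaling policy for the transcoding fleet.
# Names below (my-transcoding-asg, cpu-target-42) are placeholders, not
# values from the question.
target_tracking_config = {
    "PredefinedMetricSpecification": {
        # Average CPU across all instances in the Auto Scaling group
        "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 42.0,       # the ideal set point from performance testing
    "DisableScaleIn": False,   # allow scale-in so quiet periods shrink the fleet
}

scaling_policy_request = {
    "AutoScalingGroupName": "my-transcoding-asg",  # placeholder ASG name
    "PolicyName": "cpu-target-42",                 # placeholder policy name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": target_tracking_config,
}
# In practice: boto3.client("autoscaling").put_scaling_policy(**scaling_policy_request)
```

Note that with target tracking you specify only the target value; Auto Scaling creates and manages the CloudWatch alarms itself, which is exactly the low-operational-overhead property the explanation highlights.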

Question 9

A biotechnology research institute needs to implement high-performance computing (HPC) infrastructure on AWS for genomic sequencing analysis. The institute's HPC workloads operate on Linux-based systems. Each genomic analysis workflow utilizes hundreds of Amazon EC2 Spot Instances, completes within 2-6 hours, and produces thousands of sequence data files that must be stored in persistent storage for collaborative research and regulatory compliance over 10+ years. The institute requires a cloud storage solution that enables transfer of existing genomic datasets from on-premises storage to long-term persistent cloud storage, making data accessible for processing by all EC2 instances. The solution must provide a high-performance file system integrated with persistent storage for reading reference genomes and writing analysis output files with low latency. Which combination of AWS services meets these requirements?

FSx for Lustre is designed for Linux HPC and provides a high-performance parallel POSIX file system that can be mounted by many EC2 instances simultaneously. Its native integration with S3 allows importing existing genomic datasets for fast processing and exporting outputs back to S3 for durable, long-term retention. This matches the need for low-latency shared access during 2–6 hour jobs and persistent storage for 10+ years.

FSx for Windows File Server provides SMB shares optimized for Windows environments and Active Directory integration. The workload is explicitly Linux-based HPC with parallel I/O requirements, where SMB/Windows semantics are not the best fit. While Windows FSx can integrate with some backup/archival patterns, it is not the standard solution for HPC-scale throughput across hundreds of Linux Spot Instances.

S3 Glacier is an archival storage class intended for infrequent access with retrieval delays, not for active, low-latency reads/writes during HPC processing. EBS provides block storage typically attached to a single EC2 instance and does not provide a shared file system for hundreds of nodes. This combination fails both the high-performance shared file system requirement and the active processing access pattern.

An S3 bucket with a VPC endpoint improves private connectivity to S3, but S3 is object storage and does not provide POSIX file system semantics or low-latency shared file access required by many HPC applications. EBS gp2 is block storage for individual instances and cannot be used as a scalable shared file system across hundreds of Spot Instances. This option does not meet the HPC file system requirement.

Question Analysis

Core Concept: This question tests selecting an HPC-optimized shared file system for Linux compute fleets that also provides durable, long-term persistence for large datasets. In AWS, the common pattern is a high-performance parallel file system (for low-latency, high-throughput POSIX access) integrated with an object store for durable, cost-effective retention.

Why the Answer is Correct: Amazon FSx for Lustre is purpose-built for HPC on Linux and supports a parallel file system that can scale throughput to serve hundreds of EC2 instances concurrently—ideal for genomic workflows that fan out across many Spot Instances. FSx for Lustre integrates natively with Amazon S3: you can import existing datasets from S3 into the file system namespace for fast processing (e.g., reference genomes) and export results back to S3 for persistent storage. This matches the requirement to transfer on-prem datasets to long-term cloud storage (S3) while providing a high-performance shared file system for reads/writes during the 2–6 hour jobs.

Key AWS Features:
- FSx for Lustre provides POSIX semantics, low latency, and very high aggregate throughput/IOPS for parallel workloads.
- S3 integration supports data repository associations (linking an S3 bucket/prefix to the FSx file system) enabling import/export workflows and keeping S3 as the durable system of record.
- S3 supports lifecycle policies to transition data to S3 Glacier/Deep Archive for 10+ year retention and compliance, while keeping metadata and access controls centralized.
- This architecture aligns with ephemeral compute (Spot Instances) + persistent storage best practice: compute can be interrupted, but data remains durable in S3.

Common Misconceptions: A frequent trap is choosing EBS for shared storage. EBS is block storage attached to a single instance (with limited multi-attach scenarios) and is not a scalable shared file system for hundreds of nodes. Another trap is focusing only on archival (Glacier) without providing a high-performance POSIX file system for active processing.

Exam Tips: When you see “HPC”, “Linux”, “hundreds of instances”, “low-latency shared file system”, think FSx for Lustre. When you also see “long-term retention” and “data lake/object storage”, pair it with S3 (and optionally lifecycle to Glacier classes). Windows FSx is for SMB/Windows workloads, not Linux HPC parallel I/O.
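The long-term retention side of this pattern can be sketched as an S3 lifecycle configuration: analysis output exported from FSx for Lustre to S3 transitions to colder storage classes as it ages. This is an illustrative sketch only; the rule ID and the `analysis-output/` prefix are hypothetical, and in practice the dict would be passed to the boto3 S3 client's `put_bucket_lifecycle_configuration` call.

```python
# Sketch of an S3 lifecycle configuration for 10+ year retention of
# genomic analysis output. Rule name and prefix are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "genomics-long-term-retention",      # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "analysis-output/"},  # hypothetical prefix
            "Transitions": [
                # Recent results stay in S3 Standard for active collaboration,
                # then move to cheaper classes as access frequency drops.
                {"Days": 90, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}
# In practice:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-genomics-bucket", LifecycleConfiguration=lifecycle_config)
```

The key design point is that S3 remains the durable system of record for the full retention period, while FSx for Lustre is a high-performance cache/working layer that exists only for the duration of active processing.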

Question 10

A game development studio, GameCraft, uses two AWS accounts: a 'Sandbox' account for development and a 'Production' account for live games. The studio needs to deploy new game updates from the Sandbox account to the Production account. Initially, only a few senior engineers in the development team need access to the Production account. Over time, more engineers will require temporary access for testing. Which solution provides secure and scalable cross-account access for the engineers?

Creating IAM users in both accounts scales poorly and increases security risk. It duplicates identities, requires managing long-term credentials in Production, and complicates offboarding and rotation. It also violates best practices that favor federation/SSO and role assumption for cross-account access. While it can work functionally, it is not the secure, scalable approach expected on AWS exams.

A role in the Sandbox account cannot directly “grant itself” permissions in the Production account. Permissions are evaluated in the account that owns the resources, so Production must either trust a principal from Sandbox (via a role in Production) or use resource-based policies where applicable. This option misunderstands the cross-account pattern: the role should be created in the target (Production) account.

This is the standard AWS cross-account access pattern. Create the IAM role in the Production account, attach least-privilege permissions for deployment/testing, and set a trust policy to allow the Sandbox account (ideally a specific Sandbox role) to assume it. Then grant Sandbox engineers sts:AssumeRole to that Production role. This provides temporary credentials, centralized control in Production, and easy scaling as more engineers need access.

IAM groups are not principals that can be trusted across accounts, and you cannot directly attach “permissions to the Production account” from a Sandbox group in a way that grants access to Production resources. Cross-account access requires either a role in the target account with a trust policy or resource-based policies on specific services. This option reflects a common misunderstanding of IAM groups’ scope (account-local only).

Question Analysis

Core Concept: This question tests secure cross-account access using AWS IAM roles and STS (AssumeRole). The best practice is to avoid long-term credentials in the target account and instead use temporary credentials via role assumption.

Why the Answer is Correct: Option C creates an IAM role in the Production account (the resource-owning/target account) and configures the role trust policy to trust the Sandbox account as a principal. Developers authenticate in the Sandbox account (ideally via IAM Identity Center or federated SSO), then assume the Production role to obtain short-lived credentials. This is secure (no shared passwords/keys), scalable (add/remove engineers by changing who can assume the role in Sandbox), and supports temporary access for testing by controlling session duration and permissions.

Key AWS Features:
1) Role trust policy (in Production): Allows sts:AssumeRole from the Sandbox account (and optionally restricts to specific principals/roles, external ID, or conditions like MFA).
2) Permission policy (in Production role): Grants least-privilege permissions needed for deployments/testing.
3) Caller permissions (in Sandbox): Developers (or a Sandbox “Deployer” role/group) get iam:PassRole only if needed and sts:AssumeRole permission for the Production role ARN.
4) Temporary credentials: STS issues time-bound credentials; CloudTrail logs AssumeRole events in both accounts for auditability.

Common Misconceptions: Many assume permissions can be “granted across accounts” directly to users/groups (option D) or that the role should live in the source account (option B). In AWS, access to resources in another account is typically achieved by a role in the target account that trusts the source account, not by attaching cross-account permissions to a group alone.

Exam Tips: For cross-account human access, look for “IAM role in the target account + trust policy + sts:AssumeRole.” Prefer roles over creating IAM users in multiple accounts. Remember: the trust policy answers “who can assume,” while the permission policy answers “what they can do.” Add conditions (MFA, session tags, source identity) for stronger security and scalable governance.

Domain: This maps to Design Secure Architectures because it focuses on least privilege, temporary credentials, and secure cross-account access patterns.
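The two policy documents behind this pattern can be sketched as JSON. Everything here is a hypothetical illustration: the account IDs (111111111111 for Production, 222222222222 for Sandbox) and role names (`SandboxDeployer`, `ProdDeploymentRole`) are placeholders, not values from the question.

```python
import json

# 1) Trust policy on the role created *in the Production account*.
#    It answers "who can assume this role": here, a specific Sandbox role,
#    with an MFA condition for stronger security. All IDs/names are placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:role/SandboxDeployer"},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

# 2) Policy attached to the Sandbox engineers' role/group.
#    It answers "which Production role may they assume".
sandbox_assume_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::111111111111:role/ProdDeploymentRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Scaling access later means only granting more Sandbox principals permission to assume the Production role (or widening the trust policy's principal), rather than creating and managing additional long-term credentials in Production.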

Success Stories (30)

C*********, Mar 23, 2026

Study period: 1 week

Understanding the requirements precisely is key (this is the most important thing; that training matters most). I kept a wrong-answer notebook and made sure I had just 200 questions down cold before the exam. The actual exam passages were much simpler, and the difficulty felt similar to or even lower than the app. I honestly felt like I had failed, so I'm glad I passed. It was a huge help, thank you!

소**, Feb 22, 2026

Study period: 1 week

I just solved questions and asked GPT about the concepts as I went. Barely passed with 768 points.

조**, Jan 12, 2026

Study period: 3 months

I just studied steadily, solved questions, and passed. Good luck to everyone preparing for the SAA!!

김**, Dec 9, 2025

Study period: 1 month

I lost count of how many questions I solved in the app in just 4 days, but after a month of working from AWS fundamentals and walking through scenarios with the practice questions, I passed. The exam was more confusing than I expected and I panicked at first, but with the extra 30 minutes I rechecked the questions I had flagged and it worked out fine.

L*************, Nov 26, 2025

Study period: 3 months

I passed the AWS SAA with a score of 850/1000. Honestly, the exam wasn’t easy, but solving the actual exam–style questions in Cloud Pass helped me understand the reasoning behind each service. The explanations were super helpful and made the concepts stick. I don’t think I could’ve scored this high without the practice here.


© Copyright 2026 Cloud Pass, All rights reserved.
