AWS Certified DevOps Engineer - Professional (DOP-C02)

374+ Practice Questions with AI-Verified Answers


AI-Powered

Triple AI-Verified Answers & Explanations

Every AWS Certified DevOps Engineer - Professional (DOP-C02) answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

Models: GPT Pro, Claude Opus, Gemini Pro
Includes: per-option explanations, in-depth question analysis, 3-model consensus accuracy

Exam Domains

SDLC Automation: 22%
Configuration Management and IaC: 17%
Resilient Cloud Solutions: 15%
Monitoring and Logging: 15%
Incident and Event Response: 14%
Security and Compliance: 17%

Practice Questions

Question 1

A company operates 10 Amazon OpenSearch Service domains with Auto-Tune enabled across 2 AWS Regions and needs to visualize every Auto-Tune action (for example, memory or shard rebalancing adjustments) on an Amazon CloudWatch dashboard at 1-minute resolution for the last 24 hours. Which solution will meet this requirement?

Correct. OpenSearch Auto-Tune emits events that EventBridge can match. Lambda can parse each event and publish a CloudWatch custom metric (e.g., Count=1) with dimensions (Domain, ActionType, Region). CloudWatch dashboards can graph these metrics at 1-minute resolution over the last 24 hours, providing a clear visualization of every Auto-Tune action across domains/Regions.

Incorrect. CloudTrail management events record control-plane API calls (e.g., CreateDomain, UpdateDomain), not the internal Auto-Tune actions taken by the service. Sending CloudTrail to CloudWatch Logs and using metric filters is useful for auditing API activity, but it won’t reliably capture every Auto-Tune memory/shard adjustment event, so it won’t meet the requirement.

Incorrect. EventBridge can trigger actions, but “changing the status of a CloudWatch alarm” is not how alarms work; alarm state is evaluated from metric data against thresholds. Even if you forced some workflow, an alarm shows state (OK/ALARM/INSUFFICIENT_DATA), not a per-action, 1-minute resolution time series of all Auto-Tune actions for visualization.

Incorrect. CloudTrail data events are for data-plane operations on supported resources (e.g., S3 object-level, Lambda invoke, DynamoDB item-level) and are not applicable to OpenSearch Auto-Tune actions. This option also misapplies CloudTrail for operational tuning events. Therefore, it cannot provide a complete, 1-minute resolution visualization of Auto-Tune actions.

Question Analysis

Core Concept: This question tests event-driven monitoring and near-real-time visualization. Amazon OpenSearch Service Auto-Tune emits events (configuration changes and tuning actions). To visualize “every Auto-Tune action” at 1-minute resolution on a CloudWatch dashboard, you need to convert those discrete events into CloudWatch metrics with 1-minute granularity.

Why the Answer is Correct: EventBridge is the native way to capture OpenSearch Auto-Tune events as they occur. By routing those events to a Lambda function, you can publish a CloudWatch custom metric (PutMetricData) each time an Auto-Tune action happens (optionally with dimensions like DomainName, Region, ActionType). CloudWatch dashboards can graph custom metrics at 1-minute periods and show the last 24 hours. This approach works across 10 domains and 2 Regions by deploying the rule/Lambda per Region (or centralizing via event bus forwarding) and using a multi-Region dashboard.

Key AWS Features:
- Amazon EventBridge rules for service events (Auto-Tune notifications).
- AWS Lambda for lightweight event transformation/enrichment.
- CloudWatch custom metrics with dimensions and 1-minute period visualization.
- CloudWatch dashboards support cross-account/cross-Region widgets (as applicable) and time range selection (last 24 hours).

Common Misconceptions: CloudTrail is for API activity auditing, not operational service events like Auto-Tune actions. Even if you could capture something in logs, metric filters are derived from log ingestion and are not the intended/most reliable mechanism for OpenSearch Auto-Tune event visualization. Also, alarms represent threshold state, not a per-action time series.

Exam Tips: When you see “visualize events/actions” at a specific resolution, think: Event source (EventBridge) → transform (Lambda) → metric (CloudWatch custom metric) → dashboard. Use CloudTrail primarily for governance/audit of API calls, and CloudWatch alarms for threshold-based alerting, not for counting/plotting every discrete operational action.
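The event-to-metric transformation described above can be sketched in a few lines. This is a minimal illustration, not the actual Lambda: the event field paths ("detail", "domainName", "actionType") and the metric namespace are assumptions, so check the real EventBridge event schema for OpenSearch Service notifications before relying on them.

```python
from datetime import datetime, timezone

def build_metric_payload(event, region):
    """Turn one Auto-Tune event into a CloudWatch PutMetricData payload.

    Field names under "detail" are illustrative assumptions, not the
    documented Auto-Tune event schema.
    """
    detail = event.get("detail", {})
    return {
        "Namespace": "Custom/OpenSearchAutoTune",  # hypothetical namespace
        "MetricData": [{
            "MetricName": "AutoTuneAction",
            "Dimensions": [
                {"Name": "Domain", "Value": detail.get("domainName", "unknown")},
                {"Name": "ActionType", "Value": detail.get("actionType", "unknown")},
                {"Name": "Region", "Value": region},
            ],
            "Timestamp": datetime.now(timezone.utc),
            "Value": 1.0,             # Count=1 per Auto-Tune action
            "Unit": "Count",
            "StorageResolution": 60,  # standard 1-minute resolution
        }],
    }

# In the real Lambda handler you would forward the payload:
#   boto3.client("cloudwatch").put_metric_data(**payload)
```

A dashboard widget then graphs Custom/OpenSearchAutoTune with a 1-minute period over the last 24 hours, grouped by the Domain and ActionType dimensions.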

Question 2

A fintech startup stores Terraform modules and unit tests in an AWS CodeCommit repository. An AWS CodePipeline pipeline triggers an AWS CodeBuild project whenever code is merged into the release branch, and the build must run the unit tests within a 12-minute timeout and, only if they pass, create an annotated Git tag named release-${CODEBUILD_BUILD_NUMBER} on the most recent commit and push that tag back to the repository. How should the company configure the CodeBuild project to meet these requirements?

Correct. Creating an annotated tag is a Git operation that requires repository metadata (.git) and a configured remote. Using native Git clone in CodeBuild ensures the build has commit history and can tag the exact commit that was built. After unit tests pass, the build can run git tag -a release-${CODEBUILD_BUILD_NUMBER} and git push to CodeCommit, assuming the CodeBuild role has GitPull/GitPush permissions.

Incorrect. Although cloning with Git is good, using AWS CLI to “create a repository tag” is not the right model for Git annotated tags. CodeCommit does not provide a simple AWS CLI command that creates an annotated Git tag object equivalent to git tag -a. The requirement is specifically an annotated Git tag on the most recent commit, which is best satisfied with native Git commands.

Incorrect. Copying source files with AWS CLI (or CodeBuild’s default source download) typically does not include .git metadata, so you cannot reliably create an annotated tag that references the correct commit object. Additionally, AWS CLI does not directly create annotated Git tag objects in CodeCommit. Without a Git clone, you also risk tagging the wrong commit or being unable to push a tag ref.

Incorrect. This combines two issues: (1) copying source files rather than cloning means no .git metadata, preventing proper Git tagging; and (2) the concept of a “repository tag” via AWS CLI does not meet the requirement for an annotated Git tag object. Even if a ref could be manipulated, it would not produce an annotated tag with a message/signature as required.

Question Analysis

Core Concept: This question tests SDLC automation with CodePipeline/CodeBuild and how CodeBuild checks out source from CodeCommit. The key is understanding the difference between a full Git clone (with .git metadata) and CodeBuild’s default source download behavior, and how to perform Git operations (annotated tags + push) back to CodeCommit.

Why the Answer is Correct: To create an annotated Git tag on “the most recent commit” and push it back, the build environment must have the Git repository metadata and a proper remote configured. CodeBuild can be configured to use Git to clone the repository (instead of only downloading a source ZIP). With a real clone, the build can run unit tests and, only on success, execute git tag -a release-${CODEBUILD_BUILD_NUMBER} -m "..." and git push origin release-${CODEBUILD_BUILD_NUMBER}. This precisely meets the requirement for an annotated Git tag and ensures the tag points to the exact commit that was built.

Key AWS Features / Configurations:
- CodeBuild source credentials/permissions: the CodeBuild service role must allow CodeCommit Git operations (e.g., codecommit:GitPull and codecommit:GitPush) on the repo.
- Authentication: use CodeBuild’s integrated CodeCommit credential helper (git-remote-codecommit) or HTTPS with a credential helper so git push works non-interactively.
- buildspec.yml phases: run tests in the build phase; in post_build (or after the tests) create and push the tag. Ensure the tag step runs only if tests succeed (default behavior: subsequent commands don’t run if a prior command fails, unless overridden).
- Timeout: set the CodeBuild project timeout to 12 minutes (or less) to satisfy the constraint.

Common Misconceptions: Many assume the AWS CLI can “create a Git tag” in CodeCommit. CodeCommit’s API supports repository triggers, approvals, and references, but Git tags are Git objects typically created via Git commands. While you can manipulate refs via APIs in some Git hosting contexts, the exam expectation is: to create annotated tags, you need a Git repo checkout and native Git.

Exam Tips: When a requirement explicitly mentions Git constructs (annotated tags, branches, commit pointers), prefer native Git operations in the build container. Also remember: CodeBuild’s default source handling may not include .git, so any solution requiring git tag/log/describe usually implies enabling a Git clone and ensuring the build role can push back to the repo.
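The “tag only if tests pass” behavior can be illustrated with a small simulation of CodeBuild’s command semantics. This is a sketch, not CodeBuild itself: the runner callback and build number are stand-ins, and the command strings mirror the git tag/push steps described above.

```python
def run_phase(commands, runner):
    """Mimic CodeBuild's default phase behavior: commands run in order and
    the phase stops at the first failing command (nonzero exit), so a
    tagging command placed after the test command never runs when the
    tests fail."""
    executed = []
    for cmd in commands:
        executed.append(cmd)
        if runner(cmd) != 0:       # nonzero exit code = failure
            return executed, False
    return executed, True

build_number = "57"  # stands in for ${CODEBUILD_BUILD_NUMBER}
commands = [
    "python -m pytest tests/",                                       # unit tests
    f'git tag -a release-{build_number} -m "release {build_number}"',  # annotated tag
    f"git push origin release-{build_number}",                         # push tag ref
]
```

With a real Git clone configured as the source, the same three commands in the buildspec give exactly the required behavior: a failing test exits nonzero and the tag/push lines never execute.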

Question 3
(Select 3)

A maritime analytics company is building a telemetry ingestion service for 2,000 cargo vessels; each vessel’s engine control unit emits nine metrics every 150 milliseconds, producing high-volume time-series data. The company must persist these events to Amazon Timestream and run a scheduled daily dashboard query that scans the most recent 24 hours with sub-second latency. Which combination of actions will provide the FASTEST query performance? (Choose three.)

Correct. Batched writes (up to the 100-record maximum per WriteRecords call) reduce request overhead, improve throughput, and lower the chance of throttling for very high-ingest workloads. While batching is primarily an ingest optimization, it helps ensure data arrives quickly and consistently in the memory store, supporting timely, low-latency queries over the most recent 24 hours. This is a common Timestream best practice at scale.

Incorrect. Sending each event as its own WriteRecords request dramatically increases API calls and overhead, and it is more likely to hit service limits or throttling under extreme ingest rates. This can cause ingestion lag, which can negatively affect dashboards that depend on fresh data. Per-record writes are rarely optimal for high-volume telemetry pipelines in Timestream.

Incorrect. Modeling each metric as a separate single-measure record creates nine rows per timestamp per vessel, inflating record count and increasing the amount of data the query engine must scan and aggregate. For a dashboard query scanning 24 hours, this typically increases latency and cost. Single-measure modeling can be useful when measures have different dimensions or sparse timestamps, but not here.

Correct. Multi-measure records store all nine metrics in one record with a shared timestamp and dimensions, reducing row count and improving query efficiency (less scanning, fewer rows to aggregate). This is especially beneficial for high-frequency telemetry where multiple measures are emitted together. It is a key Timestream modeling pattern for performance and cost optimization.

Incorrect. Configuring memory retention longer than magnetic retention is generally the opposite of recommended tiering and can be unnecessarily expensive. It also doesn’t address the core requirement as cleanly as ensuring the 24-hour query window is fully in memory (option F). While more memory retention could keep data hot, the proposed relationship (memory > magnetic) is not the typical or best-practice configuration.

Correct. Keeping the memory store retention window long enough to cover the 24-hour dashboard query (e.g., 48 hours) ensures the query reads from the low-latency memory store rather than the slower magnetic store. Using a much longer magnetic retention (e.g., 180 days) preserves historical data cost-effectively. This configuration directly targets fastest query performance for recent data.

Question Analysis

Core Concept: This question tests Amazon Timestream data modeling and storage-tier configuration for high-ingest time-series workloads with fast, recent-window analytics. Timestream has a memory store (optimized for fast queries on recent data) and a magnetic store (cost-optimized for historical data). Query latency is strongly influenced by whether data is served from memory and by how efficiently records are modeled and ingested.

Why the Answer is Correct: A (batched writes) improves ingestion efficiency and reduces per-request overhead, throttling risk, and write amplification. With 2,000 vessels emitting 9 metrics every 150 ms, the raw event rate is extremely high; batching (up to the 100-record maximum per WriteRecords call) is a best practice to sustain throughput and keep ingestion stable, which indirectly supports query performance by ensuring data lands promptly in the memory store. D (multi-measure records) is a key Timestream optimization: storing all nine metrics as a single multi-measure record (same timestamp and dimensions) reduces row count by ~9x compared to single-measure modeling. Fewer rows and better locality typically yield faster scans and aggregations for dashboard queries. F ensures the 24-hour dashboard query is fully served from the memory store by keeping the memory retention window longer than the query lookback (e.g., 48 hours). The memory store is designed for low-latency queries; if any portion of the 24-hour range spills into the magnetic store, query latency generally increases.

Key AWS Features:
- Timestream memory vs. magnetic store retention policies (hot vs. cold tiers)
- Multi-measure records to reduce record count and improve query efficiency
- Batched WriteRecords to maximize throughput and reduce API overhead

Common Misconceptions: Option E sounds like “more memory is better,” but it reverses the intended tiering (memory retention should be shorter than magnetic). Keeping memory longer than magnetic is atypical, increases cost, and doesn’t inherently improve performance beyond ensuring the query window is in memory (which F already achieves more appropriately). Option C can seem simpler, but it multiplies rows and often slows queries.

Exam Tips: For Timestream performance questions, prioritize: (1) keep the queried time range in the memory store, (2) reduce row/record count with multi-measure modeling when metrics share timestamp/dimensions, and (3) batch writes to sustain ingestion and avoid throttling. Remember: memory retention is usually shorter; magnetic retention is longer for history.
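The multi-measure modeling and batching described above can be sketched as follows. The dict shapes follow the timestream-write WriteRecords request format, but the dimension and measure names are made up for illustration.

```python
def multi_measure_record(vessel_id, ts_ms, metrics):
    """One Timestream record carrying all nine engine metrics with a
    shared timestamp and dimensions (one row instead of nine)."""
    return {
        "Dimensions": [{"Name": "VesselId", "Value": vessel_id}],
        "MeasureName": "engine_metrics",        # illustrative name
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": name, "Value": str(value), "Type": "DOUBLE"}
            for name, value in metrics.items()
        ],
        "Time": str(ts_ms),
        "TimeUnit": "MILLISECONDS",
    }

def batch(records, size=100):
    """Chunk records for batched WriteRecords calls (100 is the API
    maximum per call)."""
    return [records[i:i + size] for i in range(0, len(records), size)]

# In a real ingest worker, each chunk becomes one call:
#   boto3.client("timestream-write").write_records(
#       DatabaseName=..., TableName=..., Records=chunk)
```

Compared with nine single-measure records per reading, this cuts the row count the daily dashboard query must scan by roughly 9x.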

Question 4
(Select 2)

A logistics enterprise operates a multi-OU AWS Organizations setup across two AWS Regions with 68 member accounts and has acquired a fintech startup that uses 11 standalone AWS accounts with separate billing. The platform team must centralize administration under a single management account while retaining break-glass full administrative control across all imported accounts, and must centrally aggregate and group security findings across the entire environment, with new accounts automatically included as they are onboarded. Which combination of actions should the platform team take to meet these requirements with minimal operational overhead? (Choose two.)

Incorrect. Inviting accounts into an organization is right, but the SCP statement is wrong: SCPs cannot grant permissions to the management account (or anyone). SCPs only define the maximum permissions that accounts can use; they do not create IAM roles or allow cross-account access by themselves. Break-glass access requires an assumable IAM role (or equivalent) in each member account.

Correct. After inviting the startup accounts into the organization, creating (or ensuring the existence of) an OrganizationAccountAccessRole in each member account that trusts the management account and has AdministratorAccess provides centralized, break-glass administrative access. This is the standard Organizations pattern for management-account-to-member-account administration with minimal ongoing overhead.

Correct. AWS Security Hub is designed to aggregate, normalize, and group security findings across AWS accounts and Regions. With AWS Organizations integration, you can designate a delegated administrator, enable organization-wide configuration, and automatically enroll new accounts as they join the organization—meeting the requirement for centralized aggregation with automatic inclusion.

Incorrect. AWS Firewall Manager centralizes the administration of firewall-related policies (AWS WAF, Shield Advanced, security groups, Network Firewall, DNS Firewall) across an organization. It is not intended to be the central aggregation and grouping service for security findings across the entire environment. Security Hub is the correct service for findings aggregation.

Incorrect. Amazon Inspector provides vulnerability management findings (e.g., EC2, ECR, Lambda) and can operate across multiple accounts with delegated administration, but it does not serve as the central, cross-service findings aggregation and grouping layer for the whole environment. The requirement is broader and maps to Security Hub’s organization-wide findings aggregation.

Question Analysis

Core Concept: This question tests AWS Organizations account onboarding and centralized security operations. Specifically: (1) how the management account retains emergency (break-glass) administrative access to member accounts, and (2) how to centrally aggregate and automatically include security findings across all accounts using an organization-integrated security service.

Why the Answer is Correct: To centralize administration, the startup’s standalone accounts should be invited into the existing AWS Organization. For break-glass full administrative control, the management account needs a cross-account role in each member account that it can assume with AdministratorAccess. The canonical mechanism is the OrganizationAccountAccessRole (or an equivalent admin role) that trusts the management account; this is exactly what option B describes and is aligned with how Organizations enables centralized access after account creation/invitation. For centralized aggregation and grouping of security findings with automatic inclusion of newly onboarded accounts, AWS Security Hub is the correct service. When integrated with AWS Organizations, Security Hub supports delegated administrator, multi-account/multi-Region aggregation, and auto-enrollment of new organization accounts, minimizing ongoing operational overhead (option C).

Key AWS Features:
- AWS Organizations: account invitation and consolidated governance.
- Cross-account IAM role (OrganizationAccountAccessRole): trusted by the management account; grants AdministratorAccess for emergency access.
- AWS Security Hub + Organizations: delegated admin, organization-wide enablement, automatic account enrollment, and centralized findings aggregation across accounts/Regions.
- Best practice: use a delegated administrator for security tooling to avoid using the management account for day-to-day operations.

Common Misconceptions:
- SCPs do not “grant” permissions; they only set permission guardrails (maximum allowed). Therefore, an SCP cannot by itself create break-glass admin access for the management account.
- Firewall Manager is for centralized firewall policy management (WAF, Shield Advanced, security groups, etc.), not a general security findings aggregator.
- Amazon Inspector produces vulnerability findings, but it is not the broad, cross-service findings aggregation layer requested.

Exam Tips:
- Remember: IAM policies grant permissions; SCPs restrict them.
- For org-wide security findings aggregation with auto-enrollment, think “Security Hub + Organizations + delegated admin.”
- For centralized admin access into member accounts, look for “assume role into member accounts” patterns (OrganizationAccountAccessRole or equivalent).
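The break-glass role’s trust relationship can be sketched as a policy document. This is a minimal sketch: the account ID is a placeholder, and in practice the role would also need the AWS managed AdministratorAccess policy attached and tight monitoring of its use.

```python
def break_glass_trust_policy(management_account_id):
    """Trust policy for an OrganizationAccountAccessRole-style admin role
    in each member account. Only the management account may assume it;
    permissions come from attaching AdministratorAccess to the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::{management_account_id}:root"
            },
            "Action": "sts:AssumeRole",
        }],
    }

# Placeholder management account ID for illustration.
policy = break_glass_trust_policy("111122223333")
```

Accounts created by Organizations get this role automatically; invited accounts (like the startup’s 11) must have it created once, after which the management account can assume it on demand.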

Question 5

A video analytics company manages 28 AWS accounts in a single AWS Organizations organization with all features enabled and uses AWS CloudFormation StackSets for baseline deployments, while AWS Config already monitors general S3 settings. The security team now requires a preventative control that ensures every S3 PutObject across all accounts and Regions uses AWS Key Management Service (AWS KMS) server-side encryption (SSE-KMS) and blocks any noncompliant upload attempt with minimal operational overhead. Which solution will meet these requirements?

Incorrect. AWS Config conformance packs with s3-bucket-server-side-encryption-enabled provide a detective control: they evaluate bucket settings and report compliance. Even with SNS notifications, Config does not prevent or block a noncompliant PutObject request in real time. It also focuses on bucket-level configuration rather than enforcing per-request SSE-KMS usage for every upload attempt across all accounts and Regions.

Incorrect. This SCP targets s3:CreateBucket, which is the wrong API action for the requirement. The company needs to enforce encryption on object uploads (PutObject), not on bucket creation. Additionally, the condition key s3:x-amz-server-side-encryption is relevant to object upload requests, so applying it to CreateBucket would not reliably enforce SSE-KMS for all uploaded objects.

Incorrect. CloudTrail data events plus EventBridge and SNS is an after-the-fact detection and alerting pattern. It can identify unencrypted PutObject calls and notify the security team, but it does not block the upload attempt. The requirement explicitly asks for a preventative control that blocks noncompliant uploads with minimal operational overhead, which points to an SCP rather than monitoring/alerting.

Correct. An SCP that denies s3:PutObject unless s3:x-amz-server-side-encryption equals aws:kms is a preventative, org-wide guardrail. Attached to the organization root, it applies across all accounts and Regions and blocks noncompliant uploads regardless of IAM permissions. This meets the requirement to ensure every PutObject uses SSE-KMS and to stop noncompliant attempts with minimal ongoing operations.

Question Analysis

Core Concept: This question tests preventative, organization-wide security controls for Amazon S3 using AWS Organizations Service Control Policies (SCPs). SCPs are guardrails that set the maximum available permissions for accounts/OUs, enabling you to block noncompliant API calls before they succeed.

Why the Answer is Correct: The requirement is to ensure every S3 PutObject across all accounts and Regions uses SSE-KMS and to block noncompliant uploads with minimal operational overhead. An SCP attached at the organization root can explicitly deny s3:PutObject when the request does not specify AWS KMS server-side encryption (x-amz-server-side-encryption = aws:kms). Because explicit denies in SCPs apply regardless of IAM permissions, this creates a preventative control that stops noncompliant PutObject requests across all member accounts (including new accounts) without deploying agents, rules, or per-account tooling.

Key AWS Features:
- AWS Organizations SCPs: central, scalable guardrails; explicit Deny overrides Allow.
- S3 condition keys: use s3:x-amz-server-side-encryption to require aws:kms on PutObject. This enforces encryption at request time (preventative).
- Organization root attachment: ensures coverage across all accounts/OUs and Regions with minimal ongoing operations.

Common Misconceptions:
- AWS Config (including conformance packs) is primarily detective/remediative, not preventative. It can report noncompliance and trigger notifications or remediation, but it does not inherently block a PutObject call.
- CloudTrail + EventBridge is also detective: it can alert after the fact, but cannot prevent the upload.
- Denying CreateBucket does not address object uploads; encryption requirements must be enforced on PutObject (and potentially multipart upload-related actions in real implementations).

Exam Tips: When the question says “preventative control” and “block noncompliant attempts” across many accounts with low overhead, think SCPs (or sometimes permission boundaries) rather than Config/CloudTrail. Also ensure the action in the policy matches the required behavior: object encryption is enforced on s3:PutObject, not on bucket creation. In practice, you may also consider related S3 actions (e.g., multipart upload) and bucket policies, but for org-wide enforcement, SCP is the canonical exam answer.
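The SCP described above can be sketched as a policy document (shown here as a Python dict for readability). This is a minimal sketch: resource scoping and multipart-upload actions are omitted for brevity. Note that StringNotEquals also matches requests that omit the header entirely, so uploads with no encryption header are denied as well.

```python
import json

# Deny any PutObject whose request does not specify SSE-KMS.
# Attach at the organization root for org-wide coverage.
SSE_KMS_SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedPutObject",
        "Effect": "Deny",
        "Action": "s3:PutObject",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                # Matches both a wrong value (e.g., AES256) and a
                # missing header, so both cases are blocked.
                "s3:x-amz-server-side-encryption": "aws:kms"
            }
        },
    }],
}

# Serialized form, as it would be supplied to Organizations.
scp_document = json.dumps(SSE_KMS_SCP, indent=2)
```

Because SCP explicit denies override any IAM Allow, this single policy blocks noncompliant uploads in every member account, including accounts added later.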


Question 6

A DevOps engineer is building a data-forwarding service where an AWS Lambda function reads batched records from an Amazon Kinesis Data Streams shard and forwards them to an internal ERP SOAP endpoint over a VPC. Roughly 12% of incoming records fail validation and must be handled manually; the Lambda event source mapping is configured with an Amazon SQS dead-letter queue (DLQ) as an on-failure destination, and the function uses batch processing with retries enabled. During load testing, the engineer observes that the DLQ contains many records that have no data issues and were already accepted by the ERP service, indicating that successful items from partially failing batches are being retried and eventually sent to the DLQ along with the failures. Which event source configuration change should the engineer implement to reduce the number of error-free records sent to the DLQ while preserving current throughput and retry behavior?

Incorrect. Increasing retry attempts does not reduce the number of error-free records retried; it typically increases it. With Kinesis batch retries, the whole batch is retried when any record causes a failure, so more retries means more duplicate sends to the ERP endpoint and a higher chance that good records eventually land in the DLQ after repeated failures. This can also increase downstream load and complicate idempotency.

Correct. Enabling bisect (split) batch on function error causes Lambda to automatically divide a failing batch into smaller batches and retry them, isolating the bad records. This reduces reprocessing of successful records and minimizes the number of valid records that end up in the DLQ due to a poisoned batch. It preserves the existing retry model and generally maintains throughput while reducing the failure blast radius.

Incorrect. Increasing the parallelization factor increases the number of concurrent batches processed per shard, improving throughput and reducing lag. However, it does not change the fundamental behavior that a single failing record causes the entire batch to be retried. Therefore, it will not reduce the number of good records retried or sent to the DLQ; it may even increase duplicate deliveries to the ERP endpoint under failure conditions.

Incorrect. Decreasing the maximum record age causes Lambda to stop retrying older records sooner and send them to the on-failure destination earlier. This does not isolate the failing record within a batch; instead, it can increase the number of records (including valid ones) that are discarded to the DLQ because the batch keeps failing and ages out. It reduces recovery time at the cost of higher data loss/noise.

Question Analysis

Core Concept: This question tests AWS Lambda event source mappings for Amazon Kinesis Data Streams, specifically how batch retry semantics interact with partial failures and on-failure destinations (DLQs). With Kinesis, Lambda reads a batch from a shard and, by default, treats the batch as an atomic unit for success/failure.

Why the Answer is Correct: When any record in a Kinesis batch causes the Lambda invocation to fail (for example, a validation failure leading to an exception), the entire batch is retried. That means records that were already successfully forwarded to the ERP endpoint can be reprocessed, potentially causing duplicates and, after retries are exhausted, being sent to the DLQ along with the truly bad records. Enabling “bisect batch on function error” changes the retry behavior so that on an error, Lambda automatically splits the batch into smaller batches (binary search) and retries those. This isolates the problematic records into the smallest failing batch, dramatically reducing the number of good records that get retried and ultimately sent to the DLQ, while keeping the same overall throughput target and preserving the existing retry mechanism.

Key AWS Features:
- Event source mapping setting: BisectBatchOnFunctionError for Kinesis/DynamoDB Streams.
- On-failure destination (SQS DLQ) for event source mappings, used when records expire or retries are exhausted.
- Batch processing behavior: without bisection, a single bad record poisons the whole batch. This aligns with Well-Architected Reliability principles: limit blast radius and isolate failures.

Common Misconceptions: It’s tempting to increase retries (A) to “fix” transient issues, but that increases duplicate processing and can worsen DLQ noise. Increasing parallelization (C) improves throughput but does not change batch atomicity; it can even amplify duplicates. Reducing maximum record age (D) can push more records to the DLQ sooner, increasing loss/noise rather than isolating failures.

Exam Tips: For Kinesis/DynamoDB Streams + Lambda, remember: the batch is the unit of retry unless you enable bisection or implement partial batch response (where supported). If the symptom is “good records end up in the DLQ because one record fails,” think BisectBatchOnFunctionError (or partial batch response) before tuning retries or concurrency.
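The bisection behavior can be simulated in a few lines to show why only the truly bad records reach the DLQ. This is a toy model of BisectBatchOnFunctionError, not the Lambda poller itself; real bisection also honors retry limits and maximum record age.

```python
def bisect_retry(records, process, dlq):
    """Toy model of bisect-on-error: if processing a batch fails, split
    it in half and retry each half, recursing until the failing records
    are isolated; only single failing records land in the DLQ."""
    try:
        process(records)
    except ValueError:
        if len(records) == 1:
            dlq.append(records[0])        # isolated bad record
        else:
            mid = len(records) // 2
            bisect_retry(records[:mid], process, dlq)
            bisect_retry(records[mid:], process, dlq)

def process(records):
    """Stand-in for the real handler: a validation failure raises, which
    fails the whole invocation (the ERP forwarding is not modeled)."""
    for r in records:
        if r.get("bad"):
            raise ValueError("validation failed")
```

Good records in a poisoned batch are still reprocessed during bisection (matching the duplicate-delivery caveat above), but they succeed in their sub-batches and never reach the DLQ.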

Question 7

A global payment platform needs a well-architected design for a webhook receiver that must serve merchants in us-east-1 and eu-west-1 with p95 latency under 150 ms and at least 99.99% availability. The stack must use Amazon API Gateway, AWS Lambda, and Amazon DynamoDB, and the solution must provide active-active operation with automatic failover across Regions while ensuring writes are available in both Regions. Which solution will meet these requirements?

Incorrect. Route 53 health checks can provide failover between Regional API endpoints, but this design uses separate, Region-local DynamoDB tables with no replication mechanism. That fails the requirement for active-active operation with writes available in both Regions and consistent data across Regions. It also creates operational complexity for reconciliation and can lead to divergent state during partial outages.

Correct. Route 53 latency-based routing sends traffic to the closest healthy Regional API Gateway endpoint, supporting the 150 ms p95 latency goal. Each Region uses local Lambda for low latency and isolation. DynamoDB global tables provide multi-Region, multi-writer capability with automatic replication, enabling active-active writes and resilience with automatic failover across Regions.

Incorrect. This is effectively active-passive with a custom failover mechanism. Running a Lambda every 5 minutes to probe health and update Route 53 is slower than native Route 53 health checks and introduces additional failure modes and propagation delays. It also conflicts with the requirement for automatic failover and high availability (99.99%), where you want built-in health checks and active-active traffic handling.

Incorrect. A single API Gateway in us-east-1 is not multi-Region and becomes a single point of failure and a latency bottleneck for EU merchants. API Gateway cannot transparently “forward” requests to a Lambda in another Region as a native closest-Region feature; cross-Region invocation would add latency and complexity. A single-Region DynamoDB table also does not meet the requirement for multi-Region active-active writes.

Question Analysis

Core Concept: This question tests multi-Region active-active architecture for low latency and high availability using API Gateway + Lambda + DynamoDB, and specifically how to keep writes available in both Regions with automatic failover. The key services are Amazon Route 53 (global traffic management), API Gateway Regional endpoints (multi-Region API front door), and DynamoDB global tables (multi-Region, multi-writer replication).

Why the Answer is Correct: Option B uses Route 53 latency-based routing (LBR) with health checks to send each merchant to the closest healthy Regional API endpoint (supporting p95 < 150 ms by minimizing RTT). If a Region becomes unhealthy, Route 53 health checks stop returning that endpoint, providing automatic failover. Each Region runs Lambda locally (avoids cross-Region invocation latency and dependency). DynamoDB global tables provide active-active, multi-Region writes: each Region can accept writes locally and replicate asynchronously to the other Region, meeting the requirement that writes are available in both Regions.

Key AWS Features:
- Route 53 Latency-Based Routing + health checks: directs users to the lowest-latency healthy endpoint; supports active-active.
- API Gateway Regional endpoints: deployed per Region; pair naturally with Route 53.
- Lambda in-Region execution: reduces latency and avoids cross-Region blast radius.
- DynamoDB global tables (v2019.11.21): multi-Region replication with multi-writer capability; integrates with DynamoDB Streams for replication and supports high availability across Regions.

Common Misconceptions: A common trap is assuming “two Regional DynamoDB tables” equals active-active. Without global tables (or custom replication), you risk split-brain data, inconsistent reads, and no automatic cross-Region continuity. Another misconception is using manual or scheduled failover logic (Option C), which is slower, more failure-prone, and not aligned with 99.99% availability goals.

Exam Tips: When you see “active-active across Regions” and “writes available in both Regions” with DynamoDB, the exam almost always expects DynamoDB global tables. Pair that with Route 53 LBR (for latency) or failover routing (for primary/secondary). For strict latency targets, prefer in-Region Lambda + Regional API endpoints rather than cross-Region forwarding.
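The routing behavior described above can be sketched in a few lines. This is an illustrative simulation of what latency-based routing with health checks effectively computes, not Route 53 itself; the function name, endpoint list, and latency values are invented for the example.

```python
# Illustrative sketch: among healthy regional endpoints, route to the
# one with the lowest measured latency; an endpoint that fails its
# health check drops out, so traffic fails over automatically.

def pick_endpoint(endpoints):
    """endpoints: list of dicts with 'region', 'latency_ms', 'healthy'."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: e["latency_ms"])["region"]

# From a hypothetical US merchant's vantage point:
endpoints = [
    {"region": "us-east-1", "latency_ms": 12, "healthy": True},
    {"region": "eu-west-1", "latency_ms": 95, "healthy": True},
]
assert pick_endpoint(endpoints) == "us-east-1"

# If us-east-1 fails its health check, the same merchant is routed
# to eu-west-1 with no DNS record change by an operator.
endpoints[0]["healthy"] = False
assert pick_endpoint(endpoints) == "eu-west-1"
```

An EU merchant would see the latencies reversed and land on eu-west-1 while both Regions are healthy, which is exactly the active-active behavior the question asks for.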

8
Question 8

A data platform team created an AWS CloudFormation template to let teams quickly launch and tear down a nightly ETL proof of concept; the template provisions an AWS Glue job and an Amazon S3 bucket with versioning enabled and a lifecycle rule that retains objects for 90 days, and when the stack is deleted within 24 hours of creation CloudFormation shows DELETE_FAILED for the S3 bucket because it still contains objects and delete markers, so the team needs the MOST efficient way to ensure all resources are removed automatically and the stack deletion completes without errors.

DeletionPolicy: Delete only tells CloudFormation what to do with the resource when the stack is deleted, but it does not override S3’s requirement that a bucket must be empty before deletion. With versioning enabled, delete markers and old versions still count as contents. The stack will still fail with DELETE_FAILED if objects/versions remain, so this does not solve the root cause.

A Lambda-backed custom resource is a common CloudFormation pattern to perform actions CloudFormation cannot do natively, such as emptying a versioned bucket. On Delete, the function can call ListObjectVersions to retrieve versions and delete markers, then issue DeleteObjects with version IDs to remove everything. This ensures the bucket is empty so CloudFormation can delete it and complete stack teardown automatically.

Manually emptying the bucket (including versions) will work, but it is not efficient and does not meet the requirement for automatic removal. It introduces operational toil and risk of inconsistent cleanup, especially for nightly proof-of-concept stacks. Certification questions typically penalize manual steps when an automated IaC-based approach is feasible.

Replacing the solution with CodePipeline is unnecessary and does not address the core issue: S3 bucket deletion requires emptying versioned contents. A pipeline stage for teardown is an anti-pattern for simple stack lifecycle management and adds complexity, cost, and maintenance. CloudFormation already orchestrates resource creation/deletion; the missing piece is automated bucket cleanup, best handled by a custom resource.

Question Analysis

Core Concept: This question tests CloudFormation stack deletion behavior with Amazon S3 buckets, especially when versioning is enabled. CloudFormation can delete an S3 bucket only if it is empty. With versioning, “empty” means no current objects, no noncurrent versions, and no delete markers.

Why the Answer is Correct: Option B is the most efficient and fully automated approach: a Lambda-backed custom resource performs cleanup during stack deletion. On RequestType=Delete, the function lists and deletes all object versions and delete markers (and any remaining current objects). Once the bucket is truly empty, CloudFormation can successfully delete the bucket resource and complete stack deletion without DELETE_FAILED. This aligns with IaC best practices: stacks should be self-contained and fully reversible (create and delete cleanly) without manual steps.

Key AWS Features:
- CloudFormation custom resources (Lambda-backed) allow imperative actions during stack lifecycle events (Create/Update/Delete).
- S3 versioning introduces delete markers and multiple versions; deletion requires DeleteObjects against version IDs and delete markers.
- CloudFormation resource dependency control (DependsOn) can ensure the custom cleanup runs before the bucket deletion is attempted.
- IAM least privilege: the Lambda role needs s3:ListBucket, s3:ListBucketVersions, and s3:DeleteObject / s3:DeleteObjectVersion (and potentially s3:DeleteObjectTagging if used).

Common Misconceptions: A common trap is assuming CloudFormation’s DeletionPolicy: Delete will “force delete” a non-empty bucket. It does not; S3 still refuses deletion if anything remains. Another misconception is that lifecycle rules will help here; lifecycle expiration is not immediate and won’t run within 24 hours, especially for versions and delete markers.

Exam Tips: When you see “S3 bucket deletion fails” + “versioning enabled,” immediately think: you must delete object versions and delete markers. For CloudFormation, the standard exam pattern is a Lambda-backed custom resource (or newer native mechanisms if explicitly mentioned) to empty the bucket during stack deletion. Prefer automated, repeatable IaC solutions over manual cleanup steps.
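The core of such a cleanup function is translating a ListObjectVersions response into a DeleteObjects request that targets every version and every delete marker by VersionId. A minimal sketch under stated assumptions (the helper name and sample keys are invented; a real handler would also loop with the KeyMarker/VersionIdMarker pagination tokens, call delete_objects per batch, and signal SUCCESS or FAILED back to CloudFormation):

```python
# A versioned bucket is "empty" only when every object version AND
# every delete marker is removed by VersionId. The sample response
# mirrors the shape S3 ListObjectVersions returns.

def build_delete_batch(list_versions_response):
    """Collect Key/VersionId pairs for a single DeleteObjects call."""
    objects = []
    for v in list_versions_response.get("Versions", []):
        objects.append({"Key": v["Key"], "VersionId": v["VersionId"]})
    for m in list_versions_response.get("DeleteMarkers", []):
        objects.append({"Key": m["Key"], "VersionId": m["VersionId"]})
    return {"Objects": objects, "Quiet": True}

sample = {
    "Versions": [{"Key": "etl/run1.csv", "VersionId": "v1"},
                 {"Key": "etl/run1.csv", "VersionId": "v2"}],
    "DeleteMarkers": [{"Key": "etl/run1.csv", "VersionId": "dm1"}],
}

batch = build_delete_batch(sample)
# Both noncurrent versions plus the delete marker must be deleted.
assert len(batch["Objects"]) == 3
```

Deleting only the current objects (no VersionId) would leave the noncurrent versions and delete markers behind, and the stack would still hit DELETE_FAILED.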

9
Question 9
(Select 2)

A financial analytics platform runs its stateless API on Amazon EC2 instances in Auto Scaling groups behind Application Load Balancers across two Availability Zones per Region and stores transactional data in Amazon Aurora PostgreSQL; the business mandates a maximum RPO of 2 hours and a maximum RTO of 10 minutes at all times for both the data and the application across Regions. Which combination of deployment strategies will meet these requirements? (Choose two.)

Incorrect. Single-AZ Aurora clusters in multiple Regions do not provide continuous replication between Regions, so RPO is not assured. Single-AZ also reduces availability within each Region and increases recovery time for AZ-level failures. “Aurora automatic recovery” primarily addresses instance/storage failures within a Region, not cross-Region disaster recovery with guaranteed RPO/RTO targets.

Correct. Aurora PostgreSQL Global Database is purpose-built for cross-Region DR with low-latency replication (typically seconds), easily meeting a 2-hour RPO. In a regional failure, promoting the secondary Region to primary restores write capability quickly. Updating the application to use the promoted cluster endpoint is the standard cutover step to complete database failover within the RTO.

Incorrect. You cannot place independent Aurora clusters behind an NLB to distribute SQL traffic across Regions as if they were interchangeable endpoints; this would not provide consistent reads/writes and would create data divergence without a supported multi-writer architecture. Aurora does not support active/active multi-Region writers in this manner, and NLB is not a database replication or consistency mechanism.

Correct. Deploying the stateless API stack in two Regions and using Route 53 failover routing with health checks provides automated regional traffic failover. Because ALBs and Auto Scaling groups exist in both Regions, the secondary Region can serve traffic immediately when DNS fails over, supporting a 10-minute RTO for the application tier when properly configured and tested.

Incorrect. AWS Global Accelerator can provide fast cross-Region failover for the application tier by routing users to healthy ALB endpoints in another Region, so from a pure application-availability perspective it could support the required RTO. However, Route 53 failover routing (option D) is the canonical AWS disaster-recovery pattern tested for regional failover, and placing both Regional ALBs in a single endpoint group is not the usual multi-Region design. That makes D the clearer and more standard choice for the application component.

Question Analysis

Core Concept: This question tests multi-Region disaster recovery (DR) design for both the application tier and the database tier with strict RPO/RTO targets. Key services are Amazon Aurora PostgreSQL Global Database for low-RPO cross-Region replication and Amazon Route 53 failover routing for regional application failover.

Why the Answer is Correct: To meet an RPO of 2 hours and an RTO of 10 minutes “at all times across Regions,” you need (1) continuously replicated data to a second Region and (2) a pre-provisioned, ready-to-serve application stack in the second Region with automated traffic failover. Option B provides the data strategy: Aurora Global Database replicates storage-level changes from the primary Region to secondary Regions with typical replication lag measured in seconds, far better than the 2-hour RPO requirement. During a regional disaster, you can promote the secondary Region to be the new primary (planned or unplanned failover) and then point the application to the new writer endpoint. Option D provides the application strategy: running the API stack in two Regions behind ALBs and using Route 53 failover routing with health checks enables DNS-based regional failover. Because the stack is already deployed and scaled via Auto Scaling in both Regions, the application RTO can be within minutes, aligning with the 10-minute requirement.

Key AWS Features: Aurora Global Database offers cross-Region replication designed for DR and fast recovery. Promotion of a secondary cluster is the standard DR action to restore write capability. Route 53 failover routing uses health checks to shift traffic from the primary ALB to the secondary ALB when the primary becomes unhealthy. This aligns with Well-Architected Reliability principles: redundancy, automated failover, and tested recovery procedures.

Common Misconceptions: Some assume Multi-AZ within a Region satisfies “across Regions” requirements; it does not. Others try to “load balance” databases across Regions (option C), but Aurora clusters are not designed to be active/active writers across Regions behind an NLB. Also, Global Accelerator (option E) can improve failover and performance, but the question asks for a combination that meets requirements; Route 53 failover already satisfies the RTO/RPO when paired with Aurora Global Database.

Exam Tips: For strict cross-Region RPO/RTO on Aurora, look for “Aurora Global Database” plus a regional traffic management/failover mechanism (Route 53 failover or Global Accelerator). For stateless apps, active/standby or active/active regional stacks with health-based routing are typical patterns. Avoid answers that only address intra-Region HA or attempt cross-Region SQL load balancing.
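The Route 53 side of the correct combination boils down to a PRIMARY/SECONDARY alias record pair pointing at the two Regional ALBs. A sketch of those record sets, following the ChangeResourceRecordSets field shape (the domain, ALB DNS names, and hosted zone IDs below are placeholders, not real values):

```python
# Failover routing pair: Route 53 answers with the PRIMARY alias while
# its target is healthy, and switches to the SECONDARY when it is not.
# EvaluateTargetHealth=True ties the answer to each Regional ALB's health.

def failover_record(name, alb_dns, alb_zone_id, role):
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"api-{role.lower()}",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }

records = [
    failover_record("api.example.com.", "alb-primary.example-use1.aws.", "Z-EXAMPLE-1", "PRIMARY"),
    failover_record("api.example.com.", "alb-standby.example-euw1.aws.", "Z-EXAMPLE-2", "SECONDARY"),
]

# One record per role, both for the same name.
assert {r["Failover"] for r in records} == {"PRIMARY", "SECONDARY"}
```

Both records share the same Name; the SetIdentifier distinguishes them, and health evaluation on the alias target is what makes the failover automatic rather than operator-driven.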

10
Question 10

A global media company uses AWS Organizations with two OUs (root and analytics); the root OU has a single SCP that contains one Allow statement for all actions on all resources, while the analytics OU (which contains four accounts including the ext-ml account, ID 222222222222) has an SCP that allows only s3:* and glue:* and explicitly denies all other actions by using a Deny statement with NotAction [s3:*, glue:*] on all resources; in the ext-ml account, a DevOps engineer's IAM user has the AdministratorAccess policy attached, and when the engineer attempts in us-east-1 to create an Amazon RDS db.m6g.large instance via the console and AWS CLI (rds:CreateDBInstance), the request fails with AccessDeniedException indicating it was blocked by an organization service control policy. Which change will allow the engineer to successfully create the RDS instance in the ext-ml account?

Incorrect. AmazonRDSFullAccess is an IAM policy. IAM permissions cannot override an explicit Deny from an SCP. The error explicitly states the action was blocked by an Organizations SCP, so adding more IAM permissions (even admin) will not help until the SCP no longer denies rds:CreateDBInstance.

Incorrect. Attaching a new SCP that allows RDS to the ext-ml account does not override the analytics OU SCP’s explicit Deny. SCPs combine as guardrails, and explicit Deny in any applicable SCP still blocks the action. You would need to remove/modify the deny condition in the OU SCP (or move the account).

Correct. The analytics OU SCP explicitly denies all actions except s3:* and glue:* via Deny with NotAction. Because RDS is not exempted, rds:CreateDBInstance is denied. Updating this OU SCP to include rds:* (or specific required RDS actions) in the NotAction exception (or otherwise permitting RDS) removes the explicit deny and allows IAM AdministratorAccess to succeed.

Incorrect. A root OU SCP that allows all actions does not cancel a child OU SCP’s explicit Deny. SCP evaluation requires the action to be permitted by all applicable SCP constraints; any explicit Deny still wins. Therefore, adding an allow-RDS SCP at the root will not enable RDS in an OU that explicitly denies it.

Question Analysis

Core Concept: This question tests how AWS Organizations Service Control Policies (SCPs) set the maximum available permissions for accounts, regardless of IAM permissions. SCPs do not grant permissions by themselves; they define guardrails. An explicit Deny in an SCP cannot be overridden by any IAM policy, including AdministratorAccess.

Why the Answer is Correct: The ext-ml account is in the analytics OU, which has an SCP that allows only s3:* and glue:* and explicitly denies everything else using a Deny with NotAction [s3:*, glue:*]. Because rds:CreateDBInstance is not in the NotAction allow-list, it matches the Deny statement and is explicitly denied for all principals in all accounts in that OU. Therefore, the engineer’s AdministratorAccess IAM policy is irrelevant for RDS actions: the request is blocked at the Organizations layer. To allow RDS instance creation, you must change the effective SCPs applying to the ext-ml account so that RDS actions are not explicitly denied. Updating the analytics OU SCP to include rds:* in the NotAction exception (or otherwise removing the explicit deny for RDS) is the direct fix.

Key AWS Features: SCP evaluation is “intersection-based”: an action must be allowed by IAM and not blocked by SCPs. Explicit Deny always wins. OU-level SCPs apply to all accounts in the OU; account-level SCPs are additive constraints (more guardrails), not overrides. A root OU SCP that allows * does not negate a child OU’s explicit deny.

Common Misconceptions: A frequent mistake is assuming that attaching AmazonRDSFullAccess (or even AdministratorAccess) will fix an SCP denial. It will not. Another misconception is that attaching an “allow RDS” SCP at the account or root will override the OU deny; it won’t, because explicit deny remains in effect.

Exam Tips: When you see “blocked by an organization service control policy,” immediately inspect SCPs for explicit Deny or allow-lists (NotAction patterns). If an OU SCP uses Deny with NotAction, you must add the needed service actions to the NotAction list (or redesign the SCP) at the OU (or move the account to an OU where it’s permitted).
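A toy evaluator makes the Deny/NotAction behavior concrete. This is a deliberately simplified matcher (real SCP evaluation handles full wildcard and condition semantics, and intersects with IAM); the policy content mirrors the analytics OU SCP from the question, and the helper name is invented:

```python
# The analytics OU statement: Deny everything that is NOT s3:* or glue:*.
ANALYTICS_OU_SCP = {
    "Effect": "Deny",
    "NotAction": ["s3:*", "glue:*"],
    "Resource": "*",
}

def denied_by_scp(action, scp):
    """True if the Deny statement matches, i.e. no NotAction prefix fits."""
    for pattern in scp["NotAction"]:
        prefix = pattern.rstrip("*")  # "s3:*" -> "s3:"
        if action.startswith(prefix):
            return False  # exempted from the Deny
    return True

# RDS is not exempted, so even AdministratorAccess cannot create the instance.
assert denied_by_scp("rds:CreateDBInstance", ANALYTICS_OU_SCP)
# S3 and Glue pass through the guardrail (IAM still has to allow them).
assert not denied_by_scp("s3:PutObject", ANALYTICS_OU_SCP)

# The fix from the correct option: add rds:* to the NotAction exception list.
fixed = {**ANALYTICS_OU_SCP, "NotAction": ["s3:*", "glue:*", "rds:*"]}
assert not denied_by_scp("rds:CreateDBInstance", fixed)
```

Note that removing the Deny only lifts the guardrail; the engineer's IAM permissions (here, AdministratorAccess) are what actually grant the action.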


11
Question 11

An online gaming platform operates 12 Amazon EC2 instances across us-east-1 and us-west-2 in an Auto Scaling group; AWS Health publishes scheduled instance retirement and underlying host maintenance notifications that require reboots, and the company mandates that remediation occur only within an existing AWS Systems Manager maintenance window every Sunday from 01:00 to 03:00 UTC, so how should a DevOps engineer configure an Amazon EventBridge rule to automatically handle these notifications while ensuring reboots run only inside the maintenance window?

Incorrect. Using AWS Health as the event source is the right starting point, but targeting AWS-RestartEC2Instance directly from EventBridge causes the automation to run as soon as the event is received. That means the reboot would happen immediately instead of waiting for the approved Sunday 01:00–03:00 UTC maintenance window. The option therefore fails the core scheduling requirement even though the source service is appropriate.

Incorrect. Systems Manager maintenance window events are not the correct trigger for detecting scheduled instance retirement or underlying host maintenance notifications. The impacted resources and maintenance requirement originate from AWS Health, not from the maintenance window itself. This option also does not explain how the specific affected EC2 instances would be identified and passed to the restart automation.

Correct. EventBridge should listen for AWS Health notifications related to scheduled instance retirement and underlying host maintenance or scheduled change events affecting the EC2 instances. A Lambda function can then register or update an SSM Maintenance Window task in the existing Sunday window so that the AWS-RestartEC2Instance Automation runbook runs only during the approved time. This design satisfies both requirements: automatic response to AWS Health notifications and strict enforcement of the maintenance window for the reboot action.

Incorrect. EC2 instance state-change notifications are not the authoritative signal for upcoming scheduled retirement or host maintenance actions. Those notifications come from AWS Health and are published before the disruptive event so remediation can be planned. Although Lambda could register a maintenance window task, this option starts from the wrong event source and may miss the required proactive handling.

Question Analysis

Core Concept: This question tests event-driven remediation using Amazon EventBridge, AWS Health events, and AWS Systems Manager (SSM) Maintenance Windows/Automation. The key requirement is not just detecting AWS Health notifications for scheduled instance retirement and underlying host maintenance, but enforcing that disruptive actions such as reboots occur only during a pre-approved maintenance window.

Why the Answer is Correct: AWS Health publishes account-specific operational events for EC2, including scheduled instance retirement and underlying host maintenance or scheduled change events that can require a reboot. EventBridge can match those AWS Health events and trigger downstream automation. However, directly invoking a reboot runbook from EventBridge would execute immediately, which violates the requirement to act only during the existing Sunday 01:00–03:00 UTC maintenance window. Option C correctly uses EventBridge to detect the Health event and then invokes a Lambda function that registers an SSM Maintenance Window task against the existing window so the AWS-RestartEC2Instance runbook executes only when that window opens.

Key AWS Features:
- AWS Health integration with EventBridge for account-specific EC2 operational notifications.
- SSM Maintenance Windows to constrain execution to approved time ranges.
- SSM Automation runbooks such as AWS-RestartEC2Instance to perform standardized remediation.
- Lambda as orchestration glue to translate the Health event payload into a maintenance window task for the affected instances.
- Regional awareness, because AWS Health events identify affected resources in specific Regions and the SSM action must run in the Region where the instances exist.

Common Misconceptions: A is tempting because it correctly starts from AWS Health, but it invokes the restart automation immediately and therefore ignores the maintenance window requirement. B incorrectly treats Systems Manager maintenance window events as the source of truth for identifying impacted EC2 instances, but the actual trigger must come from AWS Health. D uses EC2 state-change notifications, which are not the authoritative source for upcoming scheduled retirement or host maintenance events and may occur too late.

Exam Tips: When a question says remediation must happen only during a maintenance window, SSM Maintenance Windows are the enforcement mechanism. EventBridge is commonly used to detect AWS Health events, but you often need Lambda or another orchestration layer to defer execution into the approved window. Also, for scheduled retirement and host maintenance, AWS Health is the correct event source rather than EC2 state-change notifications.
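The detection half of this design can be sketched as an EventBridge event pattern plus a simplified matcher. The pattern fields (source aws.health, detail-type "AWS Health Event", detail.service, detail.eventTypeCategory) follow the AWS Health event shape; the matcher below is a toy that ignores the richer operators real EventBridge patterns support, and the sample event is abbreviated:

```python
# Event pattern: match account-specific AWS Health scheduled-change
# events for EC2 (retirement, host maintenance, etc.). The rule's
# target would be the Lambda that registers the maintenance window task.
HEALTH_PATTERN = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCategory": ["scheduledChange"],
    },
}

def matches(event, pattern):
    """Toy EventBridge-style matching: every pattern key must match."""
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            if not isinstance(value, dict) or not matches(value, allowed):
                return False
        elif value not in allowed:
            return False
    return True

retirement_event = {
    "source": "aws.health",
    "detail-type": "AWS Health Event",
    "detail": {
        "service": "EC2",
        "eventTypeCategory": "scheduledChange",
        "eventTypeCode": "AWS_EC2_INSTANCE_RETIREMENT_SCHEDULED",
    },
}

assert matches(retirement_event, HEALTH_PATTERN)
```

Because the pattern filters on scheduledChange events only, ordinary EC2 state changes never reach the Lambda, which is exactly why option D's event source is wrong.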

12
Question 12

A research analytics platform runs data collectors on Amazon EC2 instances in an Auto Scaling group (minimum 4, maximum 20) that is created and maintained by AWS CloudFormation; each instance must start with a 12 KB TOML configuration file at /etc/collector/config.toml that is versioned in the same source repository as the infrastructure templates, and when the CloudFormation stack is updated with a change to this configuration, all running instances must pick up the change within 3 minutes without replacing instances while newly launched instances must always have the latest configuration—what solution will accomplish this with minimal delay?

Incorrect. AWS Config rules evaluate resource compliance and can trigger notifications/remediation, but they are not designed to distribute a versioned file to EC2 instances. Putting file content in InputParameters is not a practical or intended mechanism for instance bootstrapping. Systems Manager Resource Data Sync does not “poll for configuration updates” to apply changes; it syncs inventory/compliance data to S3 for reporting.

Incorrect. The cfn-init + cfn-hup idea is on the right track, but the placement is wrong. Embedding the file content in the EC2 launch template is not what cfn-hup monitors; cfn-hup watches for changes to CloudFormation resource metadata. If the config lives only in the launch template data, existing instances will not automatically pick up stack changes unless the file is also managed via CloudFormation::Init metadata and cfn-init is re-run.

Incorrect. Embedding config in the launch template only affects newly launched instances; it does not update already-running instances within 3 minutes without replacement. Additionally, Systems Manager Resource Data Sync is not a configuration distribution mechanism and will not apply file changes to instances. This option fails both the in-place update requirement and the correct use of SSM features.

Correct. CloudFormation::Init metadata can define the TOML file content and destination path. UserData runs cfn-init at boot so new instances always get the latest config. cfn-hup runs continuously and polls CloudFormation metadata; when the stack is updated with new file content, cfn-hup triggers cfn-init to re-apply the metadata, updating /etc/collector/config.toml on all running instances within the polling interval (e.g., <= 3 minutes) without replacing instances.

Question Analysis

Core Concept: This question tests CloudFormation-driven instance configuration management for Auto Scaling groups, specifically using CloudFormation::Init (cfn-init) to render files from stack metadata and cfn-hup to detect metadata changes and re-run cfn-init on already-running instances.

Why the Answer is Correct: Option D is the canonical AWS pattern for keeping EC2 instance configuration in sync with CloudFormation stack updates without replacing instances. You place the TOML content in AWS::CloudFormation::Init metadata (typically on the Launch Template, Launch Configuration, or the EC2 instance resource in the template). At boot, UserData runs cfn-init to write /etc/collector/config.toml. For in-place updates, cfn-hup runs as a daemon and polls the CloudFormation metadata for changes; when the stack is updated with new TOML content, cfn-hup triggers cfn-init again, rewriting the file on all existing instances. By setting cfn-hup’s interval (e.g., 1 minute), you can meet the “within 3 minutes” requirement with minimal delay.

Key AWS Features:
- AWS::CloudFormation::Init metadata “files” section to create/update /etc/collector/config.toml with correct ownership/permissions.
- cfn-init invoked from UserData at launch to ensure newly launched instances always get the latest configuration.
- cfn-hup daemon to poll for metadata changes and invoke cfn-init on updates (no instance replacement required).
- Works naturally with Auto Scaling groups created/maintained by CloudFormation, aligning with IaC and repeatability.

Common Misconceptions:
- Embedding config in a Launch Template alone does not update running instances; it only affects future launches.
- AWS Config rules are for compliance evaluation, not for distributing or applying configuration files to instances.
- Systems Manager Resource Data Sync is for aggregating inventory/compliance data to S3, not for pushing configuration changes to instances.

Exam Tips: When you see “CloudFormation update should change config on existing EC2 instances without replacement,” think “cfn-init + cfn-hup.” When you see “new instances must always have latest config,” ensure cfn-init runs at boot via UserData. Also remember that polling interval configuration is how you satisfy strict propagation-time requirements. (Reference: AWS CloudFormation helper scripts documentation for cfn-init and cfn-hup; AWS Well-Architected Operational Excellence pillar for consistent, automated configuration management.)
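Expressed as the metadata structure a template would carry, the pattern looks roughly like the dict below. The stack name, TOML content, and 1-minute interval are illustrative assumptions; a real setup also needs a cfn-hup hooks.d entry that re-runs cfn-init when the metadata changes, and UserData that runs cfn-init and starts cfn-hup at boot:

```python
# Sketch of the AWS::CloudFormation::Init metadata (as the JSON
# structure behind a template). The files section writes the TOML;
# cfn-hup's interval=1 (minutes) keeps propagation well inside the
# 3-minute budget once the stack metadata is updated.
CFN_INIT_METADATA = {
    "AWS::CloudFormation::Init": {
        "config": {
            "files": {
                # The collector config, versioned in the same repo as the template.
                "/etc/collector/config.toml": {
                    "content": "[collector]\ninterval_seconds = 60\n",
                    "mode": "000644",
                    "owner": "root",
                    "group": "root",
                },
                # cfn-hup daemon config: poll stack metadata every minute.
                "/etc/cfn/cfn-hup.conf": {
                    "content": "[main]\nstack=collector-stack\ninterval=1\n",
                    "mode": "000400",
                },
            },
        },
    },
}

files = CFN_INIT_METADATA["AWS::CloudFormation::Init"]["config"]["files"]
assert "/etc/collector/config.toml" in files
assert "interval=1" in files["/etc/cfn/cfn-hup.conf"]["content"]
```

When the stack is updated with new TOML content, the metadata hash changes; on its next poll cfn-hup sees the change and re-runs cfn-init, which rewrites the file in place on every running instance.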

13
Question 13

A DevOps team is preparing a blue/green release for a customer-facing API that runs on Amazon EC2 behind a single Application Load Balancer (ALB) using HTTPS on port 443; instances run in two Auto Scaling groups (api-blue-asg and api-green-asg) with separate launch templates, and each group registers to its own ALB target group (tg-blue and tg-green) with health checks on /healthz and a healthy threshold of 3; both environments share an existing Amazon RDS MySQL database; a Route 53 alias record api.acme.dev with a 300-second TTL points to the ALB’s DNS name; the team must shift 100% of traffic at once from blue to green with less than 30 seconds of impact and must not rely on DNS propagation. Which solution will meet these requirements?

Correct. Deploy to api-green-asg first and wait until tg-green passes health checks (healthy threshold met), ensuring capacity is warm and ready. Then modify the ALB listener default action to forward 100% to tg-green. This shifts traffic at the ALB layer (no DNS propagation) and can complete in seconds, meeting the <30-second impact requirement if green is already healthy.

Incorrect. Switching the ALB listener to tg-green before the rolling update means traffic is sent to instances that may be launching, failing health checks, or running the old/partial version. Because ALB only routes to healthy targets, you could see reduced capacity or 5xx/connection issues during deployment, violating the requirement for minimal impact during the cutover.

Incorrect. Updating the blue launch template and rolling api-blue-asg is a rolling deployment, not a blue/green cutover. It does not provide an instantaneous 100% traffic shift; instead, it gradually replaces instances behind the same target group. This increases risk (mixed versions serving traffic) and can exceed the 30-second impact requirement if any instance replacement causes capacity dips or errors.

Incorrect. Changing Route 53 to point to a “green endpoint” relies on DNS caching and propagation behavior outside AWS control. Even with a 300-second TTL, clients and resolvers may retain cached records longer, and existing TCP/TLS sessions to the ALB won’t immediately move. The requirement explicitly forbids relying on DNS propagation for the cutover.

Question Analysis

Core Concept: This question tests blue/green deployment traffic shifting using an Application Load Balancer (ALB) and target groups, avoiding DNS-based cutovers. The key idea is that ALB listener rules/actions can shift traffic instantly within AWS, while DNS changes depend on client/recursive resolver caching and TTL.

Why the Answer is Correct: Option A ensures the green environment is fully deployed and passing health checks in tg-green before any traffic is sent to it. Once tg-green has healthy targets (healthy threshold of 3 on /healthz), the team updates the ALB listener’s default forward action to send 100% of traffic to tg-green. This cutover happens at the load balancer control plane and does not rely on Route 53 propagation. The impact is typically limited to in-flight connections and the time for the ALB to apply the listener change, which can meet the “<30 seconds” requirement when green is already warm and healthy.

Key AWS Features:
- ALB listener default action/rules: can forward to a specific target group; changing it provides an immediate traffic switch.
- Target group health checks: ensure only healthy instances receive traffic; waiting for tg-green to be healthy prevents sending users to unready instances.
- Auto Scaling groups + separate launch templates: support parallel environments (blue and green) for safe cutovers.
- Route 53 alias to ALB: fine for a stable entrypoint, but not used for the cutover here.

Common Misconceptions: A frequent trap is using DNS (Route 53) to shift traffic (Option D). Even with a 300-second TTL, many clients and resolvers may cache longer, and existing connections won’t move immediately. Another misconception is switching traffic before green is healthy (Option B), which risks immediate user-facing errors.

Exam Tips: When requirements say “must not rely on DNS propagation” and “shift 100% at once,” look for ALB/NLB listener or target group switching (or weighted target groups if gradual shifting is allowed). Always ensure the new target group is healthy before directing production traffic. For blue/green on EC2 behind an ALB, the cleanest cutover is updating listener rules/actions, not changing DNS.
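The cutover itself is a single listener update. A sketch of the request, following the elbv2 ModifyListener shape (the ARNs below are placeholders and the helper is invented); with boto3 the dict would be passed as `elbv2.modify_listener(**kwargs)`:

```python
# Once tg-green meets its healthy threshold, point the HTTPS listener's
# default action at the green target group. One API call moves 100% of
# new connections to green; no DNS change, so no propagation delay.

def build_cutover(listener_arn, green_tg_arn):
    return {
        "ListenerArn": listener_arn,
        "DefaultActions": [
            {"Type": "forward", "TargetGroupArn": green_tg_arn},
        ],
    }

kwargs = build_cutover(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/api/ab12/cd34",
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tg-green/ef56",
)
assert kwargs["DefaultActions"][0]["Type"] == "forward"
```

Rolling back is the mirror image: the same call with the tg-blue ARN, which is why blue is typically kept warm for a while after the cutover.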

14
Question 14

A global research consortium operating 14 AWS accounts across 6 Regions needs eight internal project teams to provision infrastructure only through pre-approved blueprints, and the security office requires automated multi-account, multi-Region detection and alerting when any resource deviates from the intended configuration while preventing direct use of raw CloudFormation templates; which strategy should be used to meet these requirements?

CloudFormation service roles can standardize permissions, but this does not prevent teams from using arbitrary/raw CloudFormation templates; they could still submit any template they want. CloudFormation drift detection is also not an always-on, centralized multi-account/multi-Region compliance mechanism; it is stack-scoped and typically initiated on demand. It also won’t cover resources not managed by stacks.

AWS Config managed rules can detect noncompliant configurations across accounts/Regions (with an aggregator), but this option fails the governance requirement: it still allows teams to deploy CloudFormation stacks directly from raw templates. A service role only controls what CloudFormation can do, not what templates users are allowed to submit. The question explicitly requires provisioning only through pre-approved blueprints and preventing raw template usage.

AWS Service Catalog enforces provisioning through curated, pre-approved products (blueprints) and can be shared across accounts/Regions. A launch constraint ensures stacks are created using a centrally managed IAM role, providing consistent guardrails and least privilege. AWS Config with an aggregator provides centralized, automated multi-account/multi-Region detection and alerting for configuration deviations, meeting both governance and compliance monitoring requirements.

Service Catalog helps with approved products, and template constraints can restrict parameter values, but this does not address centralized, continuous drift/compliance detection across 14 accounts and 6 Regions. CloudFormation drift detection events are limited to stack drift (not broader configuration compliance), and relying on EventBridge notifications from drift detection presumes drift detection is being run and does not provide the same comprehensive, aggregated compliance view as AWS Config.

Question Analysis

Core Concept: This question tests governance of infrastructure provisioning (approved blueprints only) and continuous compliance/drift detection across multiple AWS accounts and Regions. The key services are AWS Service Catalog (controlled self-service provisioning) and AWS Config with a configuration aggregator (centralized, multi-account/multi-Region compliance visibility and alerting).

Why the Answer is Correct: Option C is the only strategy that simultaneously (1) forces teams to provision only from pre-approved blueprints, (2) prevents direct use of raw CloudFormation templates, and (3) provides automated, centralized detection and alerting for configuration deviations across 14 accounts and 6 Regions. AWS Service Catalog lets administrators curate products (often backed by CloudFormation) and share them to accounts/OUs, so project teams can deploy only those approved products via the Service Catalog interface/API, without being granted permissions to run arbitrary CloudFormation templates. A launch constraint enforces that product stacks are created using a centrally managed IAM role, ensuring consistent permissions, guardrails, and auditability.

Key AWS Features:
- AWS Service Catalog products/portfolios: publish approved "blueprints" and control who can launch them.
- Launch constraints: force stack creation to assume a specific IAM role (centralized, least-privilege, consistent tagging/permissions).
- AWS Config recorder + managed/custom rules: evaluate resource configurations against intended policies.
- AWS Config aggregator: aggregates configuration and compliance data from multiple accounts and Regions into a central account for unified monitoring and reporting.
- Alerting: Config rule noncompliance can trigger notifications (commonly via EventBridge/SNS) from the central account.

Common Misconceptions: CloudFormation drift detection (Options A/D) only compares deployed stacks to their templates and is not a comprehensive multi-account/multi-Region compliance system; it also requires drift detection to be initiated and is limited to stack-managed resources. AWS Config (Option B) is great for compliance, but without Service Catalog it does not prevent teams from using raw CloudFormation templates.

Exam Tips: When you see "pre-approved blueprints" and "prevent direct template use," think AWS Service Catalog. When you see "multi-account, multi-Region detection/alerting," think AWS Config + aggregator (often with Organizations integration). Combine them for governed provisioning plus continuous compliance at scale.
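For the alerting half of this pattern, a common wiring is an EventBridge rule in the aggregator account that matches Config compliance-change events and forwards NON_COMPLIANT results to SNS. A sketch of the event pattern and the aggregation source shape used by `put_configuration_aggregator` (account IDs and Regions are hypothetical placeholders, not the consortium's real values):

```python
import json

# EventBridge event pattern: fire only when a Config rule evaluation
# transitions a resource to NON_COMPLIANT.
event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {"newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]}},
}

# Aggregation source for ConfigService.put_configuration_aggregator:
# pull compliance data from the member accounts in the Regions in use.
account_aggregation_sources = [{
    "AccountIds": ["111111111111", "222222222222"],  # extend to all 14 accounts
    "AllAwsRegions": False,
    "AwsRegions": ["us-east-1", "eu-west-1"],  # extend to all 6 Regions
}]

print(json.dumps(event_pattern))
```

The rule's target would typically be an SNS topic subscribed by the security office, giving notification within minutes of a deviation.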

15
Question 15
(Select 2)

A media streaming startup operates a multi-account environment in AWS Organizations with all features enabled. The operations (source) account uses AWS Backup in us-west-2 to protect 120 Amazon EBS volumes and encrypts recovery points with a customer managed KMS key (alias/ops-backup). To meet DR requirements, the company configured cross-account copy in the management account, created a backup vault named dr-vault and a customer managed KMS key (alias/dr-backup) in a newly created DR account, and then updated the backup plan in the operations account to copy all recovery points to the DR account's dr-vault. When a backup job runs in the operations account, recovery points are created successfully in the operations account, but no copies appear in the DR account's dr-vault. Which combination of steps will allow AWS Backup to copy recovery points to the DR account's vault? (Choose two.)

Correct. Cross-account copy requires the destination backup vault (dr-vault) to trust/allow the source (operations) account to write/copy recovery points into it. This is done with a resource-based backup vault access policy in the DR account. Without this explicit allow, AWS Backup cannot create the copied recovery point in the destination vault, so nothing appears there.

Incorrect. The DR account does not need to read from the source account’s backup vault for cross-account copy. The copy is initiated by AWS Backup based on the backup plan and writes into the destination vault. The key authorization point for the vault is on the destination vault policy (allowing write), not on the source vault policy granting read.

Incorrect. Backup vault policies do not grant access to KMS keys in another account. KMS authorization is controlled by the KMS key policy (and possibly grants), not by the backup vault access policy. You must update the destination CMK policy in the DR account to allow AWS Backup (and the source account context) to use it.

Incorrect. Sharing the source account CMK (alias/ops-backup) with the DR account is not the primary requirement for cross-account copy into a DR vault encrypted with a DR-owned CMK. AWS Backup copies and re-encrypts the recovery point using the destination vault’s encryption key. The missing permission is typically on the destination vault policy and destination CMK policy.

Correct. Because the copied recovery points in the DR vault are encrypted with the DR account’s customer managed KMS key (alias/dr-backup), that key policy must allow AWS Backup (and the source account acting through the service) to use the key for encryption-related operations. Without these KMS permissions, AWS Backup cannot encrypt the copied recovery points and the copy operation fails.

Question Analysis

Core Concept: This question tests AWS Backup cross-account copy prerequisites in an AWS Organizations environment, specifically the two authorization layers involved: (1) the destination backup vault access policy and (2) AWS KMS key policies for encrypting the copied recovery points. Even with Organizations "all features," AWS Backup still requires explicit resource-based permissions on the destination vault and KMS permissions to use the destination CMK.

Why the Answer is Correct: For cross-account copy, AWS Backup in the source account must be allowed to write into the destination vault in the DR account. That permission is granted by a resource-based policy on the destination backup vault (dr-vault). Without it, the copy operation is denied and no recovery points appear in the DR vault. Additionally, because the destination vault encrypts recovery points with a customer managed KMS key (alias/dr-backup), the AWS Backup service (acting on behalf of the source account) must be permitted by the destination CMK key policy to use the key for encryption operations (e.g., GenerateDataKey, Encrypt, Decrypt as required by the service). If the key policy does not allow this, AWS Backup cannot encrypt the copied recovery point in the DR account and the copy fails.

Key AWS Features:
- AWS Backup cross-account copy requires a destination vault access policy that allows the source account to perform copy/write actions.
- AWS KMS CMKs are controlled primarily by key policies (not just IAM). Cross-account usage must be explicitly allowed in the key policy.
- In multi-account DR designs, the destination account typically owns the vault and CMK; the source account is granted limited permissions to copy into that vault.

Common Misconceptions: A frequent mistake is assuming that enabling AWS Organizations "all features" automatically authorizes cross-account backup copy. It does not; vault policies and KMS key policies still must be configured. Another misconception is thinking the source CMK must be shared with the DR account; for copy, AWS Backup re-encrypts using the destination CMK, so the critical permission is on the destination CMK.

Exam Tips: When you see "cross-account copy" + "customer managed KMS key," immediately check for two required permissions: destination vault policy (resource-based) and destination KMS key policy. If copies don't appear, it's almost always one (or both) of these missing permissions rather than the source vault policy.
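The two missing permissions can be sketched as policy documents. This is a minimal illustration, assuming a hypothetical operations-account ID; `backup:CopyIntoBackupVault` is the action AWS Backup documents for destination vault policies, and the KMS statement shows the general shape of a cross-account key-policy grant (exact actions should be taken from the AWS Backup documentation for your setup):

```python
import json

SOURCE_ACCOUNT = "111122223333"  # hypothetical operations-account ID

# (1) Resource-based policy on dr-vault in the DR account: allow the
# operations account to copy recovery points into this vault.
vault_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCopyFromOperationsAccount",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SOURCE_ACCOUNT}:root"},
        "Action": "backup:CopyIntoBackupVault",
        "Resource": "*",
    }],
}

# (2) Statement added to the DR CMK (alias/dr-backup) key policy: let the
# source account's backup context use the key to encrypt the copies.
kms_statement = {
    "Sid": "AllowBackupCopyEncryption",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{SOURCE_ACCOUNT}:root"},
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
    ],
    "Resource": "*",
}

print(json.dumps(vault_policy, indent=2))
```

Note how both statements live in the DR (destination) account, matching the point above that the destination owns the vault and the CMK.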


16
Question 16

During a security audit, an AWS CodeBuild project that builds a container image downloads a Helm chart (5 MB) from an Amazon S3 bucket by issuing an unauthenticated HTTP GET to a public object URL in us-east-1 during the pre_build phase. The security team mandates that all S3 access for this project must use IAM-based authentication, the S3 bucket must block public access, no long-term credentials may be stored in the build environment, and the build must continue to retrieve the chart via a command-line tool. What is the MOST secure way to remediate this issue?

Incorrect. CodeBuild does not provide a security control called a custom “AllowedBuckets” setting that enforces S3 authorization. Even if you switch to the AWS CLI, you still must remove public access and ensure the bucket policy/IAM permissions are correctly configured. This option is vague and does not explicitly meet the requirement to block public access and enforce IAM-based authentication.

Incorrect. S3 does not support “HTTPS basic authentication” as a native bucket feature. Using cURL with a static token also violates the requirement that no long-term credentials be stored in the build environment. This approach introduces secret management risk and is not aligned with AWS best practices for service-to-service authentication.

Correct. This option removes the unauthenticated access path by using S3 Block Public Access and a bucket policy so the object can no longer be fetched anonymously from a public URL. It grants the CodeBuild service role only the required s3:GetObject permission on the specific bucket or prefix, which follows least-privilege IAM design. CodeBuild automatically provides temporary role credentials to the build container, so the AWS CLI can authenticate to S3 without storing long-term access keys. This also satisfies the requirement to continue using a command-line tool while enforcing IAM-based authentication end to end.

Incorrect. Embedding an IAM access key and secret access key as environment variables is explicitly disallowed (“no long-term credentials”). It also increases blast radius if leaked via logs, misconfiguration, or compromised build containers. The secure pattern is to use the CodeBuild service role with temporary credentials, not static IAM user keys.

Question Analysis

Core Concept: This question tests secure access to Amazon S3 from AWS CodeBuild using IAM roles and temporary credentials (STS), while enforcing S3 Block Public Access and least-privilege authorization. It also implicitly tests secure SDLC practices: no long-term secrets in build environments and using AWS-native auth mechanisms.

Why the Answer is Correct: Option C is the most secure remediation because it removes unauthenticated public access (the audit finding) and replaces it with IAM-based, short-lived credentials automatically provided to CodeBuild via its service role. By enabling S3 Block Public Access and tightening the bucket policy, the bucket no longer serves objects publicly. Then, granting the CodeBuild service role only s3:GetObject on the specific bucket/prefix ensures least privilege. Finally, using the AWS CLI in the buildspec satisfies the requirement to continue retrieving the chart via a command-line tool, and the CLI transparently uses the role's temporary credentials injected into the build container.

Key AWS Features / Best Practices:
- S3 Block Public Access (account/bucket level) to prevent public ACLs and public bucket policies.
- Bucket policy to explicitly control access (e.g., allow only the CodeBuild role principal; optionally add conditions like aws:PrincipalArn, aws:SourceVpce if using VPC endpoints).
- CodeBuild service role (IAM role) with scoped permissions to the exact object path (arn:aws:s3:::bucket/prefix/*).
- AWS SDK/CLI credential provider chain in CodeBuild that uses STS-issued temporary credentials (no static secrets).

Common Misconceptions: Some assume "making it private" is enough without updating IAM permissions, or that storing access keys as environment variables is acceptable. Others confuse "custom settings" in CodeBuild with actual authorization controls. Security audits typically require demonstrable IAM auth, least privilege, and elimination of public access paths.

Exam Tips: When you see requirements like "no long-term credentials," "must use IAM-based authentication," and "block public access," the pattern is: lock down the resource (S3 BPA + bucket policy) and grant access via an IAM role assumed by the service (CodeBuild service role). Prefer AWS CLI/SDK with role-based temporary credentials over any static tokens or embedded keys.
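The remediation boils down to one scoped IAM statement plus one changed buildspec command. A sketch, with hypothetical bucket, prefix, and chart names:

```python
import json

# Least-privilege statement for the CodeBuild service role: read only the
# chart prefix. Bucket and prefix names here are hypothetical.
chart_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowChartDownloadOnly",
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-chart-bucket/charts/*",
    }],
}

# The pre_build step swaps `curl <public URL>` for the AWS CLI, which picks
# up the service role's temporary credentials automatically inside the
# build container; no keys are stored anywhere.
pre_build_command = "aws s3 cp s3://example-chart-bucket/charts/app-1.2.3.tgz ."

print(json.dumps(chart_policy))
```

Because the CLI resolves credentials through the standard provider chain, the buildspec change is the only edit the build needs; everything else is IAM and bucket configuration.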

17
Question 17

A media streaming startup discovers that its disaster recovery (DR) Kubernetes cluster was mistakenly deployed to the same AWS Region as production. The production microservices run on Amazon EKS with managed node groups provisioned by Terraform, and all container images are already replicated to a new DR Region in Amazon ECR. The applications mount an Amazon FSx for NetApp ONTAP NFS volume for shared state, and no application data resides on the EKS nodes. The DevOps engineer updated the infrastructure code to accept a Region variable and stood up the DR control plane. The file storage in the DR Region must achieve an RPO of 10 minutes. Which solution will meet these requirements?

This is an improvised pipeline (FSx NFS -> S3 -> CRR) that adds complexity and likely fails the intent of “file storage” DR for an NFS-mounted application. It changes the storage interface (NFS to object) and requires custom logic to detect/copy changes, handle deletes, permissions, and consistency. It may not reliably meet a 10-minute RPO and is not an FSx for ONTAP DR pattern.

AWS Backup supports FSx for ONTAP backups and cross-Region copy, but a 10-minute backup frequency is generally impractical and may not be supported as a standard plan frequency for this workload, and backup/restore is not the same as near-real-time replication. Even if scheduled frequently, backup jobs and copy jobs introduce latency and operational uncertainty, risking violation of the 10-minute RPO.

This targets the wrong data. The question states no application data resides on EKS nodes, and instance store volumes are ephemeral and cannot be snapshotted to EBS snapshots. Even if the nodes used EBS, snapshotting worker nodes would not capture the shared NFS state on FSx for ONTAP. This option is both technically flawed and misaligned with the stated architecture.

This is the correct, purpose-built solution. Deploying FSx for ONTAP in the DR Region and configuring volume-level SnapMirror provides incremental, scheduled replication with predictable RPO. A 5-minute schedule comfortably meets the 10-minute RPO requirement. In failover, you break the mirror and mount the replicated NFS volume from the DR EKS cluster, matching the application’s existing storage interface.

Question Analysis

Core Concept: This question tests cross-Region disaster recovery for stateful storage used by Amazon EKS workloads, specifically Amazon FSx for NetApp ONTAP. The key requirement is meeting an RPO of 10 minutes for the shared NFS state, independent of stateless compute (EKS nodes) and container images (already replicated via ECR).

Why the Answer is Correct: FSx for ONTAP natively supports NetApp SnapMirror replication between ONTAP volumes, including cross-Region replication when you deploy a target FSx for ONTAP file system in the DR Region. By configuring volume-level SnapMirror with a 5-minute schedule, the solution can consistently achieve an RPO better than 10 minutes (assuming normal replication health). This directly addresses the only remaining DR gap: the shared state on the NFS volume.

Key AWS Features / Best Practices:
- FSx for ONTAP provides ONTAP data management capabilities (snapshots, SnapMirror, cloning) as a managed AWS service.
- SnapMirror is purpose-built for DR replication with frequent, incremental transfers and predictable RPOs.
- Volume-level replication aligns with microservice shared-state patterns where the application mounts a specific NFS volume.
- In a DR event, you can break the SnapMirror relationship and mount the destination volume from the DR EKS cluster.

Common Misconceptions: A common trap is assuming "backup" equals "DR replication." AWS Backup is excellent for point-in-time recovery and compliance, but it is not designed for 10-minute RPO operational DR for FSx for ONTAP at that cadence, and service limits/backup windows make it unreliable for such aggressive RPOs. Another misconception is focusing on EKS nodes; the prompt explicitly states no application data resides on nodes, so node snapshots are irrelevant.

Exam Tips: When you see FSx for NetApp ONTAP plus strict RPO requirements, think SnapMirror (replication) rather than AWS Backup (restore-based recovery). Also, separate concerns: images (ECR replication) and compute (IaC redeploy) are solved; the remaining requirement is state replication. Choose the service-native replication mechanism that meets RPO with minimal operational complexity.

18
Question 18

An organization builds container images in an engineering account (Account A: 111111111111) in ap-northeast-2 and pushes them to a single Amazon ECR repository named svc-web; a CodePipeline pipeline deploys to a staging Amazon EKS cluster in Account A (Kubernetes 1.29) and, after tests pass, promotes the same image to production; the company is moving the production EKS cluster to a separate operations account (Account B: 222222222222) in the same Region; the production VPC in Account B has no internet gateway or NAT gateway, and all image downloads must remain on the AWS private network; compliance requires that images continue to be pulled from the single ECR repository in Account A (no replication or duplicate repositories are allowed); which solution will meet these requirements?

Incorrect. Creating a new ECR repository in Account B directly violates the requirement that images must continue to be pulled from the single ECR repository in Account A (no duplicate repositories). While the endpoints would help with private connectivity, this option changes the source of truth to a local repo in Account B, which is explicitly disallowed by compliance requirements.

Incorrect. Cross-account repository policy and IAM permissions are necessary but not sufficient because Account B’s production VPC has no IGW or NAT. Without VPC endpoints, the EKS nodes cannot reach the ECR API/registry endpoints or S3 to download image layers. Image pulls would fail due to lack of network path, and traffic would not be constrained to the AWS private network.

Incorrect. ECR private image replication would create a copy of the repository/images in Account B (a destination repository), which violates the requirement that no replication or duplicate repositories are allowed and that images must continue to be pulled from the single repository in Account A. Replication is a common best practice for multi-account resiliency, but it is explicitly prohibited here.

Correct. This meets all constraints: a single ECR repository remains in Account A, Account B is granted cross-account pull via an ECR repository policy, and Account B’s pulling principal has ECR read permissions. The production VPC remains private (no IGW/NAT) while still enabling image pulls through ECR interface endpoints (ecr.api and ecr.dkr) and an S3 gateway endpoint for layer downloads, keeping traffic on the AWS private network.

Question Analysis

Core Concept: This question tests cross-account Amazon ECR access from a private (no IGW/NAT) Amazon EKS cluster, using AWS PrivateLink (VPC interface endpoints) so image pulls stay on the AWS private network. It also tests ECR repository policies for cross-account pull and the network path ECR uses (ECR API/DKR plus S3 for image layers).

Why the Answer is Correct: Option D is the only solution that simultaneously (1) keeps a single ECR repository in Account A, (2) enables Account B's production EKS nodes (or IRSA-based pull role) to authenticate and pull images cross-account, and (3) allows those pulls to succeed without internet egress by providing private connectivity. EKS nodes in Account B must reach the ECR control plane endpoints (ecr.api for auth/token and ecr.dkr for registry operations) via interface endpoints, and must also retrieve image layers that are stored in Amazon S3. In a private VPC, the S3 access must be provided via an S3 gateway endpoint (or equivalent private routing); otherwise pulls will fail.

Key AWS Features:
- ECR cross-account access: use an ECR repository policy on svc-web in Account A granting Account B's principal(s) permissions such as ecr:BatchGetImage, ecr:GetDownloadUrlForLayer, and ecr:BatchCheckLayerAvailability.
- IAM permissions on the pulling principal in Account B: the node instance role (or a dedicated IRSA role used by kubelet/container runtime where supported) must also allow the same ECR read actions.
- Private connectivity: VPC interface endpoints for com.amazonaws.ap-northeast-2.ecr.api and com.amazonaws.ap-northeast-2.ecr.dkr, plus an S3 gateway endpoint, ensure traffic stays on the AWS network and works without NAT/IGW.

Common Misconceptions:
- "Repository policy alone is enough" (Option B): even with correct IAM, without endpoints the private VPC cannot reach ECR/S3.
- "Just replicate" (Option C): violates the explicit requirement of no replication/duplicate repositories.
- "Create a repo in Account B" (Option A): also violates the single-repository requirement.

Exam Tips: For private subnets with no egress, remember ECR pulls typically require BOTH ECR interface endpoints (api + dkr) AND S3 access (gateway endpoint). For cross-account ECR, you need permissions in two places: the repository policy (resource-based) and the caller's IAM policy (identity-based).
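The moving parts of Option D fit in a short checklist. A sketch enumerating the endpoint service names and the ECR actions that must appear in both the repository policy (Account A) and the pulling role's policy (Account B):

```python
REGION = "ap-northeast-2"

# Private connectivity required in Account B's VPC (no IGW/NAT):
# two interface endpoints plus the S3 gateway endpoint for image layers.
required_endpoints = [
    f"com.amazonaws.{REGION}.ecr.api",  # auth token + repository API calls
    f"com.amazonaws.{REGION}.ecr.dkr",  # Docker registry operations
    f"com.amazonaws.{REGION}.s3",       # gateway endpoint: layer downloads
]

# Read actions needed in BOTH places: the resource-based repository policy
# on svc-web in Account A, and the pulling principal's IAM policy in B.
pull_actions = [
    "ecr:BatchGetImage",
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchCheckLayerAvailability",
]

# ecr:GetAuthorizationToken is additionally required on the Account B role;
# it is identity-based only and cannot be scoped to a single repository.
identity_only_action = "ecr:GetAuthorizationToken"
```

If a pull hangs or times out rather than failing with AccessDenied, the missing piece is usually one of the three endpoints rather than IAM.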

19
Question 19
(Select 3)

An energy startup stores 120 GB of sensor diagnostics exported hourly in CSV format in an Amazon S3 bucket under a date/hour partition prefix (for example, s3://env-data/diagnostics/year=2025/month=08/day=16/hour=00-23). A team of SQL analysts needs to run ad hoc queries using standard SQL without managing any servers and must publish interactive dashboards with line and bar charts refreshed every 24 hours. The team also needs an efficient, automated, and persistent way to capture and maintain table metadata (columns, data types, partitions) for the CSV files as new hourly objects arrive. Which combination of steps will meet these requirements with the least amount of effort? (Choose three.)

Incorrect. AWS X-Ray is an application performance monitoring (APM) and distributed tracing service used to analyze request flows and latency across microservices. It does not query CSV data in S3, manage table metadata, or provide BI dashboards. It might be used to trace an ingestion pipeline application, but it does not meet the SQL analytics and visualization requirements described.

Correct. Amazon QuickSight is AWS’s managed BI service for building interactive dashboards with visualizations such as line and bar charts. It integrates with Athena as a data source and supports scheduled refresh (e.g., every 24 hours). This satisfies the requirement to publish interactive dashboards without managing servers or BI infrastructure.

Correct. Amazon Athena provides serverless, standard SQL querying directly on data stored in S3, ideal for ad hoc analysis of CSV files. Analysts can query partitioned prefixes (year/month/day/hour) efficiently when partitions are defined in the catalog. Athena eliminates the need to provision or manage database servers and fits the “least effort” requirement for querying.

Incorrect for “least effort.” Amazon Redshift can query large datasets and supports BI dashboards, but it typically requires more setup: provisioning/operating a cluster (or configuring Redshift Serverless), designing schemas, and often loading/transforming data (COPY/ETL) for best performance. For hourly CSV files already in S3 and ad hoc queries, Athena is simpler and more serverless.

Correct. The AWS Glue Data Catalog is a persistent, managed metadata repository for tables, schemas, and partitions used by Athena and other analytics services. A Glue crawler can automatically infer CSV schema and continuously discover new hourly partitions as objects arrive under date/hour prefixes, minimizing manual DDL and ongoing maintenance.

Incorrect. Amazon DynamoDB is a NoSQL key-value/document database and is not a native metastore for Athena/Presto-style SQL querying. Using DynamoDB to store schema/partition metadata would require building and maintaining custom logic to keep it synchronized and would not integrate seamlessly with Athena and QuickSight the way the Glue Data Catalog does.

Question Analysis

Core Concept: This question tests a classic serverless analytics pattern on AWS: storing raw files in Amazon S3, querying them with Amazon Athena (serverless SQL), maintaining metadata and partitions in the AWS Glue Data Catalog, and visualizing results in Amazon QuickSight. It also implicitly tests automation of schema/partition discovery for continuously arriving data.

Why the Answer is Correct:
(C) Amazon Athena lets SQL analysts run ad hoc queries directly against CSV data in S3 without provisioning or managing servers. It supports standard SQL (Presto/Trino-based) and is designed for interactive querying of data lakes.
(E) The AWS Glue Data Catalog provides a persistent, managed metastore for table definitions (columns, data types) and partitions. A Glue crawler can automatically infer schema from CSV and discover new partitions as hourly prefixes arrive, keeping metadata current with minimal manual effort.
(B) Amazon QuickSight is the managed BI service for interactive dashboards (line/bar charts) and can refresh datasets on a schedule (e.g., every 24 hours). QuickSight integrates natively with Athena (and the Glue Data Catalog tables Athena uses), enabling a low-ops pipeline from S3 to dashboards.

Key AWS Features: Athena uses the Glue Data Catalog as its default metastore, so defining tables/partitions in Glue makes them immediately queryable. Glue crawlers can be scheduled or triggered to update partitions as new S3 prefixes appear. QuickSight can use Athena as a data source, import to SPICE for faster dashboard performance, and schedule refreshes every 24 hours.

Common Misconceptions: Redshift (D) is powerful but requires cluster management (or, even with Serverless, still involves data modeling/loading decisions) and is more effort than querying in place with Athena for ad hoc analysis. DynamoDB (F) is not a metastore for SQL engines and would require custom schema/partition management logic. X-Ray (A) is for tracing distributed applications, not analytics/visualization.

Exam Tips: When you see "ad hoc SQL," "no servers," and "data in S3," think Athena. When you see "persistent metadata," "schema/partitions," and "automated discovery," think Glue Data Catalog + crawler. When you see "interactive dashboards" and "scheduled refresh," think QuickSight. This trio (S3 + Glue + Athena + QuickSight) is a common Well-Architected, low-ops analytics reference pattern.
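To make the catalog entry concrete: the Glue crawler would infer and register something equivalent to the Athena DDL below. The column names are hypothetical (the question never specifies the CSV schema); only the partition keys and the S3 location come from the scenario:

```python
# Hypothetical columns; a Glue crawler would infer the real schema and add
# new hourly partitions automatically, but this is the equivalent DDL.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS diagnostics (
  sensor_id string,
  metric string,
  reading double
)
PARTITIONED BY (year string, month string, day string, hour string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 's3://env-data/diagnostics/'
TBLPROPERTIES ('skip.header.line.count'='1')
""".strip()

# With partitions registered in the catalog, analysts can prune by prefix:
sample_query = (
    "SELECT sensor_id, avg(reading) FROM diagnostics "
    "WHERE year='2025' AND month='08' AND day='16' "
    "GROUP BY sensor_id"
)
```

The WHERE clause on partition columns is what keeps Athena from scanning all 120 GB per hour of accumulated data, which is the main cost and latency lever in this pattern.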

20
Question 20

A data governance team is concerned that a developer might unintentionally make an Amazon RDS DB snapshot in production publicly accessible; no developer should be able to modify snapshot attributes to share with all accounts, and the team must be notified within 5 minutes if any snapshot tagged Environment=prod in us-east-1 or us-west-2 is public, so how can this be automated at scale without manual review?

CloudTrail + Athena is primarily a log analytics approach and is not a strong near-real-time compliance control. Athena queries are typically scheduled/batch and don’t guarantee notification within 5 minutes unless you build additional orchestration. It also doesn’t prevent developers from making snapshots public; it only detects attempts/events after the fact. Remediation via Lambda is possible, but this option is weaker and more complex than Config + explicit deny guardrails.

This is the best match: an explicit IAM Deny ensures developers cannot modify snapshot attributes to make snapshots public, even if another policy later allows it. A custom AWS Config rule can evaluate whether an RDS snapshot is publicly restorable and can be scoped to Environment=prod and deployed in us-east-1 and us-west-2. Config can notify via SNS/EventBridge within minutes, meeting the 5-minute requirement at scale.

Removing the permission by simply “not granting” it is not a durable guardrail; future policy changes could accidentally add it back, and other principals might still have it. A Lambda scheduled every 5 minutes is polling-based, adds operational overhead, and can miss changes between runs or fail silently. AWS Config is purpose-built for continuous compliance and integrates cleanly with alerting, making this option inferior.

RDS DB snapshots do not have IAM roles attached, so the premise is incorrect. IAM permissions are attached to principals (users/roles), not to snapshots as resource-attached roles. While AWS Config can check for public snapshots, the described mechanism (verifying snapshots have roles with deny statements) is not feasible. This option reflects a misunderstanding of IAM’s permission model and how RDS snapshots are controlled.

Question Analysis

Core Concept: This question tests preventive and detective controls for data exposure, combining an IAM explicit deny (preventing risky actions) with AWS Config (continuous compliance evaluation and near-real-time notification). The risk is making an RDS DB snapshot publicly restorable by setting the snapshot's restore attribute to "all" via ModifyDBSnapshotAttribute.

Why the Answer is Correct: Option B combines (1) an IAM policy with an explicit Deny on rds:ModifyDBSnapshotAttribute, ensuring developers cannot change snapshot restore permissions to public, and (2) a custom AWS Config rule scoped to production snapshots (tag Environment=prod) in the required Regions to detect whether any snapshot is public and alert within minutes. This meets both requirements: no developer can make the snapshot public, and the governance team is notified quickly if a prod snapshot becomes public for any reason (e.g., via a different role, automation, or a misconfigured break-glass account).

Key AWS Features:
- IAM explicit Deny: Overrides any Allow, making it the strongest guardrail. You can further constrain it with conditions (e.g., deny only when rds:AttributeName is restore and rds:Values contains all) to avoid blocking legitimate non-public sharing.
- AWS Config custom rule: Evaluates resource configuration and can be scoped using tags and Region-specific rule deployment. Config integrates with Amazon SNS/EventBridge for notifications, typically within minutes of a configuration change.
- Multi-account/scale: Config rules and IAM guardrails can be deployed consistently using AWS Organizations (e.g., StackSets), aligning with governance at scale.

Common Misconceptions: A common trap is relying on log queries (CloudTrail + Athena) or scheduled polling (a Lambda function every 5 minutes). Those are detective-only, can miss edge cases, and do not inherently prevent the action. Another misconception is thinking that resources like snapshots "have IAM roles attached" (they don't), so you cannot enforce per-snapshot IAM role controls.

Exam Tips: When a question requires both "must not be possible" and "notify quickly," look for a combination of preventive controls (SCP/IAM explicit deny) and continuous compliance monitoring (AWS Config). Prefer Config over periodic Lambda polling for compliance posture, and prefer an explicit deny over simply not granting permissions, because future policy changes could accidentally reintroduce the permission.
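The detection half of the answer can be sketched as the evaluation logic a custom AWS Config rule's Lambda function might run per snapshot. This is a hedged sketch, not a drop-in Config handler: the function names are illustrative, and the `attributes` shape mirrors the `DBSnapshotAttributes` list returned by the RDS `DescribeDBSnapshotAttributes` API.

```python
# Sketch of the compliance check a custom AWS Config rule could apply
# to each RDS DB snapshot. Function names are illustrative; in a real
# rule, the Lambda handler would fetch attributes and tags via boto3
# and report the result back to AWS Config with PutEvaluations.

def snapshot_is_public(attributes):
    """True if the snapshot's 'restore' attribute grants 'all',
    i.e., the snapshot is publicly restorable."""
    for attr in attributes:
        if attr.get("AttributeName") == "restore" and "all" in attr.get("AttributeValues", []):
            return True
    return False

def evaluate_compliance(attributes, tags):
    """NON_COMPLIANT only for public snapshots tagged Environment=prod."""
    if tags.get("Environment") == "prod" and snapshot_is_public(attributes):
        return "NON_COMPLIANT"
    return "COMPLIANT"
```

Note the tag scoping: a prod snapshot shared with a specific account ID (non-public sharing) stays COMPLIANT, which matches the requirement to alert only when a production snapshot becomes publicly restorable.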

Success Stories (6)

주** · Nov 25, 2025

Study period: 2 months

I went through the app's questions three times and was still anxious about failing, but many very similar questions appeared on the actual exam, so I could answer them easily. Thank you!

********** · Nov 20, 2025

Study period: 1 month

After going through a Udemy course, I wanted to do some practice exams before taking the real one. Cloud Pass is a good resource for the exam. I didn't complete every question, only about 70% of them, but I passed! Thanks, Cloud Pass!

u********* · Nov 18, 2025

Study period: 1 month

A lot of the questions in this app indeed appeared on the exam; very helpful.

S******* · Oct 31, 2025

Study period: 1 month

Passed the DOP-C02 exam in Oct 2025. These practice questions were essential for my preparation. The services covered in the practice tests match the exam content very well.

D**** · Oct 27, 2025

Study period: 1 month

Passed the DOP exam with the help of Cloud Pass questions. The real exam is full of tricky questions, and these sets helped me prepare for it.

Practice Tests

Practice Test #1

75 Questions · 180 min · Pass 750/1000

Practice Test #2

75 Questions · 180 min · Pass 750/1000

Other AWS Certifications

AWS Certified Solutions Architect - Associate (SAA-C03)

Associate

AWS Certified AI Practitioner (AIF-C01)

Practitioner

AWS Certified Advanced Networking - Specialty (ANS-C01)

Specialty

AWS Certified Cloud Practitioner (CLF-C02)

Practitioner

AWS Certified Data Engineer - Associate (DEA-C01)

Associate

AWS Certified Developer - Associate (DVA-C02)

Associate

AWS Certified Machine Learning Engineer - Associate (MLA-C01)

Associate

AWS Certified Security - Specialty (SCS-C02)

Specialty

AWS Certified Solutions Architect - Professional (SAP-C02)

Professional

Start Practicing Now

Download Cloud Pass and start practicing all AWS Certified DevOps Engineer - Professional (DOP-C02) exam questions.

Get it on Google Play · Download on the App Store
Cloud Pass

IT Certification Practice App


Certifications

AWS · GCP · Microsoft · Cisco · CompTIA · Databricks

Legal

FAQ · Privacy Policy · Terms of Service

Company

Contact · Delete Account

© Copyright 2026 Cloud Pass, All rights reserved.
