AWS Certified Developer - Associate (DVA-C02)

Practice Test #6

Simulate the real exam experience with 65 questions and a 130-minute time limit. Practice with AI-verified answers and detailed explanations.

65 Questions | 130 Minutes | 720/1000 Passing Score


Practice Questions

Question 1

A fintech startup serves its single-page web app (index.html, app.js, styles.css) from an Amazon S3 bucket named fintech-ui-prod behind an Amazon CloudFront distribution whose default TTL is 86,400 seconds (min TTL 0), and objects are cached due to Cache-Control: max-age=86400. After a CI/CD job deploys build 2025.08.15 by overwriting the same object keys in S3, the team confirms the new artifacts in S3, but end users still see the old UI for hours through CloudFront. How should the developer ensure the updated assets are delivered immediately via CloudFront?

S3 Object Lock is a data protection/compliance feature that prevents deletion or modification of objects for a retention period (WORM). It does not control CloudFront caching or force edge locations to re-fetch updated objects. In fact, Object Lock could hinder overwriting objects depending on retention settings, making it unrelated and potentially harmful for this deployment scenario.

An S3 lifecycle policy to delete previous versions (or noncurrent versions) affects storage management and cost, not CloudFront edge caches. Even if old versions are removed from S3, CloudFront can still serve the previously cached object until its TTL expires or an invalidation occurs. Lifecycle policies also operate on a schedule, not as an immediate post-deploy cache refresh mechanism.

A CloudFront invalidation explicitly removes cached objects from edge locations before TTL expiry. After deploying the new build to S3 (overwriting the same keys), invalidating the changed paths (specific files or /*) ensures the next viewer request triggers CloudFront to fetch the latest objects from S3. This is the correct, immediate way to refresh CloudFront when filenames/paths remain constant and TTLs are long.

Changing the CloudFront origin bucket after each deployment is an anti-pattern. It adds operational complexity, risks misconfiguration, and does not align with standard CI/CD practices. While it could force CloudFront to fetch from a different origin, it’s unnecessary and disruptive compared to invalidations or versioned asset filenames. It also doesn’t scale well and can break permissions, OAC/OAI settings, and DNS behavior.

Question Analysis

Core Concept: This question tests Amazon CloudFront caching behavior in front of an Amazon S3 origin, specifically how TTLs and Cache-Control headers cause CloudFront edge locations to continue serving cached objects even after the origin objects are overwritten.

Why the Answer is Correct: CloudFront caches objects at edge locations based on the cache key (typically path + query strings/headers/cookies per behavior) and retains them until they expire (TTL) or are explicitly removed. Here, the default TTL is 86,400 seconds and the objects include Cache-Control: max-age=86400, so CloudFront is expected to serve the cached (old) index.html/app.js/styles.css for up to a day. Overwriting the same keys in S3 does not automatically purge CloudFront caches. Creating a CloudFront invalidation for the updated paths (e.g., /index.html, /app.js, /styles.css, or /*) forces CloudFront to evict those cached objects so the next request fetches the new versions from S3 immediately.

Key AWS Features / Best Practices: CloudFront invalidations are the standard mechanism to remove cached content before TTL expiry. A common deployment best practice for SPAs is also "versioned assets" (e.g., app.20250815.js) with long TTLs and a shorter TTL for index.html, but given the question's requirement for immediate delivery after overwriting keys, invalidation is the direct fix. Min TTL 0 allows CloudFront to honor lower TTLs, but the current Cache-Control still sets a 1-day max-age.

Common Misconceptions: Teams often assume that updating S3 objects automatically updates CloudFront. It does not; CloudFront is a separate cache layer. Another misconception is that deleting old S3 versions or changing bucket settings affects what CloudFront already cached; it won't until the cache entry expires or is invalidated.

Exam Tips: When you see "users still get old content for hours" with CloudFront + long TTL/Cache-Control and "same object keys overwritten," the exam pattern is: either (1) invalidate paths, or (2) use cache-busting versioned filenames. If the question asks for "immediately" and doesn't mention changing filenames, choose invalidation.
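The invalidation call can be sketched with the AWS SDK for Python; the distribution ID below is a placeholder, and a unique CallerReference keeps retried deployments from being collapsed into one request:

```python
import time

def invalidation_batch(paths):
    # CloudFront requires Quantity to equal the number of paths, plus a
    # CallerReference that is unique per invalidation request.
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": f"deploy-{int(time.time())}",
    }

# With boto3 available (distribution ID is a placeholder):
# import boto3
# cloudfront = boto3.client("cloudfront")
# cloudfront.create_invalidation(
#     DistributionId="EDFDVBD6EXAMPLE",
#     InvalidationBatch=invalidation_batch(["/index.html", "/app.js", "/styles.css"]),
# )
```

Invalidating the three changed files (or simply /*) after the CI/CD job finishes is typically a one-line addition to the pipeline.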

Question 2

A media analytics company runs a Node.js API on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer; traffic fluctuates from 1,000 requests per second at night to 12,000 requests per second during promotions, causing CPU spikes and occasional memory pressure. The engineering team must gather per-instance, 1-minute OS-level metrics (including memory utilization, swap usage, disk I/O, and file system utilization) within the next two weeks to right-size the fleet. The team also needs to monitor custom application metrics such as cacheHitRate and queueDepth emitted by the service, without making significant code changes. Which solution will meet these requirements?

Incorrect. AWS X-Ray focuses on distributed tracing (latency breakdowns, service maps, traces/segments) and can provide insights into request paths and downstream calls. It does not collect OS-level metrics like memory utilization, swap usage, filesystem utilization, or disk I/O at 1-minute granularity. X-Ray also is not the standard mechanism for ingesting arbitrary custom metrics such as cacheHitRate and queueDepth into CloudWatch.

Correct. The Amazon CloudWatch agent can be installed on EC2 instances to publish 1-minute OS-level metrics that are not available by default (memory, swap, disk, filesystem). It also supports collecting custom application metrics via StatsD or collectd, enabling the service to emit cacheHitRate and queueDepth with minimal changes (often configuration-only if a metrics library already exists). This meets the two-week timeline and right-sizing needs.

Incorrect. Modifying the application to publish metrics via the AWS SDK could work for custom application metrics, but it requires code changes and does not inherently provide OS-level metrics like memory, swap, and filesystem utilization. You would still need an agent or additional host instrumentation to gather those OS metrics. This approach increases development effort and risk compared to using the CloudWatch agent’s built-in capabilities.

Incorrect. AWS CloudTrail records AWS API calls for governance, compliance, and auditing (who did what, when, from where). It does not capture per-instance OS performance metrics or custom application metrics. CloudTrail logs are useful for security investigations and change tracking, not for right-sizing based on CPU/memory/disk utilization or for monitoring application-level KPIs like cacheHitRate and queueDepth.

Question Analysis

Core Concept: This question tests Amazon CloudWatch observability for EC2-based workloads: collecting OS-level metrics beyond default EC2 metrics, and ingesting custom application metrics with minimal code changes.

Why the Answer is Correct: By default, EC2 publishes basic CloudWatch metrics (CPUUtilization, NetworkIn/Out, DiskRead/Write ops for some instance types/EBS), but it does not publish memory utilization, swap usage, file system utilization, or detailed disk I/O at the OS level. The Amazon CloudWatch agent is designed to run on instances and push these additional system-level metrics to CloudWatch at a configurable interval (including 1-minute). It also supports collecting custom metrics via StatsD and collectd, which allows the Node.js service to emit metrics (e.g., cacheHitRate, queueDepth) to a local daemon/UDP endpoint with minimal or no significant code changes (often just configuration of an existing metrics library).

Key AWS Features: The CloudWatch agent supports a "metrics" section for memory, swap, disk, and filesystem, and can be configured for 60-second collection. It can be deployed uniformly across an Auto Scaling group using user data, a launch template, or AWS Systems Manager. Custom metrics can be ingested through StatsD/collectd integration, enabling standardized metric namespaces, dimensions (e.g., AutoScalingGroupName, InstanceId), and CloudWatch Alarms/Dashboards for right-sizing decisions.

Common Misconceptions: X-Ray is for distributed tracing and request-level performance analysis, not OS metrics collection. CloudTrail is for API auditing, not performance telemetry. Writing custom code with the AWS SDK can publish custom metrics, but it does not solve OS-level memory/filesystem metrics without additional host instrumentation and creates unnecessary development effort and risk given the two-week timeline.

Exam Tips: When you see requirements like memory, swap, disk, and filesystem utilization on EC2, the expected answer is "CloudWatch agent" (or sometimes SSM + CloudWatch agent). For custom metrics with minimal code changes, look for StatsD/collectd support or embedded metric format patterns; avoid X-Ray/CloudTrail unless the question is explicitly about tracing or auditing.
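A sketch of what the agent's metrics section might look like for this scenario (measurement lists are trimmed to the metrics named in the question; :8125 is the conventional StatsD port):

```json
{
  "metrics": {
    "metrics_collected": {
      "mem":    { "measurement": ["mem_used_percent"], "metrics_collection_interval": 60 },
      "swap":   { "measurement": ["swap_used_percent"], "metrics_collection_interval": 60 },
      "disk":   { "measurement": ["used_percent"], "resources": ["*"], "metrics_collection_interval": 60 },
      "diskio": { "measurement": ["io_time", "read_bytes", "write_bytes"], "metrics_collection_interval": 60 },
      "statsd": { "service_address": ":8125" }
    }
  }
}
```

The Node.js service can then send cacheHitRate and queueDepth to the local StatsD endpoint through an existing metrics library, which is typically a configuration-only change.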

Question 3

A telemedicine provider hosts a latency-sensitive appointment API on AWS Elastic Beanstalk (Amazon Linux 2, load balanced with Auto Scaling) and needs to deploy a new version with zero user-visible downtime while routing exactly 25% of incoming requests to the new version and 75% to the current version for a 15-minute evaluation window before promoting or rolling back; which Elastic Beanstalk deployment policy meets these requirements?

Rolling deployments update instances in batches, keeping some capacity serving traffic while others are updated. This can reduce downtime, but it does not provide an exact, controlled percentage split (25%/75%) between two versions for a fixed evaluation window. During rolling, traffic goes to a mix of updated and non-updated instances based on batch progression, not a stable canary percentage.

Traffic-splitting deployments are purpose-built for canary-style releases in Elastic Beanstalk. They run the new version alongside the old version and use the environment load balancer to route a specified percentage of requests (e.g., 25%) to the new version for a defined evaluation period (e.g., 15 minutes). After evaluation, you can promote to 100% or roll back with no user-visible downtime.

In-place deployments update the existing instances directly (all at once or in a way that can temporarily reduce capacity). This approach can cause brief interruptions or performance degradation, especially for latency-sensitive APIs, and it does not support a controlled 25%/75% traffic split between versions. It’s generally less safe than immutable or traffic-splitting for production releases.

Immutable deployments create a new set of instances with the new version, verify health, and then switch over, which greatly reduces deployment risk and can achieve near-zero downtime. However, immutable is primarily an all-or-nothing cutover model and does not natively provide an exact 25%/75% request routing split for a timed evaluation window. That requirement maps directly to traffic-splitting.

Question Analysis

Core Concept: This question tests AWS Elastic Beanstalk deployment policies for achieving zero-downtime releases with controlled, percentage-based request shifting (a canary-style evaluation) in a load-balanced, Auto Scaling environment.

Why the Answer is Correct: Elastic Beanstalk's traffic-splitting deployment policy is designed specifically to route a defined percentage of incoming requests to a new application version while the rest continues to go to the existing version. It supports an evaluation period (here, 15 minutes) during which you can monitor metrics (latency, 5xx errors, custom health checks) and then either promote the new version to receive 100% of traffic or roll back. This meets both requirements: (1) zero user-visible downtime (because both versions run concurrently behind the load balancer) and (2) exact 25%/75% traffic distribution during the evaluation window.

Key AWS Features / Configurations: Traffic-splitting deployments work with load-balanced Elastic Beanstalk environments and create a parallel set of instances for the new version. The environment's load balancer then splits traffic between the old and new instance groups according to the configured percentage. After the evaluation time, Elastic Beanstalk can complete the deployment (shift all traffic) or you can abort/roll back. This aligns with Well-Architected reliability and operational excellence principles by reducing blast radius and enabling safer releases.

Common Misconceptions: Many candidates confuse immutable deployments with traffic shifting. Immutable provides safer deployments by launching new instances and swapping them in, but it does not natively guarantee an exact 25%/75% request split for a timed evaluation window. Rolling and in-place can reduce downtime, but they update instances sequentially or directly, and do not provide precise, controlled canary traffic percentages.

Exam Tips: When you see "route X% of requests to the new version" and "evaluate for N minutes then promote/rollback," think canary/traffic shifting. In Elastic Beanstalk, the keyword is "traffic-splitting deployment." If the question instead emphasizes "new instances first, then swap" without percentage routing, that points to immutable.
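In a load-balanced environment, the policy and split are configured through option settings, for example in an .ebextensions config file. A sketch with the scenario's values (EvaluationTime is expressed in minutes):

```yaml
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: TrafficSplitting
  aws:elasticbeanstalk:trafficsplitting:
    NewVersionPercent: 25
    EvaluationTime: 15
```

After the 15-minute window, Elastic Beanstalk shifts all traffic to the new version unless health checks fail or the deployment is aborted.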

Question 4

A developer must migrate a high-stakes online examination platform to AWS to prepare for a 5x spike in concurrent users during two 90-minute exam windows each day. The application currently runs on two on-premises servers: one application/API server and one MySQL database server. The application server renders pages and stores user session objects in process memory. At peak (~15,000 concurrent users), the 16 GB RAM on the application server reaches 95% utilization, and median response time rises from 120 ms to over 1,200 ms. Profiling shows that most of the memory increase and slowdown is caused by managing additional user sessions. For the migration, the developer will use Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer across two Availability Zones. Which additional set of changes should the developer implement to improve the application's performance and scalability?

Storing both session data and application data in MySQL on EC2 is a poor fit. Sessions are high-churn and can generate heavy read/write traffic, locking, and connection overhead, which can worsen latency during spikes. Running MySQL on EC2 also increases operational burden (patching, backups, HA design) and does not inherently provide Multi-AZ failover. This option does not address stateless scaling effectively.

ElastiCache for Memcached is well-suited for session storage because it provides very low latency and offloads session memory from the application instances, enabling true horizontal scaling behind an ALB. Using RDS for MySQL for application data provides durability, backups, and high availability (Multi-AZ) with less operational overhead than self-managed MySQL. This combination directly targets the identified bottleneck and improves scalability.

Using Memcached for both session data and application data is risky because Memcached is an in-memory cache without persistence and is not a system of record. Cache node failures or evictions can cause data loss. While caching some application reads can help performance, the primary database must remain a durable store (e.g., RDS/Aurora). This option confuses caching with durable storage and is not appropriate for critical exam platform data.

EC2 instance store is local, ephemeral storage that is not shared across instances and is lost when an instance stops or terminates. In an Auto Scaling environment, instances are frequently replaced, so session data would be lost and users could be logged out or see inconsistent behavior unless you also enforce sticky sessions (which reduces scalability). It does not solve cross-instance session sharing and is operationally fragile.

Question Analysis

Core Concept: This question tests horizontal scalability and performance optimization by removing server-local session state ("sticky state") and using managed services for caching and databases. When an application is placed behind an Application Load Balancer (ALB) and scaled with an Auto Scaling group (ASG), storing sessions in process memory prevents effective scaling because any instance can receive any request.

Why the Answer is Correct: The bottleneck is session management consuming RAM and increasing latency at peak concurrency. Moving sessions out of the application process into a shared, low-latency, in-memory store allows EC2 instances to remain stateless and scale out cleanly. Amazon ElastiCache for Memcached is purpose-built for ephemeral, high-throughput caching and session storage. Separately, application data should be stored durably in a managed relational database; Amazon RDS for MySQL provides Multi-AZ availability, automated backups, patching, and predictable performance compared to self-managed MySQL on EC2.

Key AWS Features:
- ElastiCache for Memcached: sub-millisecond latency, horizontal scaling by adding nodes, simple key/value access pattern ideal for sessions, and avoids per-instance memory pressure.
- RDS for MySQL: managed MySQL with automated backups, maintenance, monitoring, and Multi-AZ failover for high availability.
- ALB + ASG across two AZs: works best with stateless app tiers; session externalization is a standard requirement.

Common Misconceptions:
- "Put sessions in MySQL" seems straightforward but shifts a high-churn, high-QPS workload to a relational database, increasing contention and latency.
- "Store everything in cache" ignores durability and consistency requirements for application data.
- "Use instance store" is fast but not shared across instances and is wiped on stop/terminate; it is unsuitable for sessions in a scaled, load-balanced fleet.

Exam Tips: When you see ALB + Auto Scaling and in-memory sessions causing memory pressure, the expected fix is to make the app stateless by externalizing sessions to ElastiCache (Memcached/Redis). Use RDS (or Aurora) for durable relational data. Prefer managed services for HA and operational simplicity, especially for high-stakes, spiky workloads.
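Externalizing sessions can be sketched as a thin wrapper around a Memcached client (the client here follows pymemcache's set/get shape; the key prefix and TTL are illustrative choices, not part of the scenario):

```python
import json

SESSION_TTL = 1800  # seconds; illustrative, roughly one exam sitting

def save_session(cache, session_id, data, ttl=SESSION_TTL):
    # Serialize the session and store it under a namespaced key so any
    # instance behind the ALB can read it; Memcached evicts it after ttl.
    cache.set(f"session:{session_id}", json.dumps(data), expire=ttl)

def load_session(cache, session_id):
    # Returns None when the session is missing or has expired.
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw is not None else None
```

Because every instance reads and writes the same shared store, the Auto Scaling group can add or replace instances freely without logging users out.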

Question 5

A health analytics platform exposes an Amazon API Gateway HTTP API to ingest workout session events from partner wearable devices. The API invokes an AWS Lambda function that stores the events in an Amazon DynamoDB table; the company plans to onboard 15 additional device partners within 90 days, and some partners will require their own dedicated Lambda functions to receive events. The company has created an Amazon S3 bucket named athlete-events-archive in us-east-1 with object versioning enabled and needs to persist all session records and any updates to this bucket for future analysis with the least development effort. What should a developer do to ensure that all events and updates are stored in Amazon S3 with minimal development work?

This approach adds an extra API and requires modifying the original Lambda to call another endpoint for updates. It increases coupling, adds latency and failure modes, and does not automatically capture all DynamoDB updates (especially if future partner-specific Lambdas write directly to DynamoDB). It is also more development effort than using DynamoDB Streams, because you must maintain additional API Gateway/Lambda integration logic.

Kinesis Data Streams alone does not directly deliver data to S3; typically you would use Kinesis Data Firehose (or a consumer application) to persist to S3. More importantly, this requires modifying the ingest Lambda(s) to publish to Kinesis, and as new partners are onboarded with dedicated Lambdas, each producer must be updated and managed. This is more work than capturing changes at the DynamoDB layer.

DynamoDB Streams captures every insert and update made to the table, independent of which Lambda function or partner produced the write. A single Lambda subscribed via an event source mapping can process stream records and write them to the versioned S3 bucket, ensuring both initial records and subsequent updates are archived. This provides minimal development effort and clean separation between ingestion and archival.

Publishing to SNS from the ingest Lambda requires code changes in the producer(s), and future partner-specific Lambdas would also need to implement the SNS publish logic to ensure complete coverage. SNS is a fanout notification service, not a database change capture mechanism, so it won’t automatically capture updates that occur through other paths. It also introduces message size/format considerations and additional operational complexity.

Question Analysis

Core Concept: This question tests event persistence and change data capture (CDC) patterns on AWS with minimal application changes. The key services are Amazon DynamoDB Streams (to capture inserts/updates) and AWS Lambda (to process stream records) to archive data into Amazon S3.

Why the Answer is Correct: The system already writes all workout session events into a DynamoDB table. The requirement is to persist all session records and any updates to an S3 bucket (with versioning enabled) with the least development effort, even as more partners and potentially more Lambda ingest functions are added. Enabling DynamoDB Streams captures every item-level change (INSERT, MODIFY, REMOVE) regardless of which Lambda function (or partner-specific function) performed the write. A single stream-processing Lambda can then transform each stream record and write it to the athlete-events-archive S3 bucket. This decouples ingestion from archiving and avoids modifying every current/future ingest Lambda.

Key AWS Features: DynamoDB Streams provides a time-ordered sequence of item changes per partition key and can include NEW_IMAGE and/or OLD_IMAGE to capture full item state before/after updates. A Lambda event source mapping handles polling, batching, retries, and scaling. S3 versioning complements this by retaining multiple versions of archived objects when updates occur. Common implementations write one object per change event (e.g., prefix by date/partner/table/PK) or maintain a "latest" key while relying on S3 versions.

Common Misconceptions: It may seem simpler to publish to SNS or Kinesis from the ingest Lambda, but that requires changing each ingest function (and future partner functions), increasing development and operational overhead. Also, Kinesis-to-S3 typically uses Kinesis Data Firehose (not Data Streams alone), and SNS is not designed for durable replay/CDC from the database.

Exam Tips: When you see "store all records and updates" from DynamoDB with minimal code changes, think DynamoDB Streams + Lambda. If the requirement is "archive to S3," Streams is a classic CDC trigger. Remember: Data Streams doesn't natively write to S3 without additional components (often Firehose), and pushing events from producers increases coupling and maintenance as producers multiply.
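A minimal sketch of the stream-processing function, assuming one S3 object per change event keyed by event ID (the key layout is illustrative; the S3 client is injectable so the handler can be exercised without AWS, and would default to boto3 inside Lambda):

```python
import json

BUCKET = "athlete-events-archive"

def handler(event, context=None, s3=None):
    # s3 is injected for local testing; inside Lambda it defaults to
    # boto3.client("s3"), created lazily so the module imports without boto3.
    if s3 is None:
        import boto3
        s3 = boto3.client("s3")
    archived = 0
    for record in event.get("Records", []):
        # One object per INSERT/MODIFY/REMOVE; S3 versioning additionally
        # retains overwrites if the same key is ever reused.
        key = f"changes/{record['eventName']}/{record['eventID']}.json"
        body = json.dumps(record["dynamodb"].get("NewImage", {}))
        s3.put_object(Bucket=BUCKET, Key=key, Body=body)
        archived += 1
    return archived
```

Because the stream fires for writes from every ingest function, new partner-specific Lambdas are archived automatically with no changes to this code.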


Question 6

An analytics company must deploy all Amazon Redshift provisioned clusters (one per environment: dev, test, prod) by using AWS CloudFormation templates executed by an AWS CodePipeline CI/CD workflow; for each deployment, the admin user password must be automatically generated as part of the stack creation, be exactly 32 characters long with at least 1 uppercase letter, 1 lowercase letter, 1 digit, and 1 special character, exclude the characters '"' and '@', and must never appear in plaintext in templates, build logs, or CloudFormation events; which solution meets these requirements with the least development effort?

A Lambda-backed custom resource can generate a password, but returning it in the custom resource response data increases exposure risk (it may appear in CloudFormation events or be retrievable via stack/resource inspection depending on implementation). It also requires writing and maintaining Lambda code, IAM permissions, and handling updates/deletes safely. This is not the least development effort compared to native Secrets Manager generation.

Generating the password in CodeBuild with aws secretsmanager get-random-password occurs outside CloudFormation stack creation, violating the requirement that it be generated as part of stack creation. Passing it as a parameter (even with NoEcho) still risks leakage in CI/CD logs, environment variables, or debugging output. NoEcho reduces display in CloudFormation, but does not guarantee it never appears in plaintext across the pipeline.

This approach adds a custom resource plus Secrets Manager, which is more complex than necessary. Although storing the password in Secrets Manager and using a dynamic reference is a good pattern, the custom Lambda generation is redundant because Secrets Manager can generate compliant passwords natively. It increases development effort and operational surface area (Lambda code, permissions, retries, idempotency).

This is the best fit: CloudFormation creates an AWS::SecretsManager::Secret and uses GenerateSecretString with PasswordLength=32, RequireEachIncludedType=true, and ExcludeCharacters including '"' and '@'. The Redshift cluster references the secret via a Secrets Manager dynamic reference, preventing plaintext exposure in templates and avoiding pipeline-side handling. It meets all constraints with minimal custom development.

Question Analysis

Core Concept: This question tests secure secret generation and consumption in infrastructure as code (IaC) using AWS CloudFormation, specifically leveraging AWS Secrets Manager and CloudFormation dynamic references to prevent secret exposure in templates, logs, and events.

Why the Answer is Correct: Option D uses the native CloudFormation resource AWS::SecretsManager::Secret with GenerateSecretString to generate the Redshift admin password during stack creation. This meets the "automatically generated as part of the stack creation" requirement without adding custom code. The password policy requirements are satisfied by setting PasswordLength to 32, setting RequireEachIncludedType to true (ensures at least one uppercase, one lowercase, one digit, and one special character), and using ExcludeCharacters to omit '"' and '@'. The Redshift cluster's MasterUserPassword is then set using a Secrets Manager dynamic reference (e.g., {{resolve:secretsmanager:secret-id:SecretString:password}}). Dynamic references are resolved at deployment time and are not stored in plaintext in the template; additionally, CloudFormation treats these as sensitive and does not display the resolved value in stack events.

Key AWS Features:
- AWS::SecretsManager::Secret GenerateSecretString: managed password generation with length and character constraints.
- RequireEachIncludedType: enforces complexity requirements.
- ExcludeCharacters: prevents disallowed characters.
- CloudFormation dynamic references to Secrets Manager: injects secrets at deploy time without hardcoding.
- Aligns with the AWS Well-Architected Security pillar: protect credentials, automate rotation readiness, and minimize human handling.

Common Misconceptions:
- NoEcho parameters (Option B) hide values in some CloudFormation outputs, but the secret still traverses the pipeline/build environment and can leak via build logs, environment dumps, or parameter overrides; it also isn't "generated as part of stack creation."
- Custom resources (Options A/C) can work but add Lambda code, IAM permissions, error handling, and lifecycle complexity; returning secrets in custom resource responses risks exposure paths and is more development effort than a native resource.

Exam Tips: When requirements include "must never appear in plaintext" and "least development effort," prefer managed services and native CloudFormation features: Secrets Manager + dynamic references. Remember that generating secrets in CI/CD steps is usually considered outside stack creation and increases leakage risk compared to in-stack generation and retrieval via dynamic references.
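A trimmed sketch of the template (only the properties relevant to the password requirement are shown; cluster sizing, names, and the username are placeholders):

```yaml
Resources:
  RedshiftAdminSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
        RequireEachIncludedType: true
        ExcludeCharacters: '"@'

  RedshiftCluster:
    Type: AWS::Redshift::Cluster
    Properties:
      ClusterType: single-node
      NodeType: ra3.xlplus
      DBName: analytics
      MasterUsername: admin
      MasterUserPassword: !Sub '{{resolve:secretsmanager:${RedshiftAdminSecret}:SecretString:password}}'
```

The same template can be parameterized per environment (dev, test, prod) and executed by CodePipeline, with each stack generating its own secret.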

Question 7

A retail analytics team rehosted a nightly Node.js job as an AWS Lambda function that makes a sequence of calls to a partner invoicing REST API to fetch the previous month’s data and then generates billing summaries. The partner has introduced strict rate limits of 75 requests per minute and 8,000 requests per day; exceeding either returns HTTP 429, and the response headers include the per-minute and per-day quotas; the monthly pull now needs about 11,500 API calls and may spill over into a second day. What is the MOST operationally efficient way to refactor the serverless design to comply with these limits?

Step Functions can add Wait states and retry logic, but monitoring for 429s is reactive: you already exceeded the partner’s limits. It also doesn’t naturally handle the 8,000/day quota or provide a durable backlog across days without additional state management. Step Functions is useful for orchestration, but it’s not the most operationally efficient mechanism for sustained rate shaping at minute/day boundaries.

Amazon SQS is the most operationally efficient choice because it decouples the monthly pull into many small, durable work items that can be processed over time instead of in a single long-running Lambda invocation. Lambda can consume from the queue with limited concurrency and small batches, which reduces the chance of exceeding the partner API’s per-minute rate and makes it straightforward to pause processing when the daily quota is reached. Because the queue retains unprocessed messages, the remaining API calls can safely continue the next day without losing state or requiring complex orchestration. This pattern is a standard AWS serverless approach for protecting downstream systems with strict throughput constraints.

CloudWatch Logs metric filters and alarms are for observability and alerting, not precise real-time control. Alarms evaluate on periods and can lag; they also cannot directly “stop” a currently running Lambda invocation (Lambda has no native kill switch via CloudWatch). Even if you detected overage, you would still need a throttling mechanism to prevent further calls, making this operationally inefficient and unreliable.

Kinesis Data Firehose is designed to buffer and deliver streaming data to destinations like S3, Redshift, or OpenSearch. It does not orchestrate outbound REST API calls or enforce external rate limits. Writing requests to S3 and triggering Lambda from S3 events adds unnecessary latency and complexity and still doesn’t provide a clean, deterministic way to keep requests under 75/min and 8,000/day.

Question Analysis

Core Concept: This question is about choosing the most operationally efficient serverless pattern to handle a large amount of work against an external API that enforces strict rate limits. The best design decouples work generation from work execution so requests can be spread over time and safely continue into the next day when the daily quota is reached.

Why Correct: Amazon SQS is the best fit because it provides durable buffering for individual API work items, allowing the workload to be processed gradually instead of in one long Lambda run. A Lambda consumer can then be configured with low concurrency and small batches, and the application can add simple pacing or pause/resume behavior based on the partner's quota headers so the system stays within both the per-minute and per-day limits. This is more operationally efficient than building a reactive orchestration around failures.

Key Features: SQS provides durable queueing and natural backlog handling when work must spill into a second day. Lambda event source mappings, reserved concurrency, and small batch sizes help limit parallelism, while scheduled enable/disable of consumption or lightweight quota tracking can enforce the daily cap. This pattern is resilient, scalable, and aligned with common AWS serverless integration designs.

Common Misconceptions: A Step Functions Wait state after receiving HTTP 429 is reactive and means the system has already violated the partner's limit. CloudWatch alarms are not a precise runtime throttling mechanism for stopping in-flight Lambda work. Kinesis Data Firehose is for data delivery pipelines, not controlled outbound API invocation.

Exam Tips: When an external dependency has strict quotas and the workload may span multiple days, prefer a queue-based design that buffers work and lets consumers process at a controlled pace. On AWS exams, SQS plus Lambda is usually the most operationally efficient answer for smoothing and deferring work, even if some lightweight application logic is still needed for exact quota compliance.
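As a concrete illustration of the pacing logic described above, here is a minimal Python sketch of a dual-window quota tracker that an SQS-driven Lambda consumer could consult before each partner API call. The class name `QuotaPacer` and the injectable `clock` parameter are hypothetical names introduced for this example; the 75/minute and 8,000/day limits come from the scenario.

```python
import time


class QuotaPacer:
    """Tracks per-minute and per-day call budgets for an external API.

    Hypothetical helper illustrating the pacing a queue consumer could
    apply before each outbound call (scenario limits: 75 requests/minute,
    8,000 requests/day). A `clock` callable is injected so the logic can
    be exercised without waiting on real time.
    """

    def __init__(self, per_minute=75, per_day=8000, clock=time.time):
        self.per_minute = per_minute
        self.per_day = per_day
        self.clock = clock
        self.minute_window = []   # timestamps of calls in the last 60 s
        self.day_count = 0
        self.day_start = clock()

    def try_acquire(self):
        """Return True if a call may be made now; False to defer it."""
        now = self.clock()
        if now - self.day_start >= 86400:      # new day: reset daily budget
            self.day_count = 0
            self.day_start = now
        # Drop timestamps older than the rolling 60-second window.
        self.minute_window = [t for t in self.minute_window if now - t < 60]
        if self.day_count >= self.per_day:
            return False    # daily quota exhausted; leave message in queue
        if len(self.minute_window) >= self.per_minute:
            return False    # per-minute budget used; retry after a delay
        self.minute_window.append(now)
        self.day_count += 1
        return True
```

When `try_acquire` returns False, the consumer can return the message to the queue (or rely on visibility timeout) so the backlog naturally spills into the next day, which is exactly the behavior the queue-based design enables.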

Question 8

A developer has an AWS Lambda function, triggered by Amazon DynamoDB Streams, that must publish about 200 audit messages per minute to an Amazon SNS topic in the same account and Region by using the AWS SDK Publish API with no credentials specified in the code. After deployment, every publish attempt fails, and CloudWatch Logs show AccessDeniedException (HTTP 403) for sns:Publish. How should the developer resolve this issue?

An interface VPC endpoint for SNS can be required if the Lambda function runs inside a VPC without NAT/egress and needs private connectivity to SNS. However, the symptom here is AccessDeniedException (HTTP 403), which indicates the request reached SNS and was authenticated but not authorized. A VPC endpoint would not resolve missing sns:Publish permissions; it addresses network path, not IAM authorization.

Lambda functions should not use developer long-term credentials. When no credentials are specified in the code, the AWS SDK uses the Lambda execution role’s temporary credentials automatically. Changing a developer’s IAM user permissions would not affect the runtime identity of the Lambda function in AWS. This option also violates best practices around credential management and least privilege.

The Lambda execution role is the identity used by the AWS SDK when the code does not specify credentials. The 403 AccessDenied for sns:Publish means this role lacks an IAM policy allowing sns:Publish on the SNS topic ARN. Adding an identity-based policy statement to the execution role (scoped to the target topic) is the correct and standard fix for this authorization failure.

A resource-based policy on a Lambda function controls who can invoke the function (for example, allowing SNS, EventBridge, or another account to invoke it). It does not grant the function permission to call SNS. Permissions for outbound calls from Lambda are governed by the Lambda execution role’s identity-based policies, so adding a Lambda resource policy would not fix sns:Publish AccessDenied.

Question Analysis

Core Concept: This question tests AWS IAM authorization for AWS Lambda when calling other AWS services (Amazon SNS) using the AWS SDK without embedding credentials. In Lambda, the SDK automatically obtains temporary credentials from the function's execution role (an IAM role assumed by the Lambda service).

Why the Answer is Correct: The AccessDeniedException (HTTP 403) for sns:Publish indicates the caller identity (the Lambda execution role session) is authenticated but not authorized to perform sns:Publish on the SNS topic. Because the code specifies no credentials, the only identity in play is the Lambda execution role. Therefore, the fix is to update the Lambda execution role's identity-based policy to allow sns:Publish on the specific topic ARN (least privilege). Once added, the SDK calls will succeed without code changes.

Key AWS Features:
- Lambda execution role: provides temporary credentials via AWS STS; the SDK uses these automatically.
- IAM identity-based policies: grant permissions to the role (e.g., Action: sns:Publish, Resource: the topic ARN).
- Least privilege: scope permissions to the single topic rather than "*".
- DynamoDB Streams trigger: unrelated to SNS authorization; it only defines the event source mapping and the permissions required to read the stream.

Common Misconceptions:
- Networking vs. permissions: a VPC endpoint (option A) addresses connectivity to SNS from inside a VPC, but a 403 AccessDenied is an authorization failure, not a network failure. Network issues typically show timeouts, DNS errors, or connection failures.
- "Add developer credentials" (option B): Lambda should not rely on long-term developer credentials; best practice is to use the execution role. Also, the error occurs at runtime in AWS, not on a developer machine.
- Resource-based policy on Lambda (option D): Lambda resource policies control who or what can invoke the function (or access function URLs), not what the function can call. Outbound permissions are controlled by the execution role.

Exam Tips: When an AWS SDK call from Lambda fails with AccessDenied, first identify the principal: it is almost always the Lambda execution role. Then decide whether you need an identity-based policy on the role (most common) or a resource-based policy on the target service (used for cross-account access or specific services that support it). Match 403 to IAM authorization, not VPC routing.
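A minimal identity-based policy statement of the kind described above might look like the following sketch. The Region, account ID, and topic name in the ARN are placeholders, not values from the scenario:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublishToAuditTopic",
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:audit-topic"
    }
  ]
}
```

Attaching this statement to the Lambda execution role (rather than to a user or to the function's resource policy) is what resolves the 403, because the role is the identity the SDK uses at runtime.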

Question 9
(Select 2)

A nonprofit organization is launching a temporary volunteer sign-up microsite for the next 4 months and expects up to 10,000 submissions. The site must accept HTTPS POST /signup requests and store submitted mobile phone numbers in an Amazon DynamoDB table named VolunteerContacts. A developer has implemented an AWS Lambda function that writes the data to DynamoDB and will deploy it with the AWS Serverless Application Model (AWS SAM). The developer must expose this Lambda function over HTTP with the least additional configuration. Which solutions will meet these requirements? (Choose two.)

Correct. Lambda function URLs provide a built-in HTTPS endpoint directly on a Lambda function with minimal setup. They support HTTP methods like POST and can be deployed easily alongside the function. This is well-suited for a temporary microsite needing a simple /signup endpoint. Security can be IAM-based or open (with careful controls), and logging/metrics integrate with CloudWatch.

Incorrect. A Gateway Load Balancer is designed to deploy and scale third-party virtual network appliances (firewalls, IDS/IPS) transparently using GENEVE. It is not intended to expose Lambda functions as HTTP endpoints. Using GWLB would add unnecessary network complexity and does not directly satisfy the requirement to accept HTTPS POST requests to a Lambda-backed endpoint.

Incorrect for “least additional configuration.” An NLB can target Lambda, but it operates at Layer 4 and is typically used for TCP/UDP/TLS pass-through and high-performance networking patterns. It does not provide API-style features (routing by path, request validation, auth options like JWT authorizers) and generally requires more setup than Function URLs or API Gateway for a simple POST /signup endpoint.

Incorrect. AWS Global Accelerator improves availability and performance by providing static anycast IPs and routing traffic to regional endpoints (like ALB, NLB, or API Gateway). It does not directly expose a Lambda function as an HTTP endpoint by itself. You would still need an underlying endpoint (API Gateway, Function URL, or load balancer), so it adds configuration rather than minimizing it.

Correct. Amazon API Gateway is a standard way to expose Lambda over HTTPS. With AWS SAM, you can define an API event (often an HTTP API for simplicity) to route POST /signup to the Lambda function with minimal template configuration. API Gateway also offers throttling, auth options, request/response controls, and easy integration with WAF and custom domains if needed.

Question Analysis

Core Concept: This question tests the simplest ways to expose an AWS Lambda function over HTTPS for an HTTP POST endpoint in a serverless architecture, especially when deploying with AWS SAM. The key services are Lambda function URLs and Amazon API Gateway, both of which can front Lambda with minimal infrastructure.

Why the Answer is Correct: A (Lambda function URLs) directly attaches an HTTPS endpoint to a Lambda function. It requires very little configuration (no separate API service), supports POST requests, and is ideal for lightweight, temporary microsites. It can use IAM auth or no auth, and can be protected further with resource-based policies and optional CORS settings. E (Amazon API Gateway) is the classic, fully managed way to expose Lambda over HTTP/HTTPS. With SAM, you can define an API event source in the template and deploy quickly. API Gateway provides robust request handling, throttling, validation, custom domains, WAF integration, and detailed metrics, which are useful even for a temporary site.

Key AWS Features:
- Lambda function URLs: built-in HTTPS endpoint, simple configuration, support for streaming responses (in some modes), CloudWatch logs/metrics integration, and restriction via IAM/resource policies.
- API Gateway: REST API or HTTP API. HTTP API is typically the "least configuration" variant within API Gateway for simple Lambda proxying, with lower cost and latency.
- Both support TLS, stages, throttling, and integration with AWS WAF for additional protection.
- For the DynamoDB writes, the Lambda execution role must allow dynamodb:PutItem (or the relevant write actions) on the VolunteerContacts table.

Common Misconceptions: Load balancers (NLB/GWLB) and Global Accelerator are often mistaken for generic ways to "put HTTPS in front," but they are not the simplest or most direct options for Lambda. NLB can integrate with Lambda, but it is primarily for TCP/UDP/TLS pass-through patterns and is not an HTTP API management layer. GWLB is for inserting network appliances, not exposing application endpoints. Global Accelerator improves global routing to existing endpoints; it does not itself create an HTTP endpoint for Lambda.

Exam Tips: When you see "expose Lambda over HTTP/HTTPS with the least additional configuration," think first of Lambda function URLs and API Gateway (often HTTP API). Choose API Gateway when you need API management features; choose function URLs for the simplest direct HTTPS endpoint. Eliminate options that are networking accelerators or appliance-insertion tools unless the question explicitly mentions those needs.
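To show how little template configuration each correct option needs, here is a hedged AWS SAM sketch combining both approaches in one function definition. The logical IDs, handler, and runtime are illustrative assumptions, not values from the scenario:

```yaml
# Illustrative SAM template fragment (resource names are assumptions).
Resources:
  SignupFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler          # hypothetical handler module/function
      Runtime: python3.12
      # Option A: built-in HTTPS endpoint directly on the function
      FunctionUrlConfig:
        AuthType: NONE              # open endpoint; restrict for production
      # Option E: HTTP API route for POST /signup
      Events:
        Signup:
          Type: HttpApi
          Properties:
            Path: /signup
            Method: POST
      # Write access to the DynamoDB table from the scenario
      Policies:
        - DynamoDBWritePolicy:
            TableName: VolunteerContacts
```

In practice you would pick one of the two endpoints; the fragment shows both only to contrast how each is declared. The DynamoDBWritePolicy SAM policy template grants the write actions (including dynamodb:PutItem) that the function needs on VolunteerContacts.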

Question 10

A medical imaging analytics startup processes zipped DICOM study archives and discards them after parsing. The archives are stored in an Amazon S3 bucket and average 1.3 GB each; none exceeds 1.9 GB. An AWS Lambda function is invoked once per archive, and the parsing is highly I/O bound, requiring 3–5 full reads of the same file. Which solution provides the most performance-optimized approach?

Incorrect. AWS Lambda does not support attaching an Amazon EBS volume to a function. EBS volumes are block storage for EC2 instances. If you need a mounted filesystem for Lambda, the supported option is Amazon EFS, but EFS is network-based and would not be as fast as local /tmp for repeated reads within a single invocation, especially when the data is discarded afterward.

Incorrect. You cannot attach an Elastic Network Adapter (ENA) to a Lambda function; Lambda’s networking is abstracted and managed by AWS. While Lambda can run in a VPC and benefit from improved networking over time, ENA is an EC2 construct. This option also doesn’t address the core optimization: avoiding 3–5 repeated full network reads from S3.

Correct. Increasing Lambda ephemeral storage to 2 GB allows the function to copy the 1.3–1.9 GB archive once from S3 into /tmp and then perform the 3–5 reads locally. This minimizes repeated S3 network transfers, reduces latency, and improves throughput for an I/O-bound parser. It’s the most direct and performance-optimized approach given the file size constraints.

Incorrect. Reading directly from S3 each time the parser needs the file causes 3–5 full downloads per invocation. Even if S3 is fast, repeated large transfers add significant latency and can become the dominant cost/time factor. This approach is simpler but not performance-optimized for repeated-read workloads; staging to /tmp is the standard optimization.

Question Analysis

Core Concept: This question tests AWS Lambda performance optimization for I/O-bound workloads that repeatedly read the same large object from Amazon S3. The key concept is minimizing repeated network I/O by staging data locally in Lambda's configurable ephemeral storage (/tmp).

Why the Answer is Correct: Because the parser performs 3–5 full reads of the same 1.3–1.9 GB archive, repeatedly streaming from S3 would incur multiple network transfers, higher latency, and variable throughput. The most performance-optimized approach is to download the archive once into local storage and then perform all subsequent reads from local disk. AWS Lambda supports increasing ephemeral storage up to 10,240 MB, so setting it to 2 GB (slightly above the 1.9 GB maximum) allows the function to copy the archive to /tmp and then read it multiple times at local speeds. This reduces S3 GET traffic, avoids repeated network bottlenecks, and typically yields the best end-to-end runtime for I/O-bound repeated-read patterns.

Key AWS Features:
- Lambda ephemeral storage (/tmp) is local to the execution environment and configurable from the 512 MB default up to 10,240 MB.
- Reuse of the same execution environment across invocations (warm starts) can help further, but the question's main win is within a single invocation: download once, read many times.
- S3 is highly scalable, but each full reread is still a full network transfer; local staging is a standard optimization for repeated access.

Common Misconceptions:
- "Read directly from S3 each time" sounds simpler, but it multiplies network I/O and can dominate runtime.
- "Attach EBS to Lambda" is not a Lambda capability; EBS attaches to EC2. For Lambda, the analogous persistent/shared storage is EFS, but that still introduces network file system latency and is not needed when the data is discarded after parsing.
- ENA is not something you manually attach to Lambda; Lambda networking is managed by AWS.

Exam Tips: When a Lambda function needs to access the same large data multiple times in one invocation, prefer downloading it once to /tmp (ephemeral storage) if it fits. Use EFS only when you need shared or persistent files across invocations or functions. Also remember that Lambda's ephemeral storage is configurable up to 10 GB, which is frequently tested in optimization scenarios.
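The download-once pattern can be sketched in a few lines of Python. The helper name `stage_once` and the injected `fetch` callable are hypothetical, chosen so the staging logic can be shown and tested without S3 access; in a real function, `fetch` would wrap a boto3 `download_file` call:

```python
import os


def stage_once(key, fetch, tmp_dir="/tmp"):
    """Download an object to local ephemeral storage once, then serve
    every later read from the local copy.

    `fetch(key, local_path)` is any callable that writes the object to
    `local_path` -- e.g. a wrapper around boto3's download_file. It is
    injected here so the staging logic is illustrated without AWS access.
    """
    local_path = os.path.join(tmp_dir, os.path.basename(key))
    if not os.path.exists(local_path):
        fetch(key, local_path)      # the single network transfer
    return local_path
```

The parser then opens the returned path for each of its 3–5 passes, so only the first access pays the network cost; every subsequent read runs at local-disk speed inside the same invocation.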

Success Stories (8)

전** — Nov 26, 2025

Study period: 1 month

I passed with a score of 793! I solved at least 30 questions a day. It's great that I could practice whenever I had a spare moment while out, haha.

김** — Nov 24, 2025

Study period: 2 months

The app's questions were very similar to the real exam, and the explanations helped me understand why the wrong answers were wrong.

********** — Nov 22, 2025

Study period: 1 month

Thank you very much, these questions are wonderful !!!

윤** — Nov 20, 2025

Study period: 2 months

I passed a month ago and am only now writing my review. The question composition was similar to the exam.

A****** — Nov 16, 2025

Study period: 2 months

I just passed the exam, and I can confidently say that this app was instrumental in helping me thoroughly review the exam material.

Other Practice Tests

Practice Test #1

65 Questions·130 min·Pass 720/1000

Practice Test #2

65 Questions·130 min·Pass 720/1000

Practice Test #3

65 Questions·130 min·Pass 720/1000

Practice Test #4

65 Questions·130 min·Pass 720/1000

Practice Test #5

65 Questions·130 min·Pass 720/1000
← View All AWS Certified Developer - Associate (DVA-C02) Questions

Start Practicing Now

Download Cloud Pass and start practicing all AWS Certified Developer - Associate (DVA-C02) exam questions.

Get it on Google Play · Download on the App Store

© Copyright 2026 Cloud Pass, All rights reserved.
