AWS Certified Advanced Networking - Specialty (ANS-C01)

276+ Practice Questions with AI-Verified Answers

Triple AI-Verified Answers & Explanations

Every AWS Certified Advanced Networking - Specialty (ANS-C01) answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

Verification models: GPT Pro, Claude Opus, Gemini Pro.

Exam Domains

- Network Design: 30%
- Network Implementation: 26%
- Network Management and Operation: 20%
- Network Security, Compliance, and Governance: 24%

Practice Questions

Question 1

A company has two AWS Direct Connect links to its on-premises data center. One Direct Connect link terminates in the us-east-1 Region, and the other Direct Connect link terminates in the af-south-1 Region. The company is using BGP to exchange routes with AWS. The company's on-premises environment needs to be configured to use the us-east-1 link as the primary path to AWS and the af-south-1 link as the secondary (backup) path. A network engineer must configure BGP on the on-premises router to ensure that the us-east-1 link is preferred for all traffic to AWS, and the af-south-1 link is used only if the primary link fails. The solution must use standard BGP attributes and AWS BGP community tags. How should a network engineer configure BGP to ensure that af-south-1 is used as a secondary link to AWS?

Incorrect. Although the local preference values are set properly for outbound traffic from on premises, the AWS communities are not. The backup af-south-1 link should carry the AWS low-preference community 7224:7100 so AWS de-prefers that path for return traffic; this option applies the community tags to the wrong links, so it does not cleanly establish af-south-1 as the AWS-side backup.

Correct. This option sets local preference to 200 on the us-east-1 BGP peer and 50 on the af-south-1 peer, so the on-premises router prefers us-east-1 for traffic going to AWS. It also tags routes advertised over the af-south-1 connection with the AWS low-preference community 7224:7100, which lowers the local preference AWS assigns to routes learned over that link and makes it the backup path for return traffic. Together, these settings create the intended primary/secondary behavior using both a standard BGP attribute and an AWS-supported BGP community.

Incorrect. This option reverses the customer-side local preference values, assigning a lower value to us-east-1 and a higher value to af-south-1. That would cause the on-premises router to prefer af-south-1 for traffic to AWS, which directly violates the requirement that us-east-1 be primary. Even if the communities were otherwise useful, the wrong local preference makes this option invalid.

Incorrect. This option is wrong on both major controls. It gives af-south-1 the higher local preference, causing the customer network to prefer the backup link, and it also applies the AWS low-preference community to us-east-1 instead of af-south-1. That combination makes the intended backup path more attractive and the intended primary path less attractive, which is the opposite of the requirement.

Question Analysis

Core concept: This question tests how to build deterministic primary/backup routing over two AWS Direct Connect connections by combining customer-side BGP best-path selection with AWS Direct Connect BGP communities. The on-premises router should prefer the us-east-1 session for traffic destined to AWS by assigning it a higher local preference, while the af-south-1 session should be treated as backup with a lower local preference. On the AWS side, the customer should tag routes advertised over the backup connection with the AWS low-preference community so AWS prefers the primary Direct Connect for return traffic when both paths are available.

Why correct: Set a higher local preference on the us-east-1 BGP peer and a lower local preference on the af-south-1 BGP peer so the enterprise network always chooses us-east-1 first for outbound traffic to AWS. In addition, routes advertised over the af-south-1 link should carry the AWS low-preference community 7224:7100, which tells AWS to assign a lower local preference to routes learned on that connection (equivalently, the primary link's routes can be tagged with the high-preference community 7224:7300). Either way, af-south-1 becomes the less-preferred path from AWS back to on premises, so it functions as the backup unless the primary path fails.

Key features:
- Local preference is a standard BGP attribute used inside the customer AS; higher values are preferred.
- AWS Direct Connect supports local-preference BGP communities for routes you advertise: 7224:7100 (low), 7224:7200 (medium), and 7224:7300 (high).
- Using both controls together provides symmetric primary/backup intent: customer-side preference for outbound traffic and AWS-side preference for return traffic.

Common misconceptions:
- Many candidates confuse AWS Direct Connect communities with customer local preference. AWS communities influence AWS's handling of routes you advertise, not your router's best-path decision.
- The community values are also easy to invert: 7224:7100 is the low-preference tag suited to a backup path and 7224:7300 is the high-preference tag suited to a primary path, not the other way around.
- It is also easy to reverse the local preference values, which would make the backup link active instead of standby.

Exam tips:
- For customer-to-AWS path preference, look first at local preference on the customer router: higher on the primary, lower on the backup.
- For AWS-to-customer return-path preference, look for the Direct Connect community that lowers AWS local preference on the backup path (7224:7100) or raises it on the primary (7224:7300).
- If an option gets local preference right but applies the community tags to the wrong links, it is not the best answer.
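The customer-side half of this behavior is ordinary BGP best-path selection on local preference. A minimal sketch (illustrative Python, not router configuration) shows the tie-break; peer names mirror the scenario, and the AWS-side return-path steering via Direct Connect communities is a separate control not modeled here:

```python
# Illustrative sketch of the BGP best-path step relevant to this design:
# among otherwise-equal paths, the highest LOCAL_PREF wins.
from dataclasses import dataclass


@dataclass
class Path:
    peer: str        # which Direct Connect link advertised the route
    local_pref: int  # standard BGP local preference; higher is preferred


def best_path(paths):
    """Return the preferred path: highest local preference wins."""
    return max(paths, key=lambda p: p.local_pref)


# Primary: us-east-1 with local preference 200; backup: af-south-1 with 50.
paths = [Path("us-east-1", 200), Path("af-south-1", 50)]
print(best_path(paths).peer)       # us-east-1 while both links are up
print(best_path(paths[1:]).peer)   # af-south-1 once the primary is withdrawn
```

If the us-east-1 session fails, its routes are withdrawn and the af-south-1 path becomes best by default, which is exactly the standby behavior the question asks for.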

Question 2
(Select 2)

A company's security policy dictates that all outbound traffic from its AWS VPC to the on-premises data center must be inspected by a third-party security appliance. This appliance runs on an Amazon EC2 instance within the VPC. A network engineer has been tasked with improving the network performance between the on-premises data center and this security appliance. The existing connection is a standard VPN over the internet. The performance is currently a bottleneck for the on-premises users. The network engineer must improve the network performance between the on-premises data center and the EC2-based security appliance. The solution should focus on increasing throughput and reducing latency for this specific traffic flow. Which actions should the network engineer take to meet these requirements? (Choose two.)

Correct. Enhanced networking (ENA or Intel VF with SR-IOV) improves throughput, reduces latency, and increases PPS by bypassing more of the hypervisor networking stack. This is especially important for security appliances that inspect traffic and are sensitive to per-packet overhead. Many modern Nitro-based instances support ENA by default, but you must ensure the OS/AMI drivers and instance settings are configured properly.

Incorrect. A transit gateway primarily simplifies and scales routing between VPCs and on-premises networks. It does not inherently increase throughput or reduce latency for traffic that must reach a specific EC2-based appliance in the same VPC. In some cases it can add an extra hop and additional processing. TGW is a good choice for multi-VPC architectures, not for boosting single-instance appliance performance.

Correct. Increasing the EC2 instance size (or selecting a more network-optimized family) typically increases available network bandwidth, PPS, and CPU/memory resources. For third-party inspection appliances, CPU can be a major limiter due to encryption and deep packet inspection. Upsizing is a direct, exam-relevant lever to improve throughput and reduce latency caused by instance resource saturation.

Incorrect. Placement groups (cluster/spread/partition) are designed to influence how EC2 instances are placed relative to each other to improve inter-instance latency/throughput or fault isolation. They do not improve the WAN/VPN path from on-premises to a single EC2 instance. Since the traffic is coming from the data center, placement groups provide little to no benefit for this specific bottleneck.

Incorrect. Attaching multiple ENIs can help with network segmentation, appliance design patterns, or increasing the number of IPs/interfaces, but it does not automatically aggregate bandwidth for a single traffic flow. EC2 network performance is governed mainly by the instance type/size and enhanced networking. For higher throughput, you typically scale up the instance or scale out multiple appliances rather than rely on multiple ENIs.

Question Analysis

Core concept: This question tests EC2 network performance optimization for a specific traffic path (on-premises to an EC2-based inspection appliance). The bottleneck is between the data center and the EC2 instance, so improvements should target the instance's packet processing and network I/O capabilities rather than changing VPC routing constructs.

Why the answer is correct: A (enhanced networking) directly increases packets per second (PPS), lowers latency, and reduces jitter by using SR-IOV-based drivers (ENA or Intel VF) to provide higher throughput and more consistent performance. This is a primary best practice for network appliances (firewalls/IDS/IPS) that are sensitive to PPS and per-packet overhead. C (increase the EC2 instance size) is also correct because network performance in AWS is strongly tied to instance type and size. Larger instances typically provide higher baseline and burst bandwidth, higher PPS, and more CPU resources for encryption and inspection. For a third-party security appliance, CPU and memory can be as limiting as NIC throughput, especially when doing deep packet inspection.

Key AWS features / best practices:
- Use instance families optimized for networking (e.g., Nitro-based instances with ENA) and ensure ENA is enabled in the AMI/driver.
- Select an instance size that provides sufficient network bandwidth and PPS for the expected throughput; also consider CPU headroom for inspection.
- If the VPN is the limiting factor overall, the typical architectural step would be AWS Direct Connect, but it is not an option here; therefore, the best available actions are to optimize the EC2 appliance's networking and capacity.

Common misconceptions:
- It is tempting to choose Transit Gateway (B) to "improve performance," but TGW is a routing hub; it does not improve throughput or latency for a single EC2 appliance path, and it adds an extra hop.
- Placement groups (D) help with low-latency east-west traffic between EC2 instances, not on-premises-to-EC2 over VPN.
- Multiple ENIs (E) do not aggregate bandwidth for a single flow and add complexity; throughput is usually scaled by choosing the right instance type/size and enhanced networking, or by horizontally scaling appliances behind load balancing.

Exam tips: When the question asks to improve performance "to an EC2 instance," first think: enhanced networking (ENA/SR-IOV), instance type/size network limits, and CPU for packet processing. Only choose routing constructs (TGW, placement groups) when the problem is explicitly about multi-VPC connectivity, east-west latency, or routing scale, not raw appliance throughput.
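As a rough aid for the "scale up the appliance" lever, a sketch like the following compares required throughput (with headroom for inspection overhead) against per-instance network bandwidth. The bandwidth figures in the table are illustrative placeholders, not authoritative AWS numbers; always check the current EC2 instance network specifications:

```python
# Hedged sizing sketch: pick the smallest instance whose (illustrative)
# network bandwidth covers the required throughput with headroom.
ILLUSTRATIVE_GBPS = {  # placeholder values, NOT official AWS figures
    "c5n.large": 3,
    "c5n.xlarge": 5,
    "c5n.2xlarge": 10,
    "c5n.9xlarge": 50,
}


def smallest_sufficient(required_gbps, table=ILLUSTRATIVE_GBPS, headroom=1.5):
    """Smallest instance whose bandwidth >= required * headroom, else None."""
    candidates = [(gbps, name) for name, gbps in table.items()
                  if gbps >= required_gbps * headroom]
    return min(candidates)[1] if candidates else None


# 4 Gbps of inspected traffic with 1.5x headroom needs >= 6 Gbps of bandwidth.
print(smallest_sufficient(4))  # c5n.2xlarge
```

The headroom factor stands in for CPU and per-packet overhead on the appliance, which the explanations above call out as a common hidden limiter.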

Question 3

A company recently implemented a security policy that prohibits developers from launching VPC network infrastructure. The policy states that any time a NAT gateway is launched in a VPC, the company's network security team must immediately receive an alert to terminate the NAT gateway. The network security team needs to implement a solution that can be deployed across AWS accounts with the least possible administrative overhead. The solution also must provide the network security team with a simple way to view compliance history. The solution must be able to detect the creation of a NAT Gateway in any VPC, alert the security team, automatically terminate the resource, and provide a historical record of compliance. The solution must also be easily deployable across multiple AWS accounts with minimal manual effort. Which solution will meet these requirements?

Incorrect. Running a cron-based script on EC2 in each account creates significant administrative overhead (instances, patching, IAM, scheduling, scaling). A 5-minute interval is not immediate detection and can allow policy violations to persist. Logging to RDS adds cost and management overhead and still does not provide native compliance reporting. This approach is operationally heavy and not aligned with AWS managed governance patterns.

Incorrect. Lambda reduces server management, but the option describes "programmatically checks," implying polling rather than event-driven detection. Polling still is not immediate and requires scheduling and custom state storage. OpenSearch per account is expensive and operationally complex, and it does not provide the straightforward compliance history and audit views that AWS Config provides. SAM helps deployment, but governance and reporting remain custom.

Incorrect. GuardDuty is a threat detection service and does not provide a finding type for NAT gateway creation (the finding type this option references does not exist in GuardDuty's taxonomy). Even if events were captured, storing runtime logs in S3 is not the same as compliance history with rule evaluations and timelines. This option also mixes services in a way that is less reliable for compliance governance than AWS Config.

Correct. AWS Config is purpose-built for compliance monitoring and provides an easy-to-consume compliance history and configuration timeline. A custom Config rule can detect prohibited NAT gateways, and SSM Automation remediation can automatically alert (e.g., via SNS) and delete the NAT gateway. CloudFormation StackSets enables centralized, low-touch deployment across many accounts and OUs with consistent IAM roles and runbooks, meeting the multi-account and minimal-overhead requirements.

Question Analysis

Core concept: This question tests governance and compliance automation across multiple AWS accounts. The best-fit pattern is AWS Config for continuous resource compliance evaluation, paired with automated remediation (AWS Systems Manager Automation) and multi-account deployment (AWS CloudFormation StackSets).

Why the answer is correct: A custom AWS Config rule can evaluate whether prohibited resources (NAT gateways) exist or are created, and it records compliance state changes over time. That directly satisfies the requirement for a "simple way to view compliance history" because AWS Config provides a timeline of configuration changes and compliance results per resource and per rule. By attaching an SSM Automation remediation action, the security team can automatically respond: send an alert (typically via SNS/SES integration from the automation or a Lambda step) and then call the EC2 API to delete the NAT gateway. Finally, StackSets provides the least administrative overhead for deploying the same Config rule, IAM roles, and SSM runbooks consistently across many accounts (and OUs) in AWS Organizations.

Key AWS features:
- AWS Config custom rules: evaluate resources and maintain compliance history.
- Remediation with SSM Automation: standardized, auditable runbooks; can be set to auto-remediate on noncompliance.
- CloudFormation StackSets: centralized, multi-account/multi-Region rollout with drift detection.
- Well-Architected (Security pillar): continuous monitoring, automated remediation, and centralized governance.

Common misconceptions:
- "Just detect creation events": event-driven detection alone (e.g., periodic scripts or ad hoc logs) often lacks durable compliance history and standardized reporting.
- "GuardDuty will detect NAT gateway creation": GuardDuty is for threat detection; NAT gateway creation is not a GuardDuty finding type.
- "Cron on EC2 is simplest": it increases operational burden, is not real time, and does not provide native compliance reporting.

Exam tips: When you see requirements like multi-account deployment, minimal overhead, automatic remediation, and compliance history, think AWS Config + SSM Automation remediation + StackSets/Organizations. AWS Config is the canonical service for compliance timelines and audit-ready history.
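The heart of a custom Config rule is a small evaluation function. A minimal sketch of just the compliance decision might look like the following; the Lambda handler wiring, invokingEvent parsing, and the put_evaluations call back to AWS Config are deliberately omitted, so this is a sketch of the logic, not a complete rule:

```python
# Minimal sketch of the evaluation logic a custom AWS Config rule could
# apply under this policy: any NAT gateway is NON_COMPLIANT.
def evaluate(configuration_item):
    """Return an AWS Config compliance type for one configuration item."""
    if configuration_item.get("resourceType") == "AWS::EC2::NatGateway":
        # The security policy prohibits NAT gateways entirely, so the mere
        # existence of the resource is a violation.
        return "NON_COMPLIANT"
    return "NOT_APPLICABLE"


item = {"resourceType": "AWS::EC2::NatGateway", "resourceId": "nat-0abc"}
print(evaluate(item))  # NON_COMPLIANT
```

In the full solution, a NON_COMPLIANT evaluation would trigger the attached SSM Automation remediation, which sends the alert and deletes the NAT gateway.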

Question 4

A company is migrating applications from an on-premises data center to AWS, requiring data exchange with an on-premises mainframe. The solution must achieve 4 Gbps transfer speeds for peak traffic and ensure high availability and resiliency against circuit or router failures. Design a highly available, resilient networking solution that supports 4 Gbps and withstands circuit or router failures. Which solution will meet these requirements?

Option A is the most resilient design because it uses four separate Direct Connect connections across two Direct Connect locations and two different on-premises routers. This eliminates single points of failure for individual circuits, customer routers, and a single Direct Connect location. It also provides far more than 4 Gbps of available bandwidth even after losing a circuit, a router, or an entire location. Because the question explicitly requires support for 4 Gbps peak traffic while withstanding failures, this is the only option that clearly satisfies both capacity and resiliency requirements.

Option B uses only two 10 Gbps connections, one per Direct Connect location, each terminating on a different router. Although the bandwidth is sufficient, the design has only one circuit per location, so a single circuit failure removes all connectivity through that location. This provides less circuit-level resiliency than a four-connection design and does not align as well with a requirement to withstand circuit failures. AWS best practice for maximum resiliency is to use multiple connections across multiple locations and routers.

Option C provides four 1 Gbps connections for a total of 4 Gbps only during normal operation. If a single circuit fails, available bandwidth drops to 3 Gbps, which no longer meets the 4 Gbps transfer requirement. Likewise, if a router fails and two circuits are attached to that router, only 2 Gbps remains. Therefore, this option is resilient from an availability perspective but does not preserve the required throughput during the specified failure scenarios.

Option D provides only two 1 Gbps connections, for a maximum aggregate throughput of 2 Gbps. That is insufficient to meet the stated 4 Gbps peak traffic requirement even before considering any failures. In addition, losing one circuit or one router would reduce capacity even further. This option fails both the bandwidth and resiliency requirements.

Question Analysis

Core concept: This question tests AWS Direct Connect resiliency design for both bandwidth and failure tolerance. The requirement is not just to reach 4 Gbps during normal operation, but to design a highly available and resilient solution that can withstand circuit or router failures while still supporting the required traffic. In Direct Connect design, that means providing redundant connections across multiple Direct Connect locations and multiple customer routers, while also ensuring enough remaining capacity after a failure.

Why the answer is correct: Option A uses four 10 Gbps Direct Connect connections distributed across two Direct Connect locations and terminated on two separate on-premises routers. This design removes single points of failure at the circuit, location, and router levels. Even if a single circuit fails, an entire router fails, or one Direct Connect location becomes unavailable, the remaining links still provide well above the required 4 Gbps throughput. That makes it the only option that clearly satisfies both the bandwidth target and the resiliency requirement simultaneously.

Key AWS features / best practices:
- Use at least two Direct Connect locations for facility-level redundancy.
- Use separate customer edge routers so a single router failure does not interrupt connectivity.
- Use multiple physical connections to avoid a single circuit becoming a bottleneck or single point of failure.
- Size aggregate capacity so that required throughput is still available during failure scenarios, not only during steady state.

Common misconceptions:
- Meeting 4 Gbps only in normal conditions is not enough when the question explicitly requires resiliency against circuit or router failures.
- Four 1 Gbps links do not satisfy a 4 Gbps requirement after a single circuit failure, because only 3 Gbps would remain.
- Two 10 Gbps links provide enough bandwidth, but they do not provide the same circuit-level resiliency as four links spread across two routers and two locations.

Exam tips: Pay close attention to whether bandwidth must be preserved during failures. When a question says the design must withstand circuit or router failures, assume the required throughput must still be supportable after one of those failures. The most resilient Direct Connect pattern uses multiple connections, multiple locations, and multiple customer routers with enough excess capacity to survive failures.
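The capacity math above can be checked directly: compute the aggregate bandwidth each design retains after losing its single largest circuit and compare it with the 4 Gbps requirement. The per-design link lists below follow the option descriptions in the explanations:

```python
# Worked check of the failure-capacity math for the Direct Connect designs.
def worst_case_after_one_circuit(links_gbps):
    """Aggregate bandwidth remaining after losing the largest single circuit."""
    return sum(links_gbps) - max(links_gbps)


REQUIRED_GBPS = 4
designs = {
    "A: 4 x 10 Gbps": [10, 10, 10, 10],
    "C: 4 x 1 Gbps":  [1, 1, 1, 1],
    "D: 2 x 1 Gbps":  [1, 1],
}

for name, links in designs.items():
    remaining = worst_case_after_one_circuit(links)
    ok = remaining >= REQUIRED_GBPS
    print(f"{name}: {remaining} Gbps after one circuit failure -> meets 4 Gbps: {ok}")
```

Only option A keeps 4 Gbps available after a failure (30 Gbps remain); option C drops to 3 Gbps and option D to 1 Gbps, matching the reasoning above.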

Question 5
(Select 3)

A company uses an AWS Direct Connect private VIF with a Link Aggregation Group (LAG) that consists of two 10 Gbps connections. The company's security team has implemented a new requirement for external network connections to provide layer 2 encryption. The company's network team plans to use MACsec support for Direct Connect to meet the new requirement. The network team must implement MACsec encryption on the existing Direct Connect connection to a private VIF without disrupting service and with the least possible downtime. Which combination of steps should the network team take to implement this functionality? (Choose three.)

Correct. MACsec requires MACsec-capable ports/circuits. If the existing two 10 Gbps connections in the LAG were not ordered/provisioned with MACsec support, you cannot reliably enable MACsec in place. Creating a new LAG with new MACsec-capable connections allows a parallel deployment, testing, and a controlled migration to meet the requirement with minimal downtime.

Correct. MACsec on Direct Connect uses a Connectivity Association Key (CAK) and Connection Key Name (CKN) to establish the secure association between your router and AWS. Associating CAK/CKN with the new LAG is a required configuration step before encryption can operate. Doing this on the new LAG supports a staged rollout without impacting the existing production LAG.

Incorrect. Internet Key Exchange (IKE) is used for IPsec VPN tunnels (Site-to-Site VPN), not for MACsec. MACsec is Layer 2 encryption (802.1AE) and does not negotiate keys using IKE. Selecting IKE here reflects a common confusion between MACsec and IPsec-based encryption solutions.

Incorrect. Configuring MACsec encryption mode on the existing LAG risks disruption because enabling encryption requires both ends to be configured consistently; mismatches can cause link drops. Additionally, if the existing circuits/ports do not support MACsec, this step cannot satisfy the requirement. The question emphasizes least downtime, which favors a parallel MACsec-capable LAG instead.

Correct. After creating a MACsec-capable LAG and associating CAK/CKN, you must configure the MACsec encryption mode on that new LAG (e.g., should_encrypt/must_encrypt depending on policy). This enables encrypted Layer 2 operation on the new connectivity so traffic can be migrated with a controlled cutover and minimal downtime.

Incorrect. While MACsec ultimately operates on the physical links, AWS manages MACsec configuration at the connection or LAG level; you do not set an encryption mode individually on each member connection as a primary step. The least-downtime, exam-appropriate approach is to build a new MACsec-capable LAG and configure it there, rather than making piecemeal changes to the existing member connections.

Question Analysis

Core concept: This question tests AWS Direct Connect MACsec (IEEE 802.1AE) for Layer 2 encryption and how to introduce it with minimal downtime when using a Link Aggregation Group (LAG) and a private VIF. MACsec is a physical/link-layer encryption capability that requires MACsec-capable ports and circuits and uses a Connectivity Association Key (CAK) and Connection Key Name (CKN) to establish secure connectivity.

Why the answer is correct: To avoid disrupting an in-use private VIF on an existing LAG, the least-downtime approach is to build a parallel, MACsec-capable path and then migrate traffic. You cannot "turn on" MACsec on circuits or ports that do not support it, and enabling MACsec on an active production LAG can cause a link flap if the far end is not simultaneously configured. Therefore, the network team should (1) create a new LAG using new Direct Connect connections that explicitly support MACsec, (2) associate the CAK/CKN with that new LAG, and (3) configure the MACsec encryption mode on the new LAG. After validation, the private VIF and its traffic can be moved to the new encrypted connectivity with a controlled cutover.

Key AWS features:
- Direct Connect MACsec is configured at the connection/LAG level using a CAK/CKN pair and an encryption mode (e.g., should_encrypt or must_encrypt, depending on requirements).
- MACsec is not IPsec and does not use IKE.
- LAG provides link redundancy and aggregation, but MACsec capability is tied to the underlying dedicated connections and their ports.

Common misconceptions:
- A frequent trap is confusing MACsec with IPsec VPN (IKE).
- Another is assuming MACsec can be enabled "in place" on an existing LAG without verifying MACsec-capable circuits and ports.
- Configuration is not done per member connection when the intent is to manage MACsec at the LAG level for consistent behavior.

Exam tips: When requirements specify Layer 2 encryption on Direct Connect, think MACsec, not VPN. For "least downtime," prefer a parallel build plus cutover. If an option mentions IKE, it is almost certainly referring to IPsec VPN, not MACsec. Also watch for wording about ports or circuits supporting MACsec; hardware capability is a gating factor.
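Before associating a key pair with the new LAG, a simple pre-flight format check can catch mistakes early. This sketch assumes the commonly documented 64-hexadecimal-character format for Direct Connect MACsec CKN and CAK values; verify the exact format against current AWS documentation:

```python
# Hedged pre-flight check for a MACsec CAK/CKN pair. The 64-hex-character
# length rule is an assumption based on the Direct Connect API's documented
# key format -- confirm against current AWS docs before relying on it.
import re

HEX64 = re.compile(r"^[0-9a-fA-F]{64}$")


def valid_macsec_key_pair(ckn, cak):
    """True if both the CKN and CAK look like 64-character hex strings."""
    return bool(HEX64.match(ckn) and HEX64.match(cak))


ckn = "ab" * 32  # 64 hex characters (placeholder value, not a real key)
cak = "cd" * 32
print(valid_macsec_key_pair(ckn, cak))  # True
```

Validating locally before calling the association API supports the staged, least-downtime rollout described above: a rejected key on the new LAG costs nothing, whereas a mismatch on an active link can drop it.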


Question 6
(Select 2)

An investment platform must log all internet service traffic and retain it for 2 years. In dev, the team validated a cross‑account Amazon VPC Traffic Mirroring design using a Network Load Balancer (NLB) as the mirror target. In production, mirroring is enabled but some traffic is not mirrored and the missing packets appear random. Identify plausible explanations for intermittent/lost mirrored traffic with this design. Which statements explain why not all the traffic is mirrored? (Choose two.)

Incorrect. Security groups on the production account that hosts the source services do not explain random loss of mirrored packets in this scenario. A security group problem would usually cause deterministic blocking of specific traffic patterns, ports, or protocols rather than intermittent packet omissions. Also, Traffic Mirroring occurs from the source ENI in the VPC data plane, so the symptom described does not match a typical source-side security group issue. If security controls on the target path were wrong, the failure would more likely be consistent than random.

Correct. Although the guest operating system does not directly forward the mirrored copy, the monitored EC2 instance and its attached ENI still have finite bandwidth and packet-per-second limits. Traffic Mirroring increases the amount of traffic that must be handled by the underlying networking path, so heavily utilized instances can experience dropped mirrored packets when limits are approached. This is especially plausible in production where traffic volumes are much higher than in dev. On the exam, instance/ENI capacity constraints are a valid reason why not all mirrored traffic is delivered.

Incorrect. IAM policies govern who can create, modify, or delete traffic mirror sessions, targets, and filters through the control plane. Once mirroring is already enabled and operating, IAM is not involved in the runtime forwarding of mirrored packets. Therefore, a misconfigured IAM policy would typically prevent setup or changes, not cause sporadic packet loss in an active session. This option confuses provisioning permissions with data-plane behavior.

Correct. Amazon VPC Traffic Mirroring is a best-effort service, so mirrored packets are not guaranteed to be delivered. During periods of network congestion or resource contention, AWS prioritizes the original production traffic and may drop the mirrored copies. That behavior produces the exact symptom described in the question: random or intermittent missing packets rather than a complete failure. This is one of the most important design caveats for Traffic Mirroring in monitoring architectures.

Incorrect. Network Load Balancers do not have a documented 'warm-up delay' characteristic that would explain random packet loss in a Traffic Mirroring design. NLBs are designed to scale automatically and are not associated with the classic pre-warming concern seen in some older AWS load balancing contexts. While any downstream target capacity issue could affect analysis systems, the specific statement about NLB warm-up delays is technically misleading. For this question, AWS-documented mirroring best-effort behavior and source/ENI capacity limits are the plausible explanations.

Question Analysis

Core concept: Amazon VPC Traffic Mirroring provides a best-effort copy of network traffic for monitoring and analysis, not a guaranteed lossless packet capture service. In production, missing mirrored packets can occur when the source instance or ENI reaches bandwidth or packet-processing limits, or when the mirroring service drops copies during congestion because original application traffic is prioritized.

Why the answer is correct: The two correct statements are the ones about source instance/ENI capacity limits and about Traffic Mirroring's best-effort delivery, because both align with AWS-documented behavior around mirroring overhead and the prioritization of production traffic over mirrored copies.

Common misconceptions: It is tempting to blame IAM or security groups for random packet loss, but those typically cause configuration failures or deterministic blocking rather than intermittent missing mirrored packets.

Exam tips: For Traffic Mirroring questions, focus on best-effort semantics, ENI/instance throughput limits, and the fact that mirrored copies are lower priority than the original traffic.
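The source-capacity point lends itself to back-of-envelope arithmetic: mirroring roughly doubles the traffic the monitored instance's networking path must carry, so headroom matters. The limit values in this sketch are illustrative, not AWS-published figures:

```python
# Back-of-envelope sketch: the mirror source must carry the original traffic
# plus its mirrored copy, so effective load is roughly doubled.
def mirrored_load_fits(original_gbps, eni_limit_gbps, mirror_ratio=1.0):
    """True if original traffic plus its mirrored copy stays under the limit.

    mirror_ratio = fraction of traffic that is mirrored (1.0 = mirror all).
    Limit values are illustrative placeholders, not AWS-published numbers.
    """
    return original_gbps * (1 + mirror_ratio) <= eni_limit_gbps


print(mirrored_load_fits(2.0, 5.0))  # 4.0 <= 5.0 -> True (fits in dev)
print(mirrored_load_fits(3.0, 5.0))  # 6.0 <= 5.0 -> False (drops in prod)
```

This mirrors the scenario in the question: the design validated fine at low dev traffic volumes, then lost packets intermittently once production load pushed the combined original-plus-mirrored traffic past the source's capacity.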

Question 7

A fintech company has multiple development environments across different AWS accounts, all operating in the us-east-1 Region. These environments host backend services on Amazon EC2 instances within private subnets, which are accessed via a Network Load Balancer (NLB) in a public subnet. For compliance reasons, API access to these services is restricted to a small number of approved third-party vendors. The NLB's security group is configured to only allow inbound TCP traffic on port 443 from a specific set of vendor IP address ranges. The company has a strict policy that whenever a new vendor is onboarded, their IP address range must be added to the NLB's security group in every account. A network engineer must find the most operationally efficient way to centrally manage these vendor IP address ranges across all accounts. The network engineer needs to implement a solution that allows for a single, centralized update of a new vendor's IP address range. This change must then be automatically reflected in the security groups of all relevant accounts without manual intervention in each account. The solution must be highly efficient and scalable. Which solution will meet these requirements in the MOST operationally efficient manner?

Not the most operationally efficient. A DynamoDB-driven Lambda updater requires building and maintaining custom automation, IAM roles, error handling, and deployment in every account. It introduces operational overhead and potential drift if the function fails or is misconfigured. While it can work, it is less scalable and less elegant than using a native VPC construct designed for centrally managed CIDR allow lists.

This option is less efficient because it adds EventBridge and Lambda to update security groups whenever the prefix list changes, which reintroduces custom orchestration and operational overhead. If a prefix list is the chosen abstraction, the goal should be to consume it directly where supported rather than trigger code to rewrite security group rules. It also omits the necessary cross-account sharing mechanism, so it does not fully solve the multi-account central-management requirement.

This is the best answer among the choices because a managed prefix list is the native AWS construct for centrally maintaining a reusable set of CIDR ranges. It is significantly more operationally efficient than building custom data stores and Lambda workflows to push security group updates into every account. AWS RAM also provides the intended cross-account sharing mechanism for the prefix list resource itself, making this the closest match to a centralized, scalable design pattern in the available options.

Similar to option A, this is a custom automation approach with higher operational burden. Storing CIDRs in S3 and running Lambda to update security groups across accounts requires per-account deployment, cross-account permissions, and robust handling for failures and concurrency. It is not as efficient or scalable as using a managed prefix list shared via AWS RAM, which is purpose-built for this use case.

Question Analysis

Core Concept: This question is testing centralized management of approved vendor CIDR ranges across multiple AWS accounts with minimal operational overhead. The ideal pattern is to use a native AWS networking construct that can be updated once and then reused broadly, rather than building custom synchronization logic. However, care is required because not all shared network resources can be referenced by security groups across accounts in the same way they can be used in routing.

Why the Answer is Correct: Among the provided options, using a managed prefix list is the closest fit to a centralized and scalable design because it provides a single object to maintain the vendor CIDR ranges. Prefix lists are purpose-built for reusable CIDR management and are far more operationally efficient than storing CIDRs in DynamoDB or S3 and orchestrating updates with Lambda in every account. The key benefit is reducing the number of individual CIDR entries that must be tracked and updated, even though the exact cross-account security group usage is more constrained than the original explanation suggests.

Key AWS Features:
- VPC Managed Prefix Lists: A reusable collection of CIDR blocks that can simplify allow-list management.
- AWS RAM: Enables sharing of certain resources, including customer-managed prefix lists, across accounts.
- Operational efficiency: Native networking constructs are generally preferable to custom event-driven automation for static allow-list distribution.

Common Misconceptions: A common misconception is that any shared network object can automatically be referenced by security groups across accounts exactly as if it were local. Another misconception is that Lambda-based synchronization is equivalent in efficiency to a native construct; in practice, custom automation adds deployment, IAM, retry, and drift-management overhead. Prefix lists reduce complexity, but the exact supported integrations must always be validated.

Exam Tips: On AWS networking exams, when you see repeated CIDR allow lists that must be centrally maintained, managed prefix lists are usually the intended service to consider first. Also compare native AWS features against custom automation and prefer the native option unless the question explicitly requires bespoke orchestration. Be alert to service-integration boundaries, especially for cross-account use cases involving security groups.
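To make the prefix-list idea concrete, here is a minimal Python sketch (stdlib only) of the matching semantics a security group rule gets when it references a centrally managed prefix list. The list name and all CIDRs are illustrative, not taken from the question; real enforcement happens in the VPC data plane, not in application code.

```python
import ipaddress

# Hypothetical contents of a customer-managed prefix list maintained centrally
# and shared to every account via AWS RAM (CIDRs below are illustrative).
VENDOR_PREFIX_LIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def is_approved_vendor(client_ip: str) -> bool:
    """Return True if client_ip falls inside any CIDR in the prefix list,
    mirroring how a security group rule that references the prefix list
    matches inbound traffic on port 443."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in VENDOR_PREFIX_LIST)

# Onboarding a new vendor is a single central update; every consumer of the
# shared prefix list picks up the change without per-account edits.
VENDOR_PREFIX_LIST.append(ipaddress.ip_network("192.0.2.0/28"))
```

The point of the design is that the update above happens in exactly one place, while the Lambda-based options require pushing equivalent state into every account.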

8
Question 8

A company’s on‑premises firewall connects to AWS Transit Gateway with a single Site‑to‑Site VPN. Growing traffic now saturates the VPN. The company needs a secure, highly available design that scales VPN throughput from on‑premises to multiple VPCs. Increase aggregate VPN bandwidth without sacrificing availability or security. Which solution will meet these requirements?

Correct. Multiple BGP-based Site-to-Site VPN connections to a Transit Gateway combined with ECMP allows load sharing across multiple tunnels/paths, increasing aggregate throughput while maintaining IPsec security and high availability. BGP provides dynamic routing and is the typical prerequisite for ECMP behavior with TGW VPN attachments, enabling scalable, resilient connectivity from on-premises to multiple VPCs via TGW.

Incorrect. Static-routing VPNs to Transit Gateway generally do not provide the same ECMP load-sharing capability as dynamic (BGP) VPNs and are operationally brittle (manual route management and less graceful failover). Even if multiple tunnels exist, without BGP-driven ECMP you typically won’t achieve reliable active/active distribution of traffic to increase aggregate bandwidth.

Incorrect. VPN acceleration (where available) can improve performance characteristics (e.g., latency/jitter) by leveraging AWS global networking, but it does not fundamentally scale aggregate throughput the way multiple parallel VPN connections with ECMP do. The requirement is explicitly to increase aggregate VPN bandwidth without sacrificing availability; acceleration alone doesn’t add parallel paths for load sharing.

Incorrect. A self-managed software VPN appliance on EC2 increases operational overhead and can introduce new bottlenecks and availability concerns (instance sizing, scaling, patching, failover design). It also deviates from the managed TGW VPN approach and may not meet the “highly available” requirement unless you build a complex HA pair and routing design. The question asks for scaling VPN throughput to multiple VPCs via TGW, which is best met with managed VPN + ECMP.

Question Analysis

Core Concept: This question tests scaling and high availability for AWS Site-to-Site VPN connectivity into AWS Transit Gateway (TGW). The key concepts are using multiple VPN tunnels/attachments, dynamic routing with BGP, and Equal-Cost Multi-Path (ECMP) to increase aggregate throughput while maintaining resiliency.

Why the Answer is Correct: Creating multiple dynamic (BGP) Site-to-Site VPN connections to a transit gateway and enabling ECMP allows traffic to be distributed across multiple VPN tunnels/paths in parallel. A single Site-to-Site VPN connection already includes two tunnels for HA, but a single VPN connection can still become a throughput bottleneck. By adding additional VPN connections (each with two tunnels) and using BGP with ECMP, the on-premises firewall and TGW can load-share flows across multiple tunnels, increasing aggregate bandwidth while preserving redundancy. This design remains secure (IPsec) and highly available (multiple tunnels across multiple connections).

Key AWS Features / Best Practices:
- Transit Gateway supports ECMP for VPN attachments when using dynamic routing (BGP). This is the standard AWS approach for scaling VPN throughput.
- Use multiple VPN connections (and ideally multiple customer gateway devices or multiple public IPs on the same device if supported) to improve both capacity and resilience.
- BGP enables automatic route exchange and faster failover than static routes, and is required for ECMP load-sharing behavior in this context.
- This pattern aligns with AWS Well-Architected (Reliability and Performance Efficiency): redundancy plus horizontal scaling.

Common Misconceptions:
- "Two tunnels equals double bandwidth": the two tunnels in a single VPN are primarily for HA; load sharing is not guaranteed unless ECMP is used and the device supports it.
- "Static routes are simpler": static routing typically prevents ECMP with TGW VPN and reduces operational resilience because failover and routing changes are manual.
- "VPN acceleration fixes saturation": acceleration can reduce latency/jitter but does not replace the need for parallel paths to scale aggregate throughput.

Exam Tips: When you see "TGW + VPN saturated + need more bandwidth + keep HA," think "multiple BGP VPNs + ECMP." If the question emphasizes scaling throughput across multiple VPCs, TGW is the hub, and ECMP with dynamic routing is the canonical AWS answer. Always prefer managed AWS networking features over self-managed VPN appliances unless there is a specific requirement they uniquely satisfy.
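The ECMP behavior described above can be sketched in a few lines: a per-flow hash over the 5-tuple picks one of the equal-cost tunnels, so a single flow is never reordered across paths, while many flows spread over all tunnels and raise aggregate bandwidth. The tunnel names and the hash function here are illustrative; actual devices use their own hashing.

```python
import hashlib

# Hypothetical active tunnels across multiple BGP-based VPN connections
# attached to the same transit gateway (names are illustrative).
TUNNELS = ["vpn-1-tunnel-a", "vpn-1-tunnel-b", "vpn-2-tunnel-a", "vpn-2-tunnel-b"]

def pick_tunnel(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow 5-tuple to one equal-cost path. A given flow always
    maps to the same tunnel (no per-packet reordering), while distinct
    flows spread across all tunnels — the essence of ECMP load sharing."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return TUNNELS[digest % len(TUNNELS)]
```

This also illustrates the caveat in the analysis: a single elephant flow still lands on one tunnel, so ECMP scales aggregate throughput across many flows, not the throughput of any one flow.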

9
Question 9
(Select 3)

A company uses AWS Client VPN to allow remote users to access resources in multiple peered VPCs and an on-premises data center. The Client VPN endpoint route table has a single 0.0.0.0/0 entry, and its security group has no inbound rules and an outbound rule allowing all traffic to 0.0.0.0/0. Remote users report incorrect geographic location information in web search results. Resolve incorrect geographic location issues for Client VPN users with minimal service interruption. Which combination of steps should a network engineer take to resolve this issue with the LEAST amount of service interruption? (Choose three.)

Incorrect. AWS Site-to-Site VPN is for connecting networks (e.g., on-premises to VPC), not for individual remote users like Client VPN. Migrating users to Site-to-Site VPN would require new customer gateway devices/configuration and does not directly address geolocation; it also causes significant service interruption and operational change compared to adjusting Client VPN routing.

Correct. Enabling split-tunnel ensures only traffic destined for networks with explicit Client VPN routes is sent through the VPN. Internet-bound traffic stays on the user’s local ISP path, which typically restores correct geolocation in web services. This is a low-interruption configuration change and is a common best practice to avoid unnecessary internet backhaul through AWS.

Correct. After removing the default route and using split-tunnel, you must add explicit routes for each private destination (peered VPC CIDRs and on-premises CIDRs) to maintain access to internal resources. Without these routes, clients will not know to send that traffic into the VPN, causing loss of connectivity to corporate networks.

Incorrect. Removing the 0.0.0.0/0 outbound rule from the Client VPN endpoint security group does not solve the routing problem that causes geolocation issues. It would likely break legitimate outbound connectivity from VPN clients to internal resources (and possibly required AWS services) and introduces avoidable disruption. Geolocation is driven by egress path/IP, not SG rules.

Incorrect. Deleting and recreating the Client VPN endpoint in a different VPC is highly disruptive (new endpoint, associations, authorization rules, client configuration updates). It may change the egress IP range and thus geolocation, but it does not address the root cause (full-tunnel routing of internet traffic through AWS). Split-tunnel and route changes are the minimal-interruption fix.

Correct. Removing the 0.0.0.0/0 route from the Client VPN route table stops the VPN from attracting all destinations (full-tunnel behavior). Combined with split-tunnel and explicit private routes, this prevents internet traffic from egressing via AWS (which causes incorrect geolocation) while preserving access to internal networks with specific routes.

Question Analysis

Core Concept: This question tests AWS Client VPN routing behavior (full-tunnel vs split-tunnel), Client VPN route tables, and how default routes (0.0.0.0/0) affect internet egress and perceived geolocation. With full-tunnel, all client traffic (including web browsing) is routed through the VPN and egresses from the VPC's internet path (NAT Gateway/IGW), which can cause web services to infer the user's location based on the AWS egress IP rather than the user's local ISP.

Why the Answer is Correct: Remote users see incorrect geographic location because the Client VPN endpoint route table contains a single 0.0.0.0/0 route, effectively forcing all traffic through the VPN (full-tunnel). The least disruptive fix is to stop sending general internet traffic through AWS while still routing only corporate/private networks through the VPN. To do that: (B) enable split-tunnel so only routes explicitly associated with the Client VPN are pushed to clients; (F) remove the 0.0.0.0/0 route so the VPN no longer attracts all destinations; and (C) add specific routes for the peered VPC CIDRs and on-premises CIDRs so access to internal resources continues to work.

Key AWS Features: Client VPN uses a route table plus authorization rules to control where clients can send traffic. Split-tunnel determines whether the client's default route is redirected to the VPN. Removing 0.0.0.0/0 and adding only internal CIDR routes ensures internal connectivity while preserving local internet breakout (and correct geolocation). This aligns with AWS Well-Architected (Security and Reliability) by reducing unnecessary traffic through centralized egress and limiting blast radius.

Common Misconceptions: Security group changes (like removing outbound 0.0.0.0/0) do not fix routing/geolocation; they can break access. Recreating the endpoint in another VPC changes egress IPs but still routes internet through AWS and causes disruption. Switching to Site-to-Site VPN is a different use case and not a minimal-interruption fix for remote users.

Exam Tips: If you see Client VPN + 0.0.0.0/0 route + "internet/geolocation issues," think "full-tunnel causing AWS egress." The typical remediation is split-tunnel plus explicit private routes for required networks, not rebuilding the endpoint or changing VPN type.
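The split-tunnel routing decision above reduces to a simple membership check: only destinations covered by an explicit Client VPN route enter the tunnel, and everything else takes the user's local ISP. A minimal sketch, with illustrative CIDRs standing in for the peered VPC and on-premises ranges:

```python
import ipaddress

# Hypothetical Client VPN route table after the fix: the 0.0.0.0/0 entry is
# removed and only internal CIDRs remain (values are illustrative).
VPN_ROUTES = [
    ipaddress.ip_network("10.1.0.0/16"),     # peered VPC
    ipaddress.ip_network("10.2.0.0/16"),     # peered VPC
    ipaddress.ip_network("192.168.0.0/16"),  # on-premises
]

def egress_path(dst_ip: str) -> str:
    """With split-tunnel enabled, only destinations covered by an explicit
    Client VPN route are sent into the tunnel; all other traffic breaks out
    via the user's local ISP, preserving correct geolocation."""
    addr = ipaddress.ip_address(dst_ip)
    return "vpn" if any(addr in net for net in VPN_ROUTES) else "local-isp"
```

With the original 0.0.0.0/0 entry still present, every destination would match and the function would always return "vpn" — which is exactly the full-tunnel behavior causing the geolocation complaints.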

10
Question 10

A network engineer is working on a private DNS design to integrate AWS workloads and on-premises resources for a manufacturing company. The AWS deployment consists of five VPCs in the eu-west-1 Region that connect to the on-premises network over AWS Direct Connect. The VPCs communicate with each other by using a transit gateway. Each VPC is associated with a private hosted zone that uses the corp.global.internal domain. The network engineer creates an Amazon Route 53 Resolver outbound endpoint in a shared services VPC and attaches the shared services VPC to the transit gateway. The network engineer must implement a solution for DNS resolution where queries for hostnames ending with corp.global.internal must use the private hosted zone within AWS, while queries for all other domains must be forwarded to a private on-premises DNS resolver. The solution must be centralized and work across all five VPCs. Which solution will meet these requirements?

Incorrect. Route 53 Resolver does not use "*" as a catch-all forwarding domain; the correct catch-all is ".". In addition, a user-defined system rule that targets Route 53 Resolver for corp.global.internal is not how private hosted zone resolution is configured. Private hosted zones are resolved automatically for associated VPCs, and forwarding rules target external DNS servers through outbound endpoints, not Route 53 Resolver itself.

Incorrect. A forwarding rule cannot target "Route 53 Resolver" as if it were an external DNS destination; forwarding rules must specify target IP addresses reachable through an outbound endpoint. Also, a system rule for "." that targets an outbound endpoint is not a valid Route 53 Resolver construct. This option misunderstands both how private hosted zones are resolved and how outbound forwarding rules are defined.

Correct. A forwarding rule for corp.global.internal ensures that queries for that namespace are handled separately from the default rule, and the catch-all forwarding rule for "." sends all other DNS queries to the on-premises DNS resolver by using the Route 53 Resolver outbound endpoint. This matches the requirement for split DNS behavior: AWS internal names stay within the AWS-controlled resolution path, while non-matching domains are sent on premises. It also supports centralized management because the outbound endpoint can live in a shared services VPC and the Resolver rules can be shared and associated with all five VPCs. Among the available options, this is the only one that correctly uses "." as the catch-all and forwards external queries to the on-premises DNS server.

Incorrect. A forwarding rule for "." alone would send all unmatched queries to the on-premises DNS resolver, but this option does not provide the required specific handling for corp.global.internal. It also incorrectly states that Resolver rules are associated with a transit gateway attachment, when in reality Resolver rules are associated with VPCs. Without the more specific corp.global.internal rule, the design does not explicitly satisfy the split-resolution requirement across all VPCs.

Question Analysis

Core concept: This question tests centralized hybrid DNS resolution with Amazon Route 53 Resolver, outbound endpoints, forwarding rules, and private hosted zones in a multi-VPC environment connected by a transit gateway and Direct Connect.

Why correct: The correct design uses Route 53 Resolver rules with longest-suffix matching, where a specific rule for corp.global.internal handles that namespace separately from a default "." rule that forwards all other queries to the on-premises DNS resolver.

Key features: Route 53 private hosted zones are authoritative for associated VPCs, outbound endpoints are used to send forwarded DNS queries to external resolvers, and Resolver rules can be shared and associated with multiple VPCs for centralized DNS management.

Common misconceptions: You do not target Route 53 Resolver itself as a forwarding destination, there is no wildcard "*" catch-all rule in Resolver, and Resolver rules are associated with VPCs rather than transit gateway attachments.

Exam tips: Remember that "." is the catch-all domain, Resolver chooses the most specific matching rule, and centralized hybrid DNS commonly uses a shared outbound endpoint plus shared Resolver rules across VPCs.
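The longest-suffix rule selection described above can be sketched directly. This is an illustrative model, not Resolver's implementation: the rule targets below (the "private-hosted-zone" marker and the on-premises resolver IP 10.10.0.2) are assumed values for the example.

```python
# Hypothetical Resolver rule set: the most specific matching domain wins,
# and "." matches every name as the fallback (targets are illustrative).
RULES = {
    "corp.global.internal": "private-hosted-zone",  # resolved inside AWS
    ".": "10.10.0.2",  # catch-all, forwarded on-premises via outbound endpoint
}

def resolve_target(qname: str) -> str:
    """Pick the longest-suffix matching rule, modeling how Route 53 Resolver
    chooses the most specific rule for a query name."""
    best, best_len = RULES["."], 0
    for domain, target in RULES.items():
        if domain == ".":
            continue  # the catch-all only applies when nothing else matches
        if qname == domain or qname.endswith("." + domain):
            if len(domain) > best_len:
                best, best_len = target, len(domain)
    return best
```

So a query for app.corp.global.internal stays on the AWS-controlled path, while example.com falls through to the "." rule and is forwarded to the on-premises resolver.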


11
Question 11

A European automaker is moving customer‑facing services and analytics from two data centers (50‑mile separation) to AWS. Workloads span accounts in eu‑west‑3 and eu‑central‑1. In each Region, the company orders two resilient 1 Gbps DX circuits. The network must interconnect all VPCs and on‑premises, maintain Regional separation, and provide failover across Regions. Design multi‑Region connectivity over DX with full VPC reachability and cross‑Region resiliency. Which solution will meet these requirements?

DXGW with private VIFs to each VPC’s VGW can provide on-premises-to-VPC connectivity, but it does not provide an efficient way to interconnect all VPCs with each other across accounts/Regions. You would need additional constructs (VPC peering, VPN, or TGW) for VPC-to-VPC reachability. Also, ECMP across four links is not the core issue; the architecture doesn’t meet the “all VPCs interconnected” requirement cleanly.

This option is incorrect because a Transit Gateway is a Regional service; you cannot have a “single TGW” spanning eu-west-3 and eu-central-1. Also, “inter-Region LAG” is not a supported Direct Connect concept (LAG is within a single DX location). While DXGW + transit VIF to TGW is correct in principle, the single-TGW multi-Region premise makes the design invalid.

Correct. Deploying a TGW in each Region preserves Regional separation and provides a scalable hub for all VPC attachments in that Region. Attaching both TGWs to a DXGW using transit VIFs enables resilient DX connectivity from on-premises to both Regions. TGW peering between Regions provides controlled cross-Region VPC reachability and enables failover paths if one Region or DX path is impaired, while keeping routing domains manageable.

This option is incorrect because it mixes private VIF-based DXGW connectivity (used for VGW) with TGW connectivity, which requires transit VIFs. Additionally, “inter-Region LAG” is not a valid AWS DX feature. Even if you added TGW, the described attachment model is wrong/incomplete and does not clearly achieve full VPC reachability with proper multi-Region resiliency and separation.

Question Analysis

Core Concept: This question tests AWS Direct Connect multi-Region design using Direct Connect Gateway (DXGW) and AWS Transit Gateway (TGW) to provide scalable, full-mesh VPC connectivity plus resilient on-premises connectivity, while preserving Regional separation and enabling cross-Region failover.

Why the Answer is Correct: Option C is the only design that cleanly meets all requirements: (1) interconnect all VPCs and on-premises, (2) keep Regional separation (each Region has its own routing domain), and (3) provide cross-Region resiliency. A DXGW provides a global attachment point for Direct Connect and can be associated with multiple TGWs (including in different Regions). By deploying one TGW per Region and attaching each to the DXGW using transit VIFs, the on-premises network can reach both Regional TGWs over the resilient DX circuits. Then, peering the TGWs provides controlled cross-Region connectivity between VPCs in eu-west-3 and eu-central-1. If one Region (or its DX path) is impaired, routing can fail over to the other Region via the remaining DX connectivity and the TGW peering path, while still keeping each Region's VPC attachments local to its TGW.

Key AWS Features:
- DXGW + transit VIF: enables DX to connect to TGW (not directly to VPCs) and supports multi-account/multi-VPC at scale.
- One TGW per Region: maintains Regional separation and avoids a single global routing blast radius.
- TGW peering: provides cross-Region VPC-to-VPC and transitive routing between the two TGWs (with explicit route table control).
- Resiliency: two DX circuits per Region (ideally diverse locations) plus BGP path selection for failover.

Common Misconceptions:
- Using private VIFs to VGWs (Option A) seems simpler, but does not scale to "all VPCs" well and does not provide a clean, managed cross-Region VPC interconnect.
- A single TGW for two Regions (Option B) is not valid because TGW is a Regional resource; "inter-Region LAG" is also not an AWS construct for DX.
- Mixing private VIFs and TGW attachments (Option D) is conceptually inconsistent: DXGW-to-TGW requires transit VIFs, and "inter-Region LAG" again is not applicable.

Exam Tips: Remember: TGW is Regional; DXGW is global. For multi-Region DX with many VPCs, the common pattern is DX circuits -> DXGW (transit VIF) -> TGW per Region, and TGW peering for cross-Region connectivity and failover. Watch for invalid terms like "inter-Region LAG" and for designs that don't scale beyond a few VPCs.
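The failover behavior in this design can be modeled as simple path preference: each destination normally uses its Region's local DX path, with the cross-Region route via TGW peering as a lower-preference backup. This is a rough illustration of the routing intent, not BGP itself; the path names and preference values are invented for the example.

```python
# Hypothetical candidate paths from on-premises to a VPC in eu-central-1
# (names and preference values are illustrative, modeling BGP path choice).
paths = [
    {"via": "dx-eu-central-1",               "healthy": True, "preference": 100},
    {"via": "dx-eu-west-3-then-tgw-peering", "healthy": True, "preference": 50},
]

def best_path(candidates):
    """Select the highest-preference healthy path. If the Regional DX path
    fails, traffic falls back to the other Region's DX circuits and crosses
    the TGW peering attachment to reach the destination VPC."""
    healthy = [p for p in candidates if p["healthy"]]
    return max(healthy, key=lambda p: p["preference"])["via"] if healthy else None
```

In steady state both Regions use their local circuits (preserving Regional separation); only a failure shifts traffic onto the peering path.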

12
Question 12

An application team for a startup company is deploying a new multi-tier application into the AWS Cloud. The application will be hosted on a fleet of Amazon EC2 instances that run in an Auto Scaling group behind a publicly accessible Network Load Balancer (NLB). The application requires the clients to work with UDP traffic and TCP traffic. In the near term, the application will serve only users within the same geographic location. The application team plans to extend the application to a global audience and will move the deployment to multiple AWS Regions around the world to bring the application closer to the end users. The application team wants to use the new Regions to deploy new versions of the application and wants to be able to control the amount of traffic that each Region receives during these rollouts. In addition, the application team must minimize first-byte latency and jitter (randomized delay) for the end users. The application team must design a network architecture that can handle both TCP and UDP traffic, support phased global rollouts by controlling traffic distribution to multiple AWS Regions, and reduce latency and jitter for end users. How should the application team design the network architecture for the application to meet these requirements?

CloudFront distributions with NLB origins plus Route 53 weighted routing can provide some traffic shifting, but CloudFront does not support UDP and is not a general front door for arbitrary TCP application protocols behind an NLB. Also, weighted DNS does not inherently minimize jitter/first-byte latency because traffic still traverses the public internet after DNS resolution. This option mixes services in a way that doesn’t best meet the TCP/UDP and jitter requirements.

AWS Global Accelerator is purpose-built for global TCP/UDP applications. It provides anycast static IPs, routes users to the closest healthy Regional endpoint over the AWS global network, and reduces first-byte latency and jitter. Endpoint groups per Region plus the traffic dial enable controlled, percentage-based rollouts to new Regions. NLBs are valid endpoints, fitting the EC2 Auto Scaling + NLB architecture.

S3 Transfer Acceleration only accelerates transfers to and from Amazon S3 using edge locations; it does not front an EC2/NLB-based multi-tier application and does not provide TCP/UDP listener routing to NLB endpoints. It also does not offer the required multi-Region traffic dials for phased rollouts of an application stack. This is a service mismatch.

CloudFront origin groups are mainly for origin failover (primary/secondary) and are oriented around HTTP/HTTPS content delivery rather than generic TCP/UDP application traffic. Route 53 latency routing can steer users to lower-latency Regions, but it cannot precisely control rollout percentages like Global Accelerator traffic dials, and it won’t reduce jitter/first-byte latency as effectively as routing over the AWS backbone.

Question Analysis

Core Concept: This question tests global network front doors for multi-Region applications that need both TCP and UDP, plus controlled traffic shifting during rollouts while minimizing latency and jitter. The key service is AWS Global Accelerator (AGA), which provides anycast static IPs and routes user traffic onto the AWS global network to the closest healthy endpoint.

Why the Answer is Correct: AWS Global Accelerator supports both TCP and UDP and is designed specifically to improve first-byte latency and reduce jitter by keeping traffic on the AWS backbone instead of the public internet for as much of the path as possible. It also supports multi-Region active-active architectures and phased rollouts using endpoint groups per Region. The traffic dial lets you precisely control what percentage of traffic is sent to each Region during deployments (e.g., 1% canary, then 10%, then 50%, etc.). Registering each Region's publicly accessible NLB as an endpoint is a standard pattern for EC2 Auto Scaling behind NLB.

Key AWS Features:
- Anycast static IPs: one set of IPs globally, simplifying client configuration and failover.
- Listeners and port ranges: map required TCP/UDP ports to endpoints.
- Endpoint groups per Region: health checks and routing decisions per Region.
- Traffic dials: percentage-based traffic shifting for controlled rollouts.
- Health-based failover: automatically routes away from unhealthy endpoints.

Common Misconceptions: CloudFront is often chosen for "global acceleration," but CloudFront is primarily a CDN/edge caching service for HTTP/HTTPS (and limited other protocols) and is not the right fit for generic TCP/UDP application traffic behind NLB. Route 53 weighted/latency routing can shift traffic, but it does not reduce jitter/first-byte latency the way AGA does because it relies on DNS and the public internet path after resolution.

Exam Tips: When you see requirements for (1) TCP + UDP, (2) multi-Region traffic steering with percentage control, and (3) reduced latency/jitter, think AWS Global Accelerator with endpoint groups and traffic dials. Route 53 is for DNS-based steering; CloudFront is for caching/HTTP acceleration; AGA is for non-HTTP and performance-sensitive global entry points.
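A traffic dial can be thought of as a weight on each Regional endpoint group. The sketch below is a rough simulation of a 10% canary rollout to a new Region; the Region names and percentages are illustrative, and real Global Accelerator routing also factors in client proximity and endpoint health, which this model ignores.

```python
import random

# Hypothetical endpoint groups with traffic dials: the new Region receives
# a 10% canary share during a phased rollout (values are illustrative).
DIALS = {"us-east-1": 90, "eu-west-1": 10}

def route_flow(rng: random.Random) -> str:
    """Weighted choice proportional to each Region's traffic dial — a rough
    model of shifting a controlled share of flows to a new Region."""
    regions = list(DIALS)
    weights = [DIALS[r] for r in regions]
    return rng.choices(regions, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded for a reproducible simulation
sample = [route_flow(rng) for _ in range(10_000)]
```

Turning the dial from 10 to 50 to 100 in DIALS mimics the phased rollout the question describes, with no client-side changes because the anycast IPs stay constant.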

13
Question 13

A TGW is attached to a DX gateway and 19 VPCs. Two new VPCs (10.0.32.0/21 and 10.0.40.0/21) will be attached. The allowed prefix list has room for only one more entry. Advertise the routes from AWS to on‑premises while staying within the prefix‑list entry limit. What should the engineer do?

Incorrect. AWS managed prefix lists are not the feature used to permit AWS-to-on-premises route advertisement over Direct Connect. In addition, this option still uses two separate CIDR entries rather than a single aggregate, so it does not address the one-entry limit even conceptually. The engineer must modify the DXGW allowed prefix list, not a managed prefix list, to affect advertised routes.

Incorrect. Adding 10.0.32.0/21 and 10.0.40.0/21 directly to the Direct Connect gateway allowed prefix list would require two separate entries. The question explicitly states that the allowed prefix list has room for only one more entry, so this option cannot meet the stated constraint. While the prefixes themselves are valid AWS routes, this approach fails the scale-limit requirement.

Incorrect. The CIDR 10.0.32.0/20 is the right aggregate, but AWS managed prefix lists do not control route advertisement from a Direct Connect gateway to on-premises. Managed prefix lists are reusable CIDR sets for VPC resources such as route tables and security groups, not for DXGW BGP filtering. Therefore, placing the summary there would not solve the actual problem being asked.

Correct. The Direct Connect gateway allowed prefix list is the mechanism that controls which AWS prefixes are advertised to on-premises over BGP when using a TGW association. Adding 10.0.32.0/20 consumes only one remaining entry and covers both new VPC CIDRs: 10.0.32.0/21 and 10.0.40.0/21 are adjacent and together make up exactly the /20 aggregate, so this option satisfies the requirement to advertise the routes while staying within the entry limit.

Question Analysis

Core concept: This question tests how a Direct Connect gateway (DXGW) allowed prefix list controls which AWS prefixes are advertised to on-premises when a transit gateway (TGW) is associated. The allowed prefix list is effectively a whitelist for AWS-to-on-premises route advertisement, and each CIDR consumes one entry. When the list has room for only one more entry, the engineer must use a valid aggregate route if possible. Why correct: The two new VPC CIDRs, 10.0.32.0/21 and 10.0.40.0/21, can both be covered by the single aggregate 10.0.32.0/20. Although the two /21 ranges are not adjacent to each other, they are both contained within that /20 supernet, so adding 10.0.32.0/20 to the DXGW allowed prefix list uses only one entry and permits advertisement of both VPC routes to on-premises. This satisfies the route-advertisement requirement while staying within the prefix-list entry limit. Key features: DXGW allowed prefixes determine which AWS-side routes are advertised over BGP to on-premises through the DXGW/TGW association. Route summarization is commonly used to reduce the number of advertised prefixes and stay within service limits. AWS managed prefix lists are unrelated to Direct Connect BGP advertisement control; they are used as reusable sets of CIDRs in VPC constructs such as security groups and route tables. Common misconceptions: A common mistake is to confuse AWS managed prefix lists with the DXGW allowed prefix list. Another trap is assuming that only perfectly adjacent prefixes can ever be summarized; in practice, a broader aggregate can be used if it is acceptable to advertise that larger range. The key exam clue here is the one-entry limit on the allowed prefix list, which strongly points to using a single aggregate in the DXGW allowed prefix list. Exam tips: When you see DXGW, TGW, and an allowed prefix list, focus on BGP route advertisement from AWS to on-premises rather than VPC routing or security-group constructs. 
Check whether multiple VPC CIDRs can be represented by one supernet entry. Also verify that the option places the CIDR in the correct AWS feature: the DXGW allowed prefix list, not an AWS managed prefix list.
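The summarization claim can be checked with Python's standard ipaddress module; a quick sketch using the CIDRs from the question:

```python
import ipaddress

# CIDRs from the question: the two new VPC ranges and the proposed aggregate.
vpc_a = ipaddress.ip_network("10.0.32.0/21")    # 10.0.32.0 - 10.0.39.255
vpc_b = ipaddress.ip_network("10.0.40.0/21")    # 10.0.40.0 - 10.0.47.255
aggregate = ipaddress.ip_network("10.0.32.0/20")  # 10.0.32.0 - 10.0.47.255

# Both /21s are subnets of the /20, so one allowed-prefix entry covers both.
assert vpc_a.subnet_of(aggregate) and vpc_b.subnet_of(aggregate)

# They are also adjacent, so they collapse into exactly that /20:
summary = list(ipaddress.collapse_addresses([vpc_a, vpc_b]))
print(summary)  # [IPv4Network('10.0.32.0/20')]
```

The same check is worth running before adding any aggregate to a DXGW allowed prefix list, since advertising a broader range than intended can attract unwanted return traffic.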

Question 14
(Select 2)

An IoT company collects data from thousands of sensors that are deployed in the United States and South Asia. The sensors use a proprietary communication protocol that is built on UDP to send the data to a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group and run behind a Network Load Balancer (NLB). The instances, Auto Scaling group, and NLB are deployed in the us-west-2 Region. The company's data shows that data from the sensors in South Asia occasionally gets lost in transit over the public internet and does not reach the EC2 instances. The company needs a solution to resolve the issue of data loss from the South Asian sensors. The solution must provide a reliable and low-latency path for the UDP traffic from the sensors to the application, leveraging AWS services to optimize the network performance over long distances. Which solutions will resolve this issue? (Choose two.)

Correct. AWS Global Accelerator supports UDP and can front an existing NLB. Sensors send to GA anycast static IPs; traffic enters the nearest AWS edge location and then uses the AWS global backbone to reach the us-west-2 NLB. This typically reduces packet loss/jitter caused by suboptimal public internet routing over long distances and improves latency without changing the application protocol.

Incorrect. Amazon CloudFront is primarily a CDN for HTTP/HTTPS and does not provide a general-purpose UDP acceleration path to an NLB origin. While CloudFront can front certain TCP-based origins for web delivery, it is not designed to proxy proprietary UDP sensor protocols. For UDP acceleration and static anycast ingress, Global Accelerator is the appropriate AWS service.

Correct. Deploying a second NLB/Auto Scaling group in ap-south-1 places compute closer to South Asian sensors, reducing long-haul internet traversal where loss occurs. Route 53 latency-based routing directs clients to the Region with the lowest observed latency (from the DNS resolver perspective), enabling active-active multi-Region ingestion and improving performance and effective reliability for geographically distributed devices.

Incorrect. Route 53 failover routing is active-passive and is intended for availability when the primary endpoint becomes unhealthy. It does not optimize for latency and will not address intermittent packet loss when the primary endpoint remains healthy. Also, “packets are dropped” is not something Route 53 can detect per-flow; DNS failover is coarse-grained and slow relative to UDP telemetry streams.

Incorrect. Enhanced networking with ENA improves EC2 network performance (higher bandwidth, lower latency, higher PPS) once packets reach the instance. The problem described is packet loss in transit over the public internet from South Asia to us-west-2, before traffic arrives at AWS. ENA will not materially improve reliability of the long-distance path or reduce internet routing-related loss.

Question Analysis

Core Concept: This question tests how to improve reliability and latency for long-distance UDP traffic into AWS. Key services are AWS Global Accelerator (GA) for optimizing internet ingress onto the AWS global network and multi-Region architectures with Amazon Route 53 latency-based routing. Why the Answer is Correct: A (Global Accelerator + existing NLB) addresses packet loss and variable performance over the public internet by giving sensors anycast static IPs that terminate at the nearest AWS edge location. From there, traffic traverses the AWS global backbone to the Regional endpoint (your NLB in us-west-2). This typically reduces jitter, improves path stability, and lowers latency compared to best-effort internet routing—especially for South Asia to us-west-2. C (second stack in ap-south-1 + Route 53 latency routing) reduces the physical distance and number of internet hops by placing compute closer to South Asian sensors. With latency-based routing, clients are directed to the Region that provides the lowest latency, improving both performance and effective reliability (fewer long-haul segments where loss can occur). Together, GA optimizes the network path and multi-Region reduces distance, providing a robust solution. Key AWS Features: Global Accelerator supports TCP and UDP and integrates with NLB as an endpoint. It uses health checks and automatic endpoint failover, and provides static anycast IPs. Route 53 latency routing returns the best-performing endpoint per DNS resolver location; combined with active-active multi-Region NLB/ASG deployments, it improves user experience for globally distributed devices. Common Misconceptions: CloudFront is for HTTP/HTTPS (and some TCP use cases) and is not a general UDP proxy to an NLB origin. Route 53 failover is for availability (active-passive), not latency optimization, and does not directly solve intermittent packet loss caused by long-distance paths. 
Enhanced networking (ENA) improves instance-level throughput/pps but does not fix packet loss occurring before traffic reaches AWS. Exam Tips: For “UDP + global clients + internet path issues,” think Global Accelerator. For “clients in multiple continents + need low latency,” think multi-Region plus Route 53 latency routing (or GA with multiple endpoints). Distinguish latency-based routing (performance) from failover routing (availability).
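The effect of latency-based routing can be sketched with a toy selection function: Route 53 answers each resolver with the Region showing the lowest measured latency from that resolver's vantage point. The resolver names and latency figures below are invented for illustration:

```python
# Made-up latency measurements (ms) per DNS-resolver location, per Region.
LATENCY_MS = {
    "mumbai-resolver":  {"us-west-2": 230, "ap-south-1": 25},
    "seattle-resolver": {"us-west-2": 12,  "ap-south-1": 240},
}

def resolve(resolver: str) -> str:
    """Return the Region with the lowest observed latency for this resolver."""
    regions = LATENCY_MS[resolver]
    return min(regions, key=regions.get)

print(resolve("mumbai-resolver"))   # ap-south-1
print(resolve("seattle-resolver"))  # us-west-2
```

This is why option C yields an active-active deployment: each sensor fleet is steered to its nearest Regional stack, shortening the internet path where the loss occurs.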

Question 15
(Select 2)

A company runs a stateless web app (ASG) and a stateful admin/management app (separate ASG) behind the same Application Load Balancer (ALB) in private subnets. The company wants to access the management app at the same URL as the web app with the path prefix /management. Protocol, hostname, and port must be identical for both apps. Access to /management must be limited to on‑premises source IP ranges. The ALB uses an ACM certificate. Implement ALB listener rules to path‑route /management to the admin targets and restrict it to on‑premises CIDRs while keeping all traffic over the same HTTPS listener. Which combination of steps should the network engineer take? (Choose two.)

This option correctly creates a non-default HTTPS listener rule that combines a path-pattern condition for /management with a source-ip condition for the on-premises CIDR ranges. That is exactly how an ALB restricts access to a specific path while keeping both applications on the same hostname, port, and ACM-backed HTTPS listener. Forwarding matching requests to the management target group satisfies the routing requirement. Enabling stickiness on the management target group is appropriate because the management application is stateful and benefits from session affinity.

This option is incorrect because the default ALB listener rule cannot be modified to include conditions such as path-pattern or source-ip. The default rule is always unconditional and is evaluated only after all higher-priority rules have been checked. It also describes forwarding to the management target group when the conditions are not matched, which is the opposite of the desired behavior. Group-level stickiness does not fix the invalid listener-rule design.

This option is wrong because ALB listener rules should use the native source-ip condition rather than inspecting the X-Forwarded-For header. X-Forwarded-For can contain multiple addresses and is not the intended access-control primitive for ALB listener rule matching. The requirement is specifically to restrict access by on-premises source CIDR ranges, which ALB supports directly. While group-level stickiness may be useful, the traffic-matching method here is technically inappropriate.

This option represents the required fallback behavior: all non-management traffic is forwarded to the web application target group. In ALB design, the default rule is unconditional and forwards any request that does not match a higher-priority listener rule, which is exactly the behavior the main web app needs. Although the wording loosely implies conditions on the default rule, the step it describes is the required catch-all forwarding action to the web target group. Given the answer set provided, this is the closest valid implementation step paired with option A.

This option is incorrect because forwarding all requests to the web app target group would prevent the /management path from being routed to the management target group. It also says to disable stickiness, which conflicts with the stateful nature of the management application. A proper ALB configuration needs a specific higher-priority rule for /management and a separate default action for everything else. This option does not implement the required listener-rule combination.

Question Analysis

Core concept: This question is about using a single HTTPS Application Load Balancer listener to serve two applications on the same hostname, protocol, and port, while applying path-based routing and source-IP restrictions for the management path. Why correct: The management application must be reached only when the request path is /management and the client source IP is within on-premises CIDRs, which ALB listener rules support directly with combined path-pattern and source-ip conditions. Key features: ALB listener rule priority, path-based routing, source-ip conditions, separate target groups per ASG, and a default catch-all action for the web application. Common misconceptions: The default ALB rule cannot have conditions, and X-Forwarded-For is not the correct listener-rule primitive for source restriction. Exam tips: When a question requires same host/protocol/port, think one listener with multiple rules; use a higher-priority conditional rule for the exception path and let the default action handle everything else.
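As a rough sketch of how the listener evaluates these rules, the priority-ordered matching can be simulated in Python. The target-group names, on-premises CIDR, and priorities below are illustrative, not from the question; fnmatch stands in for ALB's `*` wildcard path patterns:

```python
import ipaddress
from fnmatch import fnmatch

# Hypothetical on-premises range used for the source-ip condition.
ON_PREM = [ipaddress.ip_network("203.0.113.0/24")]

# Rules in priority order; the unconditional default rule comes last.
RULES = [
    {"path": "/management*", "source": ON_PREM, "target": "tg-admin"},
    {"path": None,           "source": None,    "target": "tg-web"},  # default
]

def route(path: str, client_ip: str) -> str:
    """Return the target group for a request, ALB-style: first rule whose
    conditions ALL match wins; the conditionless default always matches."""
    ip = ipaddress.ip_address(client_ip)
    for rule in RULES:
        if rule["path"] is not None and not fnmatch(path, rule["path"]):
            continue
        if rule["source"] is not None and not any(ip in net for net in rule["source"]):
            continue
        return rule["target"]
    raise RuntimeError("unreachable: default rule always matches")

print(route("/management/users", "203.0.113.10"))  # tg-admin (path + source match)
print(route("/management/users", "198.51.100.7"))  # tg-web   (source fails -> default)
print(route("/index.html", "203.0.113.10"))        # tg-web   (path fails -> default)
```

Note the second case: on a real ALB, a /management request from a non-on-premises IP falls through to the default rule too; deployments that must hard-block it often add a lower-priority fixed-response 403 rule for /management instead.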


Question 16

A company has a web application that runs on a fleet of Amazon EC2 instances. A new company regulation mandates that all network traffic to and from the EC2 instances must be sent to a centralized third-party EC2 appliance for content inspection before it reaches its destination. The company is performing tests to ensure that this new inspection layer is correctly implemented without affecting the application's functionality. The inspection appliance is a single-node instance that needs to see a copy of all network traffic. The network engineer must design a solution that transparently forwards all ingress and egress network traffic from the application's EC2 instances to the third-party inspection appliance for real-time content inspection. The solution must capture a complete copy of the traffic and deliver it to the appliance without disrupting the original traffic flow. Which solution will meet these requirements?

Incorrect. VPC Flow Logs provide network flow metadata (source/destination IP, ports, protocol, bytes, action) but do not include packet payloads. Content inspection requires packet copies (L2/L3/L4 and payload visibility), which flow logs cannot provide. Additionally, flow logs are delivered asynchronously to S3/CloudWatch, not suitable for real-time inspection or validating that the inspection layer is receiving full traffic.

Correct. VPC Traffic Mirroring can mirror ingress and egress packets from the application instances’ ENIs and send a complete copy to a mirror target without affecting the original traffic path. An NLB can be used as the mirror target to forward mirrored packets to the third-party inspection appliance. A mirror filter controls what is mirrored, and the original application traffic continues normally.

Incorrect. Amazon Kinesis Data Firehose is not a supported VPC Traffic Mirroring target and is designed for delivering streaming records to destinations like S3, Redshift, or OpenSearch—not raw packet mirroring for real-time inspection. Even conceptually, converting packets into Firehose records would not preserve full fidelity of network traffic required for content inspection.

Incorrect for the stated requirement. Gateway Load Balancer (GWLB) is used for inline traffic steering through security appliances using GENEVE encapsulation, meaning the original traffic is routed through the appliance before reaching its destination. The question requires a transparent solution that sends a copy of traffic to a single-node appliance without disrupting the original flow, which aligns with Traffic Mirroring, not GWLB.

Question Analysis

Core Concept: This question is primarily about Amazon VPC Traffic Mirroring. Traffic Mirroring creates an out-of-band copy of network packets from ENIs (sources) and sends that copy to a mirror target for inspection, IDS/IPS, or troubleshooting—without changing the original packet flow. Why the Answer is Correct: The requirement is explicit: the inspection appliance is a single-node instance that “needs to see a copy of all network traffic” and the solution must “deliver it to the appliance without disrupting the original traffic flow.” That is exactly what VPC Traffic Mirroring does. Among the options, using a Network Load Balancer (NLB) as the mirror target is a supported pattern that lets you scale or abstract the target endpoint. You create a mirror session with the application instances’ ENIs as sources, apply a mirror filter for ingress/egress, and send mirrored packets to the NLB, which forwards them to the inspection appliance. The production traffic continues to its original destination unchanged. Key AWS Features / Configuration Notes: - VPC Traffic Mirroring components: mirror source (ENI), mirror target (NLB or ENI), mirror filter (rules for direction/ports/protocols), and mirror session. - Mirroring is packet-level (not just metadata), enabling real-time content inspection. - Using an NLB as the target decouples sources from the appliance and can later support multiple appliances (even though this question states single-node). - Ensure the appliance can receive mirrored traffic (often via a dedicated ENI/subnet/security group) and that NLB target group health checks and listener configuration match the appliance’s capture method. Common Misconceptions: - Flow logs (Option A) capture metadata (5-tuple, accept/reject, bytes/packets), not packet payloads, so they cannot support content inspection. 
- Gateway Load Balancer (Option D) is for inline insertion (steering traffic through appliances), not for sending a copy while leaving the original flow untouched. - Kinesis Data Firehose (Option C) is not a valid Traffic Mirroring target and is for streaming data delivery, not packet mirroring. Exam Tips: - If the question says “copy of traffic” and “no disruption,” think VPC Traffic Mirroring. - If it says “all traffic must be sent through an appliance before reaching destination,” think GWLB (inline). Distinguish “copy” (mirroring) vs “steer/inline” (GWLB).

Question 17

A marketing company is using a hybrid infrastructure to connect its branch offices to AWS over AWS Direct Connect and a software-defined wide area network (SD-WAN) overlay. The company currently connects its multiple VPCs to a third-party SD-WAN appliance, which resides in a transit VPC within the same account, using AWS Site-to-Site VPNs. The company is planning to expand its AWS footprint by connecting more VPCs to the SD-WAN appliance transit VPC. However, the existing architecture is experiencing challenges with scalability, route table limitations, and higher costs due to the numerous VPN connections. A network engineer must design a new solution to address these issues and reduce the overall operational overhead. The network engineer needs to design a solution that provides scalable connectivity between all VPCs and the SD-WAN appliance while resolving the route table limitations and reducing costs. The solution must be implemented with the least amount of operational overhead. Which solution will meet these requirements with the LEAST amount of operational overhead?

TGW improves VPC-to-VPC scalability, but using a Site-to-Site VPN between TGW and the SD-WAN transit VPC still relies on VPN tunnel constructs and often more manual routing (static routes or limited dynamic behavior depending on design). It can work, but it is not the lowest operational overhead compared to TGW Connect’s purpose-built SD-WAN integration and BGP-based dynamic route exchange.

This is the best fit: attach all VPCs to TGW for scalable hub-and-spoke connectivity, then use a TGW Connect attachment to integrate the third-party SD-WAN appliance/virtual hub. TGW Connect uses GRE + BGP for dynamic routing, reduces per-VPC VPN sprawl, simplifies route management via TGW route tables/propagation, and is the most operationally efficient approach for SD-WAN overlays at scale.

VPC peering does not scale well for this use case: it is non-transitive (a key limitation when trying to build a hub), requires managing many peering connections as VPC count grows, and increases route table entries per peer. It also doesn’t inherently reduce operational overhead versus the current model and can reintroduce route table scaling challenges.

This mixes two incompatible scaling approaches: VPC peering for VPC connectivity (which is non-transitive and operationally heavy at scale) and TGW Connect for SD-WAN integration. Even if SD-WAN integration is improved, the VPC-to-VPC and VPC-to-hub connectivity would still suffer from peering’s scaling/management limitations, so it does not meet the overall requirement with the least operational overhead.

Question Analysis

Core concept: This question tests scalable hub-and-spoke networking in AWS using AWS Transit Gateway (TGW) and, specifically, Transit Gateway Connect for integrating third-party SD-WAN appliances. It also targets common scaling pain points of many Site-to-Site VPNs (per-VPC tunnels, route table growth, and operational overhead). Why the answer is correct: TGW is the AWS-native way to connect many VPCs through a central routing hub, avoiding the mesh of VPNs and the per-VPC route table scaling issues that arise when each VPC builds its own VPN to a transit VPC appliance. To connect an SD-WAN appliance environment to TGW with the least operational overhead, TGW Connect is designed for this exact use case: it provides a high-scale, BGP-based integration between TGW and third-party SD-WAN virtual hubs/appliances using GRE tunnels and BGP for dynamic routing. This reduces the number of individual VPN connections, simplifies route propagation/segmentation through TGW route tables, and scales as more VPC attachments are added. Key AWS features and best practices: Use TGW VPC attachments for each VPC, and TGW route tables to control segmentation (e.g., shared services, prod/dev separation). Use TGW Connect attachment to the SD-WAN appliance transit VPC (often via an intermediate VPC attachment plus Connect) and run BGP to dynamically exchange routes between TGW and the SD-WAN overlay. This avoids static route management and minimizes operational tasks when adding VPCs. It also aligns with AWS Well-Architected (Reliability/Operational Excellence) by centralizing routing and using managed constructs. Common misconceptions: Option A (TGW + Site-to-Site VPN) seems simpler, but it keeps VPN constructs in the design and typically requires more tunnel management and may not leverage SD-WAN native integration patterns. 
Options C/D rely on VPC peering, which does not scale well (non-transitive, per-connection management, route table entries per peer) and does not solve the core scalability/operational overhead problem. Exam tips: When you see “many VPCs,” “route table limitations,” and “too many VPNs,” think Transit Gateway. When you see “third-party SD-WAN integration” and “least operational overhead,” think TGW Connect with BGP for dynamic routing rather than building/maintaining many VPNs or peering links.

Question 18

A company has a 2 Gbps AWS Direct Connect hosted connection from its office to a VPC in ap-southeast-2 and adds a 5 Gbps hosted connection from a different Direct Connect location in the same Region. The connections terminate at different routers with an iBGP session between them. The network engineer wants the VPC to prioritize the 5 Gbps connection, with failover to the 2 Gbps connection if the 5 Gbps connection fails. Ensure the VPC uses the 5 Gbps Direct Connect connection for traffic to the office, with failover to the 2 Gbps connection when the 5 Gbps connection is down. Which solution will meet these requirements?

Correct. By applying AS_PATH prepending on the 2 Gbps router’s outbound advertisements to AWS, you make that path less preferred. AWS will choose the shorter AS_PATH (the 5 Gbps connection) for traffic destined to the office. If the 5 Gbps connection goes down, its routes are withdrawn and AWS falls back to the remaining 2 Gbps path, meeting the failover requirement.

Incorrect. Advertising a longer prefix (more specific route) from the 2 Gbps router would make the 2 Gbps path MORE preferred, not less, because longest-prefix match occurs before BGP attribute comparison. That would steer traffic to the 2 Gbps connection, the opposite of the requirement. More-specific routes are typically used to attract traffic, not to de-prefer a link.

Incorrect. Advertising a less specific route from the 5 Gbps router would generally make the 5 Gbps path LESS preferred if the 2 Gbps router continues to advertise a more specific route for the same destination. AWS would match the more specific prefix (likely from the 2 Gbps link) and send traffic there. This undermines the goal of preferring the 5 Gbps connection.

Incorrect. Prepending AS_PATH on the 5 Gbps router would make the 5 Gbps path less attractive to AWS, causing AWS to prefer the 2 Gbps connection. That is the reverse of the desired behavior. AS_PATH prepending should be applied to the backup/secondary path (2 Gbps) so the primary (5 Gbps) remains the most preferred route.

Question Analysis

Core Concept: This question tests AWS Direct Connect routing behavior with BGP and how to influence the path AWS uses from the VPC toward on-premises (the “return path”). With multiple Direct Connect connections in the same Region advertising the same prefixes, AWS selects the best path using BGP attributes (not link speed). You must therefore manipulate BGP attributes or prefixes to make the 5 Gbps path preferred and keep the 2 Gbps as backup. Why the Answer is Correct: To prefer the 5 Gbps connection, you make the 2 Gbps connection less attractive to AWS. Option A does this by prepending (lengthening) the AS_PATH on routes advertised to AWS from the router attached to the 2 Gbps connection. AWS will typically prefer the route with the shorter AS_PATH when other attributes are equal, so the 5 Gbps advertisement (shorter AS_PATH) wins. If the 5 Gbps connection fails, those routes are withdrawn and AWS will then use the remaining 2 Gbps path, achieving failover. Key AWS Features / Behaviors: - Direct Connect uses eBGP between your router and AWS on a private virtual interface to exchange routes. - AWS does not automatically prefer higher bandwidth; it follows BGP best-path selection. - AS_PATH prepending is a standard, supported way to influence inbound traffic selection (AWS-to-customer direction) when you have multiple paths advertising identical prefixes. - With two routers and iBGP between them, you can maintain consistent routing internally while still manipulating what each edge advertises to AWS. Common Misconceptions: Many assume “5 Gbps should be preferred automatically.” It won’t be unless BGP attributes or prefix specificity cause it. Another trap is trying to influence the wrong direction: local preference affects outbound from your network, but the requirement is VPC-to-office (AWS choosing the path). 
Exam Tips: For Direct Connect path preference, remember: to control AWS-to-on-prem traffic, adjust what you advertise to AWS (AS_PATH prepending or more-specific prefixes). Use AS_PATH prepending to make a link backup; use more-specific prefixes carefully because they can change routing in ways that are harder to manage and may require additional IP planning.
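The AS_PATH reasoning can be sketched as a toy best-path comparison. The ASN and prepend count are illustrative, and only the AS_PATH-length step of BGP best-path selection is modelled:

```python
# Identical prefixes learned over two Direct Connect links; with other
# attributes equal, the shorter AS_PATH wins. ASN is illustrative.
CUSTOMER_ASN = 65001

routes = [
    # 5 Gbps link: normal advertisement.
    {"link": "dx-5gbps", "as_path": [CUSTOMER_ASN]},
    # 2 Gbps link: AS_PATH prepended three extra times to de-prefer it.
    {"link": "dx-2gbps", "as_path": [CUSTOMER_ASN] * 4},
]

def best_path(candidates):
    """Pick the route with the shortest AS_PATH (simplified best-path step)."""
    return min(candidates, key=lambda r: len(r["as_path"]))

print(best_path(routes)["link"])  # dx-5gbps

# Failover: if the 5 Gbps routes are withdrawn, only the prepended path remains.
remaining = [r for r in routes if r["link"] != "dx-5gbps"]
print(best_path(remaining)["link"])  # dx-2gbps
```

The prepended path is less preferred but still valid, which is exactly the active/backup behavior the question asks for.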

Question 19
(Select 2)

An internal website runs behind an internal ALB in a VPC (172.31.0.0/16). A private hosted zone example.com exists in Route 53. An AWS Site-to-Site VPN connects the office network to the VPC. Employees must access https://example.com from the office network. Enable private DNS resolution for example.com from on-premises across the VPN to the VPC’s private hosted zone and ALB. Which combination of steps will meet this requirement? (Choose two.)

Correct. An Alias record in the private hosted zone can point example.com (including the zone apex) to the internal ALB. Alias records are Route 53-specific and automatically follow the ALB’s underlying IP changes, which is essential because ALB IPs are not fixed. This is the standard way to publish an internal ALB name inside a private hosted zone for private access.

Incorrect. A CNAME to the ALB’s internal DNS name can work only for non-apex names (e.g., app.example.com). The requirement is https://example.com (apex). DNS does not allow a CNAME at the zone apex because it would conflict with SOA/NS records. Route 53 Alias exists specifically to enable apex mapping to AWS resources like ALBs.

Correct. A Route 53 Resolver inbound endpoint provides IP addresses in the VPC that can receive DNS queries from on-premises resolvers over the VPN. Configuring an on-prem conditional forwarder for example.com to these IPs enables resolution of the private hosted zone from the office network. This is the canonical hybrid DNS pattern for on-prem-to-VPC private DNS.

Incorrect. A Resolver outbound endpoint is used when resources in the VPC need to resolve DNS names hosted on-premises (VPC-to-on-prem). The question requires the reverse: on-prem clients must resolve a private hosted zone in Route 53. Therefore, an inbound endpoint (not outbound) is required for on-prem forwarding into AWS.

Incorrect. 172.31.0.2 (AmazonProvidedDNS) is the VPC resolver address intended for use by instances inside the VPC. It is not designed to be a general-purpose DNS server for on-premises networks across VPN/Direct Connect, and forwarding to it from on-prem is not the supported approach. Route 53 Resolver endpoints are the supported mechanism for hybrid DNS.

Question Analysis

Core concept: This question tests hybrid DNS for private Route 53 hosted zones using Amazon Route 53 Resolver. A private hosted zone is only resolvable from within associated VPCs unless you explicitly extend DNS resolution to on-premises networks. For on-prem users over a Site-to-Site VPN, you typically use a Resolver inbound endpoint so on-prem DNS servers can forward queries into the VPC. Why the answer is correct: You need two things: (1) a DNS record in the private hosted zone that maps example.com to the internal ALB, and (2) a way for on-premises DNS resolvers to query that private hosted zone. Option A creates an Alias record in the private hosted zone pointing to the internal ALB. Alias is the recommended Route 53 mechanism for AWS load balancers because it tracks the ALB’s changing IPs and is supported at the zone apex (example.com). Option C creates a Route 53 Resolver inbound endpoint and configures an on-prem conditional forwarder for example.com to that endpoint. This allows the office DNS server to send queries for example.com across the VPN to the VPC, where Route 53 Resolver can answer using the private hosted zone. Key AWS features / best practices: - Route 53 Alias to ALB: avoids hardcoding IPs and supports apex records. - Route 53 Resolver inbound endpoint: provides static IPs (ENIs) in subnets for on-prem forwarding; secure with SG rules allowing UDP/TCP 53 from on-prem. - Conditional forwarding on-prem: forward only example.com to AWS; keep other DNS local. Common misconceptions: - Pointing on-prem directly at AmazonProvidedDNS (the VPC .2 resolver) is not supported from outside the VPC; it’s only reachable/usable by resources in the VPC. - Using a CNAME at the zone apex (example.com) is not allowed by DNS standards; Alias exists to solve this. - Outbound endpoints are for the opposite direction (VPC-to-on-prem), not on-prem-to-VPC. 
Exam tips: When you see “on-prem must resolve a private hosted zone,” think “Resolver inbound endpoint + conditional forwarder.” When mapping to ALB/NLB/CloudFront, prefer “Alias” over CNAME, especially for apex names. Also remember to consider security groups, subnet placement, and VPN routing, but the key exam pattern is inbound endpoint for hybrid DNS into Route 53 private zones.
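The on-premises conditional-forwarder decision can be sketched as a longest-suffix zone match: queries for the private zone go to the Resolver inbound endpoint IPs over the VPN, and everything else stays local. The endpoint IPs below are hypothetical:

```python
# Hypothetical inbound endpoint ENI IPs, one per subnet for redundancy.
INBOUND_ENDPOINT_IPS = ["172.31.10.10", "172.31.20.10"]
FORWARD_ZONES = {"example.com": INBOUND_ENDPOINT_IPS}

def choose_resolvers(qname: str) -> list:
    """Longest-suffix match of the query name against configured forward zones."""
    labels = qname.rstrip(".").split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in FORWARD_ZONES:
            return FORWARD_ZONES[zone]
    return ["local-resolver"]

print(choose_resolvers("example.com"))      # forwarded to the inbound endpoint
print(choose_resolvers("app.example.com"))  # forwarded to the inbound endpoint
print(choose_resolvers("corp.internal"))    # stays with the local resolver
```

The inverse direction (VPC resources resolving on-premises zones) would instead use Resolver rules with an outbound endpoint, which is why option D points the wrong way.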

Question 20
(Select 3)

A company has VPCs in us-east-1 connected via a transit gateway. An AWS Direct Connect connection is established to an on-premises data center, with the ConnectionState metric in Amazon CloudWatch showing UP, but the transit VIF is DOWN. The network engineer verified transit VIF and BGP configurations on the on-premises router with no issues but cannot ping the Amazon peer IP address. Troubleshoot the DOWN transit VIF to establish connectivity. Which combination of steps should the network engineer take to troubleshoot this issue? (Choose three.)

Correct. The customer router subinterface must use the exact IP address and subnet mask assigned for the Direct Connect virtual interface. If the address or mask is wrong, the router cannot establish Layer 3 adjacency with the Amazon peer IP, so ping fails and BGP cannot start. This is one of the first checks when a VIF is DOWN despite the physical connection being UP.

Incorrect. Direct Connect virtual interfaces require VLAN tagging, so disabling VLAN trunking is not the goal. The important requirement is that the router subinterface carries the correct 802.1Q VLAN tag that matches the VIF configuration. Turning off trunking or VLAN tagging would typically break the VIF rather than fix it.

Incorrect. Checking for a MAC address entry from the AWS endpoint in the ARP table is not one of the primary or most reliable troubleshooting steps for a DOWN transit VIF. The more appropriate checks are the physical optical signal, the VLAN tag, and the point-to-point IP/subnet configuration on the subinterface. ARP observations can be incidental, but they are not the best answer choice compared with the documented foundational checks.

Correct. Even when the Direct Connect connection state is UP, the received optical signal over the cross connect can still be degraded or marginal enough to affect traffic delivery. Verifying light levels and cross-connect quality helps rule out physical transport issues between the customer equipment and the AWS Direct Connect location. AWS troubleshooting guidance includes checking optical signal health when diagnosing VIF problems.

Correct. A Direct Connect virtual interface depends on the correct 802.1Q VLAN tag being configured on the customer router subinterface. If the VLAN ID does not match the VLAN assigned to the transit VIF, frames will not reach the AWS endpoint correctly, preventing peer IP reachability and keeping the VIF DOWN. This is a classic cause of a VIF issue when the underlying physical port remains UP.

Incorrect. TCP port 179 is used by BGP, but the engineer cannot even ping the Amazon peer IP, which means the issue is likely below the BGP session layer. A blocked TCP 179 port would prevent BGP establishment, but it would not usually explain failure to reach the peer IP itself if VLAN and IP connectivity were otherwise correct. Therefore, this is not one of the first three troubleshooting steps for this scenario.

Question Analysis

Core Concept: This question tests troubleshooting of an AWS Direct Connect transit virtual interface when the physical Direct Connect connection is UP but the VIF is DOWN. In this situation, the engineer must validate the physical layer and the customer router subinterface configuration used for the VIF, because BGP cannot come up until the underlying Layer 1/Layer 2/Layer 3 parameters are correct. Why the Answer is Correct: The best troubleshooting steps are to verify the subinterface IP address and subnet mask (A), confirm that the optical signal over the cross connect is healthy (D), and ensure that the correct VLAN tag is configured on the router subinterface (E). A physical connection can show UP while still having signal quality issues that affect traffic, and a VIF will remain DOWN if the VLAN or peer IP settings are incorrect. Since the engineer cannot ping the Amazon peer IP, the problem is most likely below or at the IP adjacency layer rather than a pure BGP policy issue. Key AWS Features: AWS Direct Connect separates physical connection state from virtual interface state. The ConnectionState metric indicates the status of the physical port, while a transit VIF depends on correct optics, 802.1Q VLAN tagging, and point-to-point IP addressing to the Amazon peer before BGP can establish. Transit VIFs are used with a Direct Connect gateway and Transit Gateway for multi-VPC and hybrid connectivity. Common Misconceptions: A common mistake is to focus immediately on BGP TCP port 179 when the Amazon peer IP is not even reachable. Another misconception is assuming that a physical connection marked UP guarantees the optical path is fully healthy for VIF traffic. Engineers may also overemphasize ARP-table validation, but the primary documented checks for a DOWN VIF are physical signal, VLAN tagging, and IP addressing. 
Exam Tips: On the exam, when Direct Connect shows connection UP but VIF DOWN, think in layers: first physical optics/cross-connect, then VLAN encapsulation, then peer IP/subnet configuration, and only after that BGP settings. If the question mentions inability to ping the Amazon peer IP, prioritize pre-BGP troubleshooting. AWS often tests whether you can distinguish physical connection health from virtual interface health.
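The layered checks described above map onto a short stretch of customer-router configuration. The following IOS-style sketch is illustrative only: the VLAN ID, link-local /30 addresses, and ASNs are invented placeholders, not values from the question or from AWS (real VIF parameters come from the Direct Connect console). It shows where the 802.1Q tag, the point-to-point peer addressing, and the BGP neighbor statement each live, i.e., the three layers to verify in order.

```
! Hypothetical values: VLAN 101, 169.254.100.0/30 peering,
! customer ASN 65000, Amazon-side ASN 64512
interface GigabitEthernet0/0.101
 encapsulation dot1Q 101                    ! Layer 2: must match the VLAN assigned to the VIF
 ip address 169.254.100.2 255.255.255.252   ! Layer 3: your side of the /30 to the Amazon peer
!
router bgp 65000
 neighbor 169.254.100.1 remote-as 64512     ! BGP: Amazon-side peer IP and ASN
 neighbor 169.254.100.1 password <BGP-MD5-key>
```

A mismatch in the `encapsulation dot1Q` line or the subinterface IP/mask keeps the VIF DOWN regardless of the BGP block, which is why those checks come before any session-level troubleshooting.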

Success Stories (9)

c***** · Nov 23, 2025

Study period: 2 months

These practice questions help you understand the concepts you can expect on the certification exam. The solutions and explanations are really good. I was able to crack the exam. Thank you.

박** · Nov 17, 2025

Study period: 1 month

I reset the app's 200-odd questions about twice and solved them again, studying until I fully understood the concepts.

r*********** · Nov 14, 2025

Study period: 2 months

Excellent practice questions. It helped in refreshing a lot of concepts.

김** · Nov 9, 2025

Study period: 2 months

The questions covered a wide variety of types, and many similar ones appeared on the actual exam, so they were a big help.

진** · Nov 9, 2025

Study period: 3 months

I learned the concepts through lectures on Udemy and studied with the questions and explanations in this app. I also studied unfamiliar AWS resources separately. The app was really useful!

Practice Tests

Practice Test #1

65 Questions · 170 min · Pass 750/1000

Other AWS Certifications

AWS Certified Solutions Architect - Associate (SAA-C03) · Associate

AWS Certified AI Practitioner (AIF-C01) · Practitioner

AWS Certified Cloud Practitioner (CLF-C02) · Practitioner

AWS Certified Data Engineer - Associate (DEA-C01) · Associate

AWS Certified Developer - Associate (DVA-C02) · Associate

AWS Certified DevOps Engineer - Professional (DOP-C02) · Professional

AWS Certified Machine Learning Engineer - Associate (MLA-C01) · Associate

AWS Certified Security - Specialty (SCS-C02) · Specialty

AWS Certified Solutions Architect - Professional (SAP-C02) · Professional


© Copyright 2026 Cloud Pass, All rights reserved.
