CloudPass LogoCloud Pass
AWSGoogle CloudMicrosoftCiscoCompTIADatabricks
Certifications
AWSGoogle CloudMicrosoftCiscoCompTIADatabricks
Microsoft AZ-305
Microsoft AZ-305

Practice Test #1

Simulate the real exam experience with 50 questions and a 100-minute time limit. Practice with AI-verified answers and detailed explanations.

50Questions100Minutes700/1000Passing Score
Browse Practice Questions

AI-Powered

Triple AI-Verified Answers & Explanations

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Practice Questions

1
Question 1

HOTSPOT - You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication. App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD. You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers. What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The users can connect to App1 without being prompted for authentication: ______

Correct answer: A (An Azure AD app registration). To enable Azure AD authentication for an Azure web app (App Service) and achieve SSO, the application must be represented in Azure AD. That representation is an app registration (enterprise application/service principal) used for OAuth2/OpenID Connect token issuance. With Azure AD-joined Windows 10 devices, users typically have a Primary Refresh Token (PRT), enabling silent token acquisition in supported browsers, so they can access App1 without an interactive prompt. Why others are wrong: - B (Managed identity) is for workload identity (the app accessing Azure resources like Key Vault/Storage) and does not provide end-user authentication/SSO into the web app. - C (Azure AD Application Proxy) is primarily for publishing on-premises apps externally via Azure AD; App1 is already an internet-facing Azure web app, so Application Proxy is unnecessary for SSO and doesn’t replace the need for an app registration.

Part 2:

The users can access App1 only from company-owned computers: ______

Correct answer: A (A Conditional Access policy). Conditional Access is the Azure AD feature designed to control access to cloud apps based on conditions such as device state. To ensure only company-owned computers can access App1, you configure a Conditional Access policy targeting App1 that requires the device to be marked compliant (typically via Microsoft Intune) and/or require an Azure AD joined device. This enforces that access is granted only when the sign-in comes from a managed corporate device, even though the app is reachable from the internet. Why others are wrong: - B (Administrative unit) scopes administrative management of users/devices; it doesn’t enforce sign-in restrictions. - C (Application Gateway) is a Layer 7 reverse proxy/WAF; it can’t evaluate Azure AD device compliance/join state. - D (Azure Blueprints) and E (Azure Policy) govern Azure resource deployment/configuration, not end-user authentication and device-based access controls.

2
Question 2

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription. What should you include in the recommendation?

Azure Activity Log is correct because it records all subscription-level control-plane events, including ARM deployment operations and resource creation events. This makes it the native audit source for identifying new deployments within a given month. It includes useful metadata such as the caller, operation name, timestamp, and status, which are exactly the details needed for a deployment report. In exam scenarios, when the requirement is to track Azure administrative actions or deployments, Activity Log is the primary service to choose.

Azure Advisor focuses on best-practice recommendations across cost, security, reliability, operational excellence, and performance. It does not provide an authoritative audit trail of ARM deployments or a complete list of new deployments. Advisor may highlight configuration issues resulting from deployments, but it is not designed for compliance-style deployment reporting.

Azure Analysis Services is a PaaS analytics service for hosting tabular semantic models (similar to SSAS Tabular) used by BI tools like Power BI. It does not ingest or track Azure subscription Activity Log events by itself. You could theoretically model data after exporting logs elsewhere, but it is not the correct native service to generate deployment reports.

Azure Monitor action groups define notification and automation endpoints (email, SMS, webhook, Logic Apps, etc.) used by alert rules. They do not collect, store, or enumerate ARM deployment events. Action groups could be used after you create an alert on Activity Log/Log Analytics, but they are not the data source for a monthly deployment report.

Question Analysis

Core concept: To generate a monthly report of new Azure Resource Manager (ARM) resource deployments, you need the Azure service that records control-plane operations performed in a subscription. Azure Activity Log captures subscription-level events such as resource creation, update, delete, and deployment operations. Why correct: Azure Activity Log is the authoritative source for ARM deployment activity because it records when resources are deployed and who initiated the operation. You can filter the log for deployment-related operations over the last month and use that data as the basis for a monthly report. Key features: 1) Captures control-plane events for the subscription, including create/update/delete and deployment operations. 2) Provides details such as timestamp, caller, operation name, status, and target resource. 3) Supports filtering by subscription, resource group, operation type, and time range. 4) Can be exported or integrated with other Azure Monitor capabilities if longer retention or advanced reporting is needed. Common misconceptions: - Azure Advisor gives recommendations, not an audit trail of deployments. - Azure Analysis Services is for analytical models, not Azure deployment tracking. - Azure Monitor action groups only send notifications or trigger actions; they do not store deployment history. Exam tips: For questions asking what records "who did what and when" for Azure resources, think Azure Activity Log. It is the default answer for subscription-level ARM operation auditing. Distinguish it from data-plane logs, recommendations, and notification mechanisms.

3
Question 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines. Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic. Does this meet the goal?

This option is correct because Azure Network Watcher IP flow verify is specifically built to determine whether a packet to or from a virtual machine is allowed or denied. It evaluates the effective NSG rules applied at the NIC and subnet level using the specified source, destination, port, and protocol information. The tool also identifies the exact rule that caused the allow or deny result, which makes it ideal for troubleshooting connectivity issues. Since the goal is to analyze whether packets are being allowed or denied to the VMs, this solution directly meets the requirement.

This option is incorrect because the proposed solution does meet the stated goal. IP flow verify is one of the core Azure Network Watcher diagnostics for checking whether traffic is permitted or blocked based on effective NSG rules. Although it does not perform full packet inspection or trace every hop across ExpressRoute, the question only asks to identify whether packets are being allowed or denied to the virtual machines. For that purpose, IP flow verify is the appropriate and sufficient tool.

Question Analysis

Core concept: This question tests knowledge of Azure Network Watcher diagnostics, specifically which tool can determine whether traffic to or from a virtual machine is allowed or denied by Azure networking rules such as NSGs. Why the answer is correct: IP flow verify in Azure Network Watcher is designed to check whether a packet is allowed or denied to or from a VM. It evaluates the effective network security rules that apply to the VM's network interface or subnet and returns the decision along with the specific rule responsible. In this scenario, the requirement is to identify whether packets are being allowed or denied to virtual machines, and IP flow verify directly answers that question. The presence of ExpressRoute does not change the usefulness of this tool for Azure-side packet filtering analysis. Key features / configurations: - Azure Network Watcher provides network diagnostic and monitoring tools for Azure resources. - IP flow verify tests a 5-tuple style flow: source IP, destination IP, source port, destination port, and protocol. - It identifies whether traffic is Allowed or Denied. - It also shows which NSG rule caused the decision. - It is useful for troubleshooting VM connectivity issues related to Azure network security filtering. - It analyzes Azure-side effective security rules, not arbitrary packet captures across the full end-to-end path. Common misconceptions: - Candidates often confuse IP flow verify with packet capture. Packet capture records traffic, while IP flow verify evaluates whether Azure would allow or deny a specific flow. - Some assume ExpressRoute requires a different diagnostic tool. While ExpressRoute affects connectivity, Azure-side NSG evaluation for VM traffic can still be checked with IP flow verify. - Another common mistake is choosing connection troubleshoot when the question specifically asks whether packets are allowed or denied. Connection troubleshoot tests reachability, but IP flow verify is the direct tool for rule-based allow/deny analysis. Exam tips: - If the question asks whether traffic is allowed or denied, think IP flow verify. - If the question asks which NSG rule is affecting traffic, IP flow verify is a strong match. - If the question asks to capture actual packets, use packet capture instead. - If the question asks to test end-to-end connectivity, consider connection troubleshoot. - Distinguish between Azure rule evaluation tools and traffic recording tools.

4
Question 4

You are designing an application that will aggregate content for users. You need to recommend a database solution for the application. The solution must meet the following requirements: ✑ Support SQL commands. ✑ Support multi-master writes. ✑ Guarantee low latency read operations. What should you include in the recommendation?

Azure Cosmos DB SQL API is designed for globally distributed applications. It supports SQL-like queries over JSON documents, offers multi-region (multi-master) writes when enabled, and provides low-latency reads by replicating data to regions close to users. It also supports configurable consistency levels and conflict resolution, which are important considerations when enabling multi-master writes.

Azure SQL Database active geo-replication provides readable secondary replicas in other regions for DR and read scaling, but it does not support multi-master writes. Only the primary database accepts writes; secondaries are read-only. This can meet low-latency reads if users read from a nearby secondary, but it fails the multi-master write requirement.

Azure SQL Database Hyperscale is a tier optimized for very large databases and rapid scale, with architecture that separates compute and storage and can use read replicas. However, it still follows a single-writer model and does not provide multi-master writes across regions. It can help with read performance and scale, but it doesn’t meet the multi-master requirement.

Azure Database for PostgreSQL supports standard SQL and can provide read replicas for scaling reads, but the managed service does not natively provide multi-master, multi-region writes as a built-in feature. Typical HA/DR patterns are single primary with replicas. Achieving multi-master would require complex custom replication/conflict handling, which is not the intended managed solution.

Question Analysis

Core concept: This question tests selecting a globally distributed database that supports SQL-like querying, multi-master (multi-region) writes, and consistently low-latency reads. In Azure, the primary service designed for this combination is Azure Cosmos DB with the SQL API. Why the answer is correct: Azure Cosmos DB SQL API provides a SQL-like query language over JSON documents and is built for global distribution. It supports multi-region writes (multi-master) via the “multi-region writes” capability, allowing writes to be accepted in multiple Azure regions. For low-latency reads, Cosmos DB lets you replicate data to regions close to users and uses automatic indexing and partitioning to keep read performance predictable. You can also choose consistency levels (e.g., Session for user-centric apps) to balance latency and consistency. Key features / configurations: - SQL support: Cosmos DB SQL API uses SQL-like queries (SELECT, WHERE, ORDER BY, etc.) against JSON items. - Multi-master writes: Enable multi-region writes and add multiple regions. Cosmos DB handles conflict detection/resolution (last-writer-wins or custom conflict resolution using stored procedures). - Low-latency reads: Add read regions near users; Cosmos DB provides single-digit millisecond reads in practice when properly partitioned and provisioned. - Partitioning and throughput: Choose a good partition key and provision RU/s (or autoscale) to meet latency/throughput SLOs. - Well-Architected alignment: Improves Performance Efficiency (global distribution, predictable latency), Reliability (multi-region replication), and Operational Excellence (managed service). Common misconceptions: Azure SQL Database with active geo-replication improves read scale and DR, but it is not multi-master; only the primary is writable. Hyperscale improves scale-out storage and read replicas but still does not provide multi-master writes. Azure Database for PostgreSQL (managed Postgres) supports SQL but multi-master writes across regions is not a standard built-in capability for the Azure managed offering; typical patterns rely on single-writer with read replicas. Exam tips: When you see “multi-master writes” plus “low latency reads” and “global users,” think Cosmos DB. If the requirement is strict relational semantics (joins, constraints) and single-writer is acceptable, then Azure SQL is often the answer—but multi-master is the key differentiator here.

5
Question 5

You have the Azure resources shown in the following table.

You need to deploy a new Azure Firewall policy that will contain mandatory rules for all Azure Firewall deployments. The new policy will be configured as a parent policy for the existing policies. What is the minimum number of additional Azure Firewall policies you should create?

Part 1:

US-Central-Firewall-policy is of type Azure Firewall policy and located in Central US.

Yes. The statement says US-Central-Firewall-policy is an Azure Firewall policy located in Central US. In the provided resource table for this type of question, US-Central-Firewall-policy is listed as a Firewall Policy resource and its region is Central US. This matters because Firewall Policy is a regional resource and is the object you would configure as a child policy under a new parent policy (also in Central US). If this policy were in a different region than stated, you could not use a Central US parent policy for it. Why not No: There is no indication that the resource type or region differs; the naming convention and the typical table entries align with “Azure Firewall policy” and “Central US” for this item.

Part 2:

US-East-Firewall-policy is of type Azure Firewall policy and located in East US.

Yes. US-East-Firewall-policy is described as an Azure Firewall policy located in East US, which matches the typical resource table for this scenario. For the parent/child design, this is important because a parent policy must be in the same region as the child policy. Since this policy is in East US, it will require an East US parent policy (unless one already exists, which the question implies it does not). Why not No: If US-East-Firewall-policy were not in East US (or not a Firewall Policy), it would change the count of required parent policies. The scenario’s intent is to show multiple policies across multiple regions, driving the need for multiple regional parent policies.

Part 3:

EU-Firewall-policy is of type Azure Firewall policy and located in West Europe.

Yes. EU-Firewall-policy is of type Azure Firewall policy and located in West Europe. This matches the common setup where European resources are deployed in West Europe. From an exam perspective, the key is that West Europe is a distinct region from Central US and East US. Because Azure Firewall Policy hierarchy is region-bound, a West Europe child policy requires a West Europe parent policy. Why not No: Marking this as No would imply either the resource type is not a Firewall Policy or the region is not West Europe, which would contradict the scenario’s multi-region policy footprint that drives the design requirement.

Part 4:

USEastfirewall is of type Azure Firewall and located in Central US.

No. The statement says USEastfirewall (an Azure Firewall) is located in Central US. In the resource table for this scenario, USEastfirewall is an Azure Firewall deployed in East US (the name strongly indicates East US, and these questions commonly include a deliberate mismatch to test attention to region). This distinction is important because Azure Firewall (the data plane) is also regional, and it must be associated with a Firewall Policy in the same region. However, for the question about how many additional parent policies are needed, the determining factor is the regions of the existing firewall policies (and their required regional parents), not the firewall names. Why not Yes: Accepting the mismatch would undermine the regional-alignment rule that the exam expects you to apply.

Part 5:

USWestfirewall is of type Azure Firewall and located in East US.

No. The statement says USWestfirewall (an Azure Firewall) is located in East US. In the typical resource table, USWestfirewall would be located in West US (or a western US region), and the statement is intentionally swapped to test whether you validate region rather than rely on the prompt. Architecturally, an Azure Firewall must be deployed in the same region as the virtual network it protects and must use a Firewall Policy in the same region. Incorrect region identification can lead to invalid designs (you can’t associate a firewall in one region to a policy in another). Why not Yes: The firewall’s name and the scenario’s pattern indicate the region is not East US; it is a mismatch.

Part 6:

EUFirewall is of type Azure Firewall and located in West Europe.

Yes. EUFirewall is of type Azure Firewall and located in West Europe. This aligns with the expected resource placement for the EU deployment. This matters because EUFirewall would need to be associated with a West Europe Firewall Policy (such as EU-Firewall-policy). When introducing a mandatory baseline via a parent policy, you would create that parent in West Europe so EU-Firewall-policy can inherit it and EUFirewall can continue using a same-region policy. Why not No: There is no indication of a mismatch here; the EU firewall and EU policy are typically both in West Europe in these exam scenarios to reinforce the regional association requirement.

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

6
Question 6

HOTSPOT - Your company develops a web service that is deployed to an Azure virtual machine named VM1. The web service allows an API to access real-time data from VM1. The current virtual machine deployment is shown in the Deployment exhibit.

diagram

The chief technology officer (CTO) sends you the following email message: "Our developers have deployed the web service to a virtual machine named VM1. Testing has shown that the API is accessible from VM1 and VM2. Our partners must be able to connect to the API over the Internet. Partners will use this data in applications that they develop." You deploy an Azure API Management (APIM) service. The relevant API Management configuration is shown in the API exhibit.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Select the correct answer(s) in the image below.

question-image

Pass is appropriate because the correct choices can be derived directly from the exhibits and standard APIM networking behavior. The API Management configuration explicitly shows Virtual network = External, Location = West Europe, Virtual network = VNet1, Subnet = ProdSubnet. With that, you can determine both inbound internet reachability (External implies a public endpoint) and outbound reachability to VM1 (APIM has a VNet interface in VNet1). The remaining decisions (whether a VPN gateway is required) follow from whether partners need private connectivity versus public internet access. No additional unknowns (like NSG rules) are required to answer the statements at the level expected in AZ-305.

Part 2:

The API is available to partners over the internet.

Yes. APIM in External VNet mode is designed to be accessible from the internet while also being connected to a subnet in your VNet. In this mode, APIM retains a public IP/public gateway endpoint for inbound client traffic (partners over the internet), and it uses the VNet integration for reaching private backends. This matches the CTO requirement that partners must connect over the Internet. If APIM were configured as Internal, it would be reachable only from within the VNet (or via private connectivity like VPN/ExpressRoute), and partners on the public internet would not be able to reach it without additional network design.

Part 3:

The APIM instance can access real-time data from VM1.

Yes. Because the APIM instance is injected into VNet1 (ProdSubnet) in External mode, it has network connectivity into that VNet and can route to other subnets in the same VNet (such as Subnet1 where VM1 resides), assuming default VNet routing and no blocking NSGs/UDRs. The exhibit shows VM1 and VM2 in Subnet1 and APIM in ProdSubnet, both within VNet1, so APIM can call the backend API on VM1 using its private IP/DNS name. This is a primary reason to use VNet integration: keep the backend private while still exposing a managed API surface to external consumers.

Part 4:

A VPN gateway is required for partner access.

No. A VPN gateway is not required for partner access in this design because partners are intended to connect over the public internet, and APIM is configured in External mode, which provides a public endpoint. VPN gateways (site-to-site or point-to-site) are typically required when you want partners to access private endpoints (for example, APIM in Internal mode) or when you want to avoid public internet exposure and use private connectivity. Here, the requirement explicitly states internet access for partners, and the APIM configuration supports that directly without needing VPN/ExpressRoute.

7
Question 7

You are developing a sales application that will contain several Azure cloud services and handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping. You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using XML messages. What should you include in the recommendation?

Azure Notification Hubs is a push notification service for sending messages to mobile devices (iOS/Android) and browsers. It is optimized for fan-out notifications to users, not for reliable service-to-service transactional messaging. It lacks core broker capabilities like queues/topics with dead-lettering, sessions, and enterprise delivery semantics required for order/billing/payment workflow integration.

Azure Data Lake (Azure Data Lake Storage Gen2) is designed for big data analytics storage (Hadoop/Spark), storing large volumes of structured/unstructured data for batch and analytical workloads. While it can store XML files, it is not a messaging system and does not provide asynchronous broker features such as message locks, retries, subscriptions, or guaranteed delivery patterns for microservices.

Azure Service Bus provides durable, asynchronous messaging using queues and topics/subscriptions, making it ideal for coordinating distributed transaction components such as order processing, billing, inventory, and shipping. It supports reliable delivery patterns, dead-lettering, sessions for ordered processing, and duplicate detection. Because it is payload-agnostic, it can carry XML messages as the message body while enabling decoupled, resilient communication between services.

Azure Blob Storage is object storage for unstructured data (files, documents, logs). You could store XML documents and have services poll for changes, but that is inefficient and lacks messaging guarantees, consumer coordination, and built-in retry/dead-letter patterns. Blob Storage is not intended as a message broker for transactional workflows; it’s primarily for storage, not asynchronous integration.

Question Analysis

Core concept: This question tests asynchronous messaging and decoupled communication between distributed application components (microservices) in Azure. The key requirement is that multiple cloud services (orders, billing, payment, inventory, shipping) must exchange transaction information asynchronously using XML messages. Why the answer is correct: Azure Service Bus is Azure’s enterprise message broker designed for reliable, asynchronous communication between services. It supports message-based integration patterns (queues and topics/subscriptions) that decouple producers from consumers, enabling each transaction component to process messages independently and at its own pace. XML payloads are simply message bodies; Service Bus is payload-agnostic, so it can transport XML without issue. This aligns with Azure Well-Architected Framework principles: reliability (durable messaging), operational excellence (standard integration), and performance efficiency (buffering and load leveling). Key features and best practices: - Queues for point-to-point workflows (e.g., “BillingQueue” consumed by billing service). - Topics/subscriptions for publish/subscribe fan-out (e.g., “OrderPlacedTopic” with subscriptions for inventory, shipping, and billing). - At-least-once delivery with Peek-Lock, message settlement, and retry handling. - Dead-letter queues (DLQ) for poison messages and troubleshooting. - Sessions for FIFO and message ordering per transaction/customer. - Duplicate detection to reduce double-processing. - Security via Azure AD/RBAC and Shared Access Signatures (SAS), plus private endpoints for network isolation. Common misconceptions: Some may choose Service Fabric because it’s used for microservices, but it is a hosting/orchestration platform, not a messaging broker. Notification Hubs is for mobile push notifications, not service-to-service transaction processing. Traffic Manager is DNS-based load balancing/failover, not messaging. Exam tips: When you see “asynchronously communicate,” “decouple services,” “reliable messaging,” “queues/topics,” or “enterprise integration,” think Azure Service Bus. If the scenario emphasizes event streaming/telemetry at massive scale, consider Event Hubs; if it emphasizes lightweight event routing, consider Event Grid. Here, transactional components and reliable processing strongly indicate Service Bus.

8
Question 8

You need to design a highly available Azure SQL database that meets the following requirements: ✑ Failover between replicas of the database must occur without any data loss. ✑ The database must remain available in the event of a zone outage. ✑ Costs must be minimized. Which deployment option should you use?

Azure SQL Database Serverless is a compute tier within General Purpose that auto-scales and can auto-pause to reduce cost for intermittent usage. It is not designed to guarantee zero data loss failover between replicas, and it does not inherently provide zone-resilient synchronous replicas. It addresses cost optimization for variable workloads, not strict HA requirements like RPO=0 plus zone-outage survivability.

Azure SQL Managed Instance Business Critical is built on an Always On availability group architecture with multiple replicas and synchronous replication, enabling automatic failover with RPO 0 (no data loss). With zone redundancy (where supported), replicas can span Availability Zones, keeping the database available during a zone outage. It best satisfies both reliability requirements among the given options.

Azure SQL Database Basic is intended for small, less demanding workloads and does not provide the multi-replica synchronous architecture needed for zero data loss failover. It also does not support the advanced HA/zone redundancy capabilities required to remain available through a zone outage. While it minimizes cost, it fails the reliability requirements.

Azure SQL Database Standard is a lower service tier intended for general-purpose workloads and does not provide the same synchronous multi-replica high-availability architecture as Business Critical. Because of that, it is not the right choice when the requirement explicitly calls for failover between replicas with no data loss. It also does not offer the same level of zone-resilient availability expected for mission-critical database workloads. While it is less expensive, it does not satisfy the stated resiliency requirements.

Question Analysis

Core concept: This question tests Azure SQL high availability and resiliency choices, specifically how to achieve zero data loss (synchronous replication) and zone-outage resilience while minimizing cost. In Azure SQL, these requirements map to HA architectures that use multiple replicas and automatic failover. Why the answer is correct: Azure SQL Managed Instance (MI) Business Critical uses an Always On availability group architecture with multiple replicas and synchronous replication within the region. This enables automatic failover with an RPO of 0 (no data loss) because transactions are committed only after being hardened on synchronous replicas. Business Critical also supports zone redundancy (in supported regions) so replicas can be distributed across Availability Zones, keeping the database available during a zone outage. Among the listed options, it is the only one that clearly aligns with “failover between replicas without data loss” and “available during a zone outage.” Key features / configurations: - Synchronous replication across multiple replicas (RPO 0) with automatic failover. - Built-in HA; no need to manage clustering/AGs yourself. - Zone redundancy option (where available) to survive a full zone failure. - Aligns with Azure Well-Architected Framework reliability pillar: redundancy, fault isolation (zones), and automated failover. Common misconceptions: - “Standard” or “Premium” Azure SQL Database tiers can provide HA, but the question explicitly emphasizes failover between replicas with no data loss and zone outage resilience. Those requirements strongly imply synchronous multi-replica architecture with zone distribution; not all tiers/offerings guarantee this in the way Business Critical does. - “Serverless” focuses on cost optimization via auto-pause/auto-scale for intermittent workloads, not strict HA/zone-outage requirements. Exam tips: - RPO 0 typically implies synchronous replication. - Zone outage resilience requires zone-redundant deployment (replicas across AZs), not just local redundancy within a single datacenter. - When options include “Business Critical,” associate it with multiple replicas, synchronous commit, and the strongest in-region HA characteristics. If the question also required cross-region DR, you’d look for active geo-replication or auto-failover groups (often asynchronous, RPO > 0).

9
Question 9

You have a .NET web service named Service1 that performs the following tasks: ✑ Reads and writes temporary files to the local file system. ✑ Writes to the Application event log. You need to recommend a solution to host Service1 in Azure. The solution must meet the following requirements: ✑ Minimize maintenance overhead. ✑ Minimize costs. What should you include in the recommendation?

An Azure App Service web app is a managed PaaS offering that minimizes maintenance, but it does not provide customer applications with direct access to the Windows Application event log. App Service does offer temporary local storage, but that only satisfies one of the two technical requirements. Replacing event log writes with App Service diagnostics or Application Insights would require changing the application's behavior, which the question does not permit. Therefore, App Service cannot fully meet the stated requirements even if it is cheaper and easier to manage.

An Azure virtual machine scale set provides full control of the guest Windows operating system, which means the application can write temporary files to the local file system and write entries directly to the Windows Application event log. This is the only option in the list that natively supports both stated behaviors without changing the application design. While VM scale sets require more management than PaaS services, they still offer automated scaling and centralized instance management, making them the best fit among the available choices. Because the requirement must be met as written, VM scale set is the correct recommendation despite the higher maintenance relative to App Service.

An App Service Environment is still based on Azure App Service, so it has the same platform limitation regarding direct access to the Windows Application event log. It provides isolation, networking control, and dedicated capacity, but not guest OS-level control for writing to Event Viewer. In addition, ASE is significantly more expensive than the other options and directly conflicts with the requirement to minimize costs. Since it fails both the technical fit and cost goals, it is not the right recommendation.

An Azure Functions app is a serverless compute option intended for event-driven execution patterns, not for hosting a traditional web service that depends on Windows OS features. Functions can use temporary local storage in limited ways, but they do not expose the Windows Application event log for application writes. As with App Service, using Application Insights instead would be a redesign rather than satisfying the explicit requirement. Therefore, Functions do not meet the requirement even though they can reduce operational overhead and cost for suitable workloads.

Question Analysis

Core concept: This question is about choosing the Azure hosting model that supports specific OS-level behaviors while balancing maintenance and cost. The key technical requirement is that the .NET web service writes temporary files to the local file system and writes to the Windows Application event log. Those are host-level capabilities that require access to the underlying Windows operating system. Why correct: An Azure virtual machine scale set is the only option listed that provides full Windows OS access, including the local file system and the Windows Application event log. Although VM scale sets are IaaS and require more maintenance than PaaS offerings, they are still the least costly and least operationally heavy option among the choices that can actually meet the stated technical requirements. App Service and Functions do not expose the Windows Application event log to customer applications, and ASE is still App Service-based, so it has the same limitation while costing much more. Key features: - VM scale sets provide full administrative control over Windows Server instances. - Applications can write to local disks and to the Windows Application event log just as they would on-premises. - Scale sets add elasticity and centralized management compared to individual VMs, which helps reduce operational burden somewhat within an IaaS model. Common misconceptions: - App Service supports temporary local storage, but that does not mean it supports writing to the Windows Application event log. - ASE is not a different compute model for OS access; it is an isolated App Service deployment and still does not provide direct event log access. - Azure Functions is serverless and inexpensive for some workloads, but it does not satisfy a requirement for writing to the Windows Application event log. Exam tips: When a requirement explicitly mentions Windows OS features such as Event Viewer, Windows services, registry access, or direct machine-level logging, prefer IaaS unless the question explicitly allows redesigning the application. In architecture exams, do not assume you can replace a stated requirement with a cloud-native alternative unless the wording says you may modify the application behavior.

10
Question 10

You plan to deploy an Azure SQL database that will store Personally Identifiable Information (PII). You need to ensure that only privileged users can view the PII. What should you include in the solution?

Dynamic Data Masking (DDM) obscures sensitive column values in query results for users who do not have permission to see the original data. Privileged users, such as those granted the UNMASK permission or equivalent administrative access, can view the real values. This directly satisfies the requirement that only privileged users can view PII. DDM works at query time and does not modify the underlying stored data, making it well suited for protecting sensitive fields without changing application logic.

Role-based access control (RBAC) in Azure controls access to Azure resources (subscriptions, resource groups, SQL server resource) via the management plane. It does not, by itself, restrict what data a user can query inside Azure SQL Database. Data-plane permissions are handled by SQL logins/users, database roles, and features like DDM/RLS, not Azure RBAC alone.

Data Discovery & Classification helps you discover, label, and track sensitive data columns (like PII) in Azure SQL. It supports governance, reporting, and can drive recommendations (e.g., in Microsoft Defender for SQL), but it does not enforce that only privileged users can view the data. It’s a complementary governance feature, not an access control mechanism.

Transparent Data Encryption (TDE) encrypts the database, logs, and backups at rest to protect against offline access to files or backups. However, it does not prevent users who can query the database from seeing PII in results. TDE addresses at-rest encryption requirements, not selective visibility for privileged vs. non-privileged users.

Question Analysis

Core Concept: This question tests which Azure SQL Database feature limits exposure of sensitive data such as PII in query results so that only users with explicit privilege can see the real values. Why the Answer is Correct: Dynamic Data Masking (DDM) masks sensitive column values for users who query the data but do not have permission to view unmasked values. Users granted the appropriate privilege, such as the UNMASK permission or equivalent high-level administrative access, can see the actual data. This directly matches the requirement that only privileged users can view the PII while others receive masked results. Key Features / How it’s used: DDM is configured at the column level and supports masking functions such as default, email, random, and partial string masking. It is applied at query time, so the stored data is not altered. This makes it useful for reducing accidental exposure of PII in applications, reports, and support scenarios without requiring application changes. Common Misconceptions: Azure RBAC is often confused with database-level data protection, but it primarily controls management-plane access to Azure resources rather than selective visibility of data inside SQL query results. TDE is another common distractor because it protects data at rest, not what users see after they successfully query the database. Data Discovery & Classification identifies and labels sensitive data, but it does not enforce masking or access restrictions. Exam Tips: If the requirement is to hide sensitive values from some users while still allowing them to query the table, think Dynamic Data Masking. If the requirement is encryption of database files and backups, think TDE. If the requirement is identifying sensitive columns for governance, think Data Discovery & Classification. If the requirement is Azure resource administration, think RBAC.

Other Practice Tests

Practice Test #2

50 Questions·100 min·Pass 700/1000

Practice Test #3

50 Questions·100 min·Pass 700/1000

Practice Test #4

50 Questions·100 min·Pass 700/1000
← View All Microsoft AZ-305 Questions

Start Practicing Now

Download Cloud Pass and start practicing all Microsoft AZ-305 exam questions.

Get it on Google PlayDownload on the App Store
Cloud PassCloud Pass

IT Certification Practice App

Get it on Google PlayDownload on the App Store

Certifications

AWSGCPMicrosoftCiscoCompTIADatabricks

Legal

FAQPrivacy PolicyTerms of Service

Company

ContactDelete Account

© Copyright 2026 Cloud Pass, All rights reserved.

Want to practice all questions on the go?

Get the app

Download Cloud Pass — includes practice tests, progress tracking & more.