Microsoft AZ-305

Practice Test #4

Simulate the real exam experience with 50 questions and a 100-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 100 Minutes · 700/1000 Passing Score

Practice Questions

Question 1

Your company has the infrastructure shown in the following table.

The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD). Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain. You plan to migrate Server1 to a virtual machine in Subscription1. A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network. You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy. What should you include in the recommendation?

Part 1:

Location: Azure - Resource: Azure subscription named Subscription1

No. An Azure subscription is only a billing and management boundary, not the service that provides LDAP for App1. To keep the application working, you must deploy Azure AD Domain Services in Azure so the migrated VM can query LDAP without any connectivity to the on-premises network. Answering Yes confuses the deployment location with the actual resource required by the solution. The subscription may host the solution, but it is not itself something to include as a recommended component.

Part 2:

Location: Azure - Resource: 20 Azure web apps

No. Azure web apps do not provide directory services or LDAP endpoints, so they do not help App1 continue performing LDAP queries after migration. The requirement is to preserve LDAP-based authentication while preventing access from Azure to the on-premises network, which is addressed by Azure AD Domain Services, not App Service. The mention of 20 Azure web apps is unrelated to the migrated server workload. Therefore, this resource should not be included in the recommendation.

Part 3:

Location: On-premises datacenter - Resource: Active Directory domain

No. The migrated application must not rely on the on-premises Active Directory domain because Subscription1 resources are not allowed to access the on-premises network. Although the on-premises AD DS can remain the original identity source and continue syncing to Azure AD, it is not part of the runtime solution for App1 after migration. Azure AD Domain Services in Azure provides the needed LDAP capability locally in Azure. Therefore, the on-premises Active Directory domain should not be included in the recommendation.

Part 4:

Location: On-premises datacenter - Resource: Server running Azure AD Connect

Yes. Azure AD Connect should be included because it synchronizes identities from the on-premises Active Directory domain to Azure AD, which is then used to populate Azure AD Domain Services. App1 will authenticate against Azure AD Domain Services in Azure, but that managed domain depends on synchronized identities being present in Azure AD. Azure AD Connect does not provide LDAP itself, but it is still a necessary component in the end-to-end solution. Answering No overlooks the identity synchronization path that enables Azure AD Domain Services to contain the required users and groups.

Part 5:

Location: On-premises datacenter - Resource: Linux computer named Server1

No. The on-premises Linux computer named Server1 is the source server being migrated, not a resource that remains part of the final recommended solution. The requirement is to ensure App1 continues to function after it is moved to an Azure virtual machine while preventing access to the on-premises network. The needed components are Azure AD Domain Services in Azure and the existing identity synchronization path, not the original on-premises server. Answering Yes incorrectly treats the source machine as part of the post-migration architecture.
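For illustration, the LDAP query path itself does not change after migration, only its target does: App1 points at the managed domain's secure LDAP endpoint instead of an on-premises domain controller. A minimal sketch (the domain names are hypothetical, and the filter-escaping helper is illustrative, not part of any Azure SDK):

```python
# Sketch of App1's user lookup after migration (assumptions: the managed
# domain "aadds.contoso.com" and its DN shape are hypothetical examples).
# Only the connection target changes; the query is the same LDAP filter
# App1 already uses against on-premises AD.

def build_user_filter(sam_account_name: str) -> str:
    """LDAP filter matching one user account, with RFC 4515 escaping."""
    escaped = (sam_account_name.replace("\\", r"\5c")
                               .replace("*", r"\2a")
                               .replace("(", r"\28")
                               .replace(")", r"\29"))
    return f"(&(objectClass=user)(sAMAccountName={escaped}))"

# Before migration:  ldap://dc1.corp.contoso.com   (on-premises DC)
# After migration:   ldaps://aadds.contoso.com     (Azure AD DS secure LDAP)
# ...with the same base DN shape (dc=aadds,dc=contoso,dc=com) and filter:
print(build_user_filter("jsmith"))
```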

Question 2

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices. A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices. You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible. What should you include in the recommendation?

A Recovery Services vault with Windows Server Backup is primarily a backup/restore approach. It can protect VM1’s data, but during a Toronto outage users won’t have immediate access; you must restore to alternate infrastructure first. This increases RTO and does not inherently improve user access speed across branch offices. It addresses data protection, not fast, continuous file access.

Azure blob containers with Azure File Sync is not a valid pairing for this use case. Azure File Sync requires an Azure file share (Azure Files) as the cloud endpoint, not Blob storage. While Blob is excellent for object storage and archival, it doesn’t provide SMB share semantics needed for typical Windows file server shared folders without additional services/gateways.

A Recovery Services vault with Azure Backup protects VM1 and can restore data to Azure or another server, but it is still a restore-based DR method. Users cannot access the shared files quickly during an outage unless you have already provisioned and synchronized an alternate file server. This option improves durability but typically results in higher RTO compared to a synchronized, cloud-backed file share solution.

An Azure file share with Azure File Sync enables a cloud-backed file system with multi-site caching. VM1 can sync to Azure Files, and other offices can host File Sync server endpoints to keep local cached copies for low-latency access. If Toronto is down, users can be redirected to another endpoint or directly to Azure Files, providing much faster access than backup/restore and meeting the resiliency goal.

Question Analysis

Core concept: This question tests designing a high-availability/DR approach for shared file access when a branch office (Toronto) becomes unavailable. The key is not just restoring data, but maintaining fast, continuous access for users across multiple offices. This aligns with Azure Well-Architected Framework reliability (resiliency, failover) and performance efficiency (low-latency access).

Why the answer is correct: Using an Azure file share (Azure Files) with Azure File Sync provides a cloud-backed, multi-site-capable file solution. Azure File Sync can sync the Toronto file server (VM1) to an Azure file share, and you can deploy additional File Sync server endpoints (cache servers) in other offices. If Toronto is inaccessible, users can be redirected to another local server endpoint (or directly to Azure Files) that already has the namespace and cached data, enabling much faster access than a restore-based approach. The Azure file share becomes the central, highly available copy of the data in Azure.

Key features and best practices:
- Azure Files provides SMB-based shares accessible from multiple locations; it supports redundancy options (e.g., ZRS/GRS depending on region/account type) to improve durability.
- Azure File Sync provides:
  - A cloud endpoint (Azure file share) plus server endpoints (Windows Servers) for multi-site caching.
  - Cloud tiering to keep hot data local while offloading cold data to Azure.
  - Rapid recovery of access: other offices can continue serving files from their local cache even if the primary site is down.
- For best performance, place the storage account in a region that minimizes latency for the majority of users, and use ExpressRoute/VPN for predictable connectivity.

Common misconceptions: Backup solutions (Azure Backup/Windows Server Backup) protect data but do not provide immediate, low-latency access during an outage; they require restore operations and time to rehydrate data. Blob containers are object storage and don't natively provide SMB file share semantics for typical Windows file server workloads.

Exam tips:
- If the requirement is "quickly access files during a site outage," prefer active/active or cloud-backed file services (Azure Files + Azure File Sync) over backup/restore.
- Azure File Sync is the go-to when you want to keep Windows file server compatibility locally while centralizing data in Azure and enabling multi-branch caching.
- Map requirements: "inaccessible site" + "fast access" => resiliency + caching, not just data protection.

Question 3

HOTSPOT - You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers. You need to recommend a design for the planned Databricks deployment. The solution must meet the following requirements: ✑ Ensure that the data engineers can only access folders to which they have permissions. ✑ Minimize development effort. ✑ Minimize costs. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Databricks SKU: ______

Choose Premium because Databricks credential passthrough is a Premium feature. The requirement states that permissions are granted directly to the data engineers (per-user) and they must only access folders they have permissions to. Enforcing this with ADLS Gen2 ACLs requires Databricks to access ADLS using the individual user’s Azure AD identity, which is enabled through credential passthrough. Standard SKU commonly relies on shared credentials (storage account key/service principal via secrets) for mounts, which means all users of the cluster can access whatever the shared identity can access—violating the per-user folder restriction unless you implement complex workarounds (multiple mounts, separate containers, custom authorization logic). Premium increases cost compared to Standard, but it is the necessary SKU to meet the security requirement with minimal development effort and aligns with least-privilege access control.

Part 2:

Cluster configuration: ______

Select Credential passthrough. This configuration allows Databricks to pass the signed-in user’s Azure AD identity to ADLS Gen2 so that ADLS evaluates folder/file ACLs per user. That directly satisfies: “Ensure that the data engineers can only access folders to which they have permissions.” It also minimizes development effort because you rely on native Azure AD + ADLS ACL enforcement rather than building custom access controls. Why others are wrong: Managed identities are typically used for a single workload identity (cluster/job) and do not inherently enforce per-user permissions when multiple engineers share a cluster. Secret scopes store shared secrets (like service principal credentials) and again lead to shared access. MLflow and Photon runtime are performance/ML features, not access control. Therefore, credential passthrough is the correct cluster configuration for per-user authorization.
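As a rough sketch of what this configuration looks like in a notebook, here is the passthrough mount pattern for ADLS Gen2. `spark` and `dbutils` exist only inside a Databricks runtime, so they appear in comments; the storage account, container, and mount point names are hypothetical:

```python
# Sketch only: the credential-passthrough mount pattern for ADLS Gen2.
# The token provider class is read from Spark conf inside Databricks; here
# it is a parameter so the shape of the config can be shown locally.

def passthrough_mount_configs(token_provider_class: str) -> dict:
    """Extra configs that make a mount use the calling user's Azure AD token."""
    return {
        "fs.azure.account.auth.type": "CustomAccessToken",
        "fs.azure.account.custom.token.provider.class": token_provider_class,
    }

# Inside a notebook on a passthrough-enabled (Premium) cluster you would run:
#   configs = passthrough_mount_configs(
#       spark.conf.get("spark.databricks.passthrough.adls.gen2.tokenProviderClassName"))
#   dbutils.fs.mount(
#       source="abfss://data@contosolake.dfs.core.windows.net/",  # hypothetical
#       mount_point="/mnt/lake",
#       extra_configs=configs)
# ADLS Gen2 then evaluates folder ACLs against each engineer's own identity.

print(passthrough_mount_configs("ExampleTokenProvider")["fs.azure.account.auth.type"])
```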

Question 4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines. Solution: Use Azure Advisor to analyze the network traffic. Does this meet the goal?

Yes is incorrect because Azure Advisor does not capture or analyze network packet flows or provide allow/deny decisions for VM traffic. Advisor focuses on best-practice recommendations (cost optimization, security posture, reliability, performance), not operational traffic diagnostics. It may suggest improvements, but it cannot tell you whether specific packets were allowed or denied.

No is correct because Azure Advisor does not analyze packet-level network traffic or determine whether specific traffic to a virtual machine was allowed or denied. Advisor provides high-level recommendations about resource optimization and best practices, but it is not intended for troubleshooting connectivity at the flow level. To identify whether packets are being permitted or blocked, Azure Network Watcher tools such as IP flow verify, NSG flow logs, and packet capture are the appropriate services. These tools can help diagnose connectivity issues for Azure virtual machines, including scenarios involving ExpressRoute-connected environments.

Question Analysis

Core concept: This question tests knowledge of Azure networking monitoring and diagnostics tools. The requirement is to analyze network traffic and determine whether packets to virtual machines are being allowed or denied, including in a hybrid environment connected by ExpressRoute.

Why correct: The solution does not meet the goal because Azure Advisor is not a packet-level or flow-level traffic analysis tool. To identify whether traffic is allowed or denied, you would use tools such as Azure Network Watcher, including IP flow verify, NSG flow logs, connection troubleshoot, and packet capture. These tools are designed to diagnose connectivity issues and evaluate whether security rules are permitting or blocking traffic.

Key features: Azure Advisor provides best-practice recommendations for cost, security, reliability, operational excellence, and performance. It does not inspect live packet flows, evaluate NSG decisions for specific traffic, or capture packets for troubleshooting. Azure Network Watcher is the appropriate service for analyzing VM network traffic behavior in Azure.

Common misconceptions: A common mistake is assuming Azure Advisor can perform operational diagnostics because it surfaces recommendations about Azure resources. In reality, Advisor is a recommendation engine, not a traffic inspection or packet analysis service. Another misconception is that ExpressRoute connectivity issues are diagnosed through governance or advisory tools rather than network diagnostic tools.

Exam tips: For AZ-305, when a question asks whether traffic is allowed or denied, think of Azure Network Watcher features such as IP flow verify and NSG flow logs. If the question asks for recommendations or optimization guidance, think of Azure Advisor. Distinguish clearly between advisory services and diagnostic services.
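As a mental model of the allow/deny decision that IP flow verify reports, consider this toy evaluation of NSG-style rules (lowest priority number wins, with an implicit default deny for inbound traffic); the rule set is illustrative, not a real NSG export:

```python
# Toy model of an NSG decision (what IP flow verify surfaces): rules are
# evaluated in ascending priority order and the first match wins; if nothing
# matches, the default inbound rule denies.

def evaluate(rules, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if port in rule["ports"]:
            return rule["access"], rule["name"]
    return "Deny", "DenyAllInbound"  # NSG default rule for unmatched inbound

rules = [
    {"name": "AllowHTTPS", "priority": 100, "ports": {443},  "access": "Allow"},
    {"name": "DenyRDP",    "priority": 200, "ports": {3389}, "access": "Deny"},
]

print(evaluate(rules, 443))   # ('Allow', 'AllowHTTPS')
print(evaluate(rules, 3389))  # ('Deny', 'DenyRDP')
print(evaluate(rules, 8080))  # ('Deny', 'DenyAllInbound')
```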

Question 5

You plan to move a web app named App1 from an on-premises datacenter to Azure. App1 depends on a custom COM component that is installed on the host server. You need to recommend a solution to host App1 in Azure. The solution must meet the following requirements: ✑ App1 must be available to users if an Azure datacenter becomes unavailable. ✑ Costs must be minimized. What should you include in the recommendation?

Azure Web Apps run on App Service, which does not provide the host-level access needed to install and register a custom COM component on the server. Even though deploying in two regions improves resiliency, the platform itself is incompatible with the application's dependency. The load balancer does not change that limitation. Therefore, this option fails the core technical requirement.

A virtual machine scale set does support the custom COM component because it provides OS-level control, but deploying in two regions is more expensive than necessary for the stated requirement. The question asks for availability if an Azure datacenter becomes unavailable, which is best addressed by Availability Zones within a single region. A two-region deployment adds duplicated infrastructure, networking, and operational complexity. Because cost must be minimized, this is not the best recommendation.

A virtual machine scale set is the correct hosting choice because it provides full control of the guest operating system, allowing the custom COM component to be installed and maintained. Deploying the scale set across two availability zones ensures the application remains available if one Azure datacenter in the region fails, since zones are physically separate datacenters. Azure Load Balancer can distribute traffic across healthy VM instances in both zones. This design also minimizes cost compared to duplicating the application in two separate regions.

Traffic Manager with web apps in two regions is a valid pattern for multi-region web applications, but it still relies on App Service for hosting. App Service cannot host applications that require arbitrary COM components installed on the underlying server. Traffic Manager only handles DNS-based routing and failover; it does not solve the hosting compatibility issue. Therefore, this option does not meet the application's dependency requirement.

Question Analysis

Core concept: This question tests selecting the right Azure hosting model for an application with host-level dependencies and the appropriate resiliency scope for a datacenter failure. Because App1 depends on a custom COM component installed on the host server, the solution must use infrastructure you control, such as virtual machines or a virtual machine scale set, rather than Azure App Service. To remain available if an Azure datacenter becomes unavailable, the design should span Availability Zones, which place instances in separate physical datacenters within the same region.

Why correct: Option C is the best fit because a virtual machine scale set supports installing and maintaining the required COM component on Windows VMs, and deploying across two availability zones protects against the loss of a single datacenter. This also minimizes cost compared to duplicating the full application stack in two separate Azure regions. A load balancer can distribute traffic across healthy VM instances in the zonal deployment.

Key features:
- VM Scale Sets provide OS-level control for installing COM components and support autoscaling.
- Availability Zones provide datacenter-level fault isolation within a region.
- Azure Load Balancer distributes traffic across instances in different zones.
- This design is generally less expensive than active deployments in two regions.

Common misconceptions:
- Azure App Service is often cheaper and easier to manage, but it does not allow installation of arbitrary COM components on the host.
- Multi-region deployment is for regional disaster recovery, not specifically for a single datacenter failure.
- Availability Sets and Zones are not interchangeable; zones provide physical datacenter separation.

Exam tips: When a question mentions custom COM components, drivers, or host-installed software, prefer IaaS over PaaS. When the requirement says a datacenter becomes unavailable, think Availability Zones unless the question explicitly says a region becomes unavailable. Also compare resiliency scope against cost, because multi-region is usually more expensive than zonal redundancy.


Question 6

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You need to deploy resources to host a stateless web app in an Azure subscription. The solution must meet the following requirements: ✑ Provide access to the full .NET framework. ✑ Provide redundancy if an Azure region fails. ✑ Grant administrators access to the operating system to install custom application dependencies. Solution: You deploy two Azure virtual machines to two Azure regions, and you deploy an Azure Application Gateway. Does this meet the goal?

Yes is incorrect because the presence of VMs in two regions is not enough by itself to guarantee regional failover. Azure Application Gateway is a regional service, so a single instance does not continue serving traffic if its hosting region becomes unavailable. Although the VM choice satisfies the full .NET Framework and OS administration requirements, the architecture still has a single regional dependency at the ingress layer. A correct multi-region design would require a global load-balancing service such as Azure Front Door or Traffic Manager, typically combined with regional back-end components.

No is correct because the solution only partially satisfies the requirements. Two Azure virtual machines in two regions do provide access to the full .NET Framework and allow administrators to access the operating system to install custom dependencies. However, a single Azure Application Gateway is deployed in only one region, so if that region fails, the application's entry point fails as well. Because the traffic-routing layer is not region-redundant, the overall solution does not meet the requirement to provide redundancy if an Azure region fails.

Question Analysis

Core concept: This question tests whether an IaaS-based web application architecture satisfies specific hosting requirements: full .NET Framework support, regional redundancy, and administrator access to the operating system for installing custom dependencies. Azure virtual machines provide full control of the OS and support the full .NET Framework, unlike some PaaS options that abstract the OS. However, regional redundancy also requires the application entry point and traffic distribution layer to survive a regional outage.

Why correct: The proposed solution does not fully meet the goal because Azure Application Gateway is a regional service. Even if two virtual machines are deployed in two different Azure regions, a single Application Gateway in one region becomes a single point of failure. If that region fails, users cannot reach the application, so the design does not provide end-to-end regional failover.

Key features: Azure VMs support full .NET Framework workloads and allow administrators to log on to the operating system to install custom application dependencies. Deploying VMs in two regions can support workload redundancy, but only if traffic routing is also made region-resilient. To achieve that, you would typically use a global load-balancing service such as Azure Front Door or Azure Traffic Manager, often with regional Application Gateways or load balancers behind it.

Common misconceptions: A common mistake is assuming that placing compute resources in two regions automatically provides regional high availability. In reality, every critical tier, including the ingress layer, must also be redundant across regions. Another misconception is treating Application Gateway as a global service; it is regional and cannot by itself provide cross-region failover.

Exam tips: For AZ-305, check every requirement independently: platform/runtime support, administrative control, and disaster recovery architecture. When a question requires surviving a regional outage, look for a global traffic distribution component rather than a single regional load balancer. If OS-level access is required, favor virtual machines over App Service or other managed PaaS offerings.
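The global-ingress idea can be sketched as priority-based endpoint selection, which is roughly what Traffic Manager's priority routing (or Front Door's failover) adds on top of the two regional deployments; the endpoint names here are illustrative:

```python
# Toy model of priority routing: a global entry point (Traffic Manager /
# Front Door) answers with the highest-priority *healthy* endpoint, so a
# regional outage changes the answer instead of taking the app down.

def route(endpoints):
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        return None
    return min(healthy, key=lambda e: e["priority"])["name"]

endpoints = [
    {"name": "eastus-appgw", "priority": 1, "healthy": False},  # region down
    {"name": "westeu-appgw", "priority": 2, "healthy": True},
]

print(route(endpoints))  # westeu-appgw
```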

Question 7

You are designing a microservices architecture that will be hosted in an Azure Kubernetes Service (AKS) cluster. Apps that will consume the microservices will be hosted on Azure virtual machines. The virtual machines and the AKS cluster will reside on the same virtual network. You need to design a solution to expose the microservices to the consumer apps. The solution must meet the following requirements: ✑ Ingress access to the microservices must be restricted to a single private IP address and protected by using mutual TLS authentication. ✑ The number of incoming microservice calls must be rate-limited. ✑ Costs must be minimized. What should you include in the solution?

Azure Application Gateway with WAF can be deployed with a private frontend IP and can provide L7 routing and WAF protections. However, it is not an API management gateway and does not natively provide APIM-style rate limiting/quota policies. mTLS/client certificate enforcement is also not the typical strength for App Gateway in this scenario. It may partially meet private ingress but fails the full set of requirements.

APIM Standard tier does not support VNet injection (internal/private gateway). A “service endpoint” is not a mechanism to give APIM a private IP address in your VNet; it’s used to secure access to certain Azure PaaS services from a subnet. Therefore, Standard cannot satisfy the requirement to restrict ingress to a single private IP while also providing mTLS and rate limiting.

Azure Front Door is a global, internet-facing entry point with anycast public endpoints. Even with WAF, it cannot provide ingress restricted to a single private IP inside a VNet for VM-to-AKS private communication. It’s designed for public web acceleration and global load balancing, not private VNet-only API exposure with mTLS to internal consumers.

APIM Premium tier supports virtual network connectivity (VNet injection) and can run in internal mode with a private IP, meeting the “single private IP” ingress requirement for VM consumers in the same VNet. APIM policies support client certificate authentication (mTLS) and rate limiting/quota enforcement. Although Premium is costly, it is the only option listed that satisfies all stated requirements.

Question Analysis

Core concept: This question tests how to securely expose AKS-hosted microservices privately to VM-based consumers on the same VNet, while enforcing mutual TLS (mTLS) and rate limiting at the ingress layer, with cost awareness. The key is choosing an ingress/gateway that can be privately reachable (single private IP) and can apply API-level policies such as rate limiting and client-certificate authentication.

Why the answer is correct: Azure API Management (APIM) Premium tier with a virtual network connection is the most appropriate because it can be deployed in (or connected to) a VNet to provide a private endpoint (single private IP via internal VNet mode) and supports policies for both client certificate authentication (mTLS) and rate limiting/throttling. With APIM in internal VNet mode, consumer VMs in the same VNet can call APIM over a private IP only, satisfying the "single private IP" ingress restriction. APIM can then route to AKS services (typically via an internal load balancer service, private ingress controller, or private DNS) without exposing public endpoints.

Key features/configuration points:
- Private ingress: APIM Premium supports VNet injection (internal mode) so the gateway is reachable only via a private IP in the VNet.
- mTLS: Configure APIM to require client certificates and validate them (certificate authentication), optionally against uploaded CA certificates.
- Rate limiting: Use APIM policies such as rate-limit and quota to throttle calls per subscription/key, per IP, or per API.
- Cost minimization nuance: Premium is expensive, but among the options it is the only one that meets all requirements simultaneously (private IP + mTLS + rate limiting). In exam terms, "must meet requirements" overrides "minimize cost."

Common misconceptions:
- Application Gateway/WAF is often chosen for private ingress, but it does not provide APIM-style rate limiting policies, and mTLS enforcement is not its primary API governance feature.
- Front Door is a global public entry service and cannot provide a single private IP for VNet-only access.
- APIM Standard does not support VNet injection; service endpoints don't make APIM privately reachable in the required way.

Exam tips: When you see "rate limit" and "mTLS/client cert auth" together, think APIM policies. When you also see "single private IP" and "same VNet," you typically need APIM Premium (VNet injection/internal mode) or another private API gateway pattern. Always validate which tiers support VNet connectivity and which services are inherently public (e.g., Front Door).
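The fixed-window counting behind a rate-limit policy (N calls per renewal period) can be sketched as follows; this is a toy illustration of the semantics, not APIM's implementation:

```python
# Toy fixed-window rate limiter: allow at most `calls` requests per caller
# per `renewal_period` seconds, the shape of limit an API gateway enforces
# (APIM returns HTTP 429 when the limit is exceeded).
import time

class FixedWindowLimiter:
    def __init__(self, calls, renewal_period):
        self.calls = calls
        self.period = renewal_period
        self.windows = {}  # caller -> (window_start, count)

    def allow(self, caller, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.windows.get(caller, (now, 0))
        if now - start >= self.period:   # window expired: start a new one
            start, count = now, 0
        if count >= self.calls:          # over the limit: reject this call
            return False
        self.windows[caller] = (start, count + 1)
        return True

limiter = FixedWindowLimiter(calls=3, renewal_period=60)
results = [limiter.allow("vm-consumer", now=t) for t in (0, 1, 2, 3, 61)]
print(results)  # [True, True, True, False, True]
```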

Question 8
(Select 2)

You are planning an Azure IoT Hub solution that will include 50,000 IoT devices. Each device will stream data, including temperature, device ID, and time data. Approximately 50,000 records will be written every second. The data will be visualized in near real time. You need to recommend a service to store and query the data. Which two services can you recommend? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Azure Table Storage is a low-cost key/attribute store suitable for large volumes, but it has limited query capabilities and is not purpose-built for time-series analytics or near-real-time visualization at 50,000 writes/sec. It can work for simple lookups with careful partition/row key design, but dashboards and aggregations over time windows are harder and typically require additional analytics services.

Azure Event Grid is an event routing and notification service (pub/sub) used to deliver discrete events to handlers (Functions, Logic Apps, webhooks). It does not store data for querying and is not a time-series database. In an IoT architecture, Event Grid may help react to events, but it does not satisfy the requirement to store and query telemetry for visualization.

Azure Cosmos DB for NoSQL is a fully managed, horizontally scalable database with low-latency reads/writes and elastic throughput (RU/s or autoscale). It can ingest very high event volumes when properly partitioned (often by deviceId) and supports querying recent telemetry for dashboards. Features like TTL, global distribution, and multi-region replication make it a strong fit for high-ingest IoT telemetry storage and querying.

Azure Time Series Insights is designed specifically for IoT time-series data: fast exploration, time-window queries, and near-real-time visualization. It integrates with IoT Hub/Event Hubs for ingestion and provides a model for devices, hierarchies, and measures (like temperature). It’s commonly used to visualize telemetry quickly without building a custom analytics UI, making it a complete solution for querying and visualizing streaming IoT data.

Question Analysis

Core concept: This question tests selecting a data store and query/analytics service for high-ingest IoT telemetry (50,000 events/sec) with near-real-time visualization. The key is choosing services that can handle very high write throughput, low-latency queries, and time-series patterns.

Why the answers are correct:
- Azure Cosmos DB for NoSQL (C) is a globally distributed, horizontally scalable database designed for massive ingestion rates and low-latency reads/writes. With proper partitioning (for example, a partition key based on deviceId, possibly combined with a synthetic key to avoid hot partitions), Cosmos DB can sustain very high RU/s and support real-time dashboards querying recent data. It also supports TTL for automatic data aging, which is common for telemetry.
- Azure Time Series Insights (D) is purpose-built for IoT time-series exploration and near-real-time visualization. It provides time-series modeling, fast ad-hoc queries over time windows, and built-in visualization experiences. It integrates with IoT Hub/Event Hubs for ingestion and is designed for scenarios exactly like temperature/device/time telemetry.

Key features and best practices:
- Cosmos DB: choose an appropriate partition key to distribute writes evenly; provision RU/s (or autoscale) based on 50,000 writes/sec; use TTL for retention; consider multi-region writes if required for resiliency and latency. This aligns with the performance efficiency and reliability pillars of the Azure Well-Architected Framework.
- Time Series Insights: use it for interactive analytics and visualization; model hierarchies (device/site) and measures (temperature); configure warm/cold retention depending on tier and retention needs.

Common misconceptions:
- Azure Table Storage (A) is inexpensive and scalable, but it is not optimized for complex, near-real-time analytical queries at this scale, and its query patterns are limited (primarily key-based). It can store telemetry, but it is not a complete "store and query for near-real-time visualization" solution compared to Cosmos DB or TSI.
- Event Grid (B) is an event routing service, not a database or query engine. It helps react to events but does not store or query telemetry.

Exam tips: For IoT telemetry at high velocity with near-real-time dashboards, look for "time-series" and "interactive exploration" cues (Time Series Insights) and for "massive ingestion + low-latency queries + global scale" cues (Cosmos DB). Also remember: routing services (Event Grid) are not storage, and simple key-value stores (Table Storage) often fall short of real-time analytics requirements at scale.
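To make the hot-partition point concrete, here is a minimal sketch of a synthetic partition key. The helper names, the bucket count, and the key format are illustrative assumptions, not part of the question or of any Azure SDK; the idea is simply to spread one high-volume device's writes across several logical partitions.

```python
# Sketch: build a synthetic partition key so a single chatty device
# does not concentrate all writes on one Cosmos DB logical partition.
# BUCKETS and the "<deviceId>-<bucket>" format are tuning assumptions.
import hashlib

BUCKETS = 16  # number of sub-partitions per device (assumed tuning value)

def synthetic_partition_key(device_id: str, event_id: str) -> str:
    """Derive a stable bucket from the event id and append it to the device id."""
    digest = hashlib.sha256(event_id.encode("utf-8")).digest()
    bucket = digest[0] % BUCKETS
    return f"{device_id}-{bucket}"

def partition_keys_for_device(device_id: str) -> list[str]:
    """A reader querying one device fans out over its known buckets."""
    return [f"{device_id}-{b}" for b in range(BUCKETS)]
```

Each telemetry item would be written with this value in its partition key path; a dashboard reading one device's recent data either queries all 16 keys or issues a cross-partition query.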

9
Question 9

You plan to provision a High Performance Computing (HPC) cluster in Azure that will use a third-party scheduler. You need to recommend a solution to provision and manage the HPC cluster node. What should you include in the recommendation?

Azure Automation is a general automation platform for running scripts, scheduling tasks, and applying configuration management. Although it can automate parts of VM deployment or post-deployment configuration, it does not provide native HPC cluster orchestration or built-in awareness of third-party schedulers. It also lacks specialized capabilities such as cluster templates, scheduler-integrated autoscaling, and HPC-focused node lifecycle management. Therefore, it is not the best-fit service for provisioning and managing an Azure HPC cluster.

Azure CycleCloud is purpose-built to deploy, configure, and manage HPC and large-scale compute clusters in Azure. It supports integration with common third-party schedulers such as Slurm, PBS Pro, Grid Engine, and HTCondor, which is exactly what the question requires. CycleCloud can automate node provisioning, cluster configuration, and lifecycle operations while also enabling autoscaling based on scheduler demand. This makes it the most appropriate service for provisioning and managing HPC cluster nodes in a repeatable and operationally efficient way.

Azure Purview, now part of Microsoft Purview, is focused on data governance, cataloging, classification, and compliance. Its purpose is to help organizations discover and manage data assets across environments, not to deploy or operate compute infrastructure. It has no role in provisioning HPC nodes, integrating with schedulers, or scaling cluster resources. As a result, it is unrelated to the scenario described in the question.

Azure Lighthouse is designed for cross-tenant management and delegated administration, especially for service providers or enterprises managing multiple Azure tenants. It helps centralize governance, monitoring, and operational access, but it does not deploy or manage HPC clusters. Lighthouse has no built-in functionality for scheduler integration, compute node provisioning, or HPC autoscaling. Therefore, it does not satisfy the requirement to provision and manage HPC cluster nodes.

Question Analysis

Core concept: This question tests Azure services used to provision and manage High Performance Computing (HPC) clusters in Azure, specifically when integrating with third-party schedulers (for example, Slurm, PBS Pro, Grid Engine, LSF, HTCondor). The focus is on cluster lifecycle management: deploying head/login nodes and compute nodes, autoscaling, and integrating scheduler-driven elasticity.

Why the answer is correct: Azure CycleCloud is purpose-built for deploying, managing, and optimizing HPC and big-compute clusters on Azure, including clusters that use third-party schedulers. CycleCloud provides templates ("cluster projects") and orchestration to provision nodes, configure networking, attach storage, and integrate the scheduler so that compute nodes can scale out and in based on job demand. This aligns directly with the requirement to "provision and manage the HPC cluster node" using a third-party scheduler.

Key features / best practices: CycleCloud supports repeatable deployments (infrastructure-as-code-like patterns), scheduler integration, and elastic scaling policies. It can manage heterogeneous node types (CPU/GPU, different VM SKUs), placement considerations (proximity placement groups, availability zones where applicable), and common HPC storage patterns (Azure NetApp Files, NFS on VMs, Lustre offerings depending on region). From an Azure Well-Architected Framework perspective, CycleCloud improves operational excellence (standardized deployments, automation, lifecycle management) and performance efficiency (elastic scale to match workload demand).

Common misconceptions: Azure Automation can run scripts and manage configuration, but it is a general automation service and does not provide HPC scheduler-aware cluster orchestration, templates, or job-driven autoscaling out of the box. Purview is for data governance, not compute provisioning. Lighthouse is for cross-tenant management, not HPC cluster deployment.

Exam tips: For AZ-305, when you see "HPC cluster" plus "third-party scheduler," think Azure CycleCloud (or Azure Batch for Microsoft-managed scheduling). If the question emphasizes custom schedulers and cluster lifecycle management, CycleCloud is the canonical Azure service. If it emphasizes managed job scheduling without third-party schedulers, Azure Batch is typically the answer (though not listed here).
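CycleCloud cluster templates are INI-style text files. The fragment below is a simplified, illustrative sketch, not a complete or verified template: the cluster name, node names, VM sizes, and limits are assumptions chosen for the example, and a real template declares many more attributes (credentials, subnet, scheduler configuration).

```ini
# Illustrative CycleCloud cluster template fragment (simplified sketch).
# Section nesting follows CycleCloud's INI-style template format;
# all attribute values here are assumptions for illustration.
[cluster hpc-demo]

    [[node scheduler]]
    MachineType = Standard_D8s_v5      ; head/login node running the scheduler

    [[nodearray execute]]
    MachineType = Standard_HB120rs_v3  ; compute nodes
    MaxCoreCount = 1200                ; autoscale ceiling driven by job demand
```

The key design point the sketch shows is the split between a long-lived head node (`[[node]]`) and an elastic compute pool (`[[nodearray]]`) whose size CycleCloud adjusts to match scheduler demand up to the configured ceiling.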

10
Question 10
(Select 2)

You have an Azure subscription. Your on-premises network contains a file server named Server1. Server1 stores 5 TB of company files that are accessed rarely. You plan to copy the files to Azure Storage. You need to implement a storage solution for the files that meets the following requirements:

- The files must be available within 24 hours of being requested.
- Storage costs must be minimized.

Which two possible storage solutions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Correct. A Blob Storage account supports blob access tiers, including Archive. Setting each blob to Archive minimizes storage cost for rarely accessed data. When requested, the blobs must be rehydrated (typically hours, up to ~15 hours), meeting the “within 24 hours” requirement. The default tier being Cool is fine; the explicit Archive tier on the blobs is what drives the lowest steady-state cost.

Incorrect. A general-purpose v1 (GPv1) storage account is a legacy account type and does not support blob access tiers (Hot/Cool/Archive) the way GPv2/Blob accounts do. Even though blobs can be stored in it, this option does not specify using the Archive tier, so it will not minimize storage cost to the level required for rarely accessed data with delayed retrieval.

Incorrect. This uses Azure Files (file share) in a GPv2 account. Azure Files is appropriate when you need SMB/NFS semantics, but it does not offer an Archive tier equivalent (offline storage with rehydration). Cool file share tiers can reduce cost, but not as much as Blob Archive for rarely accessed data, so it fails the “minimize storage costs” intent compared to Archive.

Correct. A GPv2 account supports blob containers and blob access tiers. Even though the account default tier is Hot, explicitly setting each blob to Archive achieves the lowest storage cost. Rehydration from Archive to Hot/Cool typically completes within hours (up to ~15 hours), satisfying the requirement that files be available within 24 hours of being requested.
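As a quick sanity check of the "within 24 hours" reasoning above, here is a small hypothetical helper, a sketch rather than an SDK call. The 15-hour figure is the documented upper bound for standard rehydration quoted in this explanation; the function and constant names are invented for illustration.

```python
# Sketch: check whether a blob access tier can satisfy a retrieval SLA.
# The 15-hour value is the documented upper bound for standard rehydration
# from Archive referenced in the text; names here are hypothetical and
# not part of any Azure SDK.
ONLINE_TIERS = {"Hot", "Cool"}
ARCHIVE_STANDARD_REHYDRATE_MAX_HOURS = 15

def tier_meets_retrieval_sla(tier: str, sla_hours: float) -> bool:
    if tier in ONLINE_TIERS:
        return True  # online tiers are readable immediately
    if tier == "Archive":
        # Offline tier: data must be rehydrated before it can be read.
        return ARCHIVE_STANDARD_REHYDRATE_MAX_HOURS <= sla_hours
    raise ValueError(f"unknown tier: {tier}")
```

With the question's numbers, `tier_meets_retrieval_sla("Archive", 24)` holds, which is why Archive satisfies the 24-hour availability requirement while minimizing storage cost.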

Incorrect. GPv1 with a file share is neither a valid nor an optimal design for minimizing cost for rarely accessed data. Azure Files does not provide an Archive tier, and GPv1 is a legacy account type with fewer modern cost-optimization and tiering capabilities. This option also does not address the 24-hour retrieval model via rehydration; it implies online file share access rather than low-cost offline storage.

Question Analysis

Core concept: This question tests Azure Storage tiering choices for rarely accessed data, specifically Blob Storage access tiers (Hot/Cool/Archive) and the retrieval-time tradeoffs. The key requirement is "available within 24 hours of being requested" while minimizing storage cost.

Why the answers are correct: The Azure Blob Storage Archive tier is designed for long-term retention and has the lowest storage cost. Data in Archive is offline and must be rehydrated to Hot or Cool before it can be read. Standard rehydration typically completes within hours and is documented as taking up to 15 hours, which satisfies the "within 24 hours" requirement. Therefore, any solution that stores the files as blobs in the Archive tier meets the availability requirement and minimizes ongoing storage cost. Both A and D copy the files into a blob container and then set the blobs to the Archive tier. The difference is the account's default access tier (Cool vs. Hot), but the default tier mainly affects the tier applied to new blobs and some billing behaviors; once you explicitly set blobs to Archive, the default tier is not the deciding factor for cost minimization. In both cases, the steady-state storage cost is minimized by Archive.

Key features / best practices:
- Use Blob Storage (or GPv2) with lifecycle management to automatically move data from Hot/Cool to Archive based on last access or last modification time.
- Understand the cost model: Archive has the lowest storage cost but higher access/rehydration costs and latency.
- Ensure the workload tolerates the rehydration delay and potential early deletion charges (Archive has a minimum retention period).

Common misconceptions:
- Choosing the Cool tier alone (without Archive) reduces cost, but not as much as Archive for rarely accessed data.
- Azure Files (file shares) does not provide an "Archive" tier; it offers Hot/Cool/Transaction Optimized/Premium depending on account type and region, so it cannot reach the lowest-cost offline storage model.
- GPv1 accounts are legacy and do not support modern tiering features as flexibly as GPv2/Blob accounts.

Exam tips: When you see "rarely accessed" + "can wait hours" + "minimize storage cost," think Blob Archive. If the requirement were "immediate access," choose Hot or Cool instead. If the requirement were an SMB/NFS file share, consider Azure Files, but note it will not match Archive's lowest-cost offline storage behavior.
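The lifecycle-management best practice mentioned above can be expressed as a storage account management policy. The fragment below is a sketch of such a policy: the rule name and the 90-day threshold are illustrative assumptions, not values from the question.

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-rarely-accessed",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
```

Applied to a GPv2 or Blob Storage account, a rule like this automates the "explicitly set each blob to Archive" step for data that has gone untouched past the threshold, keeping steady-state cost at the Archive rate without manual tiering.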

© Copyright 2026 Cloud Pass, All rights reserved.
