Microsoft AZ-305


282+ Practice Questions with AI-Verified Answers

Designing Microsoft Azure Infrastructure Solutions

Free questions and answers based on real exam questions
AI-powered, detailed explanations
Exam-style questions closest to the real exam

AI-Powered

Triple AI-Verified Answers & Explanations

Every Microsoft AZ-305 answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Exam Domains

Design Identity, Governance, and Monitoring Solutions (Weight: 27%)
Design Data Storage Solutions (Weight: 22%)
Design Business Continuity Solutions (Weight: 18%)
Design Infrastructure Solutions (Weight: 33%)

Practice Questions

Question 1

HOTSPOT - You plan to deploy an Azure web app named App1 that will use Azure Active Directory (Azure AD) authentication. App1 will be accessed from the internet by the users at your company. All the users have computers that run Windows 10 and are joined to Azure AD. You need to recommend a solution to ensure that the users can connect to App1 without being prompted for authentication and can access App1 only from company-owned computers. What should you recommend for each requirement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The users can connect to App1 without being prompted for authentication: ______

Correct answer: A (An Azure AD app registration). To enable Azure AD authentication for an Azure web app (App Service) and achieve SSO, the application must be represented in Azure AD. That representation is an app registration, which creates the application object and a corresponding service principal (shown as an enterprise application) used for OAuth 2.0/OpenID Connect token issuance. With Azure AD-joined Windows 10 devices, users typically hold a Primary Refresh Token (PRT), enabling silent token acquisition in supported browsers, so they can access App1 without an interactive prompt.

Why the others are wrong:
- B (Managed identity) is a workload identity for the app itself to access Azure resources such as Key Vault or Storage; it does not provide end-user authentication or SSO into the web app.
- C (Azure AD Application Proxy) publishes on-premises apps externally via Azure AD; App1 is already an internet-facing Azure web app, so Application Proxy is unnecessary for SSO and does not replace the need for an app registration.

Part 2:

The users can access App1 only from company-owned computers: ______

Correct answer: A (A Conditional Access policy). Conditional Access is the Azure AD feature designed to control access to cloud apps based on conditions such as device state. To ensure only company-owned computers can access App1, you configure a Conditional Access policy targeting App1 that requires the device to be marked compliant (typically via Microsoft Intune) and/or requires an Azure AD-joined device. This grants access only when the sign-in comes from a managed corporate device, even though the app is reachable from the internet.

Why the others are wrong:
- B (Administrative unit) scopes administrative management of users and devices; it does not enforce sign-in restrictions.
- C (Application Gateway) is a Layer 7 reverse proxy/WAF; it cannot evaluate Azure AD device compliance or join state.
- D (Azure Blueprints) and E (Azure Policy) govern Azure resource deployment and configuration, not end-user authentication or device-based access controls.
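As a sketch, the device-based requirement from Part 2 can be expressed as a Conditional Access policy body sent to Microsoft Graph (POST /identity/conditionalAccessPolicies). The application ID below is a placeholder for App1's client ID; the field names follow the Graph conditionalAccessPolicy schema.

```python
# Sketch of a Conditional Access policy body for Microsoft Graph
# (POST /identity/conditionalAccessPolicies). The app ID is a placeholder.
APP1_CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical App1 client ID

policy = {
    "displayName": "Require managed device for App1",
    "state": "enabled",
    "conditions": {
        "applications": {"includeApplications": [APP1_CLIENT_ID]},
        "users": {"includeUsers": ["All"]},
    },
    "grantControls": {
        # Grant access only from a compliant or Azure AD (hybrid) joined device.
        "operator": "OR",
        "builtInControls": ["compliantDevice", "domainJoinedDevice"],
    },
}
```

Layering a policy like this on top of the app registration from Part 1 covers both requirements: silent SSO via the PRT, and device-based restriction via Conditional Access.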

Question 2
(Select 3)

You are designing a large Azure environment that will contain many subscriptions. You plan to use Azure Policy as part of a governance solution. To which three scopes can you assign Azure Policy definitions? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Incorrect. Azure AD administrative units are used to delegate administrative control over subsets of Azure AD objects (users, groups, devices). Azure Policy is an Azure Resource Manager governance feature and does not assign to Azure AD administrative units. This option is a common distractor mixing identity governance with resource governance.

Incorrect. An Azure AD tenant is an identity boundary, not an ARM resource scope. Azure Policy assignments are made within the Azure resource hierarchy (management groups, subscriptions, resource groups, resources). While many Azure services integrate with Azure AD, Azure Policy does not assign at the tenant level as an Azure AD construct.

Correct. Subscription is a primary Azure Policy assignment scope. Assigning a policy at the subscription level applies to all resource groups and resources within that subscription (unless excluded). This is commonly used to enforce standards for a single subscription, such as allowed regions, required tags, or security configurations.

Incorrect. “Compute resources” is not a valid Azure Policy assignment scope. Azure Policy can evaluate and enforce rules on compute resource types (VMs, VMSS, AKS, etc.) via policy conditions, but the assignment scope is still management group, subscription, resource group (or individual resource), not a generic compute category.

Correct. Resource group is a valid Azure Policy assignment scope. This is useful when different workloads within the same subscription require different governance rules (e.g., stricter policies for production RGs). Policies assigned at the resource group scope apply to resources within that resource group.

Correct. Management group is a key scope for large environments with many subscriptions. Assigning policies at the management group level enables centralized governance and consistent enforcement across multiple subscriptions. This is a best-practice approach for enterprise-scale landing zones and aligns with the Governance pillar of the Azure Well-Architected Framework.

Question Analysis

Core concept: Azure Policy is an Azure Resource Manager (ARM) governance service used to enforce standards and assess compliance for Azure resources. Policy definitions (rules) are assigned at a scope within the Azure resource hierarchy so they can be inherited by child scopes.

Why the answer is correct: Azure Policy definitions can be assigned at three primary governance scopes in the ARM hierarchy: management groups, subscriptions, and resource groups. Assigning at a higher scope (management group) enables consistent governance across many subscriptions, a common AZ-305 design scenario for large enterprises. Assigning at subscription scope targets a single subscription and all its resource groups and resources. Assigning at resource group scope targets a specific workload boundary.

Key features, configurations, and best practices:
- Scope inheritance: a policy assignment at a management group applies to all subscriptions (and their resource groups and resources) beneath it, unless excluded.
- Exclusions: you can exclude specific child scopes from a policy assignment to support exceptions (e.g., sandbox subscriptions).
- Initiatives: group multiple policy definitions into an initiative and assign it once at the desired scope for easier governance.
- Well-Architected alignment: Policy supports the Governance pillar by enforcing tagging, allowed locations/SKUs, security baselines, and resource configuration standards at scale.

Common misconceptions:
- Azure AD tenants and administrative units are identity governance scopes, not ARM resource scopes. Azure Policy evaluates ARM resources, not Azure AD objects.
- "Compute resources" sounds like a resource-level scope, but Azure Policy assignment scopes are not resource-type categories. You can target specific resource types using policy rules and conditions, but the assignment itself is made at a management group, subscription, or resource group (and also at individual resource scope, though that option is not listed here).

Exam tips: Remember the ARM hierarchy for governance: management group > subscription > resource group > resource. For questions asking to which scopes you can assign Azure Policy, pick the ARM scopes. If Azure AD scopes appear as distractors, they are typically incorrect for Azure Policy assignments.
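The three assignable scopes correspond to ARM resource ID prefixes, which is what you pass to a tool such as the Azure CLI when creating an assignment. A minimal sketch (subscription IDs and names below are placeholders):

```python
# ARM scope strings as used when assigning Azure Policy, e.g. via
# `az policy assignment create --scope <scope>`. IDs are placeholders.

def management_group_scope(mg_name: str) -> str:
    """Scope for a management group: governs every subscription beneath it."""
    return f"/providers/Microsoft.Management/managementGroups/{mg_name}"

def subscription_scope(sub_id: str) -> str:
    """Scope for a single subscription and all its resource groups/resources."""
    return f"/subscriptions/{sub_id}"

def resource_group_scope(sub_id: str, rg_name: str) -> str:
    """Scope for one workload boundary inside a subscription."""
    return f"/subscriptions/{sub_id}/resourceGroups/{rg_name}"

# Inheritance flows downward:
# management group > subscription > resource group > resource.
```

Seeing the scope strings side by side makes the hierarchy concrete: each narrower scope literally extends the broader one's resource ID.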

Question 3

HOTSPOT - You need to design a storage solution for an app that will store large amounts of frequently used data. The solution must meet the following requirements: ✑ Maximize data throughput. ✑ Prevent the modification of data for one year. ✑ Minimize latency for read and write operations. Which Azure Storage account type and storage service should you recommend? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Storage account type: ______

BlockBlobStorage is the correct account type because it is optimized for premium block blob workloads that require high throughput and low latency. Since the data must also be protected from modification for one year, Blob storage is needed to apply an immutability policy, and BlockBlobStorage is the premium account type aligned to that service. BlobStorage is a legacy account type, FileStorage is intended for Azure Files rather than blobs, and StorageV2 with Standard performance does not provide the same performance characteristics. Therefore, BlockBlobStorage best satisfies both the performance and immutability requirements.

Part 2:

Storage service: ______

Blob storage is the correct service because it supports immutable (WORM) storage through immutability policies, including time-based retention for a defined period such as one year. This directly satisfies the requirement to prevent modification of data for one year. Blob storage is also well suited to storing large amounts of unstructured data and can deliver high throughput, especially when paired with Premium performance.

Why the others are wrong:
- File (B): Azure Files is optimized for SMB/NFS file shares. While it can be very performant in Premium tiers, the exam-relevant, first-class immutability/WORM capability for "prevent modification for one year" is a Blob feature (immutability policies).
- Table (C): Table storage is a NoSQL key/attribute store for structured data; it is not intended for large unstructured datasets or for WORM-style immutability requirements.
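The one-year requirement maps to a time-based retention policy on the container. As a sketch of the shape involved, the ARM request body for a container immutability policy (Microsoft.Storage blobServices/containers/immutabilityPolicies) looks roughly like this, with 365 days standing in for one year:

```python
# Sketch of the ARM body for a time-based retention (immutability) policy on a
# blob container. 365 days is used here as the "one year" retention period.
RETENTION_DAYS = 365

immutability_policy = {
    "properties": {
        # Blobs cannot be modified or deleted for this many days after creation.
        "immutabilityPeriodSinceCreationInDays": RETENTION_DAYS,
        # False = strict WORM; True would still allow append operations.
        "allowProtectedAppendWrites": False,
    }
}
```

Once the policy is locked, even the account owner cannot shorten the retention period, which is what makes it suitable for compliance-driven "no modification for one year" requirements.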

Question 4

You are designing an application that will be hosted in Azure. The application will host video files that range from 50 MB to 12 GB. The application will use certificate-based authentication and will be available to users on the internet. You need to recommend a storage option for the video files. The solution must provide the fastest read performance and must minimize storage costs. What should you recommend?

Azure Files provides managed file shares over SMB/NFS and is best for lift-and-shift file server scenarios, shared application configuration, or user home directories. It is not typically the fastest or most cost-effective option for serving large video files to internet users at scale. Internet delivery usually requires additional components and doesn’t align as well as object storage with CDN caching and tiering for cost optimization.

Azure Data Lake Storage Gen2 is essentially Blob Storage with a hierarchical namespace and POSIX-like ACLs, optimized for big data analytics (Spark/Hadoop) and data engineering. While it can store large files, its primary value is analytics and filesystem semantics rather than lowest-cost, highest-performance internet content delivery. For a video hosting app focused on fast reads and minimal cost, standard Blob Storage is the more direct fit.

Azure Blob Storage is designed for unstructured data such as video and supports very large objects with high throughput. It offers Hot/Cool/Archive tiers and lifecycle management to minimize storage costs while maintaining performance for frequently accessed content. For fastest read performance to internet users, Blob integrates with Azure CDN/Front Door for edge caching. Secure access can be implemented via short-lived SAS tokens issued after certificate-based authentication.

Azure SQL Database is a relational database service intended for structured data and transactional workloads. Storing 50 MB to 12 GB video files in a database (as BLOBs) is inefficient and expensive, complicates scaling, and typically results in worse read performance and higher costs than object storage. The correct pattern is to store metadata in SQL (if needed) and store the actual video content in Blob Storage.

Question Analysis

Core concept: This question tests choosing the right Azure storage service for large unstructured objects (video files) with internet access, strong authentication, high read performance, and low cost. For AZ-305, this maps to selecting the appropriate data platform and access pattern (object storage vs. file shares vs. database).

Why the answer is correct: Azure Blob Storage is the primary Azure service for storing and serving large binary objects (50 MB to 12 GB) efficiently. It provides the best cost/performance fit for internet-facing content delivery because it is optimized for high-throughput reads, supports tiering to minimize cost, and integrates cleanly with secure access mechanisms. For the fastest read performance, Blob Storage can be paired with Azure CDN or Front Door for edge caching and acceleration, the typical architecture for global video delivery. To minimize storage costs, Blob supports Hot/Cool/Archive access tiers and lifecycle management rules that automatically move older, less-accessed videos to cheaper tiers.

Key features and best practices:
- Performance: Blob Storage supports high throughput and large object sizes; Premium block blobs can further increase performance for demanding workloads, while Standard is usually most cost-effective.
- Cost optimization: use lifecycle policies to transition blobs from Hot to Cool/Archive based on last access or modification time; consider reserved capacity for predictable storage volumes.
- Secure internet access: use Azure AD integration where applicable, SAS tokens, stored access policies, and/or private endpoints (for internal access). For certificate-based authentication, a common pattern is to authenticate users or the app via certificates to an identity provider or API, then issue short-lived SAS tokens for blob reads.
- Well-Architected alignment: Cost Optimization (tiering, lifecycle), Performance Efficiency (CDN/Front Door caching), Security (least privilege via SAS/Azure AD, encryption at rest).

Common misconceptions: Azure Files can look attractive because it is "file storage," but it is SMB/NFS-oriented and typically not the best or cheapest option for large-scale internet content distribution. ADLS Gen2 is built on Blob but is optimized for analytics and hierarchical namespace scenarios, not for serving public video content. SQL Database is not suitable for large video binaries.

Exam tips: For large media files served to internet users, default to Azure Blob Storage (often with CDN/Front Door). Use access tiers and lifecycle management for cost control, and use SAS/Azure AD patterns for secure access rather than exposing storage keys.
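The cost-optimization half of the answer is usually implemented as a lifecycle management policy on the storage account. A sketch of such a policy, using the Microsoft.Storage managementPolicies rule schema (the prefix and day thresholds are illustrative choices, not from the question):

```python
# Sketch of a Blob lifecycle management policy: keep new videos Hot for fast
# reads, then tier down to cut storage cost. Thresholds are illustrative.
lifecycle_policy = {
    "rules": [
        {
            "name": "tier-down-old-videos",
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                # Only block blobs under the hypothetical "videos/" prefix.
                "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["videos/"]},
                "actions": {
                    "baseBlob": {
                        # Cool after 30 days without modification, Archive after 180.
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "tierToArchive": {"daysAfterModificationGreaterThan": 180},
                    }
                },
            },
        }
    ]
}
```

A policy like this runs automatically in the background, so "minimize storage costs" requires no ongoing manual tier changes.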

Question 5

You have an Azure subscription that contains a custom application named Application1. Application1 was developed by an external company named Fabrikam, Ltd. Developers at Fabrikam were assigned role-based access control (RBAC) permissions to the Application1 components. All users are licensed for the Microsoft 365 E5 plan. You need to recommend a solution to verify whether the Fabrikam developers still require permissions to Application1. The solution must meet the following requirements: ✑ To the manager of the developers, send a monthly email message that lists the access permissions to Application1. ✑ If the manager does not verify an access permission, automatically revoke that permission. ✑ Minimize development effort. What should you recommend?

Correct. Azure AD (Microsoft Entra ID) Access Reviews provide scheduled (monthly) reviews with email notifications to reviewers (e.g., the developers’ manager). You can configure auto-apply results so that access is removed if not approved or not reviewed. This meets all requirements with minimal development effort and provides strong auditing for governance and least-privilege enforcement.

Incorrect. An Azure Automation runbook using Get-AzRoleAssignment can enumerate RBAC assignments, but you would still need to build custom logic for monthly scheduling, emailing the manager, capturing approvals, tracking non-responses, and revoking access. This increases development and maintenance effort and is less aligned with built-in identity governance and audit workflows.

Incorrect. Privileged Identity Management (PIM) focuses on just-in-time access, eligible vs. active role assignments, approval workflows for elevation, and time-bound assignments. It does not inherently provide the required monthly manager attestation with automatic removal for non-response in the same straightforward way as Access Reviews. PIM can complement governance but is not the best fit here.

Incorrect. Get-AzureADUserAppRoleAssignment targets application role assignments (app roles) rather than Azure RBAC role assignments to Azure resources. Application1 access is described as RBAC permissions to components, which are typically Azure resource role assignments. Even if applicable, a runbook would still require custom approval/attestation and revocation logic, increasing development effort.

Question Analysis

Core concept: This question tests Azure governance for identity and access, specifically how to periodically validate and automatically remove unnecessary access with minimal effort. The right service is Microsoft Entra ID (Azure AD) Access Reviews, part of Identity Governance.

Why the answer is correct: An access review can be configured to run on a recurring schedule (monthly), notify designated reviewers (the developers' manager), present the current access (who holds which role assignments), and automatically apply results. Critically, you can configure the review so that if the reviewer does not respond, the system automatically removes access at the end of the review. This directly satisfies the requirements: (1) a monthly email to the manager listing access permissions, (2) automatic revocation if access is not verified, and (3) minimal development effort (a built-in workflow, no custom scripting).

Key features and configuration points:
- Scope: you can review access to groups, applications, and (with supported integrations) Azure resource roles. For RBAC to Application1 components, you typically review membership in the groups used for RBAC assignments (recommended best practice) or review role assignments where supported.
- Recurrence: set to monthly.
- Reviewers: set the manager as the reviewer (or use "manager of user" where applicable).
- Auto-apply results: enable automatic removal for denied or not-reviewed access.
- Notifications: email notifications and reminders are built in.

This aligns with the Azure Well-Architected Framework Security pillar: enforce least privilege and continuously review access.

Common misconceptions: Many assume Automation runbooks are required to email reports and remove assignments. While possible, that increases operational and development overhead and is more error-prone. Another misconception is that Privileged Identity Management (PIM) is required; PIM is excellent for just-in-time privileged access, but the requirement here is periodic attestation and automatic removal of unverified access.

Exam tips:
- If you see "manager attestation," "recurring review," or "auto-remove if not approved," think Access Reviews.
- Use PIM when the requirement is time-bound elevation or eligible assignments; use Access Reviews when the requirement is periodic validation and attestation of existing access.
- Prefer built-in governance features over custom scripts to minimize effort and improve auditability.

Licensing note: All users have Microsoft 365 E5, which typically includes the Entra ID Governance capabilities needed for Access Reviews, making this solution feasible without additional custom development.


Question 6

You have an Azure subscription. The subscription has a blob container that contains multiple blobs. Ten users in the finance department of your company plan to access the blobs during the month of April. You need to recommend a solution to enable access to the blobs during the month of April only. Which security solution should you include in the recommendation?

Shared access signatures (SAS) provide delegated, granular access to Blob Storage with explicit permissions (read/write/list), scope (blob/container), and validity period (start/expiry). This directly satisfies “April only” by setting the SAS to expire at the end of April. For best practice, use a user delegation SAS with Microsoft Entra ID to avoid exposing storage account keys and to improve governance.

Conditional Access policies control how users authenticate to applications via Microsoft Entra ID (e.g., require MFA, compliant device, named locations). They are not the primary mechanism to grant time-limited access to blobs via URLs, and SAS-based access may not be subject to Conditional Access at all. Conditional Access also doesn’t inherently provide a simple per-resource expiry for blob access.

Certificates are not a typical or recommended method to grant temporary access to Azure blobs. While certificates can be used for certain authentication scenarios (e.g., client certificates, service principals), they do not provide the built-in, resource-scoped, time-bound permission model that SAS offers for Blob Storage access. Managing certificate issuance/rotation also adds unnecessary complexity for this requirement.

Storage account access keys are highly privileged, long-lived secrets that grant broad access to the entire storage account. Sharing them with 10 users violates least privilege and makes it difficult to enforce “April only” access without manual rotation. If a key is leaked, an attacker could access all data until the key is regenerated, making this a poor security design choice.

Question Analysis

Core concept: This question tests secure, time-bound access delegation to Azure Blob Storage. The key capability is granting limited permissions to specific storage resources without sharing long-lived credentials.

Why the answer is correct: Shared access signatures (SAS) are designed to provide delegated access to blobs and containers with explicit constraints, including start and expiry times. To allow the finance users to access blobs only during April, you issue a SAS (service SAS or user delegation SAS) valid from April 1 through April 30 (or May 1 00:00). After expiry, the SAS token becomes unusable automatically, meeting the "April only" requirement without ongoing admin action.

Key features and best practices:
- Scope and permissions: create a SAS scoped to the container (or specific blobs) with only the required permissions (typically read/list). Apply least privilege per the Azure Well-Architected Framework Security pillar.
- Time bounds: set a start time and expiry time to enforce the April window.
- Prefer user delegation SAS: if using Microsoft Entra ID, generate a user delegation SAS backed by Azure AD and a user delegation key, avoiding storage account keys and enabling better governance.
- Optional network controls: combine SAS with the storage firewall, private endpoints, or IP restrictions in the SAS (where applicable) to further reduce exposure.
- Operational considerations: distribute SAS tokens securely (e.g., via a secure portal or a short-lived delivery mechanism), and rotate or reissue them if compromised.

Common misconceptions:
- Conditional Access controls sign-in to Entra-protected apps, but blob access via SAS can bypass interactive sign-in and Conditional Access evaluation. Conditional Access also does not natively provide a simple "resource access expires on a date" mechanism for blob URLs.
- Certificates are not a standard mechanism to grant temporary access to blobs; they may be used for client authentication in some contexts but not for delegating blob permissions with an expiry.
- Access keys provide full control over the storage account and are long-lived; they are not appropriate for time-limited, user-scoped access.

Exam tips: When you see "temporary access," "limited permissions," "specific container/blob," or "time-bound access," think SAS. For AZ-305 design questions, also remember the preference hierarchy: Entra ID + RBAC where possible; for URL-based delegated access, use SAS, ideally a user delegation SAS to avoid sharing account keys.

Question 7
(Select 2)

You have an Azure Active Directory (Azure AD) tenant that syncs with an on-premises Active Directory domain. You have an internal web app named WebApp1 that is hosted on-premises. WebApp1 uses Integrated Windows authentication. Some users work remotely and do NOT have VPN access to the on-premises network. You need to provide the remote users with single sign-on (SSO) access to WebApp1. Which two features should you include in the solution? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure AD Application Proxy is correct because it is specifically built to provide remote access to on-premises web applications without requiring VPN connectivity. It uses an on-premises connector that makes outbound connections to Azure, so you do not need to expose the internal application directly to the internet. For apps using Integrated Windows authentication, Application Proxy supports single sign-on through Kerberos Constrained Delegation, allowing Azure AD-authenticated users to access the app seamlessly. This directly matches the requirement to give remote users SSO access to WebApp1.

Azure AD Privileged Identity Management (PIM) is wrong because it is used to manage privileged roles and provide just-in-time elevation for administrative access. It helps reduce standing privilege and supports approval workflows, notifications, and access reviews for privileged accounts. However, it does not publish on-premises applications, provide remote connectivity, or enable SSO for end users accessing an internal web app. Therefore, it does not address the core requirement in this scenario.

Conditional Access policies are wrong as a primary answer because they only control the conditions under which users can access applications, such as requiring MFA, compliant devices, or trusted locations. They do not provide the mechanism to expose an on-premises web application to remote users who lack VPN access. While Conditional Access can be layered on top of Application Proxy for additional security, it is not one of the required features to make the application reachable and provide SSO. The scenario asks for the core solution components, which are Application Proxy and the enterprise application configuration.

Azure Arc is wrong because it is intended for hybrid and multicloud resource management, such as managing servers, Kubernetes clusters, and data services from Azure. It extends Azure governance, policy, and management capabilities to non-Azure environments. It does not provide remote user access to on-premises web applications and has no role in Integrated Windows authentication SSO for this scenario. As a result, it is unrelated to the stated requirement.

Azure AD enterprise applications are correct because the published on-premises application is managed in Azure AD as an enterprise application. This is where administrators configure user and group assignment, SSO-related settings, and access behavior for the application. In an Application Proxy deployment, the app appears as an enterprise application in Azure AD, making this feature part of the overall solution. Without the enterprise application object, you would not have the Azure AD application integration layer needed to manage access to WebApp1.

Azure Application Gateway is wrong because it is primarily a Layer 7 load balancer for HTTP/HTTPS traffic with features such as SSL termination, path-based routing, and Web Application Firewall integration. Although it can publish web applications, it is not the Azure AD identity-based reverse proxy service designed for remote access to internal apps without VPN. It also does not natively provide the same Azure AD preauthentication and KCD-based SSO pattern used by Azure AD Application Proxy for Integrated Windows authentication apps. Compared to Application Proxy, it lacks the direct identity integration needed for this exact scenario.

Question Analysis

Core concept: This question tests how to provide remote users with secure single sign-on access to an on-premises web application that uses Integrated Windows authentication, when those users do not have VPN connectivity. The key Azure AD capability is publishing the on-premises application externally through Azure AD while managing authentication and access through an enterprise application object.

Why the answer is correct: Azure AD Application Proxy is designed specifically to publish on-premises web applications to external users without requiring inbound firewall ports or VPN access. It uses a connector installed on-premises that establishes outbound connections to Azure, allowing remote users to reach internal apps securely. For applications like WebApp1 that use Integrated Windows authentication, Application Proxy can be configured for Kerberos Constrained Delegation (KCD), which enables single sign-on from Azure AD to the on-premises app. Azure AD enterprise applications are also required because the published app is represented and managed in Azure AD as an enterprise application, where you configure user assignment, SSO settings, and access behavior.

Key features and configurations:
- Azure AD Application Proxy publishes internal web apps for external access without VPN.
- The Application Proxy connector is installed on-premises and communicates outbound to Azure.
- Integrated Windows authentication apps can use Kerberos Constrained Delegation for SSO.
- Azure AD enterprise applications provide the application object used for assignment, access control, and SSO configuration.
- Users authenticate with Azure AD first; Azure AD and Application Proxy then broker access to the internal app.

Common misconceptions:
- Conditional Access can control who can access an app, but it does not by itself publish an on-premises application or provide the connectivity path.
- Privileged Identity Management is for just-in-time role activation and privileged access governance, not application publishing or user SSO to internal web apps.
- Azure Application Gateway is a load balancer and web traffic management service; it does not natively provide the Azure AD Application Proxy pattern for publishing internal Integrated Windows authentication apps to remote users without VPN.
- Azure Arc extends Azure management to hybrid resources but is unrelated to remote SSO access for an on-premises web application.

Exam tips:
- If the scenario says on-premises web app + remote users + no VPN, think Azure AD Application Proxy.
- If the app uses Integrated Windows authentication, look for Kerberos Constrained Delegation support through Application Proxy.
- Enterprise applications in Azure AD are commonly involved when configuring SSO, user assignment, and app access.
- Conditional Access is often complementary but is not the core publishing solution.
- Distinguish between identity/access services and network/load-balancing services.

8
Question 8

You have an Azure Active Directory (Azure AD) tenant named contoso.com that has a security group named Group1. Group1 is configured for assigned membership. Group1 has 50 members, including 20 guest users. You need to recommend a solution for evaluating the membership of Group1. The solution must meet the following requirements: ✑ The evaluation must be repeated automatically every three months. ✑ Every member must be able to report whether they need to be in Group1. ✑ Users who report that they do not need to be in Group1 must be removed from Group1 automatically. ✑ Users who do not report whether they need to be in Group1 must be removed from Group1 automatically. What should you include in the recommendation?

Azure AD (Entra ID) Identity Protection is designed for detecting and remediating identity risks (risky sign-ins, risky users) using risk policies and reports. It does not provide a periodic, user-driven membership attestation workflow for groups, nor does it automatically remove users from a group based on self-reported need or non-response. Therefore it cannot meet the stated governance and recertification requirements.

Changing Group1 to Dynamic User membership automates membership based on a rule evaluating user attributes (for example, department, jobTitle, userType). This does not satisfy the requirement that every member must explicitly report whether they still need to be in the group, and it does not implement a quarterly review/attestation process. Dynamic groups are great for attribute-driven access, not for periodic access recertification.

Access reviews in Microsoft Entra ID Governance are built for periodic access recertification of group membership, app access, and role assignments. You can schedule the review to recur every three months, set reviewers to the users themselves (self-attestation), and configure automatic application of results so users who answer “No” are removed. You can also remove non-responders automatically, meeting all requirements.

Privileged Identity Management (PIM) focuses on managing privileged access (Azure AD roles, Azure resource roles, and privileged groups) through eligible assignments, just-in-time activation, approval workflows, and access reviews for privileged assignments. While PIM can involve reviews, the question is specifically about evaluating standard assigned group membership with self-attestation and automatic removal of non-responders. Access Reviews is the direct and expected solution.

Question Analysis

Core concept: This question tests Microsoft Entra ID (Azure AD) governance capabilities for group membership lifecycle management. The specific feature is Access Reviews (part of Entra ID Governance), which enables periodic attestation of users' continued need for access and can automatically remove users based on review outcomes.

Why the answer is correct: An access review can be configured for a specific group (Group1) and scheduled to recur every three months. It can be set so that each member (including guest users) is asked to self-attest whether they still need membership. Critically, access reviews support automated remediation: users who indicate they no longer need access can be removed automatically, and users who do not respond by the end of the review can also be removed automatically (via the "auto-apply results" and "apply results to non-responders" settings). This exactly matches all four requirements.

Key features and configuration points:
- Scope: an access review for "Groups" targeting Group1.
- Reviewers: "Self review" so every member can report their need.
- Recurrence: set to every 3 months.
- Automation: enable auto-apply results upon completion; configure non-responders to be removed.
- Guests: access reviews can include guest users, which is a common governance need.
- Governance alignment: this supports Azure Well-Architected Framework security pillar principles (least privilege, continuous access evaluation, and governance).

Common misconceptions:
- Dynamic groups (option B) automate membership based on user attributes, not user attestation, and do not provide a periodic "confirm you still need access" workflow.
- PIM (option D) is for privileged role and resource access elevation (just-in-time) and can manage eligible assignments, but it is not the primary mechanism for recurring self-attestation and automatic removal from a standard assigned membership group.
- Identity Protection (option A) focuses on risk-based detection and remediation (risky users/sign-ins), not membership recertification.

Exam tips: When you see requirements like "recertify every X months," "users must attest," and "auto-remove non-responders," think Access Reviews in Entra ID Governance. If the scenario is about privileged roles and JIT elevation, think PIM; if it is about risk detections, think Identity Protection; if it is attribute-based membership, think Dynamic Groups.
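The auto-apply behavior described above can be sketched as plain decision logic. This is a conceptual simulation of the documented outcome (self-attested "No" answers and non-responders are removed), not a real Entra ID API; all names are illustrative.

```python
def apply_review_results(members, responses):
    """Simulate auto-applied access review results.

    members: list of user names in the group.
    responses: dict mapping user -> True ("I still need access") or False.
               Users absent from the dict did not respond to the review.
    Returns (kept, removed).
    """
    kept, removed = [], []
    for user in members:
        if responses.get(user) is True:   # self-attested "Yes": keep membership
            kept.append(user)
        else:                             # answered "No" or never responded: remove
            removed.append(user)
    return kept, removed

kept, removed = apply_review_results(
    ["alice", "bob", "guest1"],
    {"alice": True, "bob": False},        # guest1 never responded
)
print(kept)     # ['alice']
print(removed)  # ['bob', 'guest1']
```

Note how the non-responder (`guest1`) is treated the same as an explicit "No", which is exactly the fourth requirement in the question.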

9
Question 9

DRAG DROP - Your on-premises network contains a server named Server1 that runs an ASP.NET application named App1. You have a hybrid deployment of Azure Active Directory (Azure AD). You need to recommend a solution to ensure that users sign in by using their Azure AD account and Azure Multi-Factor Authentication (MFA) when they connect to App1 from the internet. Which three features should you recommend be deployed and configured in sequence? To answer, move the appropriate features from the list of features to the answer area and arrange them in the correct order. Select and Place:

Part 1:

(The drag-and-drop answer area image is not reproduced here; the correct sequence is explained below.)

This sequence is correct because the required solution is a standard Azure AD + on-premises app publishing pattern frequently tested on AZ-305. The three features, in order, are:

1. An Azure AD enterprise application — represents App1 in Azure AD and is the target for access policies.
2. Azure AD Application Proxy — publishes the on-premises app to the internet via the Application Proxy connector and enables Azure AD pre-authentication.
3. A Conditional Access policy — configured for that enterprise app to require Azure MFA for internet access.

Why the others are wrong: public and internal Azure Load Balancers distribute traffic to Azure resources and do not enforce Azure AD authentication or MFA for an on-premises app. An Azure App Service plan hosts apps in Azure App Service, which is not applicable because App1 stays on Server1. A managed identity is a workload identity for accessing Azure resources, not a mechanism for end-user interactive sign-in and MFA enforcement.

10
Question 10

You need to recommend a solution to generate a monthly report of all the new Azure Resource Manager (ARM) resource deployments in your Azure subscription. What should you include in the recommendation?

Azure Activity Log is correct because it records all subscription-level control-plane events, including ARM deployment operations and resource creation events. This makes it the native audit source for identifying new deployments within a given month. It includes useful metadata such as the caller, operation name, timestamp, and status, which are exactly the details needed for a deployment report. In exam scenarios, when the requirement is to track Azure administrative actions or deployments, Activity Log is the primary service to choose.

Azure Advisor focuses on best-practice recommendations across cost, security, reliability, operational excellence, and performance. It does not provide an authoritative audit trail of ARM deployments or a complete list of new deployments. Advisor may highlight configuration issues resulting from deployments, but it is not designed for compliance-style deployment reporting.

Azure Analysis Services is a PaaS analytics service for hosting tabular semantic models (similar to SSAS Tabular) used by BI tools like Power BI. It does not ingest or track Azure subscription Activity Log events by itself. You could theoretically model data after exporting logs elsewhere, but it is not the correct native service to generate deployment reports.

Azure Monitor action groups define notification and automation endpoints (email, SMS, webhook, Logic Apps, etc.) used by alert rules. They do not collect, store, or enumerate ARM deployment events. Action groups could be used after you create an alert on Activity Log/Log Analytics, but they are not the data source for a monthly deployment report.

Question Analysis

Core concept: To generate a monthly report of new Azure Resource Manager (ARM) resource deployments, you need the Azure service that records control-plane operations performed in a subscription. Azure Activity Log captures subscription-level events such as resource creation, update, delete, and deployment operations.

Why correct: Azure Activity Log is the authoritative source for ARM deployment activity because it records when resources are deployed and who initiated the operation. You can filter the log for deployment-related operations over the last month and use that data as the basis for a monthly report.

Key features:
1. Captures control-plane events for the subscription, including create/update/delete and deployment operations.
2. Provides details such as timestamp, caller, operation name, status, and target resource.
3. Supports filtering by subscription, resource group, operation type, and time range.
4. Can be exported or integrated with other Azure Monitor capabilities if longer retention or advanced reporting is needed.

Common misconceptions:
- Azure Advisor gives recommendations, not an audit trail of deployments.
- Azure Analysis Services is for analytical models, not Azure deployment tracking.
- Azure Monitor action groups only send notifications or trigger actions; they do not store deployment history.

Exam tips: For questions asking what records "who did what and when" for Azure resources, think Azure Activity Log. It is the default answer for subscription-level ARM operation auditing. Distinguish it from data-plane logs, recommendations, and notification mechanisms.
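The filtering step behind such a report can be sketched in a few lines. The records below are illustrative stand-ins for Activity Log events; real events carry more fields (and tooling such as the Azure CLI may nest the operation name inside an object), but `Microsoft.Resources/deployments/write` is the operation recorded for ARM template deployments.

```python
from datetime import datetime

# Illustrative Activity Log-style records (simplified, flat fields).
events = [
    {"operationName": "Microsoft.Resources/deployments/write",
     "caller": "alice@contoso.com", "eventTimestamp": "2024-05-03T10:15:00Z"},
    {"operationName": "Microsoft.Compute/virtualMachines/deallocate/action",
     "caller": "bob@contoso.com", "eventTimestamp": "2024-05-10T08:00:00Z"},
    {"operationName": "Microsoft.Resources/deployments/write",
     "caller": "bob@contoso.com", "eventTimestamp": "2024-04-28T09:30:00Z"},
]

def monthly_deployments(events, year, month):
    """Return (caller, timestamp) for deployment writes in the given month."""
    out = []
    for e in events:
        ts = datetime.fromisoformat(e["eventTimestamp"].replace("Z", "+00:00"))
        if (ts.year, ts.month) == (year, month) and \
           e["operationName"] == "Microsoft.Resources/deployments/write":
            out.append((e["caller"], e["eventTimestamp"]))
    return out

print(monthly_deployments(events, 2024, 5))
# [('alice@contoso.com', '2024-05-03T10:15:00Z')]
```

The May report keeps only the first event: the second is a VM operation, not a deployment, and the third falls in April.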

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

11
Question 11
(Select 2)

You have an Azure subscription that contains an Azure Blob Storage account named store1. You have an on-premises file server named Server1 that runs Windows Server 2016. Server1 stores 500 GB of company files. You need to store a copy of the company files from Server1 in store1. Which two possible Azure services achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

An Azure Logic Apps integration account is used for enterprise integration scenarios (B2B/EDI agreements, schemas, maps, certificates) and supports Logic Apps workflows. It does not provide a direct, purpose-built mechanism to bulk copy an on-premises Windows file server’s files into Azure Blob Storage. While Logic Apps can orchestrate some transfers, the integration account itself is not a complete file-copy solution for this scenario.

Azure Import/Export is designed to transfer large amounts of data to/from Azure Storage by shipping physical drives to an Azure datacenter. You copy the 500 GB from Server1 to encrypted disks, create an Import job, and Azure imports the data into the target storage account (store1). This is a complete solution especially when network bandwidth is constrained or you want an offline bulk seeding approach.

Azure Data Factory can copy data from on-premises sources to Azure Storage using a Self-hosted Integration Runtime installed in the on-premises environment. With the file system connector, ADF can read files from Server1 (or a share it hosts) and write them to Azure Blob Storage in store1. It supports scheduling, monitoring, retries, and repeatable pipelines, making it a complete online transfer solution.

The Azure Analysis Services On-premises data gateway is intended to provide secure connectivity for semantic models and reporting tools (e.g., Power BI) to access on-premises data sources. It is not a data movement service for copying file shares into Azure Blob Storage. It enables query connectivity rather than bulk file ingestion, so it does not meet the requirement to store a copy of the files in store1.

An Azure Batch account is used to run large-scale parallel and high-performance computing workloads by scheduling jobs across pools of compute nodes. It is not a data transfer or migration service. While Batch jobs can process data once it is in Azure, Batch does not provide a straightforward, managed mechanism to copy an on-premises file server’s data into Azure Blob Storage as a complete solution.

Question Analysis

Core concept: This question tests how to copy on-premises file data into Azure Blob Storage. The key is selecting Azure services that can ingest/move data from an on-premises Windows file server into a storage account (store1). In AZ-305, this maps to designing data movement/ingestion patterns for storage solutions.

Why the answers are correct:
- Azure Import/Export (B) is a complete offline transfer solution. You copy the 500 GB from Server1 to encrypted disks, ship them to an Azure datacenter, and Microsoft imports the data directly into the target storage account (store1). This is appropriate when bandwidth is limited, transfer windows are tight, or you want a predictable bulk copy process.
- Azure Data Factory (C) is a complete online data integration service. Using a Self-hosted Integration Runtime installed on Server1 (or another on-premises machine with access to the file share), ADF can copy files from an on-premises file system (SMB/file system connector) into Azure Blob Storage. This supports scheduled or one-time copy, incremental patterns, monitoring, and retry logic.

Key features / best practices:
- Import/Export: supports BitLocker encryption, a chain-of-custody shipping workflow, and direct ingestion into Blob. Good for bulk seeding; aligns with Well-Architected reliability (controlled process) and cost optimization (avoid large egress/ingress over the WAN).
- Data Factory: supports orchestration, monitoring, alerting, and repeatable pipelines. Use managed identities/service principals for Azure-side auth and least-privilege access to the storage account. Consider network security (private endpoints, firewall rules) and the throughput limits of your on-premises link.

Common misconceptions:
- A Logic Apps integration account is for B2B/EDI artifacts, not file migration.
- The Analysis Services on-premises data gateway is for semantic model connectivity (Power BI/Analysis Services), not bulk file copy.
- Azure Batch is for parallel compute jobs, not data transfer tooling.

Exam tips: When the goal is "copy on-premises files to Blob," think: (1) online pipeline tools (Azure Data Factory with self-hosted IR) or (2) offline bulk transfer (Import/Export, Data Box—though not listed here). Match the service to constraints like bandwidth, repeatability, and operational monitoring.
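The bandwidth constraint that drives the online-vs-offline choice is easy to quantify. A rough back-of-envelope estimate (the link speeds and the 80% effective-throughput figure are illustrative assumptions, not measured values):

```python
def online_transfer_hours(size_gb, mbps, efficiency=0.8):
    """Estimate hours to copy size_gb (decimal gigabytes) over a link of
    mbps megabits/s at the given effective efficiency."""
    bits = size_gb * 8 * 1000**3                      # GB -> bits
    return bits / (mbps * 1000**2 * efficiency) / 3600

# 500 GB over a 100 Mbps link: roughly half a day.
print(round(online_transfer_hours(500, 100), 1))   # 13.9
# The same 500 GB over a constrained 10 Mbps branch link: nearly 6 days,
# which is where an offline option like Import/Export becomes attractive.
print(round(online_transfer_hours(500, 10), 1))    # 138.9
```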

12
Question 12

Your company has the infrastructure shown in the following table.

The on-premises Active Directory domain syncs with Azure Active Directory (Azure AD). Server1 runs an application named App1 that uses LDAP queries to verify user identities in the on-premises Active Directory domain. You plan to migrate Server1 to a virtual machine in Subscription1. A company security policy states that the virtual machines and services deployed to Subscription1 must be prevented from accessing the on-premises network. You need to recommend a solution to ensure that App1 continues to function after the migration. The solution must meet the security policy. What should you include in the recommendation?

Part 1:

Location: Azure - Resource: Azure subscription named Subscription1

No. An Azure subscription is only a billing and management boundary, not the service that provides LDAP for App1. To keep the application working, you must deploy Azure AD Domain Services in Azure so the migrated VM can query LDAP without any connectivity to the on-premises network. Answering Yes confuses the deployment location with the actual resource required by the solution. The subscription may host the solution, but it is not itself something to include as a recommended component.

Part 2:

Location: Azure - Resource: 20 Azure web apps

No. Azure web apps do not provide directory services or LDAP endpoints, so they do not help App1 continue performing LDAP queries after migration. The requirement is to preserve LDAP-based authentication while preventing access from Azure to the on-premises network, which is addressed by Azure AD Domain Services, not App Service. The mention of 20 Azure web apps is unrelated to the migrated server workload. Therefore, this resource should not be included in the recommendation.

Part 3:

Location: On-premises datacenter - Resource: Active Directory domain

No. The migrated application must not rely on the on-premises Active Directory domain because Subscription1 resources are not allowed to access the on-premises network. Although the on-premises AD DS can remain the original identity source and continue syncing to Azure AD, it is not part of the runtime solution for App1 after migration. Azure AD Domain Services in Azure provides the needed LDAP capability locally in Azure. Therefore, the on-premises Active Directory domain should not be included in the recommendation.

Part 4:

Location: On-premises datacenter - Resource: Server running Azure AD Connect

Yes. Azure AD Connect should be included because it synchronizes identities from the on-premises Active Directory domain to Azure AD, which is then used to populate Azure AD Domain Services. App1 will authenticate against Azure AD Domain Services in Azure, but that managed domain depends on synchronized identities being present in Azure AD. Azure AD Connect does not provide LDAP itself, but it is still a necessary component in the end-to-end solution. Answering No overlooks the identity synchronization path that enables Azure AD Domain Services to contain the required users and groups.

Part 5:

Location: On-premises datacenter - Resource: Linux computer named Server1

No. The on-premises Linux computer named Server1 is the source server being migrated, not a resource that remains part of the final recommended solution. The requirement is to ensure App1 continues to function after it is moved to an Azure virtual machine while preventing access to the on-premises network. The needed components are Azure AD Domain Services in Azure and the existing identity synchronization path, not the original on-premises server. Answering Yes incorrectly treats the source machine as part of the post-migration architecture.
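For illustration: after repointing App1 to the Azure AD Domain Services managed domain, the shape of its LDAP queries does not change; only the directory endpoint does. A small sketch of building an RFC 4515-escaped LDAP search filter (the attribute values are hypothetical, and real client code would use an LDAP library for the actual bind and search):

```python
# RFC 4515 requires these characters to be escaped inside filter values.
_ESCAPES = {"\\": r"\5c", "*": r"\2a", "(": r"\28", ")": r"\29", "\0": r"\00"}

def ldap_escape(value):
    """Escape a value for safe inclusion in an LDAP search filter."""
    return "".join(_ESCAPES.get(ch, ch) for ch in value)

def user_filter(sam_account_name):
    """Build a filter matching a single user account by sAMAccountName."""
    return f"(&(objectClass=user)(sAMAccountName={ldap_escape(sam_account_name)}))"

print(user_filter("jdoe"))
# (&(objectClass=user)(sAMAccountName=jdoe))
print(user_filter("j*doe"))   # wildcard is neutralized, not interpreted
# (&(objectClass=user)(sAMAccountName=j\2adoe))
```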

13
Question 13

You have an on-premises network and an Azure subscription. The on-premises network has several branch offices. A branch office in Toronto contains a virtual machine named VM1 that is configured as a file server. Users access the shared files on VM1 from all the offices. You need to recommend a solution to ensure that the users can access the shared files as quickly as possible if the Toronto branch office is inaccessible. What should you include in the recommendation?

A Recovery Services vault with Windows Server Backup is primarily a backup/restore approach. It can protect VM1’s data, but during a Toronto outage users won’t have immediate access; you must restore to alternate infrastructure first. This increases RTO and does not inherently improve user access speed across branch offices. It addresses data protection, not fast, continuous file access.

Azure blob containers with Azure File Sync is not a valid pairing for this use case. Azure File Sync requires an Azure file share (Azure Files) as the cloud endpoint, not Blob storage. While Blob is excellent for object storage and archival, it doesn’t provide SMB share semantics needed for typical Windows file server shared folders without additional services/gateways.

A Recovery Services vault with Azure Backup protects VM1 and can restore data to Azure or another server, but it is still a restore-based DR method. Users cannot access the shared files quickly during an outage unless you have already provisioned and synchronized an alternate file server. This option improves durability but typically results in higher RTO compared to a synchronized, cloud-backed file share solution.

An Azure file share with Azure File Sync enables a cloud-backed file system with multi-site caching. VM1 can sync to Azure Files, and other offices can host File Sync server endpoints to keep local cached copies for low-latency access. If Toronto is down, users can be redirected to another endpoint or directly to Azure Files, providing much faster access than backup/restore and meeting the resiliency goal.

Question Analysis

Core concept: This question tests designing a high-availability/DR approach for shared file access when a branch office (Toronto) becomes unavailable. The key is not just restoring data, but maintaining fast, continuous access for users across multiple offices. This aligns with Azure Well-Architected Framework reliability (resiliency, failover) and performance efficiency (low-latency access).

Why the answer is correct: Using an Azure file share (Azure Files) with Azure File Sync provides a cloud-backed, multi-site-capable file solution. Azure File Sync can sync the Toronto file server (VM1) to an Azure file share, and you can deploy additional File Sync server endpoints (cache servers) in other offices. If Toronto is inaccessible, users can be redirected to another local server endpoint (or directly to Azure Files) that already has the namespace and cached data, enabling much faster access than a restore-based approach. The Azure file share becomes the central, highly available copy of the data in Azure.

Key features and best practices:
- Azure Files provides SMB-based shares accessible from multiple locations; it supports redundancy options (e.g., ZRS/GRS depending on region/account type) to improve durability.
- Azure File Sync provides:
  - A cloud endpoint (Azure file share) plus server endpoints (Windows Servers) for multi-site caching.
  - Cloud tiering to keep hot data local while offloading cold data to Azure.
  - Rapid recovery of access: other offices can continue serving files from their local cache even if the primary site is down.
- For best performance, place the storage account in a region that minimizes latency for the majority of users, and use ExpressRoute/VPN for predictable connectivity.

Common misconceptions:
- Backup solutions (Azure Backup/Windows Server Backup) protect data but do not provide immediate, low-latency access during an outage; they require restore operations and time to rehydrate data.
- Blob containers are object storage and do not natively provide SMB file share semantics for typical Windows file server workloads.

Exam tips:
- If the requirement is "quickly access files during a site outage," prefer active/active or cloud-backed file services (Azure Files + Azure File Sync) over backup/restore.
- Azure File Sync is the go-to when you want to keep Windows file server compatibility locally while centralizing data in Azure and enabling multi-branch caching.
- Map requirements: "inaccessible site" + "fast access" => resiliency + caching, not just data protection.
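The cloud-tiering idea mentioned above — keep hot data cached locally, push the coldest files to the Azure file share until local usage fits the free-space policy — can be sketched as a greedy policy. This is purely conceptual: Azure File Sync's real volume free space policy also honors settings such as a minimum file age, and all numbers below are invented.

```python
def files_to_tier(files, capacity_gb, volume_free_space_pct):
    """Pick files to tier to the cloud, coldest first, until local usage
    fits under the free-space budget.

    files: list of (name, size_gb, last_access_ordinal); higher = hotter.
    """
    budget = capacity_gb * (1 - volume_free_space_pct / 100)
    used = sum(size for _, size, _ in files)
    tiered = []
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # coldest first
        if used <= budget:
            break
        tiered.append(name)   # file becomes a cloud-only placeholder locally
        used -= size
    return tiered

# 240 GB of files on a 200 GB volume with a 20% free-space policy (160 GB budget):
print(files_to_tier(
    [("a.docx", 40, 3), ("b.iso", 120, 1), ("c.mp4", 80, 2)],
    capacity_gb=200, volume_free_space_pct=20))
# ['b.iso']  — tiering the coldest large file is enough to meet the budget
```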

14
Question 14

HOTSPOT - You plan to deploy Azure Databricks to support a machine learning application. Data engineers will mount an Azure Data Lake Storage account to the Databricks file system. Permissions to folders are granted directly to the data engineers. You need to recommend a design for the planned Databrick deployment. The solution must meet the following requirements: ✑ Ensure that the data engineers can only access folders to which they have permissions. ✑ Minimize development effort. ✑ Minimize costs. What should you include in the recommendation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Databricks SKU: ______

Choose Premium because Databricks credential passthrough is a Premium feature. The requirement states that permissions are granted directly to the data engineers (per-user) and they must only access folders they have permissions to. Enforcing this with ADLS Gen2 ACLs requires Databricks to access ADLS using the individual user’s Azure AD identity, which is enabled through credential passthrough. Standard SKU commonly relies on shared credentials (storage account key/service principal via secrets) for mounts, which means all users of the cluster can access whatever the shared identity can access—violating the per-user folder restriction unless you implement complex workarounds (multiple mounts, separate containers, custom authorization logic). Premium increases cost compared to Standard, but it is the necessary SKU to meet the security requirement with minimal development effort and aligns with least-privilege access control.

Part 2:

Cluster configuration: ______

Select Credential passthrough. This configuration allows Databricks to pass the signed-in user’s Azure AD identity to ADLS Gen2 so that ADLS evaluates folder/file ACLs per user. That directly satisfies: “Ensure that the data engineers can only access folders to which they have permissions.” It also minimizes development effort because you rely on native Azure AD + ADLS ACL enforcement rather than building custom access controls. Why others are wrong: Managed identities are typically used for a single workload identity (cluster/job) and do not inherently enforce per-user permissions when multiple engineers share a cluster. Secret scopes store shared secrets (like service principal credentials) and again lead to shared access. MLflow and Photon runtime are performance/ML features, not access control. Therefore, credential passthrough is the correct cluster configuration for per-user authorization.
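The difference between a shared mount credential and credential passthrough can be illustrated conceptually. This is not Databricks or ADLS API code; the service principal name, users, and ACL entries are all hypothetical, and the point is only who the storage layer sees as the caller.

```python
# Hypothetical folder ACLs in ADLS Gen2. The shared service principal was
# granted broad access so that a Standard-SKU mount "just works" for everyone.
acls = {
    "/raw/finance": {"alice", "svc-databricks"},
    "/raw/hr": {"bob", "svc-databricks"},
}

def can_read(folder, user, passthrough):
    """Which identity does ADLS evaluate? The signed-in user (passthrough)
    or the cluster's shared credential (no passthrough)."""
    identity = user if passthrough else "svc-databricks"
    return identity in acls.get(folder, set())

# Shared mount credential: bob can read finance data he was never granted.
print(can_read("/raw/finance", "bob", passthrough=False))  # True
# Credential passthrough: ADLS evaluates bob's own ACL entry and denies him.
print(can_read("/raw/finance", "bob", passthrough=True))   # False
```

This is why the Premium SKU with credential passthrough, rather than a shared secret-scope credential, satisfies the per-user folder restriction.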

15
Question 15

You need to recommend a solution to meet the database retention requirements. What should you recommend?

Correct. Long-term retention (LTR) for Azure SQL Database extends backup retention beyond short-term retention, enabling weekly/monthly/yearly full backups to be kept for months or years to meet compliance and audit requirements. LTR is the built-in feature specifically intended for long-duration backup retention and supports restoring a database from an LTR backup when needed.

Incorrect. Azure Site Recovery is primarily for disaster recovery of workloads like Azure VMs and some on-premises servers by replicating them to another region. It is not the standard mechanism to meet Azure SQL Database backup retention requirements. ASR addresses business continuity (failover) rather than long-term database backup retention and archival.

Incorrect. Automatic Azure SQL Database backups are enabled by default and support point-in-time restore within the short-term retention window. While you can adjust STR within supported limits, it is generally intended for operational recovery (days/weeks), not multi-month or multi-year retention. Long-term compliance retention is handled by LTR, not STR alone.

Incorrect. Geo-replication (or failover groups) provides a readable secondary and supports fast failover for regional resiliency. It is a high availability/disaster recovery feature, not a backup retention feature. Replicas can propagate logical corruption and do not provide the historical, long-term backup chain required for retention and compliance scenarios.

Question Analysis

Core concept: This question tests Azure SQL Database backup and retention capabilities, specifically the difference between default automated backups (short-term retention) and long-term retention (LTR) for compliance/archival requirements.

Why the answer is correct: Azure SQL Database automatically creates full, differential, and transaction log backups and retains them for a short period (short-term retention, STR). However, many "database retention requirements" in exam scenarios imply keeping backups for months or years for audit/compliance (e.g., 7 years). STR alone typically cannot meet long retention mandates. Configuring a long-term retention (LTR) policy allows you to retain weekly, monthly, and/or yearly full backups for extended periods (up to years), stored in Azure Blob storage managed by the service. This directly addresses retention requirements without building custom backup pipelines.

Key features / configuration points:
- LTR is configured per database (or via policies) and supports weekly/monthly/yearly retention schedules.
- It is designed for compliance and for restores beyond the STR window, by restoring from an LTR backup.
- From an Azure Well-Architected Framework perspective, LTR supports reliability (recoverability) and governance (meeting regulatory retention). It also reduces operational burden compared to manual exports.

Common misconceptions:
- Geo-replication and Site Recovery are often confused with backup retention. Replication improves availability and disaster recovery but does not provide immutable, long-term backup history.
- Automatic backups are always on, but their default retention is limited and may not satisfy long-term compliance.

Exam tips: When you see "retention requirements" (months/years) for Azure SQL Database, think LTR. When you see RPO/RTO and regional outage recovery, think geo-replication/failover groups. When you see VM-level DR, think Azure Site Recovery. Also remember: backups are for restore points over time; replication is for high availability and fast failover, not archival retention.
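LTR policies express their weekly/monthly/yearly retention as ISO 8601 durations (for example, "P12W" for 12 weeks or "P5Y" for 5 years). A rough sketch of turning such a duration into a retention window in days; the 30-day month and 365-day year are simplifications for illustration only:

```python
import re

def retention_days(duration):
    """Convert a simple ISO 8601-style duration ("P12W", "P12M", "P5Y")
    into an approximate number of retention days."""
    m = re.fullmatch(r"P(\d+)([WMY])", duration)
    if not m:
        raise ValueError(f"unsupported duration: {duration}")
    n, unit = int(m.group(1)), m.group(2)
    return n * {"W": 7, "M": 30, "Y": 365}[unit]

print(retention_days("P12W"))  # 84  — a quarterly-ish weekly-backup window
print(retention_days("P5Y"))   # 1825 — a multi-year compliance window
```

The contrast with STR (days to a few weeks) versus LTR (months to years) is exactly the distinction the question is testing.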


16
Question 16

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines. Solution: Use Azure Traffic Analytics in Azure Network Watcher to analyze the network traffic. Does this meet the goal?

Yes is incorrect because although Azure Traffic Analytics can surface allowed and denied flow trends from NSG flow logs, it is not the intended tool for directly troubleshooting whether packets to a VM are being allowed or denied in a precise, per-flow manner. It focuses on traffic patterns, analytics, and visualization across networks rather than deterministic packet verification for a specific connectivity issue. In this scenario, the goal is to identify packet allow/deny behavior affecting virtual machines, which is better addressed by Network Watcher IP flow verify or by reviewing NSG flow logs directly. As a result, saying the proposed solution meets the goal would overstate what Traffic Analytics is designed to do.

No is correct because Azure Traffic Analytics is an aggregation and visualization service built on NSG flow logs, not a per-flow diagnostic tool. It can show trends of allowed and denied traffic across a network, but it does not deterministically verify whether specific packets to a virtual machine are being permitted or blocked. To meet the stated goal, use Azure Network Watcher IP flow verify, which evaluates the effective NSG rules for a given flow and reports the allow/deny decision, or review NSG flow logs directly.

Question Analysis

Core concept: This question tests Azure governance controls for regional compliance using Azure Policy. The requirement is to deploy App Service instances only to specific Azure regions and to ensure that the resources for the App Service instances reside in the same region.

Why the answer is correct: Recommending an Azure Policy initiative that enforces the location of resource groups does NOT meet the goal. Resource group location is metadata for the resource group container and does not control or guarantee the locations of resources deployed into that resource group. You can create a resource group in one region (e.g., West Europe) and deploy resources into another region (e.g., North Europe). Therefore, enforcing resource group location does not enforce the App Service (or App Service plan) region, nor does it ensure that all related resources are co-located.

Key features / correct approach: To meet the regulatory requirement, enforce allowed locations at the resource level using built-in Azure Policy definitions such as:
- "Allowed locations" (at subscription or management group scope) to restrict which regions resources can be created in.
- Resource-type-specific location policies (e.g., for Microsoft.Web/serverfarms and Microsoft.Web/sites) if you need more granular control.

Additionally, to ensure App Service resources are in the same region, remember that an App Service app's region is determined by its App Service plan. Enforce the plan's location and/or deny creation of apps that do not match the plan's region (often handled operationally and via policy targeting the plan and app types). For Azure SQL Database, enforce its allowed locations separately (Microsoft.Sql/servers). This aligns with Azure Well-Architected Framework governance and compliance practices (policy as code, preventative controls).

Common misconceptions: A frequent misunderstanding is assuming that resource group location dictates resource location. It does not. Another misconception is thinking a single policy on resource groups will automatically co-locate dependent resources; co-location requires policies applied to the actual resource types (and sometimes deployment templates/initiatives that coordinate multiple resources).

Exam tips: For AZ-305, distinguish between governance at the container level (resource groups) and enforcement at the resource provider/type level. When a question says "deploy only to specific regions," think "Allowed locations" (deny effect) at subscription or management group scope, not resource group location. Also remember that an App Service app's location follows its App Service plan's region.
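The distinction between the two policy scopes can be illustrated with a small conceptual sketch (plain Python, no Azure SDK; the deployment dicts, region names, and function names are invented for illustration, not Azure Policy's actual engine):

```python
# Conceptual sketch of Azure Policy "deny" evaluation for locations.
# Not the real policy engine -- just the evaluation logic it implies.

ALLOWED_LOCATIONS = {"westeurope", "northeurope"}

def rg_location_policy(deployment: dict) -> bool:
    """Policy scoped to resource *groups*: checks only the group's
    metadata location, so any other resource type passes unchecked."""
    if deployment["type"] == "Microsoft.Resources/resourceGroups":
        return deployment["location"] in ALLOWED_LOCATIONS
    return True  # non-group resources are never evaluated

def allowed_locations_policy(deployment: dict) -> bool:
    """'Allowed locations'-style policy: checks every resource's own
    location field, regardless of resource type."""
    return deployment["location"] in ALLOWED_LOCATIONS

# An App Service plan deployed to a non-compliant region, even though
# its resource group's *metadata* location may be compliant:
plan = {"type": "Microsoft.Web/serverfarms", "location": "eastus"}

print(rg_location_policy(plan))        # True  -> slips through
print(allowed_locations_policy(plan))  # False -> denied, as required
```

The sketch shows why the resource-group-location policy fails the requirement: the non-compliant plan is never evaluated, while a resource-level allowed-locations policy denies it.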

Question 17

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines. Solution: Use Azure Advisor to analyze the network traffic. Does this meet the goal?

Yes is incorrect because Azure Advisor does not capture or analyze network packet flows or provide allow/deny decisions for VM traffic. Advisor focuses on best-practice recommendations (cost optimization, security posture, reliability, performance), not operational traffic diagnostics. It may suggest improvements, but it cannot tell you whether specific packets were allowed or denied.

No is correct because Azure Advisor does not analyze packet-level network traffic or determine whether specific traffic to a virtual machine was allowed or denied. Advisor provides high-level recommendations about resource optimization and best practices, but it is not intended for troubleshooting connectivity at the flow level. To identify whether packets are being permitted or blocked, Azure Network Watcher tools such as IP flow verify, NSG flow logs, and packet capture are the appropriate services. These tools can help diagnose connectivity issues for Azure virtual machines, including scenarios involving ExpressRoute-connected environments.

Question Analysis

Core concept: This question tests knowledge of Azure networking monitoring and diagnostics tools. The requirement is to analyze network traffic and determine whether packets to virtual machines are being allowed or denied, including in a hybrid environment connected by ExpressRoute.

Why correct: The solution does not meet the goal because Azure Advisor is not a packet-level or flow-level traffic analysis tool. To identify whether traffic is allowed or denied, you would use Azure Network Watcher tools such as IP flow verify, NSG flow logs, connection troubleshoot, and packet capture. These tools are designed to diagnose connectivity issues and evaluate whether security rules are permitting or blocking traffic.

Key features: Azure Advisor provides best-practice recommendations for cost, security, reliability, operational excellence, and performance. It does not inspect live packet flows, evaluate NSG decisions for specific traffic, or capture packets for troubleshooting. Azure Network Watcher is the appropriate service for analyzing VM network traffic behavior in Azure.

Common misconceptions: A common mistake is assuming Azure Advisor can perform operational diagnostics because it surfaces recommendations about Azure resources. In reality, Advisor is a recommendation engine, not a traffic inspection or packet analysis service. Another misconception is that ExpressRoute connectivity issues are diagnosed through governance or advisory tools rather than network diagnostic tools.

Exam tips: For AZ-305, when a question asks whether traffic is allowed or denied, think of Azure Network Watcher features such as IP flow verify and NSG flow logs. If the question asks for recommendations or optimization guidance, think of Azure Advisor. Distinguish clearly between advisory services and diagnostic services.

Question 18

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. Your company deploys several virtual machines on-premises and to Azure. ExpressRoute is deployed and configured for on-premises to Azure connectivity. Several virtual machines exhibit network connectivity issues. You need to analyze the network traffic to identify whether packets are being allowed or denied to the virtual machines. Solution: Use Azure Network Watcher to run IP flow verify to analyze the network traffic. Does this meet the goal?

Yes is correct because Azure Network Watcher IP flow verify is specifically built to determine whether a packet to or from a virtual machine is allowed or denied. It evaluates the effective NSG rules applied at the NIC and subnet level using the specified source, destination, port, and protocol information. The tool also identifies the exact rule that caused the allow or deny result, which makes it ideal for troubleshooting connectivity issues. Since the goal is to analyze whether packets are being allowed or denied to the VMs, this solution directly meets the requirement.

No is incorrect because the proposed solution does meet the stated goal. IP flow verify is one of the core Azure Network Watcher diagnostics for checking whether traffic is permitted or blocked based on effective NSG rules. Although it does not perform full packet inspection or trace every hop across ExpressRoute, the question only asks to identify whether packets are being allowed or denied to the virtual machines. For that purpose, IP flow verify is the appropriate and sufficient tool.

Question Analysis

Core concept: This question tests knowledge of Azure Network Watcher diagnostics, specifically which tool can determine whether traffic to or from a virtual machine is allowed or denied by Azure networking rules such as NSGs.

Why the answer is correct: IP flow verify in Azure Network Watcher is designed to check whether a packet is allowed or denied to or from a VM. It evaluates the effective network security rules that apply to the VM's network interface or subnet and returns the decision along with the specific rule responsible. In this scenario, the requirement is to identify whether packets are being allowed or denied to virtual machines, and IP flow verify directly answers that question. The presence of ExpressRoute does not change the usefulness of this tool for Azure-side packet-filtering analysis.

Key features / configurations:
- Azure Network Watcher provides network diagnostic and monitoring tools for Azure resources.
- IP flow verify tests a 5-tuple flow: source IP, destination IP, source port, destination port, and protocol.
- It identifies whether traffic is Allowed or Denied.
- It also shows which NSG rule caused the decision.
- It is useful for troubleshooting VM connectivity issues related to Azure network security filtering.
- It analyzes Azure-side effective security rules, not arbitrary packet captures across the full end-to-end path.

Common misconceptions:
- Candidates often confuse IP flow verify with packet capture. Packet capture records traffic, while IP flow verify evaluates whether Azure would allow or deny a specific flow.
- Some assume ExpressRoute requires a different diagnostic tool. While ExpressRoute affects connectivity, Azure-side NSG evaluation for VM traffic can still be checked with IP flow verify.
- Another common mistake is choosing connection troubleshoot when the question specifically asks whether packets are allowed or denied. Connection troubleshoot tests reachability, but IP flow verify is the direct tool for rule-based allow/deny analysis.

Exam tips:
- If the question asks whether traffic is allowed or denied, think IP flow verify.
- If the question asks which NSG rule is affecting traffic, IP flow verify is a strong match.
- If the question asks to capture actual packets, use packet capture instead.
- If the question asks to test end-to-end connectivity, consider connection troubleshoot.
- Distinguish between Azure rule-evaluation tools and traffic-recording tools.
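The decision logic IP flow verify surfaces is priority-ordered NSG rule evaluation. A toy model in plain Python (the rule names, priorities, and ports below are invented for illustration; real NSGs match the full 5-tuple, while this sketch models only the destination port):

```python
# Toy model of NSG rule evaluation by ascending priority -- the
# mechanism whose result IP flow verify reports, including which
# rule caused the allow/deny decision. Rules here are invented.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Rule:
    name: str
    priority: int            # lower number is evaluated first
    dest_port: Optional[int] # None = matches any port
    access: str              # "Allow" or "Deny"

RULES = [
    Rule("AllowHTTPS", 100, 443, "Allow"),
    Rule("DenyAllInbound", 65500, None, "Deny"),  # default catch-all
]

def verify_flow(dest_port: int) -> Tuple[str, str]:
    """Return (decision, matching rule name) for a simplified flow."""
    for rule in sorted(RULES, key=lambda r: r.priority):
        if rule.dest_port is None or rule.dest_port == dest_port:
            return rule.access, rule.name
    return "Deny", "(no rule matched)"

print(verify_flow(443))   # ('Allow', 'AllowHTTPS')
print(verify_flow(3389))  # ('Deny', 'DenyAllInbound')
```

As with the real tool, the output names the first matching rule, which is exactly the information needed to troubleshoot "why is this port blocked?" questions.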

Question 19

DRAG DROP - You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016 and Linux. You need to use Azure Monitor to design an alerting strategy for security-related events. Which Azure Monitor Logs tables should you query? To answer, drag the appropriate tables to the correct log types. Each table may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

Part 1:

(Answer-area image not shown: match the tables Event, Syslog, AzureActivity, and AzureDiagnostics to the Windows event log and Linux syslog log types.)

For Windows virtual machines, guest OS Windows event logs are queried from the Event table in Azure Monitor Logs. For Linux virtual machines, system logging data is queried from the Syslog table. AzureActivity is for Azure subscription and resource control-plane operations, not guest OS events, and AzureDiagnostics is primarily used for diagnostic logs from Azure resources rather than standard in-guest Windows event logs or Linux syslog. Therefore, the correct mapping is Event for Windows event logs and Syslog for Linux system logging.
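The table mapping can be captured as a small lookup that builds illustrative KQL query strings (a sketch only; the helper function is invented, and the time filter is a placeholder to adapt to your alert rule):

```python
# Which Azure Monitor Logs table to query for each guest-OS log type,
# per the mapping above: Event for Windows, Syslog for Linux.
SECURITY_LOG_TABLES = {
    "Windows event logs": "Event",
    "Linux syslog": "Syslog",
}

def alert_query(log_type: str) -> str:
    """Build an illustrative KQL query string for the given log type."""
    table = SECURITY_LOG_TABLES[log_type]
    return f"{table} | where TimeGenerated > ago(1h)"

print(alert_query("Windows event logs"))  # queries the Event table
print(alert_query("Linux syslog"))        # queries the Syslog table
```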

Question 20

You are designing an application that will aggregate content for users. You need to recommend a database solution for the application. The solution must meet the following requirements:
- Support SQL commands.
- Support multi-master writes.
- Guarantee low latency read operations.
What should you include in the recommendation?

Azure Cosmos DB SQL API is designed for globally distributed applications. It supports SQL-like queries over JSON documents, offers multi-region (multi-master) writes when enabled, and provides low-latency reads by replicating data to regions close to users. It also supports configurable consistency levels and conflict resolution, which are important considerations when enabling multi-master writes.

Azure SQL Database active geo-replication provides readable secondary replicas in other regions for DR and read scaling, but it does not support multi-master writes. Only the primary database accepts writes; secondaries are read-only. This can meet low-latency reads if users read from a nearby secondary, but it fails the multi-master write requirement.

Azure SQL Database Hyperscale is a tier optimized for very large databases and rapid scale, with architecture that separates compute and storage and can use read replicas. However, it still follows a single-writer model and does not provide multi-master writes across regions. It can help with read performance and scale, but it doesn’t meet the multi-master requirement.

Azure Database for PostgreSQL supports standard SQL and can provide read replicas for scaling reads, but the managed service does not natively provide multi-master, multi-region writes as a built-in feature. Typical HA/DR patterns are single primary with replicas. Achieving multi-master would require complex custom replication/conflict handling, which is not the intended managed solution.

Question Analysis

Core concept: This question tests selecting a globally distributed database that supports SQL-like querying, multi-master (multi-region) writes, and consistently low-latency reads. In Azure, the primary service designed for this combination is Azure Cosmos DB with the SQL API.

Why the answer is correct: Azure Cosmos DB SQL API provides a SQL-like query language over JSON documents and is built for global distribution. It supports multi-region writes (multi-master) via the "multi-region writes" capability, allowing writes to be accepted in multiple Azure regions. For low-latency reads, Cosmos DB lets you replicate data to regions close to users and uses automatic indexing and partitioning to keep read performance predictable. You can also choose consistency levels (e.g., Session for user-centric apps) to balance latency and consistency.

Key features / configurations:
- SQL support: the Cosmos DB SQL API uses SQL-like queries (SELECT, WHERE, ORDER BY, etc.) against JSON items.
- Multi-master writes: enable multi-region writes and add multiple regions. Cosmos DB handles conflict detection and resolution (last-writer-wins or custom conflict resolution using stored procedures).
- Low-latency reads: add read regions near users; Cosmos DB provides single-digit-millisecond reads in practice when properly partitioned and provisioned.
- Partitioning and throughput: choose a good partition key and provision RU/s (or autoscale) to meet latency/throughput SLOs.
- Well-Architected alignment: improves Performance Efficiency (global distribution, predictable latency), Reliability (multi-region replication), and Operational Excellence (managed service).

Common misconceptions: Azure SQL Database with active geo-replication improves read scale and DR, but it is not multi-master; only the primary is writable. Hyperscale improves scale-out storage and read replicas but still does not provide multi-master writes. Azure Database for PostgreSQL (managed Postgres) supports SQL, but multi-master writes across regions are not a standard built-in capability of the Azure managed offering; typical patterns rely on a single writer with read replicas.

Exam tips: When you see "multi-master writes" plus "low-latency reads" and "global users," think Cosmos DB. If the requirement is strict relational semantics (joins, constraints) and single-writer is acceptable, Azure SQL is often the answer, but multi-master is the key differentiator here.
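Multi-region writes imply conflict resolution, and Cosmos DB's default mode is last-writer-wins (LWW) on a numeric path, the `_ts` system property unless configured otherwise. A minimal sketch of that merge logic in plain Python (the documents and timestamps are invented; this is the concept, not the Cosmos DB SDK):

```python
# Minimal last-writer-wins (LWW) merge -- the default conflict-resolution
# mode for Cosmos DB multi-region writes. Documents below are invented;
# real Cosmos DB resolves on a numeric path such as _ts.
from typing import List

def lww_resolve(versions: List[dict], path: str = "_ts") -> dict:
    """Pick the version with the highest value at `path`."""
    return max(versions, key=lambda doc: doc[path])

# The same item written concurrently in two regions:
west = {"id": "item1", "title": "draft",  "_ts": 1700000100}
east = {"id": "item1", "title": "edited", "_ts": 1700000200}

winner = lww_resolve([west, east])
print(winner["title"])  # edited  (the later write wins)
```

LWW trades simplicity for potential lost updates, which is why Cosmos DB also offers custom conflict resolution when the application needs to merge rather than overwrite.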

Practice Tests

Practice Test #1

50 Questions·100 min·Pass 700/1000

Practice Test #2

50 Questions·100 min·Pass 700/1000

Practice Test #3

50 Questions·100 min·Pass 700/1000

Practice Test #4

50 Questions·100 min·Pass 700/1000

Other Microsoft Certifications

Microsoft AI-102 (Associate)
PL-300: Microsoft Power BI Data Analyst
Microsoft AI-900 (Fundamentals)
Microsoft SC-200 (Associate)
Microsoft AZ-104 (Associate)
Microsoft AZ-900 (Fundamentals)
Microsoft SC-300 (Associate)
Microsoft DP-900 (Fundamentals)
Microsoft SC-900 (Fundamentals)
Microsoft AZ-204 (Associate)
Microsoft AZ-500 (Associate)

Start Practicing Now

Download Cloud Pass and start practicing all Microsoft AZ-305 exam questions.

Get it on Google Play · Download on the App Store

Cloud Pass

IT Certification Practice App


Certifications

AWS · GCP · Microsoft · Cisco · CompTIA · Databricks

Legal

FAQ · Privacy Policy · Terms of Service

Company

Contact · Delete Account

© Copyright 2026 Cloud Pass, All rights reserved.
