Microsoft AZ-500

442+ Practice Questions with AI-Verified Answers

Microsoft Azure Security Engineer Associate


Triple AI-Verified Answers & Explanations

Every Microsoft AZ-500 answer is cross-verified by three leading AI models (GPT Pro, Claude Opus, and Gemini Pro) to ensure maximum accuracy, with per-option explanations and in-depth question analysis.

Exam Domains

Secure Identity and Access: Weight 18%
Secure Networking: Weight 24%
Secure Compute, Storage, and Databases: Weight 24%
Secure Azure using Microsoft Defender for Cloud and Microsoft Sentinel: Weight 34%

Practice Questions

Question 1

HOTSPOT - You have a network security group (NSG) bound to an Azure subnet. You run Get-AzNetworkSecurityRuleConfig and receive the output shown in the following exhibit.

Name : DenyStorageAccess
Description :
Protocol : *
SourcePortRange : {}
DestinationPortRange : {}
SourceAddressPrefix : {*}
DestinationAddressPrefix : {Storage}
SourceApplicationSecurityGroups : []
DestinationApplicationSecurityGroups : []
Access : Deny
Priority : 105
Direction : Outbound

Name : StorageE2A2Allow
ProvisioningState : Succeeded
Description :
Protocol : *
SourcePortRange : {}
DestinationPortRange : {443}
SourceAddressPrefix : {}
DestinationAddressPrefix : {Storage.EastUS2}
SourceApplicationSecurityGroups : []
DestinationApplicationSecurityGroups : []
Access : Allow
Priority : 104
Direction : Outbound

Name : Contoso_FTP
Description :
Protocol : TCP
SourcePortRange : {*}
DestinationPortRange : {21}
SourceAddressPrefix : {1.2.3.4/32}
DestinationAddressPrefix : {10.0.0.5/32}
SourceApplicationSecurityGroups : []
DestinationApplicationSecurityGroups : []
Access : Allow
Priority : 504
Direction : Inbound

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Traffic destined for an Azure Storage account is ______.

Outbound traffic to Azure Storage is evaluated against the outbound rules by priority. The rule "StorageE2A2Allow" (Priority 104) allows outbound traffic to destination service tag "Storage.EastUS2" on destination port 443. The next rule "DenyStorageAccess" (Priority 105) denies outbound traffic to destination service tag "Storage" (all Storage regions) on all ports. Because NSG evaluation stops at the first match, traffic to an East US 2 Storage endpoint on 443 matches the allow rule at 104 and is permitted. Traffic to Storage in other regions (e.g., East US or West Europe) does not match "Storage.EastUS2" and will then match the broader "Storage" deny rule at 105, so it is blocked. Therefore, Storage connectivity is only allowed to East US 2 (on 443).
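The first-match-by-priority behavior described above can be sketched with a small toy model. This is an illustrative simulation, not Azure's implementation: service tags such as "Storage.EastUS2" are modeled as plain strings, and a destination is described by the set of tags it belongs to.

```python
# Minimal sketch (not Azure's implementation): NSG rules are evaluated in
# ascending priority order and evaluation stops at the first match.
rules = [
    {"name": "StorageE2A2Allow", "priority": 104, "dest": "Storage.EastUS2",
     "port": "443", "access": "Allow"},
    {"name": "DenyStorageAccess", "priority": 105, "dest": "Storage",
     "port": "*", "access": "Deny"},
]

def evaluate_outbound(dest_tags, port):
    """dest_tags: every service tag the destination belongs to; an East US 2
    Storage endpoint matches both "Storage.EastUS2" and the broader "Storage"."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["dest"] in dest_tags and rule["port"] in ("*", port):
            return rule["access"]      # first match wins, evaluation stops
    return "Deny"                      # implicit default DenyAllOutBound

# East US 2 Storage endpoint on 443: matches the Allow rule at priority 104.
print(evaluate_outbound({"Storage", "Storage.EastUS2"}, "443"))   # Allow
# Storage in another region: only matches the broader Deny rule at 105.
print(evaluate_outbound({"Storage", "Storage.WestEurope"}, "443"))  # Deny
```

Note that the lower priority number is evaluated first, which is why the narrower allow rule at 104 can carve an exception out of the broader deny at 105.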

Part 2:

FTP connections from 1.2.3.4 to 10.0.0.10/32 are ______.

NSG inbound rules must match the 5-tuple (protocol, source IP/port, destination IP/port) and direction. The rule "Contoso_FTP" allows inbound TCP traffic from source 1.2.3.4/32 to destination 10.0.0.5/32 on destination port 21. The question asks about FTP connections from 1.2.3.4 to 10.0.0.10/32. Even though the source IP and destination port (21) would align with the intent of the rule, the destination IP does not: 10.0.0.10 is not included in the destination prefix 10.0.0.5/32. Since no other inbound allow rule is shown that would match 10.0.0.10/32, the traffic falls through to the default inbound behavior (DenyAllInBound) and is dropped. NSGs do not “forward” traffic; they allow or deny.
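The destination-prefix mismatch can be checked directly with Python's ipaddress module (a quick sketch of the one 5-tuple field that fails here):

```python
import ipaddress

# Minimal sketch: the Contoso_FTP rule only matches traffic whose destination
# falls inside the rule's destination prefix, 10.0.0.5/32 (a single host).
rule_dest = ipaddress.ip_network("10.0.0.5/32")

print(ipaddress.ip_address("10.0.0.5") in rule_dest)   # True: the host the rule covers
print(ipaddress.ip_address("10.0.0.10") in rule_dest)  # False: falls to DenyAllInBound
```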

Question 2
(Select 2)

You create a new Azure subscription. You need to ensure that you can create custom alert rules in Azure Security Center. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Azure AD Identity Protection is an identity-focused service that detects risky sign-ins and risky users in Microsoft Entra ID. It is not a prerequisite for configuring custom alert rules in Azure Security Center. Although identity signals may complement a broader security strategy, onboarding Identity Protection does not enable Security Center custom alerts. Therefore, it is unrelated to the required setup in this question.

An Azure Storage account is required in the classic Azure Security Center custom alert workflow to hold collected security data used for analysis. Security Center can use this stored telemetry as part of its detection pipeline for generating alerts. Without the storage account, the service lacks the necessary backing store referenced in this exam scenario. This makes the storage account a prerequisite for creating custom alert rules in the older Security Center model.

Azure Advisor provides best-practice recommendations for cost, reliability, performance, operational excellence, and security. Implementing those recommendations can improve an environment, but it does not activate or configure custom alert rule functionality in Azure Security Center. Advisor is a recommendation engine, not the platform used to define Security Center custom alerts. As a result, it is not part of the required solution.

A Log Analytics workspace is commonly associated with Azure Monitor, Microsoft Sentinel, and many Defender for Cloud data collection scenarios. However, in the classic Azure Security Center custom alert rule context tested by this question, it is not the specific prerequisite being asked for. The exam expects the combination of Standard tier enablement and a storage account instead. Therefore, selecting a Log Analytics workspace here reflects a different monitoring architecture than the one targeted by the question.

The Standard pricing tier of Azure Security Center unlocks advanced security capabilities, including threat detection and custom alerting features. The Free tier is limited primarily to security posture assessment and recommendations, not advanced configurable detections. Because the question asks specifically about custom alert rules, enabling Standard is necessary. This is a common AZ-500 pattern whenever advanced Security Center functionality is required.

Question Analysis

Core concept: This question is about the prerequisites for creating custom alert rules in Azure Security Center (now Microsoft Defender for Cloud) in the classic AZ-500 context. Custom alerts in Security Center relied on collected security data and the advanced capabilities available only in the Standard tier. A storage account is needed to store the security event data used for these detections, and the subscription must be upgraded to Standard to unlock custom alert functionality.

Why correct: Creating a storage account provides the location for storing collected security events and telemetry that Security Center can analyze for custom alerting scenarios. Upgrading Security Center to the Standard tier enables advanced threat detection and custom alert rule capabilities that are not available in the Free tier.

Key features: Security Center Standard adds advanced threat protection, richer detections, and configurable alerting. Storage accounts can be used as part of the data collection pipeline for security events in older Security Center workflows. The Free tier focuses mainly on security posture and recommendations rather than advanced alert customization.

Common misconceptions: A Log Analytics workspace is important for many Azure Monitor and Sentinel scenarios, but it is not the required prerequisite for this specific Security Center custom alert rule question as commonly tested. Azure AD Identity Protection and Azure Advisor are separate services and do not enable Security Center custom alert creation.

Exam tips: For older Azure Security Center exam questions, distinguish between Azure Monitor/Sentinel analytics rules and Security Center custom alerts. If the question asks specifically about Security Center custom alert rules, look for Standard tier enablement and the supporting data storage requirement rather than assuming Log Analytics is always mandatory.

Question 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You create a new stored access policy. Does this meet the goal?

Yes is incorrect because a newly created stored access policy only creates another optional policy for future SAS generation; it does not affect any SAS tokens that already exist. SAS tokens for the blob service and file service remain valid if they were issued directly or linked to other stored access policies. To revoke all access, you must target the existing policies that were used, or regenerate the storage account keys to invalidate SAS signed with those keys. Therefore, the proposed action does not meet the stated goal of revoking all access to sa1.
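These revocation semantics can be sketched with a toy model. This is an illustrative simulation only (no real SAS cryptography): a service SAS stays valid while its signing key is current and, if it references a stored access policy, while that policy still exists.

```python
# Illustrative model of SAS revocation semantics, not the real implementation.
account_keys = {"key1"}                      # current storage account keys
policies = {"policy-blob", "policy-file"}    # existing stored access policies

def sas_valid(signed_with, policy=None):
    if signed_with not in account_keys:
        return False            # key regenerated -> SAS signed with it is invalid
    if policy is not None and policy not in policies:
        return False            # referenced policy deleted -> SAS revoked
    return True

# Creating a NEW policy changes nothing for existing tokens:
policies.add("policy-new")
print(sas_valid("key1", "policy-blob"))   # True: still valid, goal not met

# Revocation that works: delete the referenced policy (or regenerate the key).
policies.discard("policy-blob")
print(sas_valid("key1", "policy-blob"))   # False: revoked
```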

No is correct because creating a new stored access policy does not invalidate any existing SAS tokens or the stored access policies they reference. A new policy is only a container for SAS tokens generated against it in the future. To revoke access immediately, you must modify or delete the stored access policies that the existing SAS tokens reference, and regenerate the storage account access keys to invalidate any ad hoc SAS tokens signed with those keys. Because the proposed solution does neither, it does not meet the goal of revoking all access to sa1.

Question Analysis

Core concept: This question tests how shared access signatures (SASs) and stored access policies are revoked for an Azure Storage account. A SAS that references a stored access policy inherits the policy's start time, expiry, and permissions, so modifying or deleting that policy immediately revokes every SAS associated with it. An ad hoc SAS (one not tied to a stored access policy) is signed with an account key and remains valid until it expires or the signing key is regenerated.

Why the answer is correct: Creating a new stored access policy changes nothing for tokens that already exist; it only affects SAS tokens generated from it in the future. To revoke all access to sa1, you must act on what already exists: modify or delete the stored access policies referenced by the compromised SAS tokens, and regenerate the storage account keys to invalidate any ad hoc SAS signed with them.

Key features / best practices:
- Stored access policies provide server-side revocation: changing a policy's expiry or deleting it invalidates every SAS that references it.
- Regenerating an account key invalidates every SAS signed with that key.
- Prefer policy-backed or short-lived SAS tokens over long-lived ad hoc tokens so revocation does not require key rotation.

Common misconceptions:
- Assuming a newly created policy retroactively constrains existing SAS tokens; it applies only to tokens generated from it.
- Forgetting that ad hoc SAS tokens can be revoked only by key regeneration or expiry.

Exam tips: For "revoke all access to a storage account" scenarios, look for answers that modify or delete the existing stored access policies or regenerate the account keys. Creating new policies or new SAS tokens never revokes existing access.

Question 4

HOTSPOT - You have an Azure subscription that contains the virtual machines shown in the following table.

Name | Resource group | Status
VM1 | RG1 | Stopped (Deallocated)
VM2 | RG2 | Stopped (Deallocated)

You create the Azure policies shown in the following table.

You create the resource locks shown in the following table.

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Not allowed resource types: Resource type: virtualMachines, Scope: RG1

Yes. A policy assignment scoped to RG1 that specifies a Not allowed resource types list including Microsoft.Compute/virtualMachines (often shown as “virtualMachines”) will deny create operations for that resource type within RG1. Azure Policy with a Deny effect is evaluated by Azure Resource Manager when a request is made to create or update a resource. Because the scope is RG1, the restriction applies to resources in RG1 only. This does not retroactively delete or disable existing VMs; it prevents future create/update requests that would violate the policy. Also, it does not automatically block runtime actions unless those actions are implemented as ARM writes and are explicitly denied by policy conditions (most “not allowed resource types” policies focus on resource creation). Therefore, the statement that virtualMachines are not allowed at scope RG1 is true.

Part 2:

Allowed resource types: Resource type: virtualMachines, Scope: RG2

Yes. A policy assignment scoped to RG2 that specifies Allowed resource types including Microsoft.Compute/virtualMachines means only the listed types are permitted for create/update operations in RG2. If virtualMachines is in the allowed list, then creating a VM is permitted by that policy (assuming no other policy assignments deny it). In Azure Policy, “Allowed resource types” is typically implemented with a Deny effect for any resource type not in the list. So the presence of virtualMachines in the allowed list explicitly permits that type at that scope. This is a common governance control to restrict what can be deployed in a resource group. Therefore, the statement that virtualMachines are allowed at scope RG2 is true.
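The "deny anything not in the list" behavior described above can be sketched as a policy rule fragment. This is an illustrative sketch modeled on the built-in Allowed resource types policy (the parameter name `listOfResourceTypesAllowed` follows that built-in; treat the exact definition as an assumption):

```json
{
  "if": {
    "not": {
      "field": "type",
      "in": "[parameters('listOfResourceTypesAllowed')]"
    }
  },
  "then": {
    "effect": "deny"
  }
}
```

Any create/update request whose resource type is not in the allowed list is denied; listed types (such as virtualMachines) pass this policy.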

Part 3:

Lock1 is Read-only and created on VM1.

Yes. The statement is about the lock configuration: Lock1 is Read-only and created on VM1. A Read-only lock applied directly to a VM resource means the VM resource cannot be modified through ARM operations. This includes operations that change state or configuration (for example, start/stop/restart, resizing, updating extensions), because these are executed via ARM and treated as write operations. In exam questions, when a lock is described as being created “on VM1,” it means the lock scope is the VM resource itself (not inherited from the resource group). That scope is narrower than an RG-level lock, but it is still sufficient to block modifications to VM1. Therefore, the statement describing Lock1 is true.

Part 4:

Lock2 is Read-only and created on RG2.

Yes. The statement is about the lock configuration: Lock2 is Read-only and created on RG2. A Read-only lock at the resource group scope applies to the resource group and all resources within it (inheritance). That means any resource in RG2 becomes effectively read-only from an ARM perspective: you can’t create, delete, or modify resources in that RG while the lock is in place. This is a strong protection mechanism used to prevent accidental changes to critical environments. In the context of the question, it also means that even if Azure Policy would allow a VM type in RG2, the Read-only lock would still block creation or modification operations. Therefore, the statement that Lock2 is Read-only and created on RG2 is true.

Part 5:

You can start VM1.

No. You cannot start VM1 because Lock1 is a Read-only lock applied to VM1. Starting a VM is an ARM control-plane action (Microsoft.Compute/virtualMachines/start/action) and is treated as a write/modify operation against the VM resource. Read-only locks block write operations, so the start request will be denied by Azure Resource Manager. Even though VM1 is currently Stopped (Deallocated), the deallocated state does not bypass locks. Locks are evaluated at the time of the management operation. Also, Azure Policy about “not allowed resource types” in RG1 primarily affects creation of new resources; it is the Read-only lock that directly prevents the start operation. Therefore, the statement “You can start VM1” is false.

Part 6:

You can start VM2.

No. You cannot start VM2 because Lock2 is a Read-only lock applied at the RG2 scope. Resource group locks are inherited by all resources in the group, including VM2. As with VM1, starting a VM is a management operation executed via ARM and is considered a modification of the VM resource state. A Read-only lock at the resource group level blocks any write operations on resources in that group, including start/stop/restart and configuration changes. Azure Policy in RG2 allowing virtualMachines does not override the lock; locks are enforced independently by ARM and will still deny the operation. Therefore, the statement “You can start VM2” is false.

Part 7:

You can create a virtual machine in RG2.

No. Even though the Azure Policy in RG2 allows the virtualMachines resource type, the Read-only lock (Lock2) on RG2 prevents creating any new resources in that resource group. Creating a VM is an ARM create operation (a write) and is blocked by a Read-only lock at the resource group scope. This is a key exam point: Azure Policy determines whether a request is compliant and can be accepted, but a lock can still block the operation even if it is compliant. In other words, “allowed by policy” does not mean “possible” when a Read-only lock exists. To create a VM in RG2, you would need to remove or change the lock (for example, remove Read-only or use a different protection strategy). Therefore, the statement “You can create a virtual machine in RG2” is false.
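The "allowed by policy but blocked by lock" interaction in Parts 5 to 7 can be summarized with a toy decision function. This is an illustrative model, not ARM's actual evaluation pipeline:

```python
# Illustrative model: a create/update request must satisfy both Azure Policy
# and resource locks. A ReadOnly lock blocks every ARM write, even when
# policy would allow the resource type.
def write_allowed(resource_type, allowed_types, lock_levels):
    """lock_levels: lock levels in effect at the resource's scope,
    e.g. {"ReadOnly"} inherited from a resource-group-level lock."""
    policy_ok = resource_type in allowed_types   # "Allowed resource types" policy
    lock_ok = "ReadOnly" not in lock_levels      # ReadOnly blocks all ARM writes
    return policy_ok and lock_ok

# RG2: policy allows virtualMachines, but Lock2 (ReadOnly on RG2) still wins.
print(write_allowed("virtualMachines", {"virtualMachines"}, {"ReadOnly"}))  # False
# Without the lock, the same request would succeed.
print(write_allowed("virtualMachines", {"virtualMachines"}, set()))         # True
```

The key exam point is that the two controls are independent: policy compliance never overrides a lock, and removing the lock does not bypass policy.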

Question 5

From Azure Security Center, you create a custom alert rule. You need to configure which users will receive an email message when the alert is triggered. What should you do?

Incorrect. Azure Monitor action groups are used to define notification targets for Azure Monitor alerts, such as metric alerts, log alerts, and activity log alerts. The question is specifically about Azure Security Center custom alert notifications, which are configured through Security Center's own policy-based email notification settings. Choosing an action group confuses Azure Monitor alerting with Defender for Cloud alert notification configuration.

Correct. In Azure Security Center, the recipients for alert email notifications are configured in the Security policy settings for the subscription. This is where you define whether subscription owners are notified and where you add additional email addresses for security alert notifications. Because the question asks specifically about Azure Security Center and who receives an email when the alert is triggered, modifying the subscription's Security policy settings is the appropriate action.

Incorrect. The Security Reader role in Azure AD or Azure RBAC determines who can view security-related information, such as alerts and recommendations. It does not control who receives email notifications when a Security Center alert is triggered. Email delivery settings are configured separately in Security Center policy settings.

Incorrect. Modifying the alert rule affects the rule definition itself, such as what condition generates the alert. It does not determine the list of users who receive email notifications for Security Center alerts. Recipient configuration is handled through the subscription's Security policy notification settings, not directly inside the alert rule.

Question Analysis

Core concept: In Azure Security Center (now Microsoft Defender for Cloud), email notifications for security alerts are configured in the subscription's Security policy settings. These settings let you specify which users or email addresses should receive notifications when security alerts are generated.

Why correct: To control who receives email messages for Security Center alerts, you modify the email notification configuration under the Security policy for the Azure subscription. This is the built-in mechanism Security Center uses for alert notification recipients, rather than Azure Monitor action groups.

Key features:
- Security policy settings include email notification options for security alerts.
- You can notify subscription owners and specify additional email recipients.
- These settings apply at the subscription level and are part of Defender for Cloud's security configuration.
- RBAC roles affect access to alerts, but not email delivery configuration.

Common misconceptions:
- Azure Monitor action groups are used for Azure Monitor alerts, but Security Center alert email recipients in this context are configured through Security policy settings.
- Editing an alert rule does not define the recipient list for Security Center email notifications.
- Assigning users to Security Reader only grants visibility into alerts and recommendations; it does not subscribe them to emails.

Exam tips: For AZ-500, when a question asks about who receives email notifications from Azure Security Center/Defender for Cloud alerts, think about Security policy email notification settings at the subscription level. Reserve Azure Monitor action groups for Azure Monitor alerting scenarios unless the question explicitly references Azure Monitor alerts.


Question 6

HOTSPOT - Your company has two offices in Seattle and New York. Each office connects to the Internet by using a NAT device. The offices use the IP addresses shown in the following table.

Location | IP address space | Public NAT segment
Seattle | 10.10.0.0/16 | 190.15.1.0/24
New York | 172.16.0.0/16 | 194.25.2.0/24

The company has an Azure Active Directory (Azure AD) tenant named contoso.com. The tenant contains the users shown in the following table.

(Diagram exhibit not shown.)

The MFA service settings are configured as shown in the exhibit. (Click the Exhibit tab.) trusted ips (learn more)

☑️ Skip multi-factor authentication for requests from federated users on my intranet

Skip multi-factor authentication for requests from following range of IP address subnets

10.10.0.0/16
194.25.2.0/24

verification options (learn more)

Methods available to users:
☑️ Call to phone
☑️ Text message to phone
⬜ Notification through mobile app
⬜ Verification code from mobile app or hardware token

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

If User1 signs in to Azure from a device that uses an IP address of 134.18.14.10, User1 must be authenticated by using a phone.

User1 is in the Enabled state for per-user MFA. Enabled means the user is enabled for MFA registration, but MFA is not yet strictly enforced for sign-ins in the same way as the Enforced state. Although 134.18.14.10 is not in a trusted IP range and only phone-based methods are available, the statement says User1 must be authenticated by using a phone, which is not guaranteed for a user in Enabled state. Therefore, the correct answer is No.

Part 2:

If User2 signs in to Azure from a device in the Seattle office, User2 must be authenticated by using the Microsoft Authenticator app.

User2 is Enforced for per-user MFA, meaning MFA is required at sign-in unless a trusted IP bypass applies. The sign-in is from the Seattle office. However, Seattle users egress through the public NAT segment 190.15.1.0/24. The trusted IP list includes 10.10.0.0/16 (Seattle internal space) but does not include 190.15.1.0/24. Azure AD evaluates the public source IP it receives, so the Seattle sign-in will not match the trusted IP configuration and MFA will still be required. The statement claims User2 must use the Microsoft Authenticator app. That is incorrect because the verification options do not allow app-based methods (both “Notification through mobile app” and “Verification code from mobile app or hardware token” are unchecked). With only phone call/SMS enabled, User2 cannot be required to use the Authenticator app. Therefore, the statement is false.

Part 3:

If User2 signs in to Azure from a device in the New York office, User2 must be authenticated by using a phone.

User2 is Enforced, so normally MFA would be required. But the sign-in is from the New York office, which uses public NAT segment 194.25.2.0/24. That exact public range is included in the trusted IPs list. Because Azure AD will see the source IP as 194.25.2.x, the request matches a trusted IP range and MFA is skipped for that sign-in. Since MFA is bypassed, User2 does not need to complete a phone call or SMS challenge (or any MFA method) for this sign-in. Therefore, the statement “User2 must be authenticated by using a phone” is not true in this scenario. Note that the presence of 172.16.0.0/16 (New York internal space) is irrelevant to Azure AD’s evaluation because it is not the public egress IP seen by Azure AD.
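The trusted-IP matching in Parts 2 and 3 comes down to which public egress address Azure AD observes. A minimal sketch with Python's ipaddress module (an illustrative check, not the Azure AD implementation):

```python
import ipaddress

# Minimal sketch: Azure AD compares the public source IP it observes (the
# NAT egress address) against the configured trusted IP ranges; internal
# office addressing such as 172.16.0.0/16 is never visible to Azure AD.
trusted = [ipaddress.ip_network("10.10.0.0/16"),
           ipaddress.ip_network("194.25.2.0/24")]

def mfa_skipped(public_ip):
    ip = ipaddress.ip_address(public_ip)
    return any(ip in net for net in trusted)

print(mfa_skipped("190.15.1.7"))   # False: Seattle egress range is not trusted
print(mfa_skipped("194.25.2.33"))  # True: New York egress range is trusted
```

Note the trap: 10.10.0.0/16 is in the trusted list but never matches, because Seattle sign-ins reach Azure AD from 190.15.1.0/24, not from the internal range.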

Question 7

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have a hybrid configuration of Azure Active Directory (Azure AD). You have an Azure HDInsight cluster on a virtual network. You plan to allow users to authenticate to the cluster by using their on-premises Active Directory credentials. You need to configure the environment to support the planned authentication. Solution: You deploy Azure Active Directory Domain Services (Azure AD DS) to the Azure subscription. Does this meet the goal?

Yes is correct because Azure AD DS provides a managed AD DS-compatible domain in Azure that supports LDAP and Kerberos, which are commonly required for HDInsight Enterprise Security Package (ESP) authentication. In a hybrid setup, users synchronized from on-premises AD to Azure AD can be surfaced in Azure AD DS, enabling them to authenticate using their existing on-premises credentials. By deploying Azure AD DS into the subscription and integrating the HDInsight cluster (domain join, DNS configuration), the environment can support domain-based authentication without deploying and managing domain controller VMs.

No is incorrect because the requirement is to allow authentication using on-premises Active Directory credentials, which implies domain-based authentication mechanisms (for example, Kerberos/LDAP) rather than Azure AD-only sign-in. Azure AD DS is specifically designed to provide those AD DS protocols as a managed service in Azure and is a supported approach for enabling HDInsight ESP domain integration. While there are alternative implementations (such as deploying AD DS domain controllers on Azure VMs), the proposed solution can meet the goal, so answering No would wrongly assume Azure AD DS cannot satisfy the requirement.

Question Analysis

Core concept: This question tests how to enable domain-based (on-premises AD) authentication for an Azure HDInsight cluster deployed into a virtual network, in a hybrid identity environment.

Why the answer is correct: HDInsight supports Enterprise Security Package (ESP) scenarios where cluster authentication and authorization are integrated with a domain (Kerberos/LDAP) so users can sign in using domain credentials. Deploying Azure Active Directory Domain Services (Azure AD DS) provides a managed domain in Azure that is compatible with traditional AD DS protocols (LDAP/Kerberos/NTLM) and can be joined to from resources in an Azure VNet. In a hybrid Azure AD setup, user identities synchronized from on-premises AD to Azure AD can be made available in Azure AD DS, allowing those users to authenticate to the HDInsight cluster using their existing on-premises credentials (same username and password as synchronized). Therefore, deploying Azure AD DS is a valid way to meet the requirement.

Key features / configurations:
- Azure AD DS provides managed domain services: LDAP, Kerberos, NTLM, Group Policy, and domain join.
- Deploy Azure AD DS into the same (or a peered) VNet as the HDInsight cluster and configure VNet DNS to point to the Azure AD DS domain controller IPs.
- Use the HDInsight Enterprise Security Package (ESP) and domain-join the cluster to the Azure AD DS managed domain.
- Ensure Azure AD Connect is syncing users (typically with password hash sync) so credentials work consistently.

Common misconceptions:
- Confusing Azure AD (cloud identity) with AD DS (domain services). Azure AD alone does not provide the Kerberos/LDAP domain join required for many HDInsight ESP integrations.
- Assuming you must deploy full IaaS domain controllers in Azure; Azure AD DS can satisfy the domain requirement as a managed service.
- Thinking pass-through authentication is required; for Azure AD DS, password hash synchronization is commonly used to enable sign-in with the same password.

Exam tips:
- If a service needs LDAP/Kerberos/domain join in Azure, consider Azure AD DS (managed) or AD DS on VMs.
- For HDInsight with domain-based auth, look for "Enterprise Security Package (ESP)" plus domain integration.
- Remember to update VNet DNS settings when introducing Azure AD DS so domain join and lookups work.

Question 8

You plan to use Azure Resource Manager templates to perform multiple deployments of identically configured Azure virtual machines. The password for the administrator account of each deployment is stored as a secret in different Azure key vaults. You need to identify a method to dynamically construct a resource ID that will designate the key vault containing the appropriate secret during each deployment. The name of the key vault and the name of the secret will be provided as inline parameters. What should you use to construct the resource ID?

A Key Vault access policy controls whether a user, service principal, or managed identity can read secrets from the vault. It is necessary for authorization, but it does not provide a mechanism to build or pass a resource ID into an ARM deployment. Even with the correct access policy, the deployment still needs a way to specify which vault and secret to use. Therefore, access policy is related to permissions, not dynamic construction or designation of the Key Vault resource ID.

A linked template is used to split deployments into reusable modules or separate template files. Although a linked template can accept parameters and use functions internally, it is not the standard mechanism for supplying a Key Vault secret reference for a secure parameter value. The question asks what should be used to designate the appropriate Key Vault during each deployment when names are provided as parameters, and that is typically handled through deployment parameters rather than by introducing another template. Using a linked template would add unnecessary complexity and does not directly answer the requirement.

A parameters file supports Key Vault secret references, but only with a static resource ID. The keyVault id in a parameters file must be a literal string that is known before deployment; template functions such as resourceId() and concat() are not evaluated there. Because this scenario requires the resource ID to be constructed dynamically from inline parameters, a parameters file cannot designate the appropriate vault on its own.

An Automation Account can run scripts or orchestrate deployment workflows, but it is not an ARM template construct for passing Key Vault secret references. The requirement is specifically about how to designate the correct Key Vault during ARM deployments, which is handled natively through template parameters and parameter files. Automation may invoke the deployment, but it does not replace the ARM mechanism for secret resolution. As a result, it is outside the scope of the direct solution.

Question Analysis

Core concept: This question is about how an ARM template deployment retrieves a secret from Azure Key Vault when the vault differs between deployments and its resource ID must be constructed at deployment time.

Why correct: A Key Vault secret reference in a parameters file accepts only a static, hard-coded resource ID, because template expressions are not evaluated inside parameters files. When the vault name and secret name arrive as inline parameters, the documented pattern is a linked (nested) template: the parent template uses resourceId() to build the vault's resource ID and passes the secret reference into the Microsoft.Resources/deployments resource.

Key features: The linked template declares an ordinary secure parameter; the parent deployment resource supplies its value as a Key Vault reference whose id is computed dynamically. The same parent template can therefore target a different vault on every deployment without modification.

Common misconceptions: A parameters file is the usual place for environment-specific values, but it cannot compute the resource ID from parameters; access policies control authorization only; Automation Accounts can orchestrate deployments but do not define ARM secret references.

Exam tips: If the Key Vault resource ID is known in advance, use a parameters file with a static reference. If the ID must be constructed dynamically from deployment parameters, use a linked/nested template with resourceId().
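For reference, a hedged sketch of the secret-reference mechanics (all parameter names here are illustrative, not from the question): the reference object carries the vault's resource ID and the secret name. In a parameters file the id must be a literal string, whereas inside a template, for example in the parameters passed to a nested Microsoft.Resources/deployments resource, it can be built dynamically with resourceId():

```json
{
  "adminPassword": {
    "reference": {
      "keyVault": {
        "id": "[resourceId(parameters('vaultResourceGroup'), 'Microsoft.KeyVault/vaults', parameters('vaultName'))]"
      },
      "secretName": "[parameters('secretName')]"
    }
  }
}
```

The bracketed expressions are only evaluated in template context, which is exactly why the dynamic-ID scenario needs a linked or nested template rather than a parameters file.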

Question 9

You have an Azure subscription that contains an Azure key vault named Vault1. In Vault1, you create a secret named Secret1. An application developer registers an application in Azure Active Directory (Azure AD). You need to ensure that the application can use Secret1. What should you do?

Incorrect. Creating a role in Azure AD does not grant an application permission to read secrets from Azure Key Vault. Key Vault secret access is controlled at the vault level through a Key Vault access policy or, in environments using the RBAC permission model, through an Azure RBAC role assignment scoped to the vault. An Azure AD role affects directory-level administration and is not how the application is authorized to use Secret1.

Incorrect. Keys and secrets are different object types in Azure Key Vault and serve different purposes. Creating a key would enable cryptographic operations, but it would not allow the application to access the existing secret named Secret1. The requirement is about granting access to a secret, not creating a new cryptographic asset.

Correct. A Key Vault access policy is the classic mechanism to authorize a specific principal (user, group, or service principal for the registered application) to perform secret operations. By adding an access policy for the application and granting Secret permissions such as Get (and optionally List), the application can retrieve Secret1 securely using Azure AD authentication.

Incorrect. Azure AD Application Proxy is used to publish internal web applications for remote access through Azure AD. It has no role in granting an application permission to read data from Azure Key Vault. Even if Application Proxy were enabled, the application would still need explicit authorization on Vault1 to access Secret1.

Question Analysis

Core concept: This question tests how Azure Key Vault authorizes an Azure AD application (service principal) to access secrets. Key Vault uses Azure AD for authentication (who you are) and Key Vault permissions for authorization (what you can do). Historically, this authorization is configured with Key Vault access policies; newer deployments can also use Azure RBAC for Key Vault data-plane permissions, but the classic and most commonly tested approach in AZ-500 is access policies.

Why the answer is correct: To allow the registered Azure AD application to read/use Secret1, you must grant the application permissions on Vault1. In the access policy, you select the application's service principal and assign secret permissions such as Get (read secret value) and optionally List (enumerate secrets). Without this explicit authorization, the application will authenticate successfully to Azure AD, but Key Vault will deny data-plane operations (403 Forbidden).

Key features / configuration details:
- Create an access policy in Vault1 and choose the application (service principal) as the principal.
- Grant the minimum required secret permissions (least privilege): typically Get; add List only if the app must discover secret names/versions.
- The application uses its identity (client secret/certificate, or managed identity if applicable) to obtain an Azure AD token for Key Vault, then calls the Key Vault secret endpoint.
- This aligns with the Azure Well-Architected Framework Security pillar: least privilege, centralized secret management, and strong identity-based access.

Common misconceptions:
- Creating an Azure AD role is not how Key Vault secret access is granted; Azure AD roles govern directory resources, not Key Vault secret operations.
- Creating a key in Key Vault is unrelated; Secret1 is a secret object, not a key.
- Azure AD Application Proxy publishes on-prem apps externally; it does not grant Key Vault permissions.

Exam tips: For Key Vault access questions, distinguish between authentication (Azure AD app/service principal/managed identity) and authorization (Key Vault access policy or Key Vault RBAC). If the question asks to "ensure an app can use a secret," think: grant secret permissions on the vault to that app identity, typically via an access policy (Get/List).

Question 10

HOTSPOT - You have an Azure subscription named Sub1. You create a virtual network that contains one subnet. On the subnet, you provision the virtual machines shown in the following table.

Name | Network interface | Application security group assignment | IP address
VM1 | NIC1 | AppGroup12 | 10.0.0.10
VM2 | NIC2 | AppGroup12 | 10.0.0.11
VM3 | NIC3 | AppGroup3 | 10.0.0.100
VM4 | NIC4 | AppGroup4 | 10.0.0.200

Currently, you have not provisioned any network security groups (NSGs). You need to implement network security to meet the following requirements:
✑ Allow traffic to VM4 from VM3 only.
✑ Allow traffic from the Internet to VM1 and VM2 only.
✑ Minimize the number of NSGs and network security rules.
How many NSGs and network security rules should you create? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

NSGs: ______

Create 1 NSG. Since all VMs are in the same subnet, you can associate a single NSG to the subnet and control traffic to all NICs/VMs in that subnet. This minimizes NSG count and is a common exam pattern: use one subnet-level NSG unless you need different policies per subnet or you must apply different NSGs to different NICs. Here, the requirements can be expressed with ASG-based rules inside one NSG (target AppGroup12 for Internet access and AppGroup4 for VM4 restrictions). Creating 2–4 NSGs would not reduce the number of rules needed and would increase management overhead, violating the “minimize the number of NSGs” requirement.

Part 2:

Network security rules: ______

Create 3 security rules (inbound) in the single NSG.

1) Allow Internet -> AppGroup12 (VM1 and VM2) on required ports (ports aren't specified, so conceptually "traffic"). This satisfies "Allow traffic from the Internet to VM1 and VM2 only" because other VMs remain blocked by default DenyAllInBound.
2) Allow AppGroup3 (VM3) -> AppGroup4 (VM4). This permits VM3 to reach VM4.
3) Deny VirtualNetwork -> AppGroup4. This is required because the default AllowVnetInBound would otherwise allow VM1/VM2 (and any other VNet source) to reach VM4.

Place rule (2) at a higher priority (lower number) than rule (3) so VM3 remains allowed while all other VNet sources are denied. With only 2 rules, you cannot both allow VM3 and block all other VNet sources due to the default allow.
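The rule logic above can be illustrated with a minimal, self-contained Python sketch. This is not an Azure API: the ASG names, IPs, and rule priorities come from the scenario, but the evaluation model is simplified (rules checked in ascending priority order, first match wins, with Azure's default rules at priorities 65000+).

```python
# Hypothetical ASG membership taken from the question's table.
ASG = {
    "10.0.0.10": "AppGroup12", "10.0.0.11": "AppGroup12",
    "10.0.0.100": "AppGroup3", "10.0.0.200": "AppGroup4",
}

# (priority, source, destination, access); source/destination are ASG names,
# "Internet", or "VirtualNetwork".
rules = [
    (100, "Internet", "AppGroup12", "Allow"),        # rule 1
    (110, "AppGroup3", "AppGroup4", "Allow"),        # rule 2 (before the deny)
    (120, "VirtualNetwork", "AppGroup4", "Deny"),    # rule 3
    # Simplified stand-ins for Azure's default rules:
    (65000, "VirtualNetwork", "VirtualNetwork", "Allow"),  # AllowVnetInBound
    (65500, "Internet", "VirtualNetwork", "Deny"),         # DenyAllInBound
]

def matches(selector: str, ip: str, from_internet: bool) -> bool:
    if selector == "Internet":
        return from_internet
    if selector == "VirtualNetwork":
        return not from_internet
    return ASG.get(ip) == selector

def evaluate(src_ip: str, dst_ip: str, from_internet: bool = False) -> str:
    # Lowest priority number wins; first matching rule decides the outcome.
    for _prio, src, dst, access in sorted(rules):
        if matches(src, src_ip, from_internet) and matches(dst, dst_ip, False):
            return access
    return "Deny"
```

With these three custom rules, VM3 reaches VM4, every other VNet source is denied access to VM4, and only AppGroup12 is reachable from the Internet, matching the answer above.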

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 11

HOTSPOT - What is the membership of Group1 and Group2? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Group1: ______

Group1’s evaluated membership results in exactly two users: User2 and User4. This outcome is consistent with either (a) an Assigned group where only User2 and User4 are explicitly added, or (b) a Dynamic user group where only User2 and User4 match the rule shown in the prompt image (for example, a specific department, job title, or user type). The other options don’t fit:
- A (No members) would require that no users are assigned and/or no users match the dynamic rule.
- B (Only User2) would require User4 to be absent from the assigned list or fail the dynamic rule.
- D (User1, User2, User3, and User4) would require all users to be explicitly assigned or all to match the dynamic rule, which is not the case based on the scenario’s membership evaluation.

Part 2:

Group2: ______

Group2’s evaluated membership includes User1 and User3 only. This is typical of a Dynamic user group where the rule matches exactly those two users’ attributes (for example, user.department -eq "HR" matching User1 and User3, while User2 and User4 are in other departments), or an Assigned group where only those two are explicitly added. The incorrect options are eliminated as follows:
- A (No members) contradicts the evaluated membership shown.
- B (Only User3) would require User1 not to be assigned and/or not to satisfy the dynamic rule.
- D (User1, User2, User3, and User4) would require all users to be assigned or to match the rule, which is not supported by the scenario’s membership outcome.
For AZ-500, remember dynamic membership is calculated automatically and is driven strictly by the rule; it’s commonly used to enforce least privilege at scale.
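A minimal Python sketch of how dynamic membership works: the group has no manually assigned members, and membership is recomputed purely from the rule. The users, their department attributes, and the rule itself are hypothetical stand-ins for the scenario image, which is not reproduced here.

```python
# Hypothetical user attributes (the real ones are in the question's image).
users = {
    "User1": {"department": "HR"},
    "User2": {"department": "IT"},
    "User3": {"department": "HR"},
    "User4": {"department": "IT"},
}

def dynamic_members(rule):
    """Recompute membership from the rule alone; nothing is assigned manually."""
    return sorted(name for name, attrs in users.items() if rule(attrs))

# A rule equivalent to: user.department -eq "HR"
group2 = dynamic_members(lambda u: u["department"] == "HR")
print(group2)  # ['User1', 'User3']
```

Changing a user's department attribute changes the evaluated membership on the next recomputation, which is why dynamic groups scale well for least-privilege assignment.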

Question 12

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription named Sub1. You have an Azure Storage account named sa1 in a resource group named RG1. Users and applications access the blob service and the file service in sa1 by using several shared access signatures (SASs) and stored access policies. You discover that unauthorized users accessed both the file service and the blob service. You need to revoke all access to sa1. Solution: You generate new SASs. Does this meet the goal?

Yes is incorrect because new SAS tokens do not invalidate previously distributed SAS tokens. Unauthorized users who already possess valid SAS URIs can continue accessing the blob and file services until those tokens expire or their signing basis is revoked. To stop all existing SAS-based access, you must regenerate the storage account keys and/or remove or modify stored access policies used by the SAS. Therefore, the proposed action does not meet the stated goal.

No is correct because generating new SAS tokens does not revoke any SAS tokens that were already issued to users or applications. Existing SAS tokens are bearer tokens and remain valid until they expire, unless the underlying account key is regenerated or the stored access policy they depend on is changed or removed. Since the goal is to revoke all access to the storage account, merely creating new SAS values is insufficient. A proper revocation approach would involve rotating storage account keys and updating or deleting relevant stored access policies.

Question Analysis

Core concept: This question tests how to revoke access to Azure Storage when access is being granted through shared access signatures (SAS) and stored access policies. SAS tokens are already-issued bearer tokens, so simply generating additional SAS tokens does not invalidate existing ones.

Why correct: The proposed solution does not meet the goal. To revoke all access granted through SAS, you must invalidate the existing SAS mechanisms, such as rotating the storage account keys for account/service SAS signed with those keys and modifying or deleting stored access policies for SAS tied to those policies.

Key features:
- SAS tokens remain valid until expiration unless the signing basis is invalidated.
- Account SAS and service SAS signed with account keys can be revoked by regenerating the storage account access keys.
- Service SAS associated with a stored access policy can be revoked by changing or deleting that stored access policy.
- Generating new SAS tokens only affects future distribution, not previously issued tokens.

Common misconceptions:
- A common mistake is assuming that issuing new SAS tokens automatically replaces or cancels old ones. SAS tokens are independent credentials and continue to work until they expire or are explicitly invalidated through key or policy changes.
- Another misconception is that one action always revokes every SAS type; in practice, revocation depends on how the SAS was created.

Exam tips: When asked to revoke SAS access, focus on invalidating the signing key or the stored access policy. If the option only says to create or generate new SAS tokens, that is almost always insufficient because existing tokens remain usable.
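The bearer-token behavior can be demonstrated with a stdlib-only Python sketch. This is a deliberate simplification, not the real SAS format: a token is an HMAC over the resource and expiry, signed with the account key, so every outstanding token stays valid until that key is rotated, no matter how many new tokens are minted.

```python
import hashlib
import hmac
import secrets

account_key = secrets.token_bytes(32)  # stand-in for a storage account key

def sign(resource: str, expiry: str, key: bytes) -> str:
    # A SAS-like bearer token: anyone holding it can present it until expiry.
    msg = f"{resource}|{expiry}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def is_valid(resource: str, expiry: str, token: str, key: bytes) -> bool:
    # The service re-derives the signature from its current key.
    return hmac.compare_digest(token, sign(resource, expiry, key))

leaked = sign("blob/container1", "2026-01-01", account_key)  # held by attacker
newer = sign("blob/container1", "2026-06-01", account_key)   # "generate new SASs"

# Issuing a new token does nothing to the old one:
assert is_valid("blob/container1", "2026-01-01", leaked, account_key)

# Rotating the signing key invalidates every outstanding token at once:
account_key = secrets.token_bytes(32)
assert not is_valid("blob/container1", "2026-01-01", leaked, account_key)
assert not is_valid("blob/container1", "2026-06-01", newer, account_key)
```

This is the intuition behind the correct revocation steps: regenerate the account keys (invalidate the signing basis) and change or delete stored access policies for policy-bound SAS.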

Question 13

Your company has an Azure subscription named Sub1 that is associated to an Azure Active Directory (Azure AD) tenant named contoso.com. The company develops an application named App1. App1 is registered in Azure AD. You need to ensure that App1 can access secrets in Azure Key Vault on behalf of the application users. What should you configure?

Application permissions are intended for app-only access where no signed-in user is involved, such as background services or automation jobs. That does not match the requirement to access secrets on behalf of application users, because the resulting token would represent only the application identity. In addition, application permissions generally require admin consent, so the “without admin consent” part is also incorrect. This option fails on both the permission type and the consent requirement.

Although delegated permissions are the right general category for acting on behalf of users, Azure Key Vault delegated permissions are not user-consentable in the normal sense. They are admin-restricted permissions and require administrator approval in the tenant before the app can use them. Therefore, saying delegated permission without admin consent is incomplete and technically incorrect for Key Vault. This makes the current answer wrong even though it correctly identified the delegated model.

Delegated permissions are the correct permission type because App1 must access Azure Key Vault in the context of signed-in users rather than as a standalone daemon. The wording “on behalf of the application users” directly indicates a user-delegated access model, where the token represents both the user and the application. For Azure Key Vault, these delegated permissions are admin-restricted, so tenant administrator consent is required before users can use the app in this way. This makes a delegated permission that requires admin consent the only option that satisfies both the user-context requirement and the consent model for Key Vault.

Application permissions that require admin consent are used when an app accesses resources as itself, without any signed-in user context. That is appropriate for daemon apps, scheduled tasks, or service principals, but not for a user-driven scenario. Because the question explicitly says “on behalf of the application users,” the app must not use app-only permissions. Even though admin consent is commonly required for application permissions, the permission type itself is wrong here.

Question Analysis

Core concept: This question tests the difference between delegated and application permissions in Microsoft Entra ID (Azure AD), and when admin consent is required for Azure Key Vault access. The phrase “on behalf of the application users” means the app must use delegated permissions because a signed-in user is present and the app is acting in that user’s context.

Why correct: Azure Key Vault supports delegated access for user-based scenarios, but those delegated permissions are admin-restricted and require administrator consent. Therefore, the correct configuration is a delegated permission that requires admin consent. This allows App1 to request tokens representing both the user and the application when accessing Key Vault.

Key features:
- Delegated permissions are used when a user is signed in and the app acts on behalf of that user.
- Application permissions are used for daemon or service-to-service scenarios where no user is present.
- Azure Key Vault delegated permissions require admin consent in Entra ID.
- Access to secrets still must be authorized in Key Vault through access policies or Azure RBAC.

Common misconceptions:
- “On behalf of users” does not mean users can always self-consent; some delegated permissions are admin-restricted.
- Application permissions are not appropriate when the requirement explicitly includes user context.
- Azure AD consent alone does not grant secret access; Key Vault authorization must also be configured.

Exam tips:
- If the question says “on behalf of a user,” start with delegated permissions.
- Then check whether the target API’s delegated permissions are admin-restricted.
- For Azure Key Vault, delegated permissions require admin consent, so choose delegated permission with admin consent rather than app-only permission.

Question 14

DRAG DROP - You need to configure an access review. The review will be assigned to a new collection of reviews and reviewed by resource owners. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the image below.

question-image

Pass. The correct sequence is:
1) Create an access review program.
2) Create an access review control.
3) Set Reviewers to Group owners.

Why: The requirement says the review will be assigned to a new collection of reviews; this is an Access Review Program, which must exist before you can assign the review to it. Next, you create and configure the actual access review (the "control"), which defines what is being reviewed and its schedule/settings. Finally, because the review must be reviewed by resource owners, you set the reviewers to the owners of the resource. In the provided options, "Set Reviewers to Group owners" best matches "resource owners" for group-based access reviews.

Why others are wrong: "Selected users" and "Members" don't satisfy "resource owners." "Create an access review audit" is not a setup step; auditing is inherent via logs and reporting rather than a created object.

Question 15

Your company plans to create separate subscriptions for each department. Each subscription will be associated to the same Azure Active Directory (Azure AD) tenant. You need to configure each subscription to have the same role assignments. What should you use?

Azure Security Center (now Microsoft Defender for Cloud) focuses on security posture management, recommendations, and threat protection. It can surface misconfigurations and suggest least-privilege improvements, but it does not provide a mechanism to template and automatically apply identical RBAC role assignments across multiple subscriptions. It’s primarily for monitoring and improving security, not for stamping baseline access control configurations during subscription provisioning.

Azure Policy is used to enforce and evaluate compliance by auditing, denying, or remediating resource configurations. While it is excellent for ensuring resources meet standards (tags, SKUs, encryption, allowed locations), it is not intended to create and maintain consistent RBAC role assignments across subscriptions as a baseline. Policy can’t directly define subscription RBAC assignments in the way Blueprints can.

Azure AD Privileged Identity Management (PIM) provides just-in-time activation, approval workflows, access reviews, and time-bound privileged role assignments for Azure AD roles and Azure resource roles. It improves how privileged access is managed, but it does not solve the requirement to configure multiple subscriptions to have the same initial set of role assignments. PIM governs elevation and lifecycle, not subscription-to-subscription RBAC replication.

Azure Blueprints is purpose-built to deploy repeatable governance configurations to subscriptions. A blueprint can include role assignments as artifacts, ensuring each departmental subscription receives the same RBAC assignments when the blueprint is assigned. It also supports bundling policies and templates, enabling consistent, compliant subscription baselines and reducing drift—exactly matching the requirement to standardize role assignments across multiple subscriptions in the same tenant.

Question Analysis

Core concept: This question tests how to standardize governance and access control across multiple Azure subscriptions in the same Azure AD tenant. Specifically, it focuses on deploying consistent role assignments (RBAC) at scale.

Why the answer is correct: Azure Blueprints is designed to orchestrate repeatable, compliant environments by packaging and deploying a set of artifacts to a subscription (or multiple subscriptions). Blueprint artifacts can include role assignments, Azure Policy assignments, ARM templates (or Bicep via ARM), and resource groups. Because the requirement is that each new departmental subscription must have the same role assignments, Blueprints is the most direct service for stamping identical RBAC assignments across subscriptions. When you assign a blueprint to a subscription, the included role assignments are applied consistently, helping ensure every department subscription starts with the same access model.

Key features and best practices:
- Blueprint artifacts: Role assignments are first-class artifacts, enabling consistent RBAC across subscriptions.
- Repeatability and scale: Assign the same blueprint to many subscriptions; update the blueprint version to roll forward changes.
- Governance alignment: Blueprints complements Azure Policy (guardrails) by also deploying “what should exist” (RBAC, resource groups, templates). This supports the Azure Well-Architected Framework governance and security pillars by enforcing consistent access patterns and reducing configuration drift.
- Subscription onboarding: A common real-world use is landing zones / subscription vending, where each subscription gets baseline RBAC and policies.

Common misconceptions: Azure Policy is often chosen because it enforces standards, but it cannot create RBAC role assignments; it can only audit/deny configurations and remediate certain resource properties via deployIfNotExists, not manage subscription RBAC assignments as a baseline. PIM is about just-in-time privileged access, not replicating role assignments across subscriptions. Defender for Cloud (formerly Security Center) provides security posture management and recommendations, not subscription RBAC templating.

Exam tips:
- If the requirement includes “same role assignments” across subscriptions, think Azure Blueprints (or, in newer patterns, landing zones/templated deployments), not Azure Policy.
- Remember the division: Azure Policy = guardrails; Blueprints = package + deploy governance artifacts, including RBAC.
- For AZ-500, map services to outcomes: standardize RBAC at scale -> Blueprints.

Reference: Azure Blueprints documentation highlights role assignments as a blueprint artifact and its purpose for repeatable subscription governance.
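As a hedged sketch of what a role-assignment artifact can look like in a blueprint definition (the display name and the parameter name are illustrative; the GUID is the built-in Contributor role definition ID; the exact artifact schema should be checked against the Blueprints documentation):

```json
{
  "kind": "roleAssignment",
  "properties": {
    "displayName": "Baseline: department admins are Contributors",
    "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
    "principalIds": ["[parameters('departmentAdminsGroupId')]"]
  }
}
```

Because the principal is supplied as a blueprint parameter, the same blueprint can be assigned to every departmental subscription, each time producing the same role assignment shape.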


Question 16

HOTSPOT - You have two Azure virtual machines in the East US 2 region as shown in the following table.

Name | Operating system | Type | Tier
VM1 | Windows Server 2008 R2 | A3 | Basic
VM2 | Ubuntu 16.04-DAILY-LTS | L4s | Standard

You deploy and configure an Azure Key vault. You need to ensure that you can enable Azure Disk Encryption on VM1 and VM2. What should you modify on each virtual machine? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

VM1 ______

For VM1, the tier must be modified. Azure Disk Encryption is not available on Basic-tier virtual machines (or on A-series sizes), so the A3 Basic VM must be moved to a supported Standard-tier size before encryption can be enabled. The operating system is not the blocker here: Windows Server 2008 R2 is a supported OS for ADE, provided the required components (such as .NET Framework 4.5) are present.

Part 2:

VM2 ______

Azure Disk Encryption on Linux supports specific endorsed distributions and images, and Ubuntu 16.04-DAILY-LTS is not a supported production image for ADE. To enable encryption, VM2 must be changed to a supported operating system version/image, such as an officially supported Ubuntu LTS marketplace image. The tier is already Standard, so that is not the issue, and the L4s VM type is not the blocker for ADE in this scenario.

Question 17

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You have an Azure subscription. The subscription contains 50 virtual machines that run Windows Server 2012 R2 or Windows Server 2016. You need to deploy Microsoft Antimalware to the virtual machines. Solution: You add an extension to each virtual machine. Does this meet the goal?

Yes. Microsoft Antimalware for Azure virtual machines is delivered through a VM extension that is installed on the target VM. Adding the Microsoft Antimalware extension to each Windows Server 2012 R2/2016 VM deploys the antimalware agent and enables configuration such as real-time protection and scheduled scans. Therefore, installing the extension on each VM meets the goal of deploying Microsoft Antimalware to the 50 virtual machines.

No. This is incorrect because adding the Microsoft Antimalware VM extension is a supported and intended method to deploy antimalware to Azure IaaS virtual machines. VM extensions are specifically designed to install and configure software inside VMs post-deployment, including security tooling like antimalware. While doing it manually per VM may be operationally tedious without automation, it still achieves the stated requirement of deploying Microsoft Antimalware.

Question Analysis

Core concept: This question tests how to deploy Microsoft Antimalware (the Microsoft Antimalware extension) to Azure virtual machines and whether using VM extensions is an appropriate deployment method.

Why the answer is correct: Microsoft Antimalware for Azure VMs is deployed by installing the Microsoft Antimalware VM extension (also known as the IaaS Antimalware extension) on each virtual machine. VM extensions are the native Azure mechanism to install and configure post-deployment agents and software inside a VM, including security agents. Since the environment consists of Azure IaaS VMs running supported Windows Server versions (2012 R2/2016), adding the antimalware extension to each VM directly satisfies the requirement to deploy Microsoft Antimalware to those VMs.

Key features / configurations:
- Azure VM Extensions: Used to install/configure software on VMs after provisioning.
- Microsoft Antimalware extension: Enables real-time protection, scheduled scans, and exclusion configuration.
- Deployment methods: Azure portal, ARM templates, PowerShell, Azure CLI, or Azure Policy/automation to apply at scale.
- Per-VM installation: The extension is applied to each VM (manually or via automation) to ensure coverage.

Common misconceptions:
- Assuming Microsoft Antimalware is enabled automatically for all Azure VMs by default; it is not unless explicitly installed/configured.
- Confusing the Microsoft Antimalware extension with Microsoft Defender for Cloud plans; Defender for Cloud can recommend/assist, but the extension is still a VM-level deployment mechanism.
- Thinking a single subscription-level setting deploys antimalware to all VMs without using extensions or automation.

Exam tips:
- VM extensions are the standard way to deploy agents (antimalware, monitoring, DSC, etc.) to Azure IaaS VMs.
- If the requirement is “deploy to VMs,” expect an answer involving the Microsoft Antimalware extension.
- For many VMs, consider automation (Policy/ARM/PowerShell), but the core mechanism remains the extension.
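A hedged ARM-template sketch of that extension resource (the API version and the settings values are illustrative and should be checked against current documentation; the publisher/type pair Microsoft.Azure.Security / IaaSAntimalware is the documented extension identity):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2023-03-01",
  "name": "[concat(parameters('vmName'), '/IaaSAntimalware')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Azure.Security",
    "type": "IaaSAntimalware",
    "typeHandlerVersion": "1.3",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "AntimalwareEnabled": true,
      "RealtimeProtectionEnabled": "true",
      "ScheduledScanSettings": {
        "isEnabled": "true",
        "day": "7",
        "time": "120",
        "scanType": "Quick"
      }
    }
  }
}
```

Deploying this resource per VM (or at scale via a template loop, PowerShell, or Azure Policy) is the extension-based mechanism the question validates.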

Question 18

You have an Azure subscription that contains a virtual machine named VM1. You create an Azure key vault that has the following configurations:
✑ Name: Vault5
✑ Region: West US
✑ Resource group: RG1
You need to use Vault5 to enable Azure Disk Encryption on VM1. The solution must support backing up VM1 by using Azure Backup. Which key vault settings should you configure?

Access policies control who can access Key Vault objects, but they are not the specific Key Vault setting that ADE fundamentally uses to store the VM encryption material. The question asks which vault setting should be configured to enable ADE with backup support, and the encryption dependency is on secrets rather than on the permission model itself. While permissions are necessary operationally, they are not the best answer from the listed choices. In exam questions like this, Microsoft often distinguishes between the object type used by ADE and the authorization mechanism used to reach it.

Secrets are the core Key Vault object used by Azure Disk Encryption to store the BitLocker Encryption Key for the VM. ADE writes and retrieves this encryption material from Key Vault during enablement and subsequent operations. Azure Backup supports ADE-enabled virtual machines when the encryption configuration uses the supported secret-based integration with Key Vault. Because the question asks which key vault setting should be configured, Secrets is the best match among the available options.

Keys are optional in Azure Disk Encryption and are only used when you choose to implement a Key Encryption Key to wrap the BitLocker Encryption Key. ADE can be enabled successfully without configuring a KEK at all. Since the question does not state that customer-managed key wrapping is required, Keys is not the mandatory setting. Therefore, this option is too specific and not universally required for ADE with Azure Backup.

Locks are Azure resource management controls that prevent accidental deletion or modification of resources. They do not participate in the encryption workflow and have no role in storing or retrieving disk encryption material. Applying a lock to the vault would not enable Azure Disk Encryption on the VM. Locks also do not affect Azure Backup compatibility for ADE-protected virtual machines.

Question Analysis

Core concept: Azure Disk Encryption (ADE) integrates with Azure Key Vault by storing the BitLocker Encryption Key (BEK) as a secret. To enable ADE on a VM and maintain compatibility with Azure Backup, the key vault must support storing and retrieving secrets used by the encryption extension.

Why correct: ADE for Azure VMs relies on Key Vault secrets to hold the BEK. Azure Backup supports ADE-protected VMs when the encryption material is managed through the supported Key Vault secret mechanism. Therefore, among the listed settings, Secrets is the required configuration area.

Key features: ADE stores the BEK as a secret in Key Vault, and may optionally use a key encryption key (KEK) in more advanced scenarios. Azure Backup can back up ADE-enabled VMs as long as the encryption setup follows supported patterns. The vault does not need resource locks for this purpose, and keys are optional rather than mandatory.

Common misconceptions: Access policies are important for permissions, but they are not the primary vault setting being asked for in this option set. Keys are only needed when using a KEK, which is optional for ADE. Locks are unrelated to encryption functionality.

Exam tips: When a question asks what Key Vault component ADE uses, think Secrets first because the BEK is stored as a secret. If the question instead asks about permissions or authorization, then access policies or RBAC would be the focus. Distinguish between the object type used by ADE and the permission model that allows access to it.
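The secret-based flow above can be illustrated with Azure PowerShell. This is a hedged command sketch, not a runnable lab script: it assumes the Vault5, RG1, and VM1 names from the question, an authenticated Az session, and the current Az module cmdlets.

```powershell
# Sketch only: assumes an authenticated Az session and the resources named in the question.

# Allow the Azure platform to retrieve disk-encryption secrets from Vault5.
Set-AzKeyVaultAccessPolicy -VaultName 'Vault5' -ResourceGroupName 'RG1' -EnabledForDiskEncryption

# Enable ADE on VM1; the extension writes the BEK to Vault5 as a secret.
$vault = Get-AzKeyVault -VaultName 'Vault5' -ResourceGroupName 'RG1'
Set-AzVMDiskEncryptionExtension -ResourceGroupName 'RG1' -VMName 'VM1' `
    -DiskEncryptionKeyVaultUrl $vault.VaultUri `
    -DiskEncryptionKeyVaultId  $vault.ResourceId

# Confirm the encryption material landed as a secret (BEK content type).
Get-AzKeyVaultSecret -VaultName 'Vault5' | Where-Object ContentType -Match 'BEK'
```

Note that no key is created unless you also pass the optional KEK parameters, which matches the point that Keys are optional while Secrets are fundamental.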

Question 19

You have an Azure web app named webapp1. You need to configure continuous deployment for webapp1 by using an Azure Repo. What should you create first?

Azure Application Insights provides application performance monitoring (APM), logging, and telemetry for web apps. While it’s a best practice for observability and can be integrated into App Service, it is not required to configure continuous deployment from Azure Repos. You can enable CI/CD without any monitoring resources, so this is not the first thing to create.

An Azure DevOps organization is the prerequisite container for Azure DevOps Services. Azure Repos lives inside an Azure DevOps project, which itself requires an organization. To configure continuous deployment from an Azure Repo to an Azure Web App, you must first have an Azure DevOps organization so you can create the project/repo and then configure pipelines or Deployment Center integration with an Azure service connection.

An Azure Storage account can be used for build artifacts, diagnostics logs, deployment packages, or application data, but it is not a prerequisite for setting up continuous deployment from Azure Repos to an App Service web app. Azure DevOps can store artifacts in its own artifact storage, and App Service deployments do not require a customer-managed storage account by default.

Azure DevTest Labs is designed to create and manage dev/test environments (often VM-based), control costs, and apply policies like auto-shutdown. It is not used to host Azure Repos or to configure App Service continuous deployment. For CI/CD to an Azure Web App using Azure Repos, DevTest Labs is unrelated and not required.

Question Analysis

Core concept: This question tests configuring continuous deployment (CI/CD) for an Azure App Service Web App using Azure Repos. Azure Repos is part of Azure DevOps Services, so the foundational prerequisite is having an Azure DevOps organization that can host the project and repo.

Why the answer is correct: To set up continuous deployment from an Azure Repo to an Azure Web App, you must connect the Web App’s Deployment Center (or set up a pipeline) to a repository that lives in an Azure DevOps project. Before you can create an Azure Repo, you must first create an Azure DevOps organization (the top-level container in Azure DevOps). After the organization exists, you create a project, then an Azure Repo, and then configure a pipeline (classic release or YAML) or use Deployment Center integration to deploy to webapp1.

Key features and best practices: In practice, you’ll typically:
1) Create an Azure DevOps organization and project.
2) Create/import the Azure Repo.
3) Create a service connection to Azure (often using a service principal) with least privilege (e.g., scoped to the resource group or the specific web app).
4) Configure build and release (YAML pipeline) with secure secrets in Azure Key Vault or pipeline secret variables.
From an Azure Well-Architected perspective, CI/CD improves operational excellence and reliability by enabling repeatable, auditable deployments and reducing configuration drift. For AZ-500, also consider governance and access control: use RBAC, restrict who can create service connections, and enable approvals and branch policies.

Common misconceptions: Application Insights is for monitoring, not a prerequisite for deployment. Storage accounts are used for artifacts, logs, or app content in some patterns, but not required to enable Azure Repos-based deployment. DevTest Labs is for managing lab environments and VM-based dev/test scenarios, not App Service CI/CD.
Exam tips: When you see “Azure Repo” in a deployment context, immediately map it to “Azure DevOps Services.” The first thing you need is an Azure DevOps organization (then project/repo/pipeline). Also remember security exam angles: service connections, least privilege, and protecting pipeline secrets are frequently tested.
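Once the organization, project, and repo exist, the pipeline step can be as small as one deployment task. The `azure-pipelines.yml` below is a hedged sketch: `my-service-connection` is a placeholder for a service connection you would create in the DevOps project, and the package path assumes a zipped build output.

```yaml
# Minimal CD sketch committed to the Azure Repo.
# 'my-service-connection' is a placeholder service connection name.
trigger:
  - main                      # continuous deployment: every push to main deploys

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureWebApp@1       # built-in App Service deployment task
    inputs:
      azureSubscription: my-service-connection
      appName: webapp1
      package: $(System.DefaultWorkingDirectory)/**/*.zip
```

None of this is possible until the Azure DevOps organization exists, which is why it is the first thing to create.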

Question 20

HOTSPOT - You have an Azure subscription. The subscription contains Azure virtual machines that run Windows Server 2016. You need to implement a policy to ensure that each virtual machine has a custom antimalware virtual machine extension installed. How should you complete the policy? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

"effect": "______"

The correct effect is DeployIfNotExists because the requirement is to ensure every VM has a specific antimalware VM extension installed. DeployIfNotExists evaluates compliance and, when the required related resource/configuration is missing (the VM extension), it can automatically deploy it via an embedded ARM template. This is the standard pattern for enforcing VM extensions, diagnostic settings, and other “should be configured” requirements.

Why not Deny? Deny would only prevent creation or update operations that don’t meet the condition; it does not remediate existing VMs that are already deployed without the extension, and it can also disrupt legitimate VM operations if not carefully scoped.

Why not Append? Append (and its modern replacement Modify) is used to add or alter properties on the resource being created/updated, but it cannot reliably create a separate child resource like Microsoft.Compute/virtualMachines/extensions. Therefore, DeployIfNotExists is the only option that both detects absence and installs the extension to reach compliance.

Part 2:

"parameters": { "______": {

In a DeployIfNotExists policy, the policy rule's details section includes an existenceCondition to determine whether the related resource already exists and is compliant. For a VM extension scenario, this condition checks whether the required antimalware extension is present on the virtual machine. Template is used later inside the deployment definition to describe what to deploy for remediation, and resources is only a section within an ARM template, not the policy field being asked for here.
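Both blanks can be seen in context in a trimmed policy rule. This is a hedged sketch: the publisher and type values for the custom antimalware extension are illustrative placeholders, the role-definition ID is a placeholder for a role that can install extensions (e.g., Virtual Machine Contributor), and the remediation template body is omitted.

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Compute/virtualMachines"
  },
  "then": {
    "effect": "DeployIfNotExists",
    "details": {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "existenceCondition": {
        "allOf": [
          {
            "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
            "equals": "Contoso.Security"
          },
          {
            "field": "Microsoft.Compute/virtualMachines/extensions/type",
            "equals": "CustomAntimalware"
          }
        ]
      },
      "roleDefinitionIds": [
        "/providers/Microsoft.Authorization/roleDefinitions/<role-guid>"
      ],
      "deployment": {
        "properties": {
          "mode": "incremental",
          "template": {}
        }
      }
    }
  }
}
```

The effect goes in the first blank, and existenceCondition (inside details) fills the second: if no extension matches the condition, the embedded deployment runs and installs it.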

