Microsoft AZ-204

467+ Practice Questions with AI-Verified Answers

Developing Solutions for Microsoft Azure


Triple AI-Verified Answers & Explanations

Every Microsoft AZ-204 answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

Verified by: GPT Pro, Claude Opus, Gemini Pro

Exam Domains

- Develop Azure Compute Solutions (weight 29%)
- Develop for Azure Storage (weight 19%)
- Implement Azure Security (weight 19%)
- Monitor, Troubleshoot, and Optimize Azure Solutions (weight 9%)
- Connect to and Consume Azure Services and Third-Party Services (weight 24%)

Practice Questions

Question 1

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing an Azure Service application that processes queue data when it receives a message from a mobile application. Messages may not be sent to the service consistently. You have the following requirements:

- Queue size must not grow larger than 80 gigabytes (GB).
- Use first-in-first-out (FIFO) ordering of messages.
- Minimize Azure costs.

You need to implement the messaging solution.

Solution: Use the .Net API to add a message to an Azure Service Bus Queue from the mobile application. Create an Azure Windows VM that is triggered from the Azure Service Bus Queue.

Does the solution meet the goal?

Answering Yes is incorrect because the architecture relies on a VM-based consumer, which generally must remain running (or be externally orchestrated) to receive messages, leading to ongoing costs during idle periods. While Service Bus can support FIFO, the solution does not mention configuring sessions, which is required for strict FIFO ordering. Finally, nothing in the solution enforces or manages the 80-GB queue size constraint (such as TTL, scaling out consumers, or operational controls), so it cannot be considered to meet all stated goals.

The solution does not meet the goal because an Azure Windows VM is not natively “triggered” by a Service Bus queue in a cost-efficient, event-driven way; you typically run a continuously listening process on the VM, incurring compute charges even when messages are not sent consistently. This conflicts with the requirement to minimize Azure costs for intermittent traffic, where serverless (e.g., Azure Functions Service Bus trigger) is the expected pattern. Also, the proposal does not specify enabling Service Bus Sessions for FIFO or any explicit control to ensure the queue does not exceed 80 GB, so key requirements are not demonstrably satisfied.

Question Analysis

Core concept: This question tests selecting an Azure messaging and processing architecture that meets constraints on queue capacity, FIFO ordering, and cost efficiency for bursty, irregular mobile message traffic.

Why the answer is No: Azure Service Bus queues can satisfy FIFO ordering when sessions are used, and they can hold large message backlogs. However, the proposed compute choice, an Azure Windows VM "triggered" from a Service Bus queue, is not a native event-driven pattern and typically requires a continuously running listener on the VM (or custom polling/receiver logic). That means you pay for the VM even when no messages arrive, which violates the "minimize Azure costs" requirement for inconsistent traffic. Additionally, the solution does not specify any mechanism to enforce the 80-GB maximum queue size (e.g., monitoring, auto-scaling, TTL, or backpressure), so it does not clearly meet the queue-size constraint.

Key features / configurations:
- Azure Service Bus Queue FIFO: achieved via sessions (SessionId) with sessions enabled on the queue.
- Cost-optimized processing: Azure Functions (Service Bus trigger) or WebJobs/Container Apps jobs can scale to zero or near zero when idle; VMs generally cannot.
- Queue growth control: message TTL, dead-lettering, backpressure, and operational monitoring/alerts; Service Bus has namespace/queue limits, but you must design to stay under a target such as 80 GB.

Common misconceptions:
- Assuming a VM can be "triggered" like serverless compute; in practice, a VM must run a receiver continuously or be orchestrated externally, which adds cost and complexity.
- Assuming Service Bus queues are automatically FIFO without sessions; standard queues require sessions for strict FIFO per session.
- Assuming queue size limits are automatically enforced at an arbitrary threshold (80 GB) without explicit design and controls.

Exam tips:
- Prefer Azure Functions with a Service Bus trigger for bursty workloads to minimize cost.
- For strict FIFO in Service Bus, look for sessions (SessionId) enabled on the queue.
- If a requirement states a maximum backlog or size, consider TTL, scaling consumers, and monitoring/alerts; don't assume it happens automatically.
- VMs are rarely the cheapest option for event-driven, intermittent message processing.
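The per-session FIFO behavior described above can be sketched in plain Python. This is an illustrative model only, not the Azure Service Bus SDK: it shows how messages stamped with the same SessionId are kept in send order for a single receiver, while different sessions can be processed independently.

```python
from collections import defaultdict

def route_by_session(messages):
    """Group messages by session id, preserving send order within each session.

    A session-enabled Service Bus queue behaves similarly: one receiver locks
    a session and sees that session's messages strictly first-in-first-out.
    (Simplified model; real delivery involves session locks and renewals.)
    """
    sessions = defaultdict(list)
    for msg in messages:
        sessions[msg["session_id"]].append(msg["body"])
    return dict(sessions)

# Messages from two mobile clients, interleaved on the wire.
sent = [
    {"session_id": "device-A", "body": "A1"},
    {"session_id": "device-B", "body": "B1"},
    {"session_id": "device-A", "body": "A2"},
    {"session_id": "device-B", "body": "B2"},
]
print(route_by_session(sent))
# Within each session, A1 precedes A2 and B1 precedes B2.
```

In the real service, the producer would set SessionId on each message and the queue would be created with sessions enabled; the grouping above is what the broker guarantees per session.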

Question 2

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing an Azure Service application that processes queue data when it receives a message from a mobile application. Messages may not be sent to the service consistently. You have the following requirements:

- Queue size must not grow larger than 80 gigabytes (GB).
- Use first-in-first-out (FIFO) ordering of messages.
- Minimize Azure costs.

You need to implement the messaging solution.

Solution: Use the .Net API to add a message to an Azure Storage Queue from the mobile application. Create an Azure VM that is triggered from Azure Storage Queue events.

Does the solution meet the goal?

Answering 'Yes' is incorrect because the proposed architecture fails key requirements. Storage Queues cannot reliably guarantee FIFO ordering, so message processing order may not be preserved. A VM is not directly triggered by Storage Queue events without implementing a polling worker or adding an event-driven compute service, and the solution does not implement controls (such as throttling, TTL, or quotas) to prevent the queue from growing beyond 80 GB.

The solution does not meet the FIFO requirement because Azure Storage Queues do not provide strict first-in-first-out ordering guarantees, especially under retries and visibility timeout scenarios. It also proposes an Azure VM being “triggered” by Storage Queue events, but VMs are not natively event-driven for Storage Queue messages and typically require continuous polling or additional services, which can increase cost and complexity. Finally, the design does not include any mechanism to ensure the queue never exceeds 80 GB, so backlog could grow beyond the stated limit during periods of inconsistent message processing.

Question Analysis

Core concept: This question tests selecting the appropriate Azure messaging service and processing model to meet FIFO ordering, bounded queue growth, and cost minimization, especially under bursty or irregular message arrival.

Why the answer is No: The proposed solution uses Azure Storage Queues and a VM "triggered" by Storage Queue events. Storage Queues do not provide strict FIFO guarantees (ordering is best-effort and can be affected by retries and visibility timeouts), so the FIFO requirement is not met. Additionally, Azure VMs are not natively event-triggered by Storage Queue events; you typically need a polling worker (or an intermediary such as Functions/WebJobs), which increases operational complexity and can increase cost. Finally, nothing in the design enforces the 80-GB maximum queue size: Storage Queues can grow very large, and without explicit backpressure, TTL, or dead-lettering controls, the queue can exceed the limit.

Key features / configurations:
- Azure Storage Queues: best-effort ordering; no strict FIFO semantics; consumers typically poll.
- Event-driven processing: Azure Functions/WebJobs can trigger on Storage Queue messages, but VMs are not inherently event-triggered.
- Controlling growth: requires explicit mechanisms (message TTL, producer throttling, scaling consumers, or using a service with quotas/limits and dead-lettering).
- For strict FIFO: Azure Service Bus queues with sessions (SessionId) provide ordered processing within a session.

Common misconceptions:
- Assuming Storage Queues guarantee FIFO ordering; they do not guarantee strict FIFO in all conditions.
- Believing a VM can be directly "triggered" by Storage Queue events without a polling or triggering service.
- Assuming queue size limits are automatically enforced; most services require explicit design to prevent unbounded growth.

Exam tips:
- A FIFO requirement usually points to Azure Service Bus with sessions (or other ordered messaging patterns), not Storage Queues.
- If you see "triggered by queue events," think Azure Functions/WebJobs rather than VMs.
- Always verify how the solution enforces constraints such as maximum backlog or size (TTL, quotas, scaling, throttling).
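The way a failed delivery breaks strict ordering on a Storage-style queue can be simulated in a few lines. This is a plain-Python sketch, not the Azure SDK: a message whose first processing attempt fails becomes visible again only after its visibility timeout, by which time newer messages have already been completed.

```python
def process_with_retry(queue, fails_first_attempt):
    """Simulate a queue where a message whose handler fails reappears later.

    Models Storage Queue visibility-timeout behavior: the failed message
    returns to the queue *behind* messages that were already dequeued,
    so observed completion order is not strict FIFO. Illustrative only.
    """
    completed = []
    pending = list(queue)
    retried = set()
    while pending:
        msg = pending.pop(0)
        if msg in fails_first_attempt and msg not in retried:
            retried.add(msg)
            pending.append(msg)  # becomes visible again after the timeout
        else:
            completed.append(msg)
    return completed

# "m1" fails once, so it completes *after* m2 and m3: FIFO is not preserved.
print(process_with_retry(["m1", "m2", "m3"], {"m1"}))
```

This is exactly why a hard FIFO requirement on the exam points to Service Bus with sessions rather than Storage Queues.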

Question 3
(Select 2)

You are developing a solution that will use Azure messaging services. You need to ensure that the solution uses a publish-subscribe model and eliminates the need for constant polling. What are two possible ways to achieve the goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Service Bus supports publish-subscribe via Topics and Subscriptions. A message sent to a Topic can be copied to multiple Subscriptions, enabling fan-out. It provides durable enterprise messaging features such as dead-letter queues, duplicate detection, sessions for ordered processing, and transactions. With Azure Functions/Logic Apps triggers, consumers can process messages without implementing constant polling logic.

Event Hubs is optimized for high-throughput event streaming (telemetry ingestion) and uses partitions and consumer groups. While multiple consumer groups can read the same stream, it is not the typical pub-sub broker pattern used for business messaging, and consumption is generally pull-based reading from partitions. It’s best for streaming pipelines (Stream Analytics, Spark) rather than push-based pub-sub notifications.

Event Grid is a fully managed event routing service designed for event-driven architectures. It uses a publish-subscribe model where publishers emit events and Event Grid pushes them to subscribers (webhooks, Azure Functions, Logic Apps, etc.) with retries and optional dead-lettering. This push delivery model eliminates the need for consumers to poll for changes and is ideal for reactive integrations.

Queue (such as Azure Storage Queues) implements point-to-point messaging with competing consumers: each message is typically processed by a single consumer. It does not provide a native publish-subscribe fan-out model. While queues reduce tight coupling and can reduce polling when used with triggers, they don’t meet the pub-sub requirement where multiple subscribers receive the same message.

Question Analysis

Core concept: This question tests Azure messaging patterns, specifically the publish-subscribe (pub-sub) model and event-driven delivery that avoids constant polling. In Azure, pub-sub means publishers send messages or events to a broker, and multiple independent subscribers receive them via subscriptions or handlers.

Why the answers are correct: Azure Service Bus supports pub-sub through Topics and Subscriptions. A publisher sends a message to a Topic, and each Subscription receives its own copy, enabling fan-out to multiple consumers. Consumers can receive messages using push-like mechanisms (e.g., Service Bus-triggered Azure Functions) rather than polling in application code. Azure Event Grid is a native event routing service built for reactive architectures. It delivers events to subscribers (webhooks, Azure Functions, Logic Apps, Service Bus, etc.) with push delivery and retry, which eliminates the need for consumers to poll for changes.

Key features / best practices:
- Service Bus Topics/Subscriptions: durable messaging, at-least-once delivery, dead-lettering, sessions (ordering), duplicate detection, filters/actions on subscriptions, and transactions. Use when you need enterprise messaging guarantees and decoupling between services.
- Event Grid: push-based event distribution, built-in integration with many Azure sources (Storage, resource groups, etc.), filtering, advanced routing, and managed retries with dead-lettering to Storage. Use for event notification and reactive workflows.
- From an Azure Well-Architected Framework perspective, both improve Reliability and Performance Efficiency by decoupling producers and consumers and avoiding wasteful polling.

Common misconceptions:
- Event Hubs is often mistaken for pub-sub. It is primarily for high-throughput telemetry/stream ingestion with partitioned consumer groups; it is more "streaming" than classic pub-sub messaging and typically requires consumers to read from partitions (often perceived as polling/reading) rather than receive push delivery.
- A queue (e.g., Storage Queue) is point-to-point (competing consumers) rather than pub-sub; one message is processed by one consumer, not fanned out to multiple subscribers.

Exam tips:
- If the question says "publish-subscribe" and "multiple subscribers," think Service Bus Topics or Event Grid.
- If it emphasizes "event notifications" and "push to handlers," Event Grid is the best fit.
- If it emphasizes "enterprise messaging, workflows, ordering, dead-lettering, transactions," Service Bus is the best fit.
- If it emphasizes "telemetry, logs, millions of events/sec, streaming analytics," think Event Hubs.
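The fan-out vs. point-to-point distinction above can be made concrete with a toy model. This is a hedged sketch in plain Python (class and subscription names are made up, and no Azure SDK is involved): a topic copies every published message to each subscription, while a queue hands each message to exactly one consumer.

```python
class Topic:
    """Minimal pub-sub model: every subscription gets its own copy of each
    published message, as Service Bus topics and Event Grid do. Illustrative
    only; real brokers add filters, retries, and dead-lettering."""
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name):
        self.subscriptions[name] = []

    def publish(self, message):
        for inbox in self.subscriptions.values():
            inbox.append(message)  # fan-out: all subscribers receive it

class Queue:
    """Point-to-point model: competing consumers, each message is
    consumed exactly once."""
    def __init__(self):
        self.messages = []

    def publish(self, message):
        self.messages.append(message)

    def receive(self):
        return self.messages.pop(0)  # gone for every other consumer

topic = Topic()
topic.subscribe("billing")
topic.subscribe("audit")
topic.publish("agreement-completed")
print(topic.subscriptions)  # both subscriptions hold the message

q = Queue()
q.publish("agreement-completed")
print(q.receive())          # one consumer gets it...
print(q.messages)           # ...and the queue is now empty
```

This is why only Service Bus Topics and Event Grid satisfy the "multiple subscribers, no polling" requirement, while a plain queue does not.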

Question 4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing an Azure solution to collect point-of-sale (POS) device data from 2,000 stores located throughout the world. A single device can produce 2 megabytes (MB) of data every 24 hours. Each store location has one to five devices that send data.

You must store the device data in Azure Blob storage. Device data must be correlated based on a device identifier. Additional stores are expected to open in the future.

You need to implement a solution to receive the device data.

Solution: Provision an Azure Event Grid. Configure event filtering to evaluate the device identifier.

Does the solution meet the goal?

Answering 'Yes' is incorrect because it assumes Event Grid can directly receive and manage device telemetry ingestion. Event Grid can filter and route events, but it is not a device ingestion gateway and does not provide the connectivity, authentication, and ingestion semantics typically required for thousands of globally distributed devices. Additionally, filtering on a device identifier only determines routing; it does not inherently provide durable ingestion and correlation/storage organization. IoT Hub or Event Hubs are the services intended for this type of workload, with downstream persistence to Blob storage.

The solution does not meet the goal because Azure Event Grid is not designed to be the primary service to receive continuous device telemetry from thousands of devices. Event Grid is optimized for routing discrete events and triggering handlers, and it lacks core IoT ingestion capabilities such as per-device identity, device authentication, and telemetry-oriented ingestion patterns. Although Event Grid supports event filtering, filtering does not solve the requirement to reliably ingest and correlate device data at scale before storing it in Blob storage. A more appropriate approach is Azure IoT Hub (or Event Hubs) to ingest device data, then route/process it into Blob storage organized by device identifier.

Question Analysis

Core concept: This question tests choosing the correct Azure ingestion service for device telemetry at scale and understanding the difference between Event Grid (event notification/routing) and services designed for high-throughput device-to-cloud ingestion (IoT Hub/Event Hubs) before persisting to Blob storage.

Why the answer is No: Event Grid is primarily an event routing service for discrete events (e.g., "blob created", "resource changed") and is not intended to be the primary endpoint for continuous telemetry uploads from thousands of POS devices. While Event Grid supports filtering on event metadata, it does not provide the device connectivity, ingestion semantics, or telemetry-oriented features needed to reliably receive and buffer device data streams at scale. A more appropriate pattern is to ingest device messages via Azure IoT Hub (or Event Hubs for non-IoT scenarios), then use routing, Stream Analytics, or Functions to write to Blob Storage with partitioning by device identifier.

Key features / configurations:
- Azure Event Grid: event notification, push delivery to handlers, filtering on event subject/type/data fields, at-least-once delivery.
- Azure IoT Hub: per-device identity, device authentication, device-to-cloud telemetry ingestion, message routing to storage endpoints.
- Azure Event Hubs: high-throughput event ingestion with partitions and consumer groups; typically paired with processors to persist to Blob.
- Correlation by device identifier: use the IoT Hub device ID or message properties, and write to Blob paths such as deviceId=<id>/yyyy=<year>/mm=<month>/dd=<day>/ for efficient organization.

Common misconceptions:
- Assuming Event Grid is a general-purpose ingestion endpoint for arbitrary device payloads; it is mainly for routing events emitted by Azure services or custom publishers, not for direct device telemetry at scale.
- Confusing "filtering" with "correlation/storage partitioning"; filtering only decides where events go, it does not organize or persist data by device ID.
- Overlooking device management needs (identity, authentication, throttling) that IoT Hub provides.

Exam tips:
- Use Event Grid for reactive workflows and notifications (resource events, blob created, etc.), not for primary telemetry ingestion.
- For many devices sending data, prefer IoT Hub (device identity + telemetry) or Event Hubs (stream ingestion).
- If the requirement includes "correlate by device identifier," think message properties plus downstream partitioning or routing to storage.
- When the destination is Blob Storage, expect an intermediate ingestion service plus a processor (Functions/Stream Analytics) or built-in routing (IoT Hub routing).
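The device-ID/date partitioning convention mentioned above can be sketched as a small path builder. The layout (deviceId=/yyyy=/mm=/dd=) is one common convention for organizing Blob Storage, not a fixed Azure rule, and the device name below is made up.

```python
from datetime import datetime, timezone

def blob_path(device_id: str, when: datetime) -> str:
    """Build a Blob Storage path partitioned by device identifier and date,
    so data for one device can be located and scanned efficiently.
    Uses a Unix-timestamp filename to keep blob names unique per upload."""
    return (
        f"deviceId={device_id}/"
        f"yyyy={when.year:04d}/mm={when.month:02d}/dd={when.day:02d}/"
        f"{int(when.timestamp())}.json"
    )

ts = datetime(2024, 3, 7, 12, 0, tzinfo=timezone.utc)
print(blob_path("pos-0042", ts))
# deviceId=pos-0042/yyyy=2024/mm=03/dd=07/1709812800.json
```

An ingestion pipeline (IoT Hub routing, Stream Analytics, or a Function) would compute a path like this per message before writing the payload to Blob Storage.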

Question 5

You need to store the user agreements. Where should you store the agreement after it is completed?

Azure Storage queue is best for persisting a lightweight work item representing the completed agreement so downstream components can process it asynchronously. It’s durable, low cost, and integrates well with Azure Functions triggers. Use it to store agreement metadata (IDs, timestamps, blob URL), not the full signed document, due to message size limits and to keep processing decoupled and scalable.

Azure Event Hubs is optimized for high-throughput event streaming and telemetry ingestion (e.g., IoT, logs). It retains events for a configured time window and is consumed via partitions/consumer groups, not competing workers processing discrete tasks. For “store a completed agreement for later processing,” Event Hubs is usually the wrong abstraction unless you’re building a streaming analytics pipeline.

Azure Service Bus topic supports publish/subscribe with multiple subscriptions and advanced messaging features (sessions, transactions, dead-lettering, duplicate detection). It can work for agreement completion events if multiple independent systems must react. However, for simply storing a completed agreement work item for later processing, it’s typically more complex and costly than needed compared to Storage queues.

Azure Event Grid topic is for routing event notifications to handlers (Functions, WebHooks, Service Bus, etc.) with push delivery and event filtering. It’s not intended as a storage mechanism for completed agreements or as a work queue. Event Grid is appropriate when you need to broadcast “agreement completed” to multiple subscribers, but you’d still store the agreement elsewhere and/or enqueue work.

Question Analysis

Core concept: This question tests which Azure messaging/eventing service is appropriate to persist a completed "user agreement" work item for later processing. In AZ-204, the options map to different messaging patterns: queue-based work distribution (Storage queues), enterprise messaging (Service Bus), high-throughput telemetry streaming (Event Hubs), and event routing (Event Grid). None of these are long-term document stores like Blob Storage or Cosmos DB, so the intent is "store the completed agreement for asynchronous processing" rather than archival storage.

Why Azure Storage queue is correct: A Storage queue is the simplest, most cost-effective way to persist a small message representing a completed agreement (for example, an agreement ID, user ID, timestamp, and a pointer/URL to where the signed document is stored). It provides durable, at-least-once delivery and decouples the web/API layer from downstream processors (Functions, WebJobs, or worker services). This aligns with the Azure Well-Architected Framework's reliability and performance pillars by smoothing spikes and enabling independent scaling of producers and consumers.

Key features / best practices: Storage queues are highly available, support visibility timeouts and poison-message handling (via dequeue count), and can trigger Azure Functions. Keep messages small (up to ~64 KB) and store the actual agreement document elsewhere (commonly Blob Storage), placing only metadata and pointers in the queue. Use managed identity or SAS appropriately, and consider encryption at rest (on by default) and private endpoints for network isolation.

Common misconceptions: Event Grid and Event Hubs are often chosen because "agreement completed" sounds like an event. However, those services are for event distribution and streaming, not for durable work-item storage with competing consumers. Service Bus topics are powerful but typically overkill unless you need advanced enterprise features (sessions, transactions, duplicate detection, ordered delivery, multiple subscribers with independent subscriptions).

Exam tips: If the scenario is "do work later" or "buffer and process asynchronously," think queues (a Storage queue for simple, low-cost needs; a Service Bus queue or topic for advanced enterprise needs). If it is "notify many systems that something happened," think Event Grid. If it is "ingest massive streaming data," think Event Hubs.
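The "metadata plus pointer" pattern recommended above can be sketched as follows. Field names and the blob URL are hypothetical; the only hard constraint modeled is the roughly 64 KB Storage queue message limit, which is why the signed document itself stays in Blob Storage.

```python
import json

MAX_QUEUE_MESSAGE_BYTES = 64 * 1024  # Storage queue messages are limited to ~64 KB

def agreement_work_item(agreement_id: str, user_id: str, blob_url: str) -> str:
    """Build a small queue message holding only metadata plus a pointer to
    the signed document in Blob Storage (field names are illustrative)."""
    payload = json.dumps({
        "agreementId": agreement_id,
        "userId": user_id,
        "blobUrl": blob_url,  # the full document lives in Blob Storage
    })
    if len(payload.encode("utf-8")) > MAX_QUEUE_MESSAGE_BYTES:
        raise ValueError("message too large; enqueue a pointer, not the document")
    return payload

msg = agreement_work_item(
    "agr-123",
    "user-9",
    "https://acct.blob.core.windows.net/agreements/agr-123.pdf",
)
print(msg)
```

A real producer would pass this string to the Storage queue SDK's send operation, and a queue-triggered Azure Function would parse it and fetch the document from the blob URL.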


Question 6

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You develop a software as a service (SaaS) offering to manage photographs. Users upload photos to a web service which then stores the photos in Azure Storage Blob storage. The storage account type is General-purpose V2. When photos are uploaded, they must be processed to produce and save a mobile-friendly version of the image. The process to produce a mobile-friendly version of the image must start in less than one minute.

You need to design the process that starts the photo processing.

Solution: Trigger the photo processing from Blob storage events.

Does the solution meet the goal?

Yes. Blob storage events are intended for event-driven reactions to blob changes such as uploads, so they can start processing soon after a photo is stored. This approach is near real time and is appropriate when the requirement is to begin processing in less than one minute. It also avoids the latency and inefficiency of polling the storage account for new files.

No is incorrect because Blob storage events are specifically designed to notify downstream services quickly when blobs are created or updated. The requirement is only that the process start in less than one minute, not that the entire image transformation complete in that time. A polling or scheduled approach might fail this requirement, but an event-based trigger from Blob storage does meet it.

Question Analysis

Core concept: This question tests whether Azure Blob storage events can be used to start downstream processing quickly after a blob is uploaded. Blob-created events are emitted in near real time, making them suitable for event-driven image-processing workflows.

Why the answer is Yes: Triggering photo processing from Blob storage events meets the requirement because blob events notify subscribers shortly after uploads occur, typically well within one minute. This makes them appropriate for starting asynchronous processing such as generating mobile-friendly image versions.

Key features: Blob storage events are delivered through Event Grid, so applications can react automatically when new blobs are created. This avoids polling delays and reduces unnecessary compute usage; it is a common design for image-processing pipelines in Azure.

Common misconceptions: A common mistake is to choose polling mechanisms, scheduled jobs, or blob scans, which may not start processing fast enough. Another is assuming the question requires a specific compute service; it only asks how to start the process, and blob events satisfy that need.

Exam tips: When a requirement says processing must begin quickly after a storage change, prefer event-based triggers over polling. For Azure Storage blob uploads, Blob storage events are the standard near-real-time mechanism to initiate downstream work.
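A handler for this pattern would receive a Microsoft.Storage.BlobCreated event and extract the blob URL to start processing. The sketch below parses an event shaped like the Event Grid event schema for BlobCreated; the account, container, and blob names are invented for illustration, and real handlers would also deal with batching and subscription validation.

```python
def parse_blob_created(event):
    """Return the uploaded blob's URL from an Event Grid BlobCreated event,
    or None for other event types. Simplified: real Event Grid delivers
    events in batches and requires endpoint validation handshakes."""
    if event.get("eventType") != "Microsoft.Storage.BlobCreated":
        return None
    return event["data"]["url"]

# A sample event in the Event Grid event schema (values are made up).
sample_event = {
    "eventType": "Microsoft.Storage.BlobCreated",
    "subject": "/blobServices/default/containers/photos/blobs/cat.jpg",
    "data": {"url": "https://acct.blob.core.windows.net/photos/cat.jpg"},
}
print(parse_blob_created(sample_event))
```

In practice you would wire this up as an Azure Function with a Blob or Event Grid trigger, so the platform invokes your code within the one-minute window without any polling.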

Question 7

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing an Azure solution to collect point-of-sale (POS) device data from 2,000 stores located throughout the world. A single device can produce 2 megabytes (MB) of data every 24 hours. Each store location has one to five devices that send data.

You must store the device data in Azure Blob storage. Device data must be correlated based on a device identifier. Additional stores are expected to open in the future.

You need to implement a solution to receive the device data.

Solution: Provision an Azure Service Bus. Configure a topic to receive the device data by using a correlation filter.

Does the solution meet the goal?

Answering 'Yes' is incorrect because it overestimates Service Bus as a device telemetry ingestion service. Correlation filters can help route messages to subscriptions, but they do not inherently solve device-scale ingestion requirements such as secure per-device connectivity, throttling patterns, and IoT protocol support. Additionally, the scenario’s emphasis on many stores worldwide and future growth aligns more closely with IoT Hub or Event Hubs, which are designed for high fan-in event ingestion and easy landing to Blob storage (via routing or Capture). Thus, the proposed Service Bus topic approach does not fully satisfy the intended goal.

The solution does not meet the goal because Azure Service Bus topics are not the best fit for ingesting telemetry from thousands of globally distributed devices. Although correlation filters can route messages based on a device identifier property, Service Bus lacks IoT-specific capabilities such as per-device identity, device authentication, and telemetry-optimized ingestion patterns. In Azure, IoT Hub (or Event Hubs with Capture) is typically used to receive device data at scale and then persist it to Blob storage. Therefore, using Service Bus topics with correlation filters is not the appropriate ingestion design for this scenario.

Question Analysis

Core concept: This question tests choosing an appropriate Azure ingestion service for high-scale device telemetry that must be routed and correlated by device identifier before landing in Azure Blob storage.

Why the answer is No: Azure Service Bus topics with correlation filters are designed for enterprise messaging patterns (commands and events between applications) and subscription-based routing, not for large-scale device telemetry ingestion. While correlation filters can route messages based on properties (e.g., deviceId), Service Bus is not the recommended front door for thousands of globally distributed devices and does not natively provide device identity management, per-device connectivity patterns, or telemetry-optimized ingestion. For POS/IoT-style device data at global scale with future growth, Azure IoT Hub (or Event Hubs for pure streaming) is the typical ingestion layer; data is then persisted to Blob Storage via routing, Capture, or downstream processing.

Key features / configurations:
- Service Bus Topics/Subscriptions: pub-sub messaging, correlation filters on message properties, sessions for ordered processing.
- IoT Hub: per-device identity, device authentication, device-to-cloud telemetry, message routing to Blob/Storage endpoints.
- Event Hubs: high-throughput event ingestion; Event Hubs Capture can automatically write to Azure Blob Storage or ADLS.

Common misconceptions:
- Assuming "correlation filter" equals "device correlation at scale": it routes messages to subscriptions but does not provide IoT device management or telemetry ingestion optimizations.
- Using Service Bus as an IoT ingestion service: Service Bus is optimized for application messaging reliability and workflows, not massive device fan-in.
- Believing the Blob storage requirement implies Service Bus: storage is a sink; the key is choosing the right ingestion service.

Exam tips:
- Prefer IoT Hub when devices are involved and you need per-device identity/authentication and scalable device-to-cloud ingestion.
- Prefer Event Hubs for high-throughput streaming ingestion; use Capture to land data in Blob/ADLS.
- Use Service Bus for enterprise messaging (commands, workflows, decoupling services) rather than raw telemetry ingestion from devices.
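The scale in this scenario is worth bounding with quick arithmetic: total daily volume is small, so it is the connection fan-in (up to 10,000 devices), not throughput, that pushes the design toward IoT Hub or Event Hubs. A back-of-the-envelope check using only the numbers given in the question:

```python
# Figures from the scenario: 2,000 stores, 1-5 devices each, 2 MB/device/day.
stores = 2000
max_devices_per_store = 5
mb_per_device_per_day = 2

max_devices = stores * max_devices_per_store            # worst-case device count
daily_ingest_mb = max_devices * mb_per_device_per_day   # worst-case MB per day
daily_ingest_gb = daily_ingest_mb / 1024

print(max_devices, daily_ingest_mb, round(daily_ingest_gb, 1))
# At most ~20 GB/day in total: trivial throughput, but 10,000 authenticated
# device connections is an IoT Hub / Event Hubs workload, not Service Bus.
```

The same arithmetic applies to Questions 4 and 8, which reuse this scenario with different proposed services.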

Question 8

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are developing an Azure solution to collect point-of-sale (POS) device data from 2,000 stores located throughout the world. A single device can produce 2 megabytes (MB) of data every 24 hours. Each store location has one to five devices that send data. You must store the device data in Azure Blob storage. Device data must be correlated based on a device identifier. Additional stores are expected to open in the future. You need to implement a solution to receive the device data. Solution: Provision an Azure Notification Hub. Register all devices with the hub. Does the solution meet the goal?

Answering 'Yes' would imply that Notification Hubs can receive and store device telemetry, but its primary function is outbound push notification delivery, not inbound data collection. Device registration in Notification Hubs is for targeting notifications (tags/templates), not for authenticating devices for telemetry ingestion or correlating incoming data by device ID. It also does not provide direct integration to write incoming device data to Azure Blob Storage. As a result, it does not satisfy the ingestion and storage requirements described.

Notification Hubs is not a data ingestion service; it is a push notification broker used to send notifications to registered devices via APNs/FCM/WNS. The scenario requires receiving POS device data and storing it in Azure Blob Storage with correlation by device identifier, which Notification Hubs does not natively support. It provides no built-in mechanism to accept telemetry streams from devices and persist them to Blob Storage. Therefore, this solution does not meet the goal.

Question Analysis

Core concept: This question tests selecting the correct Azure ingestion service/pattern to receive telemetry-like data from thousands of globally distributed POS devices and land it in Azure Blob Storage with device-based correlation and future scale.

Why the answer is correct: Azure Notification Hubs is designed for sending push notifications to mobile devices (APNs/FCM/WNS), not for ingesting device-generated data payloads for storage and analytics. Registering devices with a Notification Hub enables targeting notifications to devices/users, but it does not provide a scalable telemetry ingestion endpoint, message routing to Blob Storage, or device-identifier-based correlation for incoming data. For IoT/telemetry ingestion into Blob Storage, services like Azure IoT Hub (with routing to storage), Azure Event Hubs (with Capture to Blob), or Azure Service Bus (with downstream processing) are appropriate.

Key features / configurations:
- Azure Notification Hubs: device registration, tags, templates; outbound push notification delivery to platforms (APNs/FCM/WNS).
- Appropriate ingestion alternatives:
  - Azure IoT Hub: per-device identity, authentication, device-to-cloud messaging, message routing to Blob Storage.
  - Azure Event Hubs: high-throughput ingestion; Event Hubs Capture writes directly to Azure Blob Storage/ADLS.
  - Stream processing (optional): Azure Functions/Stream Analytics to enrich/correlate and write to Blob.

Common misconceptions:
- Confusing “hub” services: Notification Hubs (push notifications) vs Event Hubs (telemetry streaming) vs IoT Hub (device management + telemetry).
- Assuming device registration in Notification Hubs implies it can accept arbitrary device telemetry and persist it to storage.

Exam tips:
- Notification Hubs = outbound push notifications, not telemetry ingestion.
- For device telemetry at scale, think IoT Hub (device identity + routing) or Event Hubs (stream ingestion + Capture).
- If the requirement includes per-device correlation/identity, IoT Hub is often the best fit.
- If the requirement is primarily high-throughput ingestion to Blob, Event Hubs Capture is a common answer.

Question 9

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop a software as a service (SaaS) offering to manage photographs. Users upload photos to a web service which then stores the photos in Azure Storage Blob storage. The storage account type is General-purpose V2. When photos are uploaded, they must be processed to produce and save a mobile-friendly version of the image. The process to produce a mobile-friendly version of the image must start in less than one minute. You need to design the process that starts the photo processing. Solution: Convert the Azure Storage account to a BlockBlobStorage storage account. Does the solution meet the goal?

Answering "Yes" is incorrect because converting the storage account type does not create an event-driven pipeline or guarantee that processing begins within one minute. While Premium BlockBlobStorage can improve blob read/write performance, it does not provide a mechanism to detect uploads and invoke processing logic. The missing component is an event/trigger service (such as Event Grid) and a compute target (such as Azure Functions) to start the processing. As a result, the goal of starting processing quickly is not satisfied by this change alone.

Changing from a GPv2 storage account to a BlockBlobStorage account does not implement any trigger to start image processing after an upload. The requirement is about initiating processing within one minute, which is achieved by wiring blob creation events to compute (for example, Event Grid triggering an Azure Function). BlockBlobStorage mainly changes the performance tier and characteristics for blob workloads, not the eventing or orchestration behavior. Therefore, the solution does not meet the goal because it does not address how processing is started.

Question Analysis

Core concept: This question tests how to trigger near-real-time processing when blobs are uploaded to Azure Storage, and whether changing the storage account type affects event-driven processing latency.

Why the answer is correct: Converting a General-purpose v2 (GPv2) storage account to a BlockBlobStorage (Premium) account does not, by itself, create or accelerate an event trigger to start image processing within one minute. The requirement is about initiating a processing workflow quickly after an upload, which is typically achieved using eventing (Azure Event Grid) or messaging (Storage queues/Service Bus) plus compute (Azure Functions/WebJobs). Storage account performance tier/type may improve throughput/latency for blob operations, but it does not provide an automatic “start processing” mechanism.

Key features / configurations:
- Azure Event Grid + BlobCreated events to trigger Azure Functions (near real-time, typically seconds).
- Azure Functions Blob trigger (polling-based; can have delays depending on plan/runtime and is not as deterministic as Event Grid).
- Storage queues or Service Bus to decouple upload from processing and ensure reliable processing.
- GPv2 supports Event Grid integration; no need to change account type for eventing.

Common misconceptions:
- Assuming a Premium/BlockBlobStorage account type automatically triggers workflows or reduces trigger latency.
- Confusing storage performance characteristics (IOPS/throughput) with event notification/trigger mechanisms.
- Believing that changing account type is required to use Event Grid; GPv2 already supports it.

Exam tips:
- Use Event Grid for fast, event-driven blob processing initiation (BlobCreated → Function/Logic App).
- Storage account type changes affect performance/cost, not workflow triggering.
- For “start within X time” requirements, prefer push-based events (Event Grid) over polling triggers when possible.
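The push-based path mentioned above delivers BlobCreated events as a JSON batch, and the triggered compute only has to inspect the payload to find the blobs to process. A stdlib sketch of that handler logic, assuming events follow the Event Grid Storage event shape (`eventType`, `data.url`); the `blobs_to_process` helper is hypothetical:

```python
import json

# Event Grid delivers events in batches; Blob storage upload events
# carry eventType "Microsoft.Storage.BlobCreated" and the blob URL in
# data.url. A handler filters the batch down to the blobs it must process.
def blobs_to_process(event_batch: str) -> list:
    """Return blob URLs from BlobCreated events in an Event Grid batch."""
    events = json.loads(event_batch)
    return [
        e["data"]["url"]
        for e in events
        if e.get("eventType") == "Microsoft.Storage.BlobCreated"
    ]

batch = json.dumps([
    {"eventType": "Microsoft.Storage.BlobCreated",
     "data": {"url": "https://acct.blob.core.windows.net/photos/p1.jpg"}},
    {"eventType": "Microsoft.Storage.BlobDeleted",
     "data": {"url": "https://acct.blob.core.windows.net/photos/p0.jpg"}},
])
print(blobs_to_process(batch))
```

Because Event Grid pushes the batch to the function within seconds of the upload, this pattern comfortably meets "start within one minute" requirements without any storage account type change.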

Question 10

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop a software as a service (SaaS) offering to manage photographs. Users upload photos to a web service which then stores the photos in Azure Storage Blob storage. The storage account type is General-purpose V2. When photos are uploaded, they must be processed to produce and save a mobile-friendly version of the image. The process to produce a mobile-friendly version of the image must start in less than one minute. You need to design the process that starts the photo processing. Solution: Move photo processing to an Azure Function triggered from the blob upload. Does the solution meet the goal?

Yes. An Azure Function with a Blob trigger is designed to run automatically when a blob is added or updated in Azure Storage, which fits the photo-upload scenario directly. This allows the image-processing code to start without requiring a separate polling service or custom infrastructure. In typical AZ-204 exam context, Blob-triggered Functions are considered fast enough to begin processing within less than one minute after upload. It is also a common serverless pattern for generating thumbnails or mobile-friendly image variants.

Answering "No" is incorrect because the proposed design does meet the stated goal in the context of this exam. The requirement is to start processing shortly after upload, and Blob-triggered Azure Functions are specifically intended for that kind of storage-driven automation. Although there can be small trigger latency in real environments, the solution is still considered valid for sub-minute initiation. Therefore rejecting the solution would not align with the expected Azure design pattern.

Question Analysis

Core concept: This scenario tests whether an Azure Function with a Blob storage trigger is an appropriate event-driven mechanism to start image processing soon after a blob is uploaded to Azure Blob Storage. Blob-triggered Azure Functions are commonly used to react to new files in storage and perform serverless processing such as image resizing.

Why correct: A Blob-triggered Azure Function can automatically start when a new photo is uploaded to Blob storage, making it a suitable way to launch the mobile-image generation workflow. For AZ-204 exam purposes, this is considered an appropriate near-real-time solution and satisfies the requirement that processing begin in less than one minute.

Key features: Azure Functions provides serverless execution, automatic scaling, and native Blob Storage integration through Blob triggers. This reduces infrastructure management and allows the application to process uploaded images as they arrive. It is a standard pattern for storage-driven workloads like thumbnail or mobile-friendly image generation.

Common misconceptions: A Blob trigger is not literally invoked synchronously by the upload request; it reacts to blob changes after the blob is written. There can be slight latency, but in exam scenarios it is generally treated as sufficiently fast for requirements like starting processing within a minute. Some candidates confuse this with Event Grid-based triggers, which are a different integration pattern.

Exam tips: When a question asks to process files automatically after they are uploaded to Blob storage, Azure Functions with a Blob trigger is a strong default answer. If the requirement is near-real-time serverless processing without managing infrastructure, Blob triggers are usually acceptable. Distinguish Blob triggers from queue-based decoupling and Event Grid-based event routing.
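One practical detail of this pattern: the function should write the mobile-friendly variant to a different container so its own output does not re-trigger it. A stdlib sketch of that naming convention (the container names and the `mobile_blob_path` helper are hypothetical; the actual trigger wiring lives in the Function's binding configuration):

```python
def mobile_blob_path(input_path: str, out_container: str = "photos-mobile") -> str:
    """Map an uploaded blob path to the path of its mobile-friendly copy.

    Writing output to a separate container avoids re-triggering the
    blob-triggered function with the images it just produced.
    """
    container, _, name = input_path.partition("/")
    if not name:
        raise ValueError("expected '<container>/<blob name>'")
    return f"{out_container}/{name}"

print(mobile_blob_path("photos/2024/beach.jpg"))
```

The same convention works whether the function is started by a Blob trigger or by an Event Grid BlobCreated event.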

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 11

You are developing an application that uses Azure Blob storage. The application must read the transaction logs of all the changes that occur to the blobs and the blob metadata in the storage account for auditing purposes. The changes must be in the order in which they occurred, include only create, update, delete, and copy operations and be retained for compliance reasons. You need to process the transaction logs asynchronously. What should you do?

Event Grid + Azure Functions is good for asynchronous processing and near-real-time reactions to blob events. However, it is not an authoritative transaction log for compliance: delivery is at-least-once, events can be duplicated, and ordering is not guaranteed across all events. Retention is also not inherent; you would need to persist events yourself, and you could still miss events if misconfigured.

Enabling Blob Storage Change Feed provides an immutable, append-only log of blob and blob metadata changes, including create, update, delete, and copy operations. It is designed for auditing and compliance scenarios and supports asynchronous processing by reading the feed files. It also preserves the sequence of changes as recorded in the feed, making it the best match for ordered transaction log requirements.

Storage Analytics logging (classic) captures request-level logs but is a legacy approach and not optimized for the requirement of an ordered, concise change log specifically for blob and metadata changes. It can be noisy (many operation types), requires parsing, and is not the recommended modern solution for change tracking/auditing compared to Blob Change Feed and Azure Monitor diagnostic settings.

The Azure Monitor HTTP Data Collector API is used to send custom logs into Log Analytics. It does not provide native access to storage transaction history. You would still need a source of truth for blob operations, and scanning request bodies is not a supported or reliable way to reconstruct ordered create/update/delete/copy changes for blobs and metadata.

Question Analysis

Core concept: This question tests Azure Blob Storage Change Feed, which provides an immutable, ordered log of changes to blobs and blob metadata. It is designed for auditing, compliance, and downstream processing scenarios where you need a durable record of create/update/delete/copy operations.

Why the answer is correct: The requirement is to read transaction logs of all changes to blobs and blob metadata, in the order they occurred, limited to create, update, delete, and copy, retained for compliance, and processed asynchronously. Blob Change Feed is purpose-built for this: it records changes as append-only log files stored in the storage account, preserves ordering within the feed, and includes exactly the relevant change types (create, update, delete, and copy). Because the feed is stored in Blob Storage, you can process it asynchronously using batch jobs, Functions, Databricks/Synapse, or custom workers, and you can retain it according to compliance needs using storage lifecycle management and immutability policies.

Key features / configuration notes:
- Enable Change Feed at the storage account level.
- Consume the feed via SDKs/APIs that read change feed segments and events.
- Retention/compliance: use lifecycle management to retain for required duration, and consider immutable blob policies (WORM) if regulatory requirements demand tamper resistance.
- Aligns with Azure Well-Architected Framework (Reliability and Security): durable, replayable log; decoupled asynchronous processing.

Common misconceptions:
- Event Grid is excellent for near-real-time notifications, but it is not a compliance-grade, complete, ordered transaction log. Events can be delivered at-least-once, may arrive out of order, and are not intended as an authoritative audit trail.
- Storage Analytics logs are legacy and not as targeted for ordered change tracking of blob metadata changes.
- Azure Monitor HTTP Data Collector is for custom log ingestion, not for extracting authoritative storage transaction history.

Exam tips:
- If you see “ordered log of blob changes,” “auditing/compliance,” and “create/update/delete/copy,” think “Blob Change Feed.”
- If you see “react to events” or “trigger serverless processing,” think “Event Grid,” but not for strict ordered audit logs.
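Consuming the change feed is sequential reading of event records in the order the feed stored them. A stdlib sketch of an auditor that keeps only the change types it cares about while preserving feed order; the event dicts and the `AUDITED` set are illustrative stand-ins for the typed events the real SDK yields (event type names are assumptions):

```python
# Hypothetical subset of change-feed event types an auditor records.
AUDITED = {"BlobCreated", "BlobDeleted", "BlobPropertiesUpdated"}

def audit_events(feed):
    """Yield (event_time, event_type, subject) tuples for audited change
    types, preserving the order in which the feed recorded them."""
    for event in feed:
        if event["eventType"] in AUDITED:
            yield (event["eventTime"], event["eventType"], event["subject"])

feed = [
    {"eventTime": "2024-05-01T10:00:00Z", "eventType": "BlobCreated",
     "subject": "/blobServices/default/containers/docs/blobs/a.txt"},
    {"eventTime": "2024-05-01T10:05:00Z", "eventType": "BlobTierChanged",
     "subject": "/blobServices/default/containers/docs/blobs/a.txt"},
    {"eventTime": "2024-05-01T10:10:00Z", "eventType": "BlobDeleted",
     "subject": "/blobServices/default/containers/docs/blobs/a.txt"},
]
print([e[1] for e in audit_events(feed)])
```

Because the feed files themselves live in Blob Storage, a worker like this can run on any schedule, which is what makes the processing asynchronous.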

Question 12

You need to support the requirements for the Shipping Logic App. What should you use?

Azure AD Application Proxy is used to publish on-premises web applications to external users securely via Azure AD (pre-authentication, conditional access). It is a user-to-app remote access solution, not an integration mechanism for Logic Apps connectors to reach on-premises databases or internal systems. It would not typically enable Logic Apps to access on-prem resources like SQL Server or file shares through managed connectors.

A Site-to-Site VPN connects an on-premises network to an Azure VNet, enabling private IP connectivity for many resources. However, standard Logic Apps (Consumption) commonly relies on connector patterns rather than directly routing over your VPN to on-prem endpoints. Unless the scenario explicitly states VNet-integrated Logic Apps/ISE and private routing requirements, S2S VPN is usually not the expected answer for connector-based on-prem access.

On-premises Data Gateway is the standard component that allows Azure Logic Apps (and Power Platform) to securely access on-premises data sources using supported connectors. The gateway runs inside the on-prem network and establishes an outbound, encrypted connection to Azure, avoiding inbound firewall exposure. It supports clustering for HA and is the most common exam answer when Logic Apps must connect to on-prem systems.

Point-to-Site VPN is designed for individual client devices to connect to an Azure VNet (remote user access). It is not intended to provide persistent, scalable connectivity for an on-premises datacenter or to enable server-side services like Logic Apps to reach on-prem resources. For organization-wide hybrid connectivity, S2S VPN or ExpressRoute is used, not P2S.

Question Analysis

Core concept: This question tests how Azure Logic Apps securely connects to on-premises resources (such as on-prem SQL Server, file shares, SAP, or internal HTTP endpoints) without exposing those resources to the public internet. The key integration component is the On-premises Data Gateway, which provides a secure outbound connection from the on-prem network to Azure.

Why the answer is correct: For a “Shipping Logic App” scenario, the typical requirement is that the Logic App must call into on-premises systems (ERP/WMS/shipping label system, on-prem database, or internal APIs) to retrieve order/shipping data or write back shipment status. Logic Apps uses managed connectors; when the target system is on-premises and not publicly reachable, the supported and recommended approach is to install and register an On-premises Data Gateway. The gateway initiates an outbound connection to Azure Service Bus, so you usually do not need inbound firewall openings, aligning with least exposure and Azure Well-Architected security principles.

Key features / configuration notes:
- Supports Logic Apps connectors that require the gateway (commonly SQL Server, File System, SAP, and some custom scenarios).
- Runs on a Windows Server in the on-prem network; can be configured in a cluster for high availability.
- Uses Azure AD for authentication/registration and encrypts traffic; connectivity is outbound over HTTPS.
- Operationally, you monitor gateway health and ensure the gateway machine can reach required Azure endpoints.

Common misconceptions: VPN options (S2S/P2S) provide network connectivity, but Logic Apps connectors generally do not “just use” your private routing unless you are using specific patterns (e.g., ISE/Standard with VNet integration and private endpoints) and the target is reachable via that network path. In many exam scenarios, the simplest and most canonical requirement for Logic Apps to reach on-premises data is the gateway, not a VPN.

Exam tips:
- If the question mentions Logic Apps accessing on-premises data sources, think “On-premises Data Gateway.”
- Choose VPN when the requirement is broader private network connectivity between Azure VNets and on-prem networks for multiple workloads, not specifically connector-based access.
- Azure AD Application Proxy is for publishing on-prem web apps to external users, not for Logic Apps connector access.

Question 13

You need to secure the Shipping Logic App. What should you use?

Azure App Service Environment (ASE) provides an isolated and dedicated environment for hosting Azure App Service resources (Web Apps, API Apps, and some Function App scenarios) inside a VNet. However, it is not the standard mechanism to secure or host Azure Logic Apps. Choosing ASE is a common mistake when candidates equate “secure hosting” with “ASE,” but Logic Apps use ISE for VNet injection.

Integration Service Environment (ISE) is the correct choice because it is the dedicated, single-tenant Logic Apps runtime deployed into your VNet. It enables private network access, tighter inbound/outbound control, and secure connectivity to VNet and on-prem resources. For exam wording like “secure the Logic App” or “run Logic Apps in a VNet,” ISE is the canonical solution.

A VNet service endpoint extends a VNet’s identity to supported Azure PaaS services (such as Azure Storage or Azure SQL) so traffic stays on the Azure backbone and access can be restricted to that VNet. It does not secure the Logic App endpoint itself or place the Logic Apps runtime into a VNet. It’s useful for securing dependencies, not the Logic App hosting plane.

Azure AD B2B integration is used to collaborate with external identities (guest users) and manage access to applications using Azure AD. It addresses authentication/authorization for users, not network isolation or securing the Logic App runtime. While identity controls are important, B2B does not provide the private VNet deployment and traffic control typically implied by “secure the Shipping Logic App.”

Question Analysis

Core concept: This question tests how to secure an Azure Logic App by isolating it from the public internet and enabling private network access. For Logic Apps (especially Logic Apps (Consumption)), the primary way to run workflows in a dedicated, network-isolated environment is the Integration Service Environment (ISE).

Why the answer is correct: An Integration Service Environment (ISE) is a dedicated, single-tenant deployment of the Logic Apps runtime that is injected into your Azure virtual network (VNet). This allows the Shipping Logic App to be accessed privately and to reach resources in the VNet (or connected networks) without traversing the public internet. In exam scenarios, “secure the Logic App” commonly implies network isolation, private endpoints/VNet integration, and controlling inbound/outbound traffic; ISE is the Logic Apps-specific solution designed for that.

Key features / best practices:
- Network isolation: ISE deploys into your VNet, enabling private IP addressing and tighter control of traffic paths.
- Enterprise connectivity: works well with VNet-connected resources (SQL, SAP, on-prem via VPN/ExpressRoute) and supports integration account artifacts.
- Governance and compliance: single-tenant isolation helps meet stricter compliance requirements and aligns with the Azure Well-Architected Framework security pillar (network segmentation, least exposure).
- Predictable performance: dedicated capacity (priced differently than Consumption) can be important for mission-critical shipping workflows.

Common misconceptions:
- App Service Environment (ASE) is for hosting App Service apps (web apps, APIs, functions in some cases), not Logic Apps runtime. It won’t “move” a Logic App into a private environment.
- VNet service endpoints secure access from a VNet to certain Azure PaaS services (e.g., Storage, SQL) but do not place Logic Apps into a VNet or secure the Logic App’s inbound endpoint.
- Azure AD B2B is identity collaboration for external users; it doesn’t provide network isolation for Logic Apps.

Exam tips:
- If the question is specifically about securing/isolating a Logic App with VNet-level control, think ISE (Logic Apps-specific).
- If the question is about securing access to a downstream service (Storage/SQL) from a VNet, think service endpoints or private endpoints.
- Always map the service to the correct isolation construct: ASE = App Service; ISE = Logic Apps.

Question 14

You develop Azure solutions. You must connect to a No-SQL globally-distributed database by using the .NET API. You need to create an object to configure and execute requests in the database. Which code segment should you use?

Incorrect. Container is not instantiated directly with EndpointUri and PrimaryKey in the Azure Cosmos DB .NET SDK v3+. A Container object is obtained from a CosmosClient (e.g., cosmosClient.GetContainer(databaseId, containerId)). The client handles authentication, connection management, retries, and other request pipeline behaviors, which a Container constructor does not provide in this way.

Incorrect. Database is not created by calling a constructor with EndpointUri and PrimaryKey in the Cosmos DB .NET SDK v3+. You first create a CosmosClient, then get a Database reference (e.g., cosmosClient.GetDatabase(databaseId)) or create it via CreateDatabaseIfNotExistsAsync. Credentials and endpoint configuration belong to CosmosClient, not Database.

Correct. CosmosClient is the primary .NET SDK object used to configure connectivity to Azure Cosmos DB and execute requests. It is created with the account endpoint and key (or other credentials) and is intended to be reused for the lifetime of the application. From CosmosClient you obtain Database and Container references and perform CRUD operations.

Question Analysis

Core Concept: This question tests how to connect to Azure Cosmos DB (a globally distributed NoSQL database) using the Azure Cosmos DB .NET SDK (v3+). In this SDK, the primary entry point for configuring and executing requests is the CosmosClient object, which manages connectivity, authentication, retries, and efficient resource usage.

Why the Answer is Correct: CosmosClient is the top-level client used to interact with Cosmos DB accounts. You create it with the account endpoint URI and a key (or other credential), and then use it to obtain references to Database and Container objects (e.g., GetDatabase, GetContainer). CosmosClient is designed to be long-lived and reused across the application lifetime, enabling connection management and performance optimizations. Therefore, new CosmosClient(EndpointUri, PrimaryKey) is the correct code segment to configure and execute requests.

Key Features / Best Practices:
- Reuse CosmosClient as a singleton (or one per app) to avoid socket exhaustion and improve performance.
- Configure CosmosClientOptions for consistency, preferred regions, connection mode (Direct/Gateway), retries, and diagnostics.
- Use Azure AD (DefaultAzureCredential) where possible instead of keys for improved security posture.
- Cosmos DB’s global distribution and multi-region reads/writes are handled at the account level; CosmosClient can be configured with ApplicationPreferredRegions to optimize latency.

These align with Azure Well-Architected Framework pillars: Performance Efficiency (reuse client, preferred regions), Reliability (retries, multi-region), and Security (AAD over keys).

Common Misconceptions: Developers sometimes assume they can directly instantiate Database or Container with endpoint/key. In the Cosmos DB .NET SDK, Database and Container are logical resource references obtained from CosmosClient; they are not constructed directly with credentials. Another confusion is mixing older SDK patterns (DocumentClient) with the modern CosmosClient approach.

Exam Tips: For AZ-204, remember the object model: CosmosClient (account-level) -> Database -> Container -> Items. If a question asks for the object that configures connectivity and executes requests, it’s almost always CosmosClient. Also remember the guidance: create once, reuse many times, and prefer managed identity/AAD when feasible.
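The "create once, reuse many times" guidance can be sketched with a cached factory. The class below is a stand-in that only mimics the shape of the SDK's CosmosClient -> Database chain (the real .NET / azure-cosmos types are not imported here); the point is that every caller shares one client instance:

```python
from functools import lru_cache

class FakeCosmosClient:
    """Stand-in for the SDK's CosmosClient: costly to construct, cheap to reuse."""
    instances = 0  # counts how many clients were actually built

    def __init__(self, endpoint: str, key: str):
        FakeCosmosClient.instances += 1
        self.endpoint = endpoint

    def get_database(self, name: str):
        # Mirrors cosmosClient.GetDatabase(databaseId): a logical reference,
        # no credentials needed at this level.
        return (self, "db", name)

@lru_cache(maxsize=1)
def get_client() -> FakeCosmosClient:
    """App-wide singleton factory; credentials live only here."""
    return FakeCosmosClient("https://acct.documents.azure.com:443/", "key")

a, b = get_client(), get_client()
print(a is b, FakeCosmosClient.instances)
```

In a real app the same effect is achieved with dependency-injection singletons (.NET) or a module-level client (Python); the observable property is identical: repeated lookups return the same instance and the constructor runs once.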

Question 15

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop Azure solutions. You must grant a virtual machine (VM) access to specific resource groups in Azure Resource Manager. You need to obtain an Azure Resource Manager access token. Solution: Use an X.509 certificate to authenticate the VM with Azure Resource Manager. Does the solution meet the goal?

Answering "Yes" is incorrect because authenticating a VM to Azure Resource Manager using an X.509 certificate is not the expected VM workload identity pattern. While certificates can be used as credentials for a service principal (app registration), that would require creating and managing an Azure AD application, installing and protecting the private key on the VM, and handling rotation/expiry. In contrast, managed identities allow the VM to obtain tokens via IMDS without storing secrets and then use RBAC assignments scoped to the required resource groups. Therefore, the certificate-based solution is not considered to meet the goal in this context.

An X.509 certificate is not the standard mechanism for an Azure VM to obtain an ARM access token in a secure, credential-free way. The recommended approach is to enable a managed identity on the VM and request an OAuth 2.0 token from the Azure Instance Metadata Service, then use Azure RBAC to grant that identity access to specific resource groups. Certificate-based authentication generally applies to Azure AD app registrations (service principals) and requires managing the certificate lifecycle and private key on the VM. Because the scenario is specifically about granting a VM access to resource groups and obtaining an ARM token, managed identity is the intended solution, so the certificate approach does not meet the goal.

Question Analysis

Core concept: This question tests how to obtain an Azure Resource Manager (ARM) access token for a workload running on an Azure VM, and how to grant that workload scoped permissions (for example, to specific resource groups) using Azure AD identities and Azure RBAC.

Why the answer is correct: Using an X.509 certificate by itself is not the recommended or intended way for an Azure VM to authenticate to ARM to obtain tokens. For VM-to-ARM access, the standard approach is to use a managed identity (system-assigned or user-assigned), which can request tokens from the Azure Instance Metadata Service (IMDS) without storing secrets or certificates on the VM. You then grant that managed identity Azure RBAC role assignments scoped to the required resource groups. Therefore, the proposed certificate-based approach does not meet the goal as stated for this scenario.

Key features / configurations:
- Managed identities for Azure resources (system-assigned or user-assigned) for workload identity on VMs.
- Azure Instance Metadata Service (IMDS) token endpoint: http://169.254.169.254/metadata/identity/oauth2/token.
- Azure RBAC role assignments scoped at the resource group level (least privilege).
- Azure AD app registrations and certificate credentials are typically used for service principals, not as the primary VM workload identity pattern.

Common misconceptions:
- Assuming certificates are the default/required method for non-interactive authentication from Azure compute to ARM.
- Confusing “service principal with certificate” (app registration credential) with “VM identity” (managed identity) and how tokens are obtained.
- Overlooking that the goal includes granting access to specific resource groups, which is most cleanly done via RBAC assignments to a managed identity.

Exam tips:
- Prefer managed identity for Azure resources when an Azure VM needs to call ARM or other Azure services.
- Use RBAC scope (subscription/resource group/resource) to limit permissions; assign roles to the managed identity.
- Certificates are commonly used with Azure AD app registrations (service principals), but they introduce credential management on the VM.
- For ARM tokens from a VM, remember that IMDS is the typical token acquisition mechanism.
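To make the resource-group-level scoping concrete: RBAC role assignments target an ARM scope string that follows a fixed hierarchy. The `/subscriptions/{id}/resourceGroups/{name}` format is the standard ARM scope path; the helper function and the placeholder IDs below are purely illustrative.

```python
def resource_group_scope(subscription_id: str, resource_group: str) -> str:
    """Build an Azure RBAC scope string for a resource group.

    Scopes follow the ARM hierarchy:
      /subscriptions/{id}                      -> subscription-wide
      /subscriptions/{id}/resourceGroups/{rg}  -> resource group (least privilege here)
    """
    return f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"

# Example scope for a role assignment that grants a managed identity
# access to one specific resource group (placeholder IDs):
scope = resource_group_scope("00000000-0000-0000-0000-000000000000", "rg-app")
print(scope)
# → /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-app
```

Assigning a role at this scope (rather than at the subscription level) is what limits the managed identity to the specific resource groups the scenario requires.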

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 16

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop Azure solutions. You must grant a virtual machine (VM) access to specific resource groups in Azure Resource Manager. You need to obtain an Azure Resource Manager access token. Solution: Use the Reader role-based access control (RBAC) role to authenticate the VM with Azure Resource Manager. Does the solution meet the goal?

Answering Yes is incorrect because assigning the Reader role does not provide an authentication mechanism or a token acquisition method. A VM needs an Azure AD-backed identity (managed identity or service principal) to request an OAuth 2.0 access token, and only then does RBAC determine whether ARM operations are allowed. Reader merely grants read permissions once authenticated; it does not enable token retrieval. As a result, the solution does not satisfy the requirement to obtain an ARM access token.

The Reader RBAC role cannot be used to authenticate a VM or to obtain an Azure Resource Manager access token. RBAC roles are evaluated by ARM after an identity presents a valid Azure AD token; they do not generate or provide that token. To get an ARM token, the VM must use an Azure AD identity (commonly a managed identity) and request a token for the ARM audience (`https://management.azure.com/`) via IMDS or an Azure AD OAuth flow. Therefore, the proposed solution does not meet the goal.

Question Analysis

Core concept: This question tests how Azure Resource Manager (ARM) authentication and authorization work for a VM, specifically the difference between RBAC roles (authorization) and obtaining an ARM access token (authentication via Azure AD).

Why the answer is correct: The Reader RBAC role does not authenticate a VM or issue tokens; it only defines what an already-authenticated identity is allowed to do. To obtain an Azure Resource Manager access token, the VM must use an Azure AD identity (typically a managed identity) and request a token for the ARM resource (audience) `https://management.azure.com/`. RBAC is then used to grant that identity access to specific resource groups, but it is not the mechanism that produces the token. Therefore, using the Reader role “to authenticate the VM” does not meet the goal of obtaining an ARM access token.

Key features / configurations:
- Managed identities for Azure resources (system-assigned or user-assigned) to provide an Azure AD identity to the VM
- Token acquisition from the Azure Instance Metadata Service (IMDS): `http://169.254.169.254/metadata/identity/oauth2/token`
- Requesting a token for ARM using resource/audience `https://management.azure.com/`
- Azure RBAC role assignment (e.g., Reader/Contributor) scoped to specific resource groups for authorization after token issuance

Common misconceptions:
- Confusing RBAC roles with authentication: RBAC controls permissions, not identity verification or token issuance.
- Assuming assigning a role to a VM “enables” token retrieval: token retrieval requires an Azure AD identity (managed identity/service principal) and a token endpoint.
- Thinking “Reader” is required to get a token: you can obtain a token without any RBAC permissions; RBAC only affects which ARM calls succeed.

Exam tips:
- RBAC = authorization (what you can do); Azure AD/managed identity = authentication (who you are) and token issuance.
- For VM-to-ARM access, look for “managed identity + IMDS token for `https://management.azure.com/`”.
- Scope RBAC assignments at the resource group level to limit access to specific resource groups.

Question 17

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop Azure solutions. You must grant a virtual machine (VM) access to specific resource groups in Azure Resource Manager. You need to obtain an Azure Resource Manager access token. Solution: Run the Invoke-RestMethod cmdlet to make a request to the local managed identity for Azure resources endpoint. Does the solution meet the goal?

Yes. A VM with managed identity enabled can obtain an Azure Resource Manager access token by calling the local managed identity endpoint from inside the VM. Using Invoke-RestMethod is a valid PowerShell way to make that HTTP request and retrieve the token payload. This is the recommended pattern because it avoids storing credentials and uses the VM's managed identity, which can then be authorized to specific resource groups through Azure RBAC.

No is incorrect because the proposed solution is exactly how a VM-based managed identity is intended to request tokens for Azure services. The local managed identity endpoint is designed for workloads running on the Azure resource itself, and PowerShell can call it directly. The only caveat is that the VM must have a managed identity enabled and the identity must have the necessary RBAC permissions, but the token acquisition method itself is correct.

Question Analysis

Core concept: This question tests how a virtual machine uses a managed identity to obtain an Azure Resource Manager access token. On an Azure VM with managed identity enabled, applications can request tokens from the local managed identity endpoint exposed by the Azure Instance Metadata Service (IMDS) without storing credentials.

Why correct: Calling the local managed identity endpoint with Invoke-RestMethod is the standard way for code or scripts running on the VM to request an OAuth 2.0 access token for Azure Resource Manager. The request targets the local endpoint and specifies the ARM resource/audience, and Azure returns a token tied to the VM's managed identity. That token can then be used to access only the resource groups and resources permitted by RBAC assignments.

Key features: Managed identities eliminate the need to embed secrets, certificates, or service principal credentials in the VM. The token is obtained from a local endpoint available only from within the Azure resource, which improves security. Access is controlled through Azure RBAC, so the identity must be granted the appropriate role on the specific resource groups.

Common misconceptions: A managed identity does not automatically grant access to all Azure resources; it only provides an identity that must still be assigned roles. Also, the local endpoint returns tokens only for the requested resource or scope, so the request must target Azure Resource Manager when an ARM token is needed. Simply running on a VM is not enough unless managed identity is enabled.

Exam tips: For Azure VMs, App Service, and other compute services with managed identity, remember that tokens are typically retrieved from a local metadata/managed identity endpoint. If the question asks for an ARM token from code running on the resource itself, using the managed identity endpoint is usually the correct approach. Distinguish token acquisition from authorization: token retrieval uses managed identity, while access to resource groups is granted through RBAC.
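The request shape can be sketched as follows. The endpoint, the `Metadata: true` header, and the `resource` query parameter are the documented IMDS contract; the helper function itself, and Python in place of the question's Invoke-RestMethod, are illustrative.

```python
from urllib.parse import urlencode

# Documented IMDS managed-identity token endpoint (only reachable
# from inside an Azure resource with a managed identity enabled).
IMDS_TOKEN_ENDPOINT = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str = "https://management.azure.com/",
                             api_version: str = "2018-02-02"):
    """Build the URL and headers for an IMDS token request.

    The Metadata: true header is mandatory; no credentials are sent,
    because the identity is established by where the call originates.
    """
    query = urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_ENDPOINT}?{query}", {"Metadata": "true"}

url, headers = build_imds_token_request()
# On the VM you would GET this URL (urllib.request, requests, or
# PowerShell's Invoke-RestMethod) and read access_token from the
# JSON response, then present it as a Bearer token to ARM.
print(url)
```

The PowerShell equivalent from the question is `Invoke-RestMethod -Uri $url -Headers @{Metadata = "true"}`; either way, what comes back is an OAuth 2.0 access token scoped to the requested resource, and RBAC then decides which ARM calls that token can make.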

Question 18

A company is implementing a publish-subscribe (Pub/Sub) messaging component by using Azure Service Bus. You are developing the first subscription application. In the Azure portal you see that messages are being sent to the subscription for each topic. You create and initialize a subscription client object by supplying the correct details, but the subscription application is still not consuming the messages. You need to ensure that the subscription client processes all messages. Which code segment should you use?

AddRuleAsync adds or updates a subscription rule/filter. A TrueFilter would match all messages, but the prompt already indicates messages are being sent to the subscription (visible in the portal), meaning routing and filtering are not the problem. Adding a rule also doesn’t start message consumption; it only affects which messages are forwarded into the subscription.

This line constructs the SubscriptionClient, which the scenario states you already did (“create and initialize a subscription client object by supplying the correct details”). Instantiation alone does not start receiving messages. You still need to register a handler or implement a receive loop to actually consume messages from the subscription.

CloseAsync shuts down the client and releases network resources. Calling it would stop any receiving and is the opposite of what you need. CloseAsync is used during graceful shutdown after processing is complete, not to initiate message consumption.

RegisterMessageHandler starts the background message pump that pulls messages from the subscription and invokes your ProcessMessagesAsync callback. Combined with MessageHandlerOptions (e.g., MaxConcurrentCalls, AutoComplete, ExceptionReceivedHandler), it ensures messages are actively received and processed. This is the required step when using the Microsoft.Azure.ServiceBus SubscriptionClient in a push-based handler model.

Question Analysis

Core concept: This question tests how to consume messages from an Azure Service Bus topic subscription using the .NET client. In the legacy Microsoft.Azure.ServiceBus library, creating a SubscriptionClient only establishes connectivity and entity targeting; it does not start message processing. You must explicitly register a message pump (handler) or manually receive messages.

Why the answer is correct: subscriptionClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions) starts the client-side message pump. It wires up an asynchronous callback (ProcessMessagesAsync) that is invoked whenever messages are available on the subscription. Without registering a handler (or calling ReceiveAsync in a loop), the application will appear “connected” but will never consume messages. RegisterMessageHandler also enables proper settlement patterns (Complete/Abandon/Dead-letter) depending on ReceiveMode and your handler logic.

Key features / best practices:
- Use MessageHandlerOptions to control concurrency (MaxConcurrentCalls) and error handling (ExceptionReceivedHandler). This is critical to ensure throughput and resiliency.
- Ensure AutoComplete is set appropriately. If AutoComplete=false, your handler must call CompleteAsync(message.SystemProperties.LockToken) after successful processing; otherwise messages will be redelivered after the lock expires.
- Handle transient failures and implement retry/backoff in the exception handler. This aligns with Azure Well-Architected Framework reliability principles.
- Confirm the subscription has an appropriate rule/filter. By default, a subscription typically has a TrueFilter rule that matches all messages unless modified.

Common misconceptions: Many developers assume that instantiating SubscriptionClient automatically begins receiving. It does not. Another confusion is around rules/filters: if messages are visible in the portal for the subscription, routing is working; the issue is on the consumer side (no receive loop/handler).

Exam tips: For AZ-204, remember the pattern: create client + register handler (push model) OR create client + ReceiveAsync loop (pull model). If the question says the client is created correctly but no messages are consumed, the missing step is usually RegisterMessageHandler (or an explicit receive loop). Also note that newer SDKs (Azure.Messaging.ServiceBus) use ServiceBusProcessor/ProcessMessageAsync, but this question’s API matches Microsoft.Azure.ServiceBus.
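The push-model behavior described above can be illustrated with a toy, self-contained sketch. This is deliberately NOT the Service Bus SDK (the class and method names below are invented for illustration); it only shows the key point: constructing a client object delivers nothing, and consumption begins when a handler is registered, the analogue of RegisterMessageHandler.

```python
import queue

class ToySubscriptionClient:
    """Schematic stand-in for a push-model subscription client.

    NOT the Service Bus SDK: it exists to show that construction alone
    consumes nothing, mirroring the question's scenario.
    """

    def __init__(self, pending: "queue.Queue[str]"):
        self.pending = pending   # messages waiting in the "subscription"
        self.handler = None      # no pump until a handler is registered

    def register_message_handler(self, handler):
        # Analogue of RegisterMessageHandler: start the message pump,
        # delivering each pending message to the callback.
        self.handler = handler
        while not self.pending.empty():
            self.handler(self.pending.get())

backlog = queue.Queue()
for m in ("msg-1", "msg-2"):
    backlog.put(m)

# Client is created and initialized, like the scenario describes...
client = ToySubscriptionClient(backlog)
assert backlog.qsize() == 2      # ...yet nothing has been consumed.

received = []
client.register_message_handler(received.append)  # now the pump runs
print(received)                                   # → ['msg-1', 'msg-2']
```

In the real legacy SDK the same shape applies: `new SubscriptionClient(...)` followed by `RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions)`, with settlement (CompleteAsync) inside the handler when AutoComplete is false.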

Question 19
(Select 2)

You have a new Azure subscription. You are developing an internal website for employees to view sensitive data. The website uses Azure Active Directory (Azure AD) for authentication. You need to implement multifactor authentication for the website. Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Incorrect. Azure AD B2C is designed for customer-facing (external) identity scenarios with social identities and custom user flows. The question describes an internal employee website using Azure AD (workforce tenant). For employees, you typically use Azure AD (Microsoft Entra ID) directly with Conditional Access to require MFA, not B2C.

Correct. Creating a Conditional Access policy is the standard way to enforce MFA for a specific application and set of users. You can target the website’s enterprise application and require “multifactor authentication” as a grant control. Conditional Access provides granular controls (users/groups, locations, device state) and is the recommended approach for app-specific MFA enforcement.

Correct. Conditional Access generally requires Azure AD Premium (P1 or higher). In a new subscription/tenant, you often start with the free tier, which does not include Conditional Access for most use cases. Upgrading to Azure AD Premium ensures you can create and apply Conditional Access policies to require MFA for the website.

Incorrect. Azure AD Application Proxy is used to securely publish on-premises applications to be accessed externally through Azure AD. It is not required for an Azure-hosted internal website, and it does not by itself implement MFA. If you used it, MFA would still typically be enforced via Conditional Access, not by Application Proxy alone.

Incorrect. “Baseline policies” are legacy and have largely been replaced by Security Defaults and Conditional Access. Even when available, baseline/security defaults are tenant-wide and not as granular as a dedicated Conditional Access policy for a specific application. The question asks to implement MFA for the website, which is best done with an app-targeted Conditional Access policy.

Question Analysis

Core concept: This question tests how to enforce multifactor authentication (MFA) for an Azure AD-authenticated application. In Azure AD, MFA is typically enforced through Conditional Access (CA) policies. CA is a key identity security control aligned with the Azure Well-Architected Framework (Security pillar) because it enables strong authentication and risk-based access decisions.

Why the answer is correct: To require MFA for an internal employee-facing website that already uses Azure AD, you create a Conditional Access policy targeting the application (enterprise app / app registration) and the relevant users/groups, and set the grant control to “Require multifactor authentication.” However, Conditional Access is not available in the free tier for most scenarios; it requires Azure AD Premium (commonly P1) licensing. Therefore, you must (1) upgrade to Azure AD Premium and (2) create a new Conditional Access policy.

Key features and configuration points:
- Licensing: Azure AD Premium P1 enables Conditional Access. Ensure users who will be subject to CA are properly licensed.
- Policy scope: Assign the policy to the specific cloud app (your website’s enterprise application) and to appropriate users/groups (ideally a pilot group first).
- Controls: Under Grant, select “Require multifactor authentication.” Optionally add conditions (device compliance, location, sign-in risk) depending on requirements.
- Best practice: Use least privilege and phased rollout (report-only mode if available) to avoid locking out users; maintain break-glass accounts excluded from CA.

Common misconceptions:
- “Baseline policies” were an older approach and have been replaced by Security Defaults in many tenants; they are not the recommended fine-grained method for app-specific MFA.
- Azure AD B2C is for external customer identities, not internal employee authentication.
- Application Proxy is for publishing on-prem apps externally; it doesn’t itself enforce MFA without CA.

Exam tips: If a question says the app already uses Azure AD and asks to enforce MFA, think “Conditional Access + Azure AD Premium.” If it’s internal workforce identity, avoid B2C. If it’s about publishing on-prem apps, consider Application Proxy, but MFA enforcement still typically comes from Conditional Access.

Question 20
(Select 2)

You are developing an ASP.NET Core Web API web service. The web service uses Azure Application Insights for all telemetry and dependency tracking. The web service reads and writes data to a database other than Microsoft SQL Server. You need to ensure that dependency tracking works for calls to the third-party database. Which two dependency telemetry properties should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Telemetry.Context.Cloud.RoleInstance identifies the specific compute instance that emitted the telemetry, such as an App Service instance or VM. This can help diagnose node-specific issues in scaled-out environments, but it does not define or enable dependency tracking for a database call. It is host metadata, not a dependency telemetry property required for representing the dependency itself. Therefore it is not part of the solution here.

Telemetry.Id is the unique identifier for the dependency telemetry item. Application Insights uses this identifier as part of the telemetry record so the dependency call is represented as a distinct event in traces and diagnostics. When manually tracking dependencies for a third-party database, supplying the dependency item's Id is one of the core properties of the dependency telemetry contract. It identifies the dependency record itself, rather than metadata about the host or user session.

Telemetry.Name is the human-readable name of the dependency operation, such as a command, query, or logical database action. This field is what appears in dependency views, search results, and analytics groupings, so it is essential for making the third-party database call visible and understandable. Without a meaningful Name, the dependency telemetry is much less useful operationally. It is a standard property of dependency telemetry and directly supports dependency tracking.

Telemetry.Context.Operation.Id is a correlation context property used to associate telemetry items with the broader request or distributed trace. Although correlation is important, this question asks which dependency telemetry properties to use to ensure dependency tracking works for a third-party database. The dependency item itself is primarily defined by fields like Id and Name, while Operation.Id is broader context that is often flowed automatically by the SDK. As a result, it is not the best answer among the given choices.

Telemetry.Context.Session.Id is intended to associate telemetry with a user or client session, most commonly in interactive application scenarios. A server-side Web API dependency call to a database does not rely on session context to be tracked as a dependency. This property does not identify the dependency operation or make it appear correctly in dependency telemetry. Therefore it is unrelated to the requirement.

Question Analysis

Core concept: For unsupported or third-party databases in Application Insights, you must create custom dependency telemetry and populate the fields that identify the dependency call and make it appear correctly as a dependency event.

Why correct: The dependency telemetry needs a meaningful Name so the call is recognizable in the portal, and it needs its own Id so the dependency event is uniquely identified within the telemetry stream.

Key features: DependencyTelemetry uses fields such as Id, Name, Data, Target, Type, Duration, Success, and ResultCode; correlation to the surrounding request is typically handled automatically by the SDK/activity context rather than by manually choosing unrelated context properties.

Common misconceptions: Operation.Id is used for correlation across an operation, but it is not one of the two dependency telemetry properties typically required here to define the dependency item itself; Session.Id and Cloud.RoleInstance are unrelated to dependency tracking.

Exam tips: When asked specifically for dependency telemetry properties, prefer fields that belong directly to the dependency item, such as Id and Name, rather than broader telemetry context fields unless the question explicitly asks about correlation context.
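To make the dependency contract concrete, here is a schematic record builder in plain Python. This is not the Application Insights SDK (the real .NET type is DependencyTelemetry on TelemetryClient); the field names mirror the properties the analysis lists, and the example values are invented.

```python
import uuid
from datetime import datetime, timezone

def make_dependency_item(name: str, data: str, target: str, type_: str,
                         duration_ms: float, success: bool) -> dict:
    """Schematic dependency telemetry record (illustration only).

    Id uniquely identifies this dependency event in the telemetry
    stream; Name is what appears in dependency views and groupings.
    """
    return {
        "Id": str(uuid.uuid4()),   # unique per dependency call
        "Name": name,              # human-readable operation name
        "Data": data,              # full command/query text
        "Target": target,          # server/host the call went to
        "Type": type_,             # e.g. a custom database type
        "Duration": duration_ms,   # elapsed time of the call
        "Success": success,        # whether the call succeeded
        "Timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical call against a third-party database:
item = make_dependency_item(
    name="Query customers", data="SELECT * FROM customers",
    target="mydb.example.internal", type_="MyThirdPartyDB",
    duration_ms=12.5, success=True)
```

The point of the sketch is the split the question tests: Id and Name belong to the dependency item itself, while correlation context (Operation.Id) and host/session metadata live outside it.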

Practice Tests

Practice Test #1

50 Questions·100 min·Pass 700/1000

Practice Test #2

50 Questions·100 min·Pass 700/1000

Practice Test #3

50 Questions·100 min·Pass 700/1000

Practice Test #4

50 Questions·100 min·Pass 700/1000

Practice Test #5

50 Questions·100 min·Pass 700/1000


Start Practicing Now

Download Cloud Pass and start practicing all Microsoft AZ-204 exam questions.


© Copyright 2026 Cloud Pass, All rights reserved.
