Microsoft AI-102


306+ Practice Questions with AI-Verified Answers

Designing and Implementing a Microsoft Azure AI Solution

- Free questions & answers (real exam questions)
- AI-powered explanations (detailed explanations)
- Real exam-style questions (closest to the real exam)

Triple AI-Verified Answers & Explanations

Every Microsoft AI-102 answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

Answers are verified by GPT Pro, Claude Opus, and Gemini Pro.

- Per-option explanations
- In-depth question analysis
- 3-model consensus accuracy

Exam Domains

- Plan and Manage an Azure AI Solution (weight 23%)
- Implement Generative AI Solutions (weight 18%)
- Implement an Agentic Solution (weight 8%)
- Implement Computer Vision Solutions (weight 13%)
- Implement Natural Language Processing Solutions (weight 19%)
- Implement Knowledge Mining and Information Extraction Solutions (weight 19%)

Practice Questions

Question 1

DRAG DROP - You have 100 chatbots that each has its own Language Understanding model. Frequently, you must add the same phrases to each model. You need to programmatically update the Language Understanding models to include the new phrases. How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

Part 1:

var phraselistId = await client.Features.______

The blank comes after client.Features., so it must be a method on the Features operations group that returns a phrase list identifier. In the LUIS Authoring SDK, phrase lists are managed under Features, and the method that creates a new phrase list feature is AddPhraseListAsync, which returns the ID of the created phrase list (commonly an integer). This matches the variable name phraselistId and the use of await.

Why the other options are wrong:
- Phraselist and Phrases are not methods; they look like property names and would not fit the client.Features.<method> syntax.
- PhraselistCreateObject is a type used as an argument, not a method.
- SavePhraselistAsync updates an existing phrase list (it requires an ID); it is not used to initially obtain one.
- UploadPhraseListAsync is not the standard LUIS phrase list creation call in the Authoring SDK for this context.

Part 2:

(appId, versionId, new ______

The snippet shows (appId, versionId, new ______, which indicates the code is instantiating an object with the new keyword to pass into the API call. When creating a phrase list in LUIS via the Authoring SDK, the request body is represented by PhraselistCreateObject (a DTO containing properties such as Name, Phrases, and IsExchangeable). Therefore, new PhraselistCreateObject is the correct completion.

Why the other options are wrong:
- AddPhraseListAsync and SavePhraselistAsync are methods, not types you instantiate with new.
- Phraselist and Phrases are not the correct request object types; Phrases is typically a property within the create object (e.g., a comma-separated string or a list, depending on the SDK version).
- UploadPhraseListAsync is a method and does not fit after new.

In practice, you would populate the Phrases field with the shared phrases and then call AddPhraseListAsync per app/version, followed by training and publishing if required.
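To make the flow concrete, here is a minimal Python sketch that composes the phrase-list creation request for one app/version. It only builds the request (nothing is sent); the URL shape and body fields follow the classic LUIS authoring REST API (v2.0), the name "SharedPhrases" is illustrative, and in the scenario above you would loop this over all 100 apps.

```python
def build_phraselist_request(endpoint, app_id, version_id, phrases):
    """Compose the LUIS authoring call that creates a phrase list.

    Assumes the classic v2.0 authoring route and body fields
    (name, phrases, isExchangeable); verify against your API version.
    """
    url = (f"{endpoint}/luis/api/v2.0/apps/{app_id}"
           f"/versions/{version_id}/phraselists")
    body = {
        "name": "SharedPhrases",       # illustrative phrase list name
        "phrases": ",".join(phrases),  # comma-separated string in v2.0
        "isExchangeable": True,
    }
    return "POST", url, body
```

Calling this once per app/version gives you the per-model request; after applying it you would retrain and republish each affected app.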

Question 2

You need to build a chatbot that meets the following requirements: ✑ Supports chit-chat, knowledge base, and multilingual models ✑ Performs sentiment analysis on user messages ✑ Selects the best language model automatically What should you integrate into the chatbot?

QnA Maker supports knowledge base responses and can include chit-chat content, while Language Understanding helps identify intents. Dispatch can route requests among multiple models, so this option partially addresses chatbot orchestration. However, it does not include Text Analytics, which is the Azure service required for sentiment analysis. Because sentiment analysis is explicitly required, this option is incomplete.

Translator supports multilingual scenarios and Speech supports voice-based input and output, while Dispatch can route utterances to different models. However, Speech is unrelated to the stated requirements unless voice interaction is explicitly needed, which it is not here. This option also lacks Text Analytics for sentiment analysis and does not include a knowledge base capability such as QnA Maker. Therefore it does not satisfy the full set of requirements.

Language Understanding can identify intents, Text Analytics performs sentiment analysis, and QnA Maker supports knowledge base and chit-chat scenarios. However, this option does not include Dispatch, which is the Azure component used to automatically select the best model among multiple language models or knowledge sources. LUIS by itself performs intent recognition within a model, but it is not the classic routing mechanism for choosing among several models. Since automatic model selection is explicitly required, this option is not the best fit.

Text Analytics is the Azure service used for sentiment analysis, so it satisfies the requirement to analyze user messages. Translator provides multilingual support by translating text between languages, which addresses the multilingual aspect of the chatbot. Dispatch is specifically used to route an utterance to the most appropriate model, such as the correct LUIS app or QnA Maker knowledge base, which matches the requirement to select the best language model automatically. Although this option does not explicitly list QnA Maker, it is the only option that covers the three stated capabilities the question emphasizes: sentiment, multilingual support, and automatic model selection.

Question Analysis

Core concept: This question is about choosing Azure services for a chatbot that must support multilingual interactions, analyze sentiment, and automatically route user utterances to the most appropriate language model. In classic Azure Bot Framework architectures, Dispatch determines which downstream model or knowledge source should handle an utterance, while Text Analytics provides sentiment analysis and Translator enables multilingual support.

Why correct: The requirement to "select the best language model automatically" points directly to Dispatch, which is designed to route utterances across multiple LUIS apps and QnA Maker knowledge bases. Sentiment analysis is provided by Text Analytics, and multilingual support is provided by Translator. Among the available options, only D contains all three of these required capabilities.

Key features:
- Text Analytics performs sentiment analysis on incoming user messages.
- Translator enables multilingual conversations by translating text between languages.
- Dispatch routes utterances to the most appropriate language understanding model or knowledge source.

These services are commonly combined in bots that must support multiple languages and multiple NLP back ends.

Common misconceptions: A frequent mistake is choosing QnA Maker and LUIS because they are core chatbot services, but neither performs sentiment analysis. Another common confusion is assuming LUIS alone automatically selects among multiple models; in Azure's classic architecture, Dispatch is the routing layer for that purpose. QnA Maker can support chit-chat, but the question emphasizes automatic model selection and multilingual support more directly.

Exam tips:
- Sentiment analysis = Text Analytics.
- Multilingual translation = Translator.
- Automatic routing across language/intent models = Dispatch.
- QnA Maker is for FAQ/KB responses; if an option lacks required sentiment or translation capabilities, it is incomplete.
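The routing idea behind Dispatch can be illustrated with a toy Python sketch: given per-model confidence scores for an utterance (which a real Dispatch model would compute), the bot forwards the utterance to the highest-scoring downstream model. The model names and scores below are made up for illustration.

```python
def route_utterance(scores):
    """Toy Dispatch-style routing: pick the downstream model (e.g., a LUIS
    app or a QnA Maker knowledge base) with the highest confidence score.
    A real Dispatch model computes these scores from the utterance text."""
    return max(scores, key=scores.get)
```

For example, an utterance scored highest by a contacts LUIS app would be routed there rather than to an FAQ knowledge base, which is exactly the "select the best model automatically" behavior the question asks for.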

Question 3

You are building a Language Understanding model for an e-commerce platform. You need to construct an entity to capture billing addresses. Which entity type should you use for the billing address?

Machine learned entities are designed for variable, natural language inputs and can generalize from labeled examples. A billing address has many valid formats and components, making it ideal for a machine learned (often composite) entity with sub-entities like street, city, region, postal code, and country. This approach scales across regions and reduces brittle rule maintenance compared to pattern-based extraction.

Regex entities work best for consistently formatted strings (e.g., invoice numbers, order IDs, phone numbers with strict patterns). Addresses are not consistently formatted across countries and even within a single country (abbreviations, unit formats, punctuation, ordering). A regex-based approach becomes complex, error-prone, and difficult to maintain, leading to poor reliability in production NLU.

geographyV2 is a prebuilt entity focused on geographic concepts such as cities, countries/regions, and other location-related terms. It does not reliably extract a complete postal/billing address with street lines, unit numbers, and postal codes as a single structured entity. It may help as a sub-entity (e.g., extracting city/country), but it is not the best primary entity type for a full billing address.

Pattern.any is used to capture free-form text segments, often when you want to grab the remainder of an utterance without detailed structure (e.g., a short description). It does not provide structured extraction of address components and can over-capture unrelated words. For billing addresses, you typically need structured fields for downstream processing, which is better achieved with machine learned entities.

List entities are appropriate for closed, finite sets of known values and synonyms (e.g., supported payment methods: Visa, MasterCard, PayPal). Billing addresses are effectively infinite and user-specific, so a list entity is not feasible. Maintaining a list of possible addresses is impossible, and it would not generalize to new addresses entered by users.

Question Analysis

Core concept: This question targets entity design in Language Understanding (LUIS / Conversational Language Understanding). Entities are how you extract structured data (slots) from user utterances. Choosing the right entity type affects accuracy, maintainability, and how well the model generalizes.

Why the answer is correct: A billing address is a complex, variable, multi-part concept (street number, street name, unit, city, state/province, postal code, country) and can be expressed in many formats across regions. A machine learned entity is best suited because it learns from labeled examples and can generalize to unseen address variations. In practice, you often model "BillingAddress" as a composite machine learned entity with child entities (Street, City, Region, PostalCode, Country) to capture structure.

Key features / best practices:
- Use a machine learned entity (often as a composite) and label multiple examples covering different address formats (US, EU, APAC), abbreviations, and punctuation.
- Consider adding sub-entities for components you need downstream (tax calculation, shipping validation, fraud checks).
- Combine with validation outside the NLU layer (e.g., address verification services), because NLU extraction is not the same as postal validation.
- From an Azure Well-Architected perspective, this improves Reliability and Operational Excellence: the model is resilient to new formats and easier to evolve than brittle patterns.

Common misconceptions: Regex can look attractive because addresses "look structured," but real-world addresses vary too much, and regex becomes fragile and hard to maintain. Prebuilt geography entities (geographyV2) focus on locations (cities, countries, points of interest), not full postal addresses. List entities work only for closed sets, which addresses are not.

Exam tips:
- Use machine learned entities for open-ended, highly variable user inputs.
- Use list entities for fixed vocabularies (e.g., payment method types).
- Use regex entities for strongly patterned strings (order IDs, SKUs) where the format is consistent.
- Prebuilt entities are best when they match the exact concept (e.g., datetimeV2 for dates), but they don't replace modeling for complex multi-field concepts like full addresses.
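As an illustration of the composite design described above, the sketch below defines a simplified entity schema and labels sub-entity spans in a training utterance. The dict layout is a stand-in for illustration, not the exact LUIS/CLU authoring format.

```python
# Simplified stand-in for a composite machine-learned entity definition;
# the field names here are illustrative, not the exact authoring schema.
BILLING_ADDRESS_ENTITY = {
    "name": "BillingAddress",
    "type": "machineLearned",
    "children": ["Street", "City", "Region", "PostalCode", "Country"],
}

def label_spans(utterance, spans):
    """Turn (start, end, child_name) tuples into labeled sub-entity examples,
    the way you would label training utterances for a composite entity."""
    return [{"entity": name, "text": utterance[start:end]}
            for start, end, name in spans]
```

Labeling several utterances this way, across different regional formats, is what lets the machine learned entity generalize where a regex would break.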

Question 4

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact. A conversational expert provides you with the following list of phrases to use for training. ✑ Find contacts in London. ✑ Who do I know in Seattle? ✑ Search for contacts in Ukraine. You need to implement the phrase list in Language Understanding. Solution: You create a new intent for location. Does this meet the goal?

Yes is incorrect because a phrase list in LUIS is not implemented by creating a new intent. Intents are used to classify what the user wants to do, and all the provided utterances clearly belong to the FindContact intent. The words London, Seattle, and Ukraine are examples of location information, which should be extracted through an entity rather than split into another intent. Using a new intent for location would confuse the model and fail to meet the stated design goal.

No is correct because creating a new intent for location does not satisfy the requirement to implement the phrase list properly. The user's goal in all sample utterances is still FindContact, so the intent should remain unchanged. The varying terms such as London, Seattle, and Ukraine are location values that should be captured as an entity, potentially supported by a phrase list feature. Creating a separate Location intent would incorrectly model data values as a user goal and reduce the effectiveness of intent classification.

Question Analysis

Core concept: This question tests understanding of how to use phrase lists in Language Understanding (LUIS) to improve recognition of related words within the same intent or entity context. A phrase list is a feature used to boost the importance of interchangeable or related terms, not to create a new intent. In this scenario, the utterances all express the same user goal (finding a contact), while the city or country is the variable data to extract.

Why correct: The solution does not meet the goal because creating a new intent for location is the wrong modeling choice. "Find contacts in London," "Who do I know in Seattle?" and "Search for contacts in Ukraine" all map to the existing FindContact intent. The location values should be handled as an entity, and a phrase list can be used to improve recognition of location-related vocabulary if needed.

Key features: Phrase lists in LUIS act as feature hints that help the model recognize related words and improve prediction accuracy. Intents represent the user's goal, while entities capture parameters such as location. In this case, location should be modeled as an entity (for example, a geographical location), not as a separate intent.

Common misconceptions: A common mistake is to create a new intent for every important noun or keyword in an utterance. However, intents should represent actions or goals, not data values. Another misconception is that phrase lists replace entities; in reality, phrase lists support the model, while entities extract the actual values.

Exam tips: On AI-102, if multiple utterances express the same action but differ only by details like names, dates, or places, keep one intent and extract the varying parts as entities. Use phrase lists to improve recognition of related terms, and do not create a new intent unless the user's goal is actually different.
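The modeling choice can be sketched in Python: all three training utterances keep the single FindContact intent, and the varying city or country is labeled as an entity value. The entity name "Location" is illustrative.

```python
TRAINING = [
    ("Find contacts in London", "London"),
    ("Who do I know in Seattle?", "Seattle"),
    ("Search for contacts in Ukraine", "Ukraine"),
]

def to_labeled_example(utterance, location):
    """One intent for every utterance; the place name is entity data to
    extract, not a separate user goal deserving its own intent."""
    return {"text": utterance, "intent": "FindContact",
            "entities": [{"entity": "Location", "value": location}]}
```

A phrase list of city and country names would then be added as a feature to help the model recognize location vocabulary, without ever splitting the intent.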

Question 5

HOTSPOT - You need to create a new resource that will be used to perform sentiment analysis and optical character recognition (OCR). The solution must meet the following requirements: ✑ Use a single key and endpoint to access multiple services. ✑ Consolidate billing for future services that you might use. ✑ Support the use of Computer Vision in the future. How should you complete the HTTP request to create the new resource? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

______ https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/CS1?api-version=2017-04-18

The correct verb is PUT because the request targets a specific resource ID that includes the resource name (…/accounts/CS1). In Azure Resource Manager, PUT is used to create a new resource or fully replace an existing resource at that exact URI, and it is idempotent (repeating the request results in the same resource state). This is the standard pattern for provisioning resources such as Microsoft.CognitiveServices/accounts. POST is not correct here because POST is typically used when the service creates a subordinate resource and/or generates the identifier, or for invoking operations (for example, /listKeys, /regenerateKey, or other action endpoints). PATCH is used to partially update an existing resource (for example, modifying tags or certain properties) and is not the typical method to create a brand-new resource at a fixed resource URI. Therefore, to create CS1 via the management endpoint, you use PUT.

Part 2:

"kind": "______",

The correct value for the "kind" property is "CognitiveServices" because the requirements explicitly call for: 1) a single key and endpoint to access multiple services, 2) consolidated billing for future services, and 3) future support for Computer Vision. "CognitiveServices" indicates a multi-service Azure AI resource (multi-service account). This resource type is designed to provide one endpoint/keys that can be used across multiple Azure AI services under the same account, which directly meets the consolidation and future expansion requirements. "TextAnalytics" would create a single-service resource limited to language features (such as sentiment analysis) and would not satisfy the requirement to use the same endpoint/key for OCR or future Computer Vision. "ComputerVision" would similarly be single-service and would not cover sentiment analysis under the same endpoint/key. Thus, "kind": "CognitiveServices" is the only option that matches all requirements.
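Putting both answers together, a small Python helper can compose the ARM call: a PUT to the full resource URI with "kind": "CognitiveServices". This only builds the request; the SKU and location values are illustrative placeholders, and the api-version matches the question text.

```python
def build_create_account_request(sub_id, rg, name, location):
    """Compose the ARM PUT that creates a multi-service Cognitive Services
    account at a fixed resource URI (create-or-replace, idempotent)."""
    url = (f"https://management.azure.com/subscriptions/{sub_id}"
           f"/resourceGroups/{rg}/providers/Microsoft.CognitiveServices"
           f"/accounts/{name}?api-version=2017-04-18")
    body = {
        "kind": "CognitiveServices",  # multi-service: one key/endpoint, shared billing
        "sku": {"name": "S0"},        # illustrative tier
        "location": location,
    }
    return "PUT", url, body
```

Because the resource name is part of the URI, repeating the same PUT yields the same resource state, which is why PUT rather than POST is the right verb here.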

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 6

You successfully run the following HTTP request. POST https://management.azure.com/subscriptions/18c51a87-3a69-47a8-aedc-a54745f708a1/resourceGroups/RG1/providers/Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18 Body: {"keyName": "Key2"} What is the result of the request?

Incorrect. The regenerateKey ARM action regenerates a Cognitive Services account key on the resource itself; it does not automatically create or update a secret in Azure Key Vault. Key Vault integration is a best practice you implement separately (e.g., storing the returned key as a secret, using automation). The request shown targets management.azure.com and the CognitiveServices provider, not Key Vault APIs.

Incorrect. “Query keys” are associated with Azure AI Search (formerly Azure Cognitive Search) and are managed via the Search service APIs, not via Microsoft.CognitiveServices/accounts/regenerateKey. Cognitive Services accounts use subscription keys (Key1/Key2) for authentication to the service endpoint. The request body explicitly references Key2, which is an account key, not a query key.

Incorrect. Rotating both keys would require two separate calls: one with keyName=Key1 and another with keyName=Key2 (or an explicit operation that rotates both, which this API does not do). The regenerateKey action regenerates only the specified key. Since the request specifies Key2, only the secondary key changes; Key1 remains valid and unchanged.

Correct. The request invokes the Cognitive Services account action regenerateKey with {"keyName":"Key2"}. Cognitive Services accounts have two keys (Key1 and Key2). Regenerating Key2 resets the secondary subscription key, invalidating the old Key2 and returning a new Key2 value. This supports standard key rotation practices while keeping Key1 unchanged for continuity.

Question Analysis

Core concept: This question tests management-plane key rotation for an Azure Cognitive Services (Azure AI services) account using the Azure Resource Manager (ARM) REST API. Cognitive Services accounts expose two access keys (Key1 and Key2) that client applications use to authenticate to the data-plane endpoint. The ARM action regenerateKey is specifically for rotating one of those account keys.

Why the answer is correct: The request calls POST .../Microsoft.CognitiveServices/accounts/contoso1/regenerateKey?api-version=2017-04-18 with body {"keyName":"Key2"}. In Cognitive Services, Key1 is commonly treated as the primary key and Key2 as the secondary key. The regenerateKey action regenerates (resets) only the specified key. Because keyName is Key2, the service invalidates the existing Key2 and returns a newly generated Key2 value; Key1 remains unchanged. Therefore, the result is that the secondary subscription key was reset.

Key features / best practices: Having two keys enables safe key rotation with minimal downtime: update applications to use the non-rotated key, regenerate the other key, then switch and repeat. This aligns with Azure Well-Architected Framework security guidance (credential rotation, least privilege, and secret hygiene). In production, store the keys in Azure Key Vault and reference them from apps (using managed identities where possible), but the regenerateKey call itself does not create or store secrets in Key Vault.

Common misconceptions: Some confuse regenerateKey with rotating both keys, but it targets only the named key. Others confuse Cognitive Services account keys with "query keys" (a concept associated with Azure AI Search), which belong to different resources and APIs.

Exam tips: Recognize management-plane vs. data-plane operations: management.azure.com with a provider action like regenerateKey is an ARM operation affecting the resource's configuration and credentials. Also remember the two-key rotation pattern: specify Key1 or Key2; only that key changes. If a question mentions Key Vault, that is typically a separate step (storing or reading secrets), not an automatic outcome of key regeneration.
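The rotation pattern described above can be sketched by composing the regenerateKey call for whichever key is being rotated. This is request construction only (nothing is sent), using the URI and api-version from the question.

```python
def build_regenerate_key_request(sub_id, rg, account, key_name):
    """Compose the ARM POST action that resets exactly one account key;
    the other key stays valid, which is what enables zero-downtime rotation."""
    assert key_name in ("Key1", "Key2")  # only these two keys exist
    url = (f"https://management.azure.com/subscriptions/{sub_id}"
           f"/resourceGroups/{rg}/providers/Microsoft.CognitiveServices"
           f"/accounts/{account}/regenerateKey?api-version=2017-04-18")
    return "POST", url, {"keyName": key_name}
```

A full rotation would issue this twice (once per key), switching applications to the untouched key before each call.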

Question 7

HOTSPOT - You plan to deploy a containerized version of an Azure Cognitive Services service that will be used for text analysis. You configure https://contoso.cognitiveservices.azure.com as the endpoint URI for the service, and you pull the latest version of the Text Analytics Sentiment Analysis container. You need to run the container on an Azure virtual machine by using Docker. How should you complete the command? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \ ______ \

The blank in the docker run command (immediately after the backslash) is where the container image reference goes. Because the scenario states that you pulled the latest version of the Text Analytics Sentiment Analysis container, the correct image is the sentiment container: mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment (option D).

Why the other options are wrong:
- A (http://contoso.blob.core.windows.net) is an Azure Storage endpoint, not a container image.
- B (https://contoso.cognitiveservices.azure.com) is the Azure AI resource endpoint used for Billing, not an image.
- C is a different Text Analytics container (key phrase extraction), which does not match the requirement for sentiment analysis.

On the exam, always map "pulled container X" to the corresponding MCR image name; the image is the argument at the end of docker run (after any flags and environment variables).

Part 2:

Eula=accept \ Billing= ______ \

The Billing parameter must be set to the endpoint URI of the Azure AI resource that meters the container's usage. The prompt explicitly states that the endpoint URI is https://contoso.cognitiveservices.azure.com, so Billing must be that value (option B).

Why the other options are wrong:
- A is an Azure Blob Storage endpoint and is not used for Azure AI container billing.
- C and D are container image names, not billing endpoints.

Exam tip: Azure AI containers typically require at least Eula=accept, Billing=<cognitiveservices endpoint>, and ApiKey=<key>. Even if ApiKey is not shown in the hotspot, Billing almost always points to https://<resource>.cognitiveservices.azure.com (or the regional endpoint format used by the resource).
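The completed command can be sketched as an argv list in Python, which makes the ordering explicit: Docker flags first, then the image, then the container's own Eula/Billing/ApiKey settings (the ApiKey value is a placeholder).

```python
def build_docker_run(image, billing_endpoint, api_key):
    """Compose the docker run argv from the question: Docker's flags come
    before the image, and the Eula/Billing/ApiKey arguments after it are
    parsed by the container itself, not by Docker."""
    return ["docker", "run", "--rm", "-it", "-p", "5000:5000",
            "--memory", "8g", "--cpus", "1",
            image,
            "Eula=accept",
            f"Billing={billing_endpoint}",
            f"ApiKey={api_key}"]
```

Joining the list with spaces reproduces the hotspot's command with both blanks filled: the sentiment image in the first blank and the resource endpoint in the Billing blank.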

Question 8

DRAG DROP - You are developing a photo application that will find photos of a person based on a sample image by using the Face API. You need to create a POST request to find the photos. How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

Part 1:

POST {Endpoint}/face/v1.0/______

The correct endpoint is POST {Endpoint}/face/v1.0/findsimilars. The Find Similar API takes a query face (faceId) and searches for similar faces in a specified faceListId or largeFaceListId (or, in some configurations, a set of candidate faceIds). This matches the requirement to "find photos of a person based on a sample image": each stored photo can have an associated persistedFaceId in a face list, and the API returns the closest matches.

Why the other options are wrong:
- detect only extracts faces and returns faceId(s); it does not search.
- identify matches a face against a PersonGroup/LargePersonGroup (returning personId candidates), which is a different workflow than directly finding similar photos.
- verify only compares two faces (or a face against a person) for a yes/no confidence; it is not a search.
- group clusters unknown faces into groups rather than performing a targeted search.
- matchFace and matchPerson are not endpoints; they are values for the findsimilars mode property.

Part 2:

"mode": "______"

The correct value for the request body property is "mode": "matchFace". In the Face API Find Similar request, mode determines the matching strategy. matchFace finds faces that look similar to the query face, which is ideal for returning matching photos (each photo has a face entry) from a FaceList/LargeFaceList and fits a photo-search experience where the output is a ranked list of similar faces.

Why the other options are wrong:
- detect, findsimilars, group, identify, and verify are operations/endpoints, not valid values for the mode field.
- matchPerson is a valid mode, but it matches against persons (aggregated identities) rather than individual faces; it suits person-level matching scenarios and typically pairs with a different data organization.

For "find photos," face-level matching (matchFace) is the most appropriate.
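Combining both selections, the full request can be sketched in Python. This only composes the request; the largeFaceListId value and the candidate count are illustrative, and the query faceId would come from a prior detect call on the sample image.

```python
def build_find_similar_request(endpoint, face_id, large_face_list_id):
    """Compose the Find Similar call: a query faceId searched against a
    large face list, with face-level matching for a photo-search UX."""
    url = f"{endpoint}/face/v1.0/findsimilars"
    body = {
        "faceId": face_id,                  # from a prior face detection
        "largeFaceListId": large_face_list_id,
        "maxNumOfCandidatesReturned": 10,   # illustrative
        "mode": "matchFace",                # face-level, not person-level
    }
    return "POST", url, body
```

The response's persistedFaceIds then map back to the stored photos, giving the ranked "photos of this person" result the application needs.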

Question 9

HOTSPOT - You develop an application that uses the Face API. You need to add multiple images to a person group. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

using (______ t = File.OpenRead(imagePath))

The blank is the variable type in a C# using statement: using (______ t = File.OpenRead(imagePath)). File.OpenRead(string path) returns a FileStream, which derives from Stream, so the correct type to declare is Stream.

Why the other options are wrong:
- A. File is a static helper class in System.IO; you don't declare variables of type File.
- C. Uri represents a URI/URL value, not the file stream returned by OpenRead.
- D. Url is not a standard .NET type for this scenario.

In Face API scenarios, Stream is common when images are stored locally, in a private blob, or are otherwise not publicly accessible. It also avoids having to make the image reachable via an internet-accessible URL, which can be better for security.

Part 2:

await faceClient.PersonGroupPerson.______(personGroupId, personId, t);

Because the code passes a stream variable (t) into the method call, the correct SDK method is AddFaceFromStreamAsync(personGroupId, personId, t). This method uploads image bytes from a Stream to add a persisted face to the specified person in the specified person group.

Why the other options are wrong:
- B. AddFaceFromUrlAsync requires a URL string (or a request containing a URL), not a Stream.
- C. CreateAsync creates a resource (for example, a person); it does not add a face image.
- D. GetAsync retrieves existing resources/metadata; it does not add training images.

Exam tip: after adding multiple faces, you typically call PersonGroup.TrainAsync and then poll the training status. Also ensure that each image contains a detectable face; otherwise the add operation fails.
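The stream-based upload maps to a plain REST call whose body is the raw image bytes, which is why a Stream rather than a URL fits here. The Python sketch below composes that request only (nothing is sent); the route follows the classic Face v1.0 person-group API.

```python
def build_add_face_request(endpoint, person_group_id, person_id):
    """Compose the add-persisted-face-from-stream call: the image bytes go
    in the request body, so the content type is application/octet-stream."""
    url = (f"{endpoint}/face/v1.0/persongroups/{person_group_id}"
           f"/persons/{person_id}/persistedFaces")
    headers = {"Content-Type": "application/octet-stream"}
    return "POST", url, headers
```

Repeating this call once per image file adds multiple faces to the person, after which the person group is trained before identification is possible.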

Question 10

HOTSPOT - You are developing an application that will use the Computer Vision client library. The application has the following code.

public async Task AnalyzeImage(ComputerVisionClient client, string localImage)
{
    List<VisualFeatureTypes> features = new List<VisualFeatureTypes>()
    {
        VisualFeatureTypes.Description,
        VisualFeatureTypes.Tags,
    };
    using (Stream imageStream = File.OpenRead(localImage))
    {
        try
        {
            ImageAnalysis results = await client.AnalyzeImageInStreamAsync(imageStream, features);

            foreach (var caption in results.Description.Captions)
            {
                Console.WriteLine($"{caption.Text} with confidence {caption.Confidence}");
            }

            foreach (var tag in results.Tags)
            {
                Console.WriteLine($"{tag.Name} {tag.Confidence}");
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The code will perform face recognition.

No. The code does not perform face recognition. It calls client.AnalyzeImageInStreamAsync(imageStream, features) with features containing only VisualFeatureTypes.Description and VisualFeatureTypes.Tags. That operation returns a general ImageAnalysis result with captions and tags, not identity recognition. Face recognition (identifying a person) is not provided by this call and, in Azure, is typically handled by the dedicated Azure AI Face service (or specific face-related capabilities/endpoints), which involves detecting faces and then verifying/identifying against a person group or similar construct. Even face detection (finding face rectangles/attributes) would require requesting face-related outputs or using the appropriate API; it is not implied by requesting Description/Tags. Therefore, the statement that the code will perform face recognition is incorrect.

Part 2:

The code will list tags and their associated confidence.

Yes. The code will list tags and their associated confidence. The features list explicitly includes VisualFeatureTypes.Tags, which instructs the Computer Vision service to return tag predictions for the image. After the analysis call completes, the code iterates through results.Tags: foreach (var tag in results.Tags) { Console.WriteLine($"{tag.Name} {tag.Confidence}"); } This prints each tag’s Name and Confidence score. This is a direct mapping between the requested visual feature (Tags) and the output being enumerated. If Tags had not been included in the features list, results.Tags would typically be empty or not populated, and the loop would not produce meaningful output. Because Tags is requested and printed, the statement is true.

Part 3:

The code will read a file from the local file system.

Yes. The code reads a file from the local file system. The method signature includes a string localImage, and inside the using block it calls File.OpenRead(localImage). File.OpenRead is a .NET API that opens a file path on the machine where the application is running and returns a Stream. That stream is then passed to AnalyzeImageInStreamAsync, which is specifically designed for analyzing image content provided as a stream (for example, a local file, memory stream, or other stream source). This is different from AnalyzeImageAsync, which typically takes an image URL. Because the code uses File.OpenRead with a local path, it is definitively reading from the local file system.

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 11

You deploy a web app that is used as a management portal for indexing in Azure Cognitive Search. The app is configured to use the primary admin key. During a security review, you discover unauthorized changes to the search index. You suspect that the primary access key is compromised. You need to prevent unauthorized access to the index management endpoint. The solution must minimize downtime. What should you do next?

Incorrect. Regenerating the primary admin key first would immediately invalidate the credential the app is currently using. That means the management portal could lose access until its configuration is updated to the secondary key, which does not minimize downtime. Although the final state would rotate both keys, the sequence is operationally wrong for a live app currently bound to the compromised primary key.

Incorrect. A query key cannot call index management endpoints in Azure Cognitive Search. Since the web app is a management portal for indexing, switching it to a query key would remove the permissions it needs to function. Regenerating the admin keys is useful, but this option breaks required administrative capabilities.

Correct. The application is currently using the primary admin key, so regenerating the secondary key first creates a fresh alternate credential without affecting the running app. After updating the app to use that new secondary key, you can safely regenerate the compromised primary key and immediately invalidate unauthorized access. This sequence preserves management functionality while minimizing downtime and follows the standard dual-key rotation pattern.

Incorrect. Query keys are only for read-only search requests and have no effect on admin-level index management access. Adding or deleting query keys does nothing to invalidate a compromised admin key that can modify indexes. This option also fails because the management portal cannot use a query key for administrative operations.

Question Analysis

Core concept: Azure Cognitive Search provides two admin keys specifically so you can rotate credentials without interrupting applications that require management access. Because admin keys allow full control over indexes, indexers, and other search resources, a suspected compromise requires immediate rotation. The safest low-downtime pattern is to regenerate the key not currently in use, move the application to that newly generated key, and then regenerate the compromised key.

Why correct: The app is currently using the primary admin key, which is suspected to be compromised. If you regenerate the primary key first, the app immediately loses access until its configuration is updated, which increases downtime risk. By regenerating the secondary key first, you create a fresh valid admin credential, switch the app to it, and then regenerate the compromised primary key to invalidate the attacker's access.

Key features:
- Admin keys are required for index management operations; query keys are read-only.
- Azure Cognitive Search exposes both a primary and a secondary admin key to support seamless rotation.
- Regenerating a key immediately invalidates the previous value for that slot.
- Regenerating the unused key first is the standard approach when minimizing service interruption.

Common misconceptions: A common mistake is to regenerate the compromised key first because it seems most urgent, but doing so breaks the application while it is still using that key. Another misconception is that query keys can substitute for admin keys; they cannot perform management operations. It is also unnecessary to regenerate both keys immediately unless completing a full rotation cycle is explicitly required.

Exam tips:
- If a key is compromised and the app is currently using it, do not regenerate it first unless downtime is acceptable.
- For Azure services with primary/secondary keys, the exam usually expects: rotate the unused key, switch clients, then rotate the old key.
- Remember that query keys are only for search queries, not for administrative changes.
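The rotation ordering above can be simulated with a toy model. This is an illustrative sketch only (a mock class, not the Azure SDK or any real key-management API): it shows that regenerating the unused slot first lets the app switch keys with no window in which it holds an invalid credential, while the compromised key still ends up revoked.

```python
import secrets

class MockSearchService:
    """Toy stand-in for a service with dual admin keys (not the real Azure SDK)."""
    def __init__(self):
        self.keys = {"primary": secrets.token_hex(8), "secondary": secrets.token_hex(8)}

    def regenerate(self, slot):
        # Regenerating a slot immediately invalidates its previous value.
        self.keys[slot] = secrets.token_hex(8)

    def is_valid(self, key):
        return key in self.keys.values()

svc = MockSearchService()
app_key = svc.keys["primary"]          # app is bound to the (compromised) primary key
attacker_key = app_key                 # attacker holds a copy of it

# Low-downtime rotation: regenerate the UNUSED slot first...
svc.regenerate("secondary")
assert svc.is_valid(app_key)           # app still works

# ...switch the app to the fresh secondary key...
app_key = svc.keys["secondary"]

# ...then regenerate the compromised primary key.
svc.regenerate("primary")
assert svc.is_valid(app_key)           # app never lost access
assert not svc.is_valid(attacker_key)  # attacker's copied key is now invalid
```

Reversing the first two steps (regenerating the primary key before the app has an alternate credential) would make the first assertion fail, which is exactly the downtime the question asks you to avoid.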

Question 12

HOTSPOT - You are building a chatbot that will provide information to users as shown in the following exhibit.

diagram

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The chatbot is showing ______.

Correct answer: A (an Adaptive Card). The exhibit shows a complex, structured layout: multiple headings (Passengers, Stops), repeated itinerary blocks, aligned left/right columns (SFO/AMS on both sides), and careful spacing. This is typical of Adaptive Cards, which allow flexible composition using containers and column sets. Why not Hero Card (B): A Hero Card has a fixed schema (title, subtitle, text, one large image, and buttons). It is not designed for multi-column, repeated structured sections like an itinerary. Why not Thumbnail Card (C): A Thumbnail Card is similar to a Hero Card but with a smaller thumbnail image; it still uses a fixed layout and doesn’t support the kind of grid/column formatting shown. In exam terms: whenever you see “form-like” or “UI-like” structured content, especially with columns or repeated sections, choose Adaptive Card.

Part 2:

The card includes ______.

Correct answer: B (an image). The card clearly includes an airplane icon in the middle of each flight segment. That is an image element rendered within the card. Why not action set (A): An ActionSet would typically manifest as buttons (e.g., “Select”, “View details”, “Book”), which are not visible in the exhibit. While Adaptive Cards can include actions, the screenshot doesn’t show any. Why not image group (C): An image group (often represented by ImageSet in Adaptive Cards) is a collection of multiple images displayed together (e.g., a row of thumbnails/avatars). The exhibit shows a single airplane icon per segment, not a grouped set. Why not media (D): “Media” in Adaptive Cards refers to embedded audio/video playback. The exhibit shows a static icon, not playable media.
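To make the distinction concrete, here is a minimal sketch of what an Adaptive Card payload for one flight segment might look like: a ColumnSet aligning the departure and arrival codes with an Image element between them. The element names (AdaptiveCard, ColumnSet, Column, TextBlock, Image) come from the Adaptive Cards schema; the text values and the image URL are placeholders.

```python
import json

# Minimal Adaptive Card sketch for one flight segment: two aligned columns
# (departure/arrival codes) with an airplane icon image between them.
card = {
    "type": "AdaptiveCard",
    "version": "1.3",
    "body": [
        {"type": "TextBlock", "text": "Flight to Amsterdam", "weight": "Bolder"},
        {"type": "ColumnSet", "columns": [
            {"type": "Column", "width": "auto",
             "items": [{"type": "TextBlock", "text": "SFO", "size": "ExtraLarge"}]},
            {"type": "Column", "width": "stretch",
             "items": [{"type": "Image", "url": "https://example.com/plane.png"}]},
            {"type": "Column", "width": "auto",
             "items": [{"type": "TextBlock", "text": "AMS", "size": "ExtraLarge"}]},
        ]},
    ],
}
payload = json.dumps(card, indent=2)
print(payload)
```

A Hero Card or Thumbnail Card has no equivalent of ColumnSet, which is why the multi-column itinerary layout in the exhibit points to an Adaptive Card.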

Question 13

HOTSPOT - You are reviewing the design of a chatbot. The chatbot includes a language generation file that contains the following fragment.

# Greet(user)
- ${Greeting()}, ${user.name}

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Part 1:

${user.name} retrieves the user name by using a prompt.

${user.name} does not retrieve the user name by using a prompt. In Bot Framework Composer LG, ${...} evaluates an expression against the bot’s memory/state at runtime. The expression "user.name" reads the value of the "name" property from the "user" memory scope (user state). That value might have been set earlier by code, a dialog step, or a prompt, but the expression itself is not a prompt and does not ask the user for input. A prompt would be implemented in a dialog (for example, an "Ask a question" step) that collects user input and stores it into a property such as user.name. Only after that storage occurs would ${user.name} resolve to the collected value. Therefore, saying it retrieves the name “by using a prompt” is incorrect; it retrieves it from state/memory.
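The "reads from state, does not prompt" behavior can be illustrated with a toy expansion function. This is a deliberate simplification of what the LG runtime does, not its real implementation: it resolves `${scope.property}` expressions against a memory dictionary and never asks the user for anything.

```python
import re

def expand(template: str, memory: dict) -> str:
    """Toy expansion of ${scope.property} expressions against a memory dict
    (a simplification of the LG runtime; no prompting involved)."""
    def resolve(match):
        scope, prop = match.group(1).split(".", 1)
        return str(memory.get(scope, {}).get(prop, ""))
    return re.sub(r"\$\{(\w+\.\w+)\}", resolve, template)

# user.name was stored earlier (e.g. by a prompt step in a dialog);
# the expression itself only reads the stored value.
memory = {"user": {"name": "Ada"}}
print(expand("Hello, ${user.name}", memory))  # -> Hello, Ada
```

If `user.name` had never been set, the expression would simply resolve to an empty value here; nothing in the expression triggers a prompt.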

Part 2:

Greet() is the name of the language generation template.

In an LG file, the header `# Greet(user)` defines a template named `Greet` with one parameter named `user`. The parentheses contain the parameter list and are not part of the template name itself. Therefore, the statement that `Greet()` is the name of the language generation template is false. `Greet` is the template name, while `user` is the input parameter.

Part 3:

${Greeting()} is a reference to a template in the language generation file.

In LG, `${...}` contains an expression to evaluate at runtime. `Greeting()` uses function-like syntax and may resolve to another LG template, but it can also represent a function depending on the available definitions and context. Because the fragment does not show a `# Greeting` template declaration, you cannot conclude with certainty that it is a reference to a template in the language generation file. The statement is therefore not reliably true based on the snippet alone.

Question 14

DRAG DROP - You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images. How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place:

Part 1:

"______": "employeefaces",

Because the collection contains 60,000 images, the Find Similar request must target a LargeFaceList rather than a FaceList. In the JSON body, the correct property name is "largeFaceListId" with a lowercase initial letter, and its value should be "employeefaces". "faceListId" is for the smaller FaceList resource and would not fit this scale. "matchFace" and "matchPerson" are mode values, not collection identifier properties.

Part 2:

"mode": "______"

The correct mode is "matchFace" because the scenario explicitly asks to find similar faces from an existing face list. In the Find Similar API, "matchFace" returns faces that are visually similar to the query face, while "matchPerson" is stricter and is intended to return faces that are likely the same person. Since the task is similarity search against a LargeFaceList, "matchFace" best matches the requirement. "faceListId" and "largeFaceListId" are identifier properties, not valid values for the "mode" field.
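Putting both answers together, the Find Similar request body could be sketched as below. The field names (faceId, largeFaceListId, maxNumOfCandidatesReturned, mode) follow the Face API Find Similar REST reference; the faceId value is a placeholder that would normally come from a prior Face - Detect call.

```python
import json

# Sketch of the Find Similar request body discussed above.
body = {
    "faceId": "00000000-0000-0000-0000-000000000000",  # placeholder query face ID
    "largeFaceListId": "employeefaces",  # LargeFaceList: needed at 60,000-image scale
    "maxNumOfCandidatesReturned": 10,
    "mode": "matchFace",  # similarity search, not same-person matching
}
print(json.dumps(body, indent=2))
```

Note that faceListId and largeFaceListId are mutually exclusive in this body; only the one matching the collection type you created should be present.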

Question 15

HOTSPOT - You are developing a text processing solution. You develop the following method.

static void GetKeyPhrases(TextAnalyticsClient textAnalyticsClient, string text)
{
    var response = textAnalyticsClient.ExtractKeyPhrases(text);
    Console.WriteLine("Key phrases:");

    foreach (string keyphrase in response.Value)
    {
        Console.WriteLine($"\t{keyphrase}");
    }
}

You call the method by using the following code.

GetKeyPhrases(textAnalyticsClient, "the cat sat on the mat");

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The call will output key phrases from the input string to the console.

Yes. The method will call the Azure AI Language key phrase extraction operation and write the returned phrases to the console. Specifically, ExtractKeyPhrases returns a Response<KeyPhraseCollection>, and the code iterates response.Value (the KeyPhraseCollection) and prints each string. Assuming the client is correctly authenticated, the resource endpoint is valid, and the call succeeds (no exception), the console output will include the header "Key phrases:" followed by one line per extracted key phrase. The key point is that the code is correctly using the synchronous SDK method and enumerating the returned collection. There is no additional filtering or transformation in the loop, so whatever the service returns will be printed.

Part 2:

The output will contain the following words: the, cat, sat, on, and mat.

No. Key phrase extraction does not return every word from the input. It returns only the most relevant phrases, typically focusing on nouns/noun phrases and excluding stop words and low-value terms. In the sentence "the cat sat on the mat", words like "the" and "on" are classic stop words and are very unlikely to be returned as key phrases. Even verbs like "sat" may or may not be returned depending on the model’s assessment of salience. A more typical output would be a small set such as "cat" and "mat" (and possibly "cat sat" or "the mat" depending on language heuristics), but you cannot assume it will contain all tokens. For the exam, remember: key phrase extraction is summarization-like, not tokenization.
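The contrast between tokenization and key phrase extraction can be shown with a naive sketch. To be clear, the real service uses a trained model, not a stop-word list; the hand-picked LOW_SALIENCE set below exists purely to illustrate that the output is a salient subset of the input words, never every token.

```python
# Naive illustration only: the real Azure AI Language service does NOT work
# this way. The point is that key phrase extraction returns a salient subset,
# unlike tokenization, which returns every word.
LOW_SALIENCE = {"the", "a", "an", "on", "in", "sat"}  # stop words plus a low-value verb

def naive_key_terms(text: str) -> list[str]:
    return [w for w in text.lower().split() if w not in LOW_SALIENCE]

tokens = "the cat sat on the mat".split()
key_terms = naive_key_terms("the cat sat on the mat")
print(tokens)     # every word, including stop words
print(key_terms)  # ['cat', 'mat'] -- a salient subset only
```

Even this crude filter makes the exam point: "the", "on", and similar stop words will not appear in key phrase output, so the statement that all five words are returned is false.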

Part 3:

The output will contain the confidence level for key phrases.

No. The output will not contain confidence levels because the ExtractKeyPhrases API in the Azure.AI.TextAnalytics SDK returns only the extracted key phrase strings (KeyPhraseCollection). There is no confidence score property per key phrase in this response type, and the sample code prints only the phrase text. This differs from other Azure AI Language features where confidence scores are part of the response schema (for example, SentimentConfidenceScores for sentiment analysis, or certain classification outputs). Here, the SDK surface area is intentionally simple: a list of phrases. Therefore, neither the SDK call nor the provided code can output confidence values.


Question 16

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You build a language model by using a Language Understanding service. The language model is used to search for information on a contact list by using an intent named FindContact. A conversational expert provides you with the following list of phrases to use for training. ✑ Find contacts in London. ✑ Who do I know in Seattle? ✑ Search for contacts in Ukraine. You need to implement the phrase list in Language Understanding. Solution: You create a new pattern in the FindContact intent. Does this meet the goal?

Yes is incorrect because creating a new pattern would only help if the utterances shared a predictable structure that could be templated. In this case, the examples use different phrasings like 'Find contacts in London' and 'Who do I know in Seattle?', so the main reusable element is the location term rather than the sentence form. A phrase list can provide LUIS with related words such as city and country names to improve entity recognition. Therefore, using a pattern alone does not satisfy the requirement to implement the phrase list.

No is correct because the goal is to implement the provided terms in Language Understanding, and a pattern is not the right feature for that purpose. Patterns are designed for utterances that follow a stable template with placeholders, such as 'Find contacts in {Location}'. The sample utterances vary significantly in wording, so a single pattern would not represent them effectively. A phrase list is the more appropriate feature when you want to improve recognition of related location values across different utterance forms.
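For reference, a phrase list is created through the LUIS authoring API with a small JSON payload. The field names below (name, phrases as a comma-separated string, isExchangeable) follow the LUIS authoring reference for the phrase list create object, but verify them against your SDK version; the feature name "Locations" is a hypothetical choice for this scenario.

```python
import json

# Sketch of the JSON body for creating a LUIS phrase list (the payload the
# authoring API's add-phrase-list operation expects; verify field names
# against your SDK/API version).
phrase_list = {
    "name": "Locations",                  # hypothetical feature name
    "phrases": "London,Seattle,Ukraine",  # comma-separated related terms
    "isExchangeable": True,               # terms are interchangeable signals
}
print(json.dumps(phrase_list))
```

This is the shape of payload the AddPhraseListAsync call discussed in Question 1 would send; a pattern, by contrast, has no field for supplying a vocabulary of related terms.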

Question Analysis

Core concept: This question tests how to evaluate and tune safety/content filtering for a generative AI chatbot that uses Azure OpenAI (AI1) and Azure AI Content Safety (CS1). In AI-102 terms, it’s about selecting the correct tooling/workflow to run test prompts and optimize filter configurations. Why the answer is correct: Using Content Safety Studio’s “Safety metaprompt” feature is not the appropriate method to run systematic tests on sample questions for optimizing content filter configurations. Content Safety Studio is used to explore and evaluate Content Safety capabilities (text/image moderation, severity thresholds, testing), but “Safety metaprompt” is not the primary feature for batch-style testing and tuning of content filters for a chatbot’s input/output pipeline. For optimizing content filter configurations, you typically use Content Safety Studio’s testing experiences (for text/image) and/or programmatic evaluation by sending representative prompts and responses through the Content Safety APIs, adjusting category thresholds (hate, sexual, violence, self-harm) and actions (block, allow, review) based on results. Key features and best practices: - Use Content Safety Studio to test text moderation with configurable severity thresholds and to review detections across categories. - For repeatable optimization, run a test harness that submits a curated dataset of sample user inputs and model outputs to CS1 via API, capturing scores/severity and iterating thresholds. - Align with Azure Well-Architected Framework (Reliability/Operational Excellence): automate evaluations, version your safety configs, and monitor false positives/negatives. - In Azure OpenAI, distinguish between model-side content filtering and external moderation (CS1). Ensure you test both user input and model output paths. Common misconceptions: It’s easy to assume any “metaprompt” or “safety prompt” feature is meant for testing filters. 
However, filter optimization is about measuring moderation outcomes against a dataset and tuning thresholds/actions, not about a prompt template feature. Exam tips: When you see “optimize content filter configurations” and “run tests on sample questions,” think: Content Safety Studio testing tools and/or automated API-based evaluation pipelines—not prompt engineering features. Also remember that Content Safety is the dedicated service for moderation scoring; OpenAI prompt techniques don’t replace threshold tuning and measurable test runs.

Question 17

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop an application to identify species of flowers by training a Custom Vision model. You receive images of new flower species. You need to add the new images to the classifier. Solution: You add the new images, and then use the Smart Labeler tool. Does this meet the goal?

No. Adding images and using Smart Labeler does not, by itself, update the classifier’s learned behavior. Smart Labeler only suggests tags; it does not train the model. For new flower species, you must create new tags (classes), label the images (Smart Labeler may not be accurate for unseen species), then retrain and typically republish the model so the application can use the updated classifier.

No is correct because adding images and using Smart Labeler does not by itself make the classifier recognize new flower species. In Azure Custom Vision, a classifier learns from labeled training images associated with tags, so new species must be represented as new classes or tags in the project. Smart Labeler can help suggest labels, but it is only an annotation aid and does not update the trained model automatically. After labeling the new images correctly, you must retrain the classifier so the new species are included in the model.


Question 18

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop an application to identify species of flowers by training a Custom Vision model. You receive images of new flower species. You need to add the new images to the classifier. Solution: You add the new images and labels to the existing model. You retrain the model, and then publish the model. Does this meet the goal?

Yes. Adding the new images with the correct labels (tags) expands the training dataset to include the new flower species. Retraining is required to generate a new iteration that learns from the newly added labeled examples. Publishing the retrained iteration updates what the Prediction API serves, ensuring the application uses the updated classifier.

No is incorrect because the described steps are exactly the standard Custom Vision workflow for updating a classifier with new labeled data. Without retraining, the model would not learn the new species, and without publishing, the updated iteration would not be used for predictions. Since the solution includes both retraining and publishing after adding images and labels, it does meet the goal.

Question Analysis

Core concept: This question tests how to update an Azure Custom Vision image classification model when new labeled training data (new flower species images) becomes available, and what is required to make the updated model available for prediction.

Why correct: In Azure Custom Vision, expanding a classifier requires adding new training images with the appropriate tags (labels), retraining to create a new iteration, and then publishing that iteration so prediction endpoints use the updated model. Adding images and labels updates the training dataset, but the model does not change until you retrain. Likewise, retraining creates a new iteration, but clients will not use it until you publish it (or update which iteration is published). Adding the new images and tags, retraining, and publishing is therefore the correct end-to-end workflow to incorporate new flower species into the classifier.

Key features:
- Tags (labels) in Custom Vision represent the classes for classification.
- Training produces a new model iteration; each retrain creates a new iteration.
- Publishing an iteration makes it available to the Prediction API under a published name.
- Older iterations can be kept for rollback and comparison while the latest is published.

Common misconceptions:
- Assuming that uploading images automatically updates the live model without retraining.
- Forgetting to publish after retraining, which leaves the prediction endpoint serving an older iteration.
- Confusing training with publishing: training builds an iteration; publishing deploys it for inference.

Exam tips:
- Always pair: add labeled data → train (new iteration) → publish (serve via the Prediction API).
- If the question mentions "use for predictions" or "available to the app," publishing is required.
- Multiple iterations can exist, but only published iterations are used by the prediction endpoint.
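The add → train → publish dependency can be demonstrated with a toy model. This is a mock, not the Custom Vision SDK: it only shows that predictions change when, and only when, a retrained iteration is also published.

```python
class MockCustomVisionProject:
    """Toy model of the add -> train -> publish workflow (not the real SDK)."""
    def __init__(self):
        self.training_tags = set()   # labeled classes in the project
        self.iterations = []         # snapshot per train() call
        self.published = None        # iteration served by the prediction endpoint

    def add_images(self, tag):
        self.training_tags.add(tag)

    def train(self):
        self.iterations.append(frozenset(self.training_tags))

    def publish(self):
        self.published = self.iterations[-1]

    def predictable_classes(self):
        return set(self.published) if self.published else set()

proj = MockCustomVisionProject()
proj.add_images("rose"); proj.train(); proj.publish()

proj.add_images("orchid")                          # new species uploaded and tagged...
assert "orchid" not in proj.predictable_classes()  # ...but not served yet
proj.train()                                       # retraining builds a new iteration...
assert "orchid" not in proj.predictable_classes()  # ...still serving the old one
proj.publish()                                     # publishing deploys the new iteration
assert "orchid" in proj.predictable_classes()
```

The two middle assertions are the exam traps: uploading alone changes nothing, and retraining without publishing leaves the prediction endpoint on the old iteration.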

Question 19

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You develop an application to identify species of flowers by training a Custom Vision model. You receive images of new flower species. You need to add the new images to the classifier. Solution: You create a new model, and then upload the new images and labels. Does this meet the goal?

Yes is incorrect because the goal is to add new images and labels to the existing classifier, not replace it with a separate model. Custom Vision is designed for incremental improvement by adding tagged images to the current project and retraining. Creating a new model would require rebuilding the classifier context independently and would not be the intended approach for extending an existing flower-species classifier. On the exam, when the requirement is to add classes or examples, the expected action is usually retraining the existing project.

No is correct because creating a new model is not the standard way to add a new flower species to an existing Custom Vision classifier. In Azure Custom Vision, you normally keep the same project, upload the new images, assign them a new tag representing the new species, and retrain the model. This creates a new iteration that includes both the original species and the newly added one. Starting a separate model would fragment the solution and fail to directly extend the existing classifier as required.

Question Analysis

Core concept: This question tests how to update an Azure Custom Vision image classification model when new classes are introduced. In Custom Vision, you typically continue using the same project, add new tagged images for the new class, and retrain to create a new iteration rather than starting over with a separate model.

Why correct: The proposed solution does not meet the goal because creating a new model is unnecessary and contrary to the normal Custom Vision workflow for extending an existing classifier. To add a new flower species, you add the new images to the existing project, assign a new tag for that species, and retrain so the model learns the additional class while retaining the existing ones.

Key features:
- Custom Vision supports iterative training within the same project.
- New categories are added by creating new tags and uploading labeled images.
- Retraining produces a new iteration of the same model, which can then be evaluated and published.
- Keeping the same project preserves the existing classes and training context.

Common misconceptions: A common mistake is assuming that any new class requires a brand-new model. A new model or project is generally only needed when the problem type changes significantly, such as switching from classification to object detection, or when you intentionally want a completely separate solution.

Exam tips: When a question says to add new labeled examples or new classes to an existing Custom Vision classifier, think: update the existing project and retrain. On AI-102, words like iteration, retrain, tag, and existing classifier usually indicate that you should not create a separate model from scratch.

Question 20

HOTSPOT - You are developing a service that records lectures given in English (United Kingdom). You have a method named AppendToTranscriptFile that takes translated text and a language identifier. You need to develop code that will provide transcripts of the lectures to attendees in their respective language. The supported languages are English, French, Spanish, and German. How should you complete the code? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

var lang = new List<string> ______

Correct answer: B ({"fr", "de", "es"}). In speech translation, you specify the source recognition language separately (here, English (United Kingdom) is typically set as "en-GB" on the translation config), then add target languages using BCP-47 language tags. For French, German, and Spanish, the common tags are "fr", "de", and "es", so this list is the set of target languages to translate into.

Why others are wrong:
- A ({"en-GB"}) is the source language, not the set of attendee languages to translate into.
- C ({"French", "Spanish", "German"}) uses display names, not the language identifiers the SDK expects.
- D ({"languages"}) is not a list of language codes; it is a placeholder rather than actual values.

Part 2:

using var recognizer = new ______ (config, audioConfig);

Correct answer: D (TranslationRecognizer). To produce translated transcripts from speech, the Speech SDK uses TranslationRecognizer, which works with a translation configuration (for example, SpeechTranslationConfig) and an AudioConfig. It recognizes speech in the source language (en-GB) and simultaneously provides translations into multiple target languages (fr/de/es). The recognizer exposes translation results (often via result.Translations["fr"], and so on), which matches the method signature that appends translated text along with a language identifier.

Why others are wrong:
- IntentRecognizer is for intent recognition (often with LUIS/CLU), not speech translation.
- SpeakerRecognizer relates to speaker recognition/verification, which uses different APIs and does not translate.
- SpeechSynthesizer is text-to-speech (audio output), the opposite direction of what transcripts require.
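The per-language fan-out described above can be sketched without the Speech SDK. This mock mirrors the shape of result.Translations (a dictionary keyed by target language code); the function name and the fake recognized event are hypothetical stand-ins for AppendToTranscriptFile and a real recognition result.

```python
# Toy sketch of the translation flow: each recognized segment carries one
# translation per target language code, and each is appended to that
# language's transcript. Names and data here are hypothetical stand-ins.
target_languages = ["fr", "de", "es"]
transcripts: dict[str, list[str]] = {code: [] for code in target_languages}

def append_to_transcript_file(text: str, language: str) -> None:
    transcripts[language].append(text)  # stand-in for writing to a per-language file

# A fake recognized event: translations keyed by BCP-47 target code,
# mirroring the shape of the SDK's result.Translations dictionary.
fake_result = {"fr": "Bonjour", "de": "Hallo", "es": "Hola"}
for code in target_languages:
    append_to_transcript_file(fake_result[code], code)

print(transcripts)  # each attendee language accumulates its own transcript
```

The key point the mock captures is that a single recognition event yields one entry per target language, so the handler loops over the configured language codes rather than running one recognizer per language.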


© Copyright 2026 Cloud Pass, All rights reserved.
