PL-300: Microsoft Power BI Data Analyst

363+ Practice Questions with AI-Verified Answers



Exam Domains

Prepare the Data: 27%
Model the Data: 27%
Visualize and Analyze the Data: 28%
Manage and Secure Power BI: 18%

Practice Questions

Question 1

You have a Microsoft SharePoint Online site that contains several document libraries. One of the document libraries contains manufacturing reports saved as Microsoft Excel files. All the manufacturing reports have the same data structure. You need to use Power BI Desktop to load only the manufacturing reports to a table for analysis. What should you do?

Correct. SharePoint Folder is designed to enumerate files in SharePoint Online document libraries and exposes Folder Path for filtering. Using Transform lets you filter to only the manufacturing reports library/folder before combining, which improves performance and prevents mixing files from other libraries. After filtering, you can use Combine Files to create a single table from identically structured Excel reports.

Incorrect. SharePoint List is intended for importing list item data (columns/rows) rather than combining the contents of Excel files stored in a document library. Although document libraries are implemented as lists in SharePoint, this connector typically won’t provide the same file-binary combine experience as SharePoint Folder, and filtering by folder path is not the primary pattern here.

Incorrect. SharePoint Folder is the right connector, but selecting Combine & Load immediately (without first filtering in Transform) risks combining Excel files from multiple libraries/folders across the site. That can lead to incorrect data, more complex cleanup, and slower refresh because Power Query evaluates more files than necessary.

Incorrect. SharePoint List is not the appropriate connector for loading and combining Excel files from a document library. Combine & Load also implies quickly combining without the critical step of filtering to the specific library/folder, increasing the chance of pulling irrelevant content and creating refresh/performance issues.

Question Analysis

Core concept: This question tests choosing the correct Power BI connector and Power Query approach to ingest multiple Excel files stored in SharePoint Online. When files are in a document library (not a SharePoint list), you typically use the SharePoint Folder connector to enumerate files and then filter to the specific library/folder before combining.

Why the answer is correct: Option A uses Get data > SharePoint folder, provides the SharePoint site URL, then uses Transform to open Power Query and filter by Folder Path to the manufacturing reports library. This is the standard pattern: (1) connect to the site, (2) narrow the file set to only the target library/folder, and (3) combine/transform the Excel binaries into a single table. Filtering first is important because it reduces the number of files Power Query evaluates, improves refresh performance, and avoids accidentally combining unrelated Excel files from other libraries.

Key features / best practices:
- SharePoint Folder returns metadata (Name, Extension, Folder Path) plus a Content (binary) column for each file.
- Filtering by Folder Path (or by library name within the path) isolates only the manufacturing reports.
- After filtering, you typically use Combine Files (from the Content column) to apply a consistent transform to all files with the same structure.
- This aligns with Power BI performance best practices: minimize data scanned and filter early.

Common misconceptions:
- "SharePoint list" is for list items, not document library file binaries. While document libraries are technically lists in SharePoint, Power BI's SharePoint List connector is not the right tool for combining Excel file contents.
- "Combine & Load" without filtering can pull in all Excel files across the site, causing incorrect results and slower refresh.

Exam tips:
- If the source is files in a document library: prefer SharePoint Folder.
- If the source is list rows/columns: use SharePoint Online List.
- Look for wording like "document library" and "Excel files with same structure" → filter to the folder/library, then combine in Power Query.
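The filter-then-combine pattern can be sketched in Power Query M. This is a minimal sketch: the site URL and the library folder name are hypothetical placeholders, not values from the question.

```powerquery-m
let
    // Connect to the SharePoint site (not to an individual library)
    Source = SharePoint.Files(
        "https://contoso.sharepoint.com/sites/operations",  // hypothetical site URL
        [ApiVersion = 15]
    ),
    // Filter early: keep only files under the manufacturing reports library
    ReportsOnly = Table.SelectRows(
        Source,
        each Text.Contains([Folder Path], "Manufacturing Reports")
    ),
    // Keep only Excel workbooks
    ExcelOnly = Table.SelectRows(ReportsOnly, each [Extension] = ".xlsx")
in
    ExcelOnly
```

From here, using Combine Files on the Content column merges the identically structured workbooks into a single table.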

Question 2

DRAG DROP - You have a Microsoft Excel workbook that contains two sheets named Sheet1 and Sheet2.

Sheet1 contains the following table named Table1:
Products: abc, def, ghi, jkl, mno

Sheet2 contains the following table named Table2:
Products: abc, xyz, tuv, mno, pqr, stu

You need to use Power Query Editor to combine the products from Table1 and Table2 into the following table that has one column containing no duplicate values:
Products: abc, xyz, tuv, mno, pqr, stu, def, ghi, jkl

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the image below.

question-image

To create one Products column from both tables, you must first load both Excel tables into Power Query. Next, use Append because append stacks rows from tables with the same schema; Merge is incorrect because it performs a join and is used to add columns based on matching keys. After appending, remove duplicates on the combined query so repeated values such as abc and mno appear only once. Removing errors is irrelevant because the scenario does not mention any errors, and removing duplicates before appending would not eliminate duplicates that exist across both tables.
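The three-step sequence (load both tables, append, remove duplicates) corresponds to the following Power Query M sketch, using the workbook's own table names:

```powerquery-m
let
    // Load both tables from the current workbook
    Table1 = Excel.CurrentWorkbook(){[Name = "Table1"]}[Content],
    Table2 = Excel.CurrentWorkbook(){[Name = "Table2"]}[Content],
    // Append: stack the rows of both tables (same single-column schema)
    Appended = Table.Combine({Table1, Table2}),
    // Remove duplicates so values such as abc and mno appear only once
    Distinct = Table.Distinct(Appended)
in
    Distinct
```

Note that Excel.CurrentWorkbook applies when the query runs inside Excel; in Power BI Desktop the two tables would instead come from Excel.Workbook(File.Contents(...)), while the append and de-duplication steps stay the same.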

Question 3

You are creating a report in Power BI Desktop. You load a data extract that includes a free text field named col1. You need to analyze the frequency distribution of the string lengths in col1. The solution must not affect the size of the model. What should you do?

A DAX calculated column using LEN(col1) would enable a histogram of lengths, but calculated columns are materialized and stored in the model. That increases model size (even if compression reduces impact). The requirement explicitly says the solution must not affect model size, so adding a stored column is not appropriate.

A DAX measure to calculate average length (for example, AVERAGEX over LEN) would not increase model size, but it does not provide a frequency distribution of lengths. An average is a single aggregate value and cannot show how many rows have each length without additional structures (like a length column or a disconnected table).

Adding a length column in Power Query (Text.Length) is a common preparation step, but it typically loads into the model and therefore increases model size. Unless you specifically configure it as not loaded (not stated here), this violates the requirement. It also changes the dataset schema rather than just analyzing it.

This is the only option that points to using Power Query Editor’s profiling experience rather than creating a new field in the model. Profiling features are used to inspect data characteristics during preparation and do not persist additional columns into the semantic model. That makes it the best fit for the requirement to analyze the data without affecting model size. Although the wording about grouping specifically by length is imprecise, the intent is clearly to use profiling rather than model changes.

Question Analysis

Core concept: This question tests the difference between creating persisted model artifacts and using Power Query’s data profiling tools for exploratory analysis. The requirement is to analyze the frequency distribution of text lengths without increasing model size, so any solution that adds a new column to the model is undesirable.

Why correct: Among the options, the only approach that does not add a calculated or transformed column to the model is to use Power Query Editor’s profiling capabilities. Data profiling is intended for inspection and analysis during data preparation and does not create stored model data, so it satisfies the constraint about model size.

Key features: DAX calculated columns and Power Query added columns both create per-row values that are loaded into the model unless explicitly excluded. Measures do not store row-level results, but a single aggregate such as average length does not provide a frequency distribution. Power Query profiling features such as Column distribution and Column profile are lightweight analysis tools used to inspect data characteristics without persisting new fields.

Common misconceptions: A common mistake is to assume that any length analysis requires adding a LEN/Text.Length column. That works functionally, but it changes the model and can increase memory usage. Another misconception is that a measure like average length can substitute for a distribution, when in reality it only returns a summary statistic.

Exam tips: When a PL-300 question says a solution must not affect model size, prefer profiling or report-time logic over persisted columns. Eliminate options that explicitly add columns first. If one remaining option uses Power Query profiling, it is usually the intended answer even if the wording is less precise than the actual UI.
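If you do want an explicit frequency table during exploration, one option is a throwaway Power Query query with "Enable load" turned off, so nothing reaches the model. This is a sketch; the sample values below are made up for illustration.

```powerquery-m
let
    // Hypothetical sample standing in for the real extract
    Source = #table({"col1"}, {{"abc"}, {"de"}, {"fghij"}, {"kl"}, {"xyz"}}),
    // Compute the string length of col1 for each row
    WithLength = Table.AddColumn(Source, "Length", each Text.Length([col1]), Int64.Type),
    // Count rows per length to get the frequency distribution
    Distribution = Table.Group(
        WithLength,
        {"Length"},
        {{"RowCount", each Table.RowCount(_), Int64.Type}}
    )
in
    Distribution
```

Because the query is not loaded, it does not change the size of the semantic model.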

Question 4

HOTSPOT - You have a Power BI model that contains a table named Sales and a related date table. Sales contains a measure named Total Sales. You need to create a measure that calculates the total sales from the equivalent month of the previous year. How should you complete the calculation? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Sales Previous Year = ______

The correct function to start this measure is CALCULATE because you need to re-calculate an existing measure ([Total Sales]) under a modified filter context (the prior-year dates). CALCULATE is the core DAX function for context transition and filter manipulation, which is exactly what time intelligence requires.

Why the others are wrong:
- EVALUATE is used in DAX queries (for example, in DAX Studio or SSMS) to return a table result; it is not used to define measures in Power BI.
- SUM aggregates a column directly (for example, SUM(Sales[Amount])). Since you already have a measure [Total Sales], you should reuse it rather than re-summing a column.
- SUMX is an iterator that evaluates an expression row-by-row over a table. It is useful for calculated row logic, but it is unnecessary and less efficient for a simple prior-year version of an existing measure.

Part 2:

[Total Sales], ______(

SAMEPERIODLASTYEAR is the best match for “equivalent month of the previous year.” When the current filter context is a month (for example, March 2025), SAMEPERIODLASTYEAR returns the set of dates for March 2024, preserving the shape of the current period and shifting it back one year.

Why the others are wrong:
- DATESMTD returns dates from the start of the month to the current date in context (month-to-date), not the same month last year.
- PARALLELPERIOD can shift periods (for example, -1 year), but the exam typically expects SAMEPERIODLASTYEAR for standard prior-year comparisons, and it is more explicit for this scenario.
- TOTALMTD is a wrapper that calculates a month-to-date total, which is not requested here (you want total sales for the equivalent month, not MTD).

Part 3:

You should pass the date column from the Date table into SAMEPERIODLASTYEAR, which is 'Date'[Date]. Time intelligence functions require a column of type Date (or DateTime) from a proper date table to generate the correct set of shifted dates.

Why the others are wrong:
- [Date] is ambiguous because it does not specify the table. In DAX, especially in models with multiple tables, you should fully qualify columns to avoid ambiguity and ensure the function uses the date dimension.
- 'Date'[Month] is not appropriate because it is typically a text or numeric month attribute and does not uniquely identify days. SAMEPERIODLASTYEAR operates on a contiguous set of daily dates; using a month column would break the required granularity and can produce incorrect or invalid results.

Putting it together, the intended measure pattern is:
Sales Previous Year = CALCULATE([Total Sales], SAMEPERIODLASTYEAR('Date'[Date]))

Question 5

You have a Microsoft Power BI data model that contains three tables named Orders, Date, and City. There is a one-to-many relationship between Date and Orders and between City and Orders. The model contains two row-level security (RLS) roles named Role1 and Role2.

Role1 contains the following filter:
City[State Province] = "Kentucky"

Role2 contains the following filter:
Date[Calendar Year] = 2020

If a user is a member of both Role1 and Role2, what data will they see in a report that uses the model?

Correct. Role1 allows rows related to City[State Province] = "Kentucky", and Role2 allows rows related to Date[Calendar Year] = 2020. Since the user belongs to both roles, Power BI combines the allowed rows from both roles additively. That means the user can see data where the state is Kentucky or where the calendar year is 2020, including rows that satisfy both conditions.

Incorrect. Being a member of multiple RLS roles does not generate an error in Power BI. The service evaluates each applicable role and combines the permitted rows. The result is expanded visibility based on the union of the role filters, not a failure to load the report.

Incorrect. This option ignores the effect of Role2 on the Date table. Because Date is related to Orders, the 2020 filter also affects what Orders rows are visible. A user in both roles is not limited only to Kentucky rows; they also see rows from any state when the year is 2020.

Incorrect. This option assumes that Power BI intersects multiple role filters using AND. In reality, Power BI combines permissions from multiple roles using OR, so the user is not restricted only to rows that are both Kentucky and 2020. They can see all Kentucky data plus all 2020 data.

Question Analysis

Core concept: This question tests how Microsoft Power BI row-level security (RLS) behaves when a user belongs to more than one role. In Power BI, role filters are additive across roles, meaning the effective security is the union of the rows allowed by each role rather than the intersection.

Why correct: Role1 filters the City table to Kentucky, and that filter propagates to Orders through the City-to-Orders relationship. Role2 filters the Date table to Calendar Year 2020, and that filter propagates to Orders through the Date-to-Orders relationship. Because the user belongs to both roles, they can see Orders rows that satisfy either the Kentucky condition or the 2020 condition.

Key features:
- RLS filters on dimension tables propagate to related fact tables through active relationships.
- Membership in multiple roles broadens access because Power BI unions role results.
- A user in both roles can see all Kentucky rows, all 2020 rows, and any rows that meet both conditions.

Common misconceptions: A common mistake is assuming multiple Power BI roles are intersected with AND, which would make security more restrictive. That behavior is common in some other security models, but not in Power BI RLS role membership. Another misconception is thinking multiple roles cause an error, but Power BI simply evaluates all assigned roles together.

Exam tips: For PL-300, remember that Power BI RLS across multiple assigned roles is additive. Trace each role filter through the relationships to the fact table, then combine the visible rows from each role. If the options present AND versus OR behavior, choose OR for multiple role membership in Power BI.

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 6

You have a project management app that is fully hosted in Microsoft Teams. The app was developed by using Microsoft Power Apps. You need to create a Power BI report that connects to the project management app. Which connector should you select?

Microsoft Teams Personal Analytics is intended for personal productivity and activity insights in Teams, not for accessing the business data behind a Power Apps application. Even though the app is hosted in Teams, its data is not stored in a Teams Personal Analytics dataset. The hosting surface and the storage platform are different concepts, which makes this option incorrect.

SQL Server database would only be correct if the app were explicitly using SQL Server as its backend data source. The scenario gives no indication of that and instead points to a Teams-hosted Power Apps solution, which typically uses Dataverse for Teams. Without evidence of SQL Server storage, this is not the best connector choice.

Dataverse is correct because Power Apps applications that are fully hosted in Microsoft Teams commonly use Dataverse for Teams as their underlying data store. Power BI can connect to that business data by using the Dataverse connector, which is designed for Dataverse tables and metadata. This makes it the most appropriate connector for reporting on the app’s data in this scenario.

Dataflows are used to ingest, transform, and prepare data for reuse in Power BI and other Power Platform services. They are not the native source connector you would choose to connect directly to the data behind a Power Apps app. In this case, a dataflow could potentially consume Dataverse data later, but the correct source connector remains Dataverse.

Question Analysis

Core concept: This question tests whether you recognize the default data platform used by Power Apps apps that are built and hosted entirely inside Microsoft Teams. Such apps commonly use Dataverse for Teams as their underlying storage, and Power BI connects to that data by using the Dataverse connector.

Why correct: Because the app is fully hosted in Microsoft Teams and was developed with Power Apps, the most likely underlying data source is Dataverse for Teams rather than SQL Server or a Teams analytics dataset. Power BI provides a connector for Dataverse, which is the correct choice for accessing the app’s tables and data model. Therefore, Dataverse is the best answer.

Key features: Dataverse for Teams is a built-in low-code data platform for Teams-based Power Apps solutions. It stores relational business data in tables and integrates with the Power Platform ecosystem, including Power BI. Using the Dataverse connector aligns with how Teams-hosted Power Apps solutions are typically built.

Common misconceptions: A common mistake is assuming that because the app runs in Teams, the data should be accessed through a Teams-related connector such as Microsoft Teams Personal Analytics. Another misconception is choosing SQL Server just because Power Apps can connect to SQL Server in some scenarios; the question does not indicate that SQL Server is the app’s backend. Dataflows are also incorrect because they are a transformation and ingestion feature, not the source connector itself.

Exam tips: On PL-300, when you see a Power Apps app hosted in Teams, think first of Dataverse for Teams unless another storage technology is explicitly named. Focus on the underlying data platform, not the user interface where the app runs. If the question does not mention SQL Server, SharePoint, or another external source, Dataverse is usually the intended answer in this scenario.

Question 7

You import a Power BI dataset that contains the following tables:
✑ Date
✑ Product
✑ Product Inventory

The Product Inventory table contains 25 million rows. A sample of the data is shown in the following table.

ProductKey | DateKey  | MovementDate | UnitCost | UnitsIn | UnitsOut | UnitsBalance
167        | 20101228 | 28-Dec-10    | 0.19     | 0       | 0        | 875
167        | 20101229 | 29-Dec-10    | 0.19     | 0       | 0        | 875
167        | 20110119 | 19-Jan-11    | 0.19     | 0       | 0        | 875
167        | 20110121 | 21-Jan-11    | 0.19     | 0       | 0        | 875
167        | 20110122 | 22-Jan-11    | 0.19     | 0       | 0        | 875

The Product Inventory table relates to the Date table by using the DateKey column. The Product Inventory table relates to the Product table by using the ProductKey column. You need to reduce the size of the data model without losing information. What should you do?

Changing Summarization for DateKey to Don’t Summarize only changes how Power BI aggregates the field by default in visuals. It does not remove the column, change its cardinality, or materially reduce VertiPaq storage. This is a modeling/UX setting, not a data reduction technique, so it won’t achieve the goal of shrinking the dataset size.

Removing the relationship between Date and Product Inventory would harm the model’s ability to filter inventory by date and would break common time intelligence patterns. It also doesn’t remove any columns or rows from the fact table, so the storage footprint remains essentially the same. Relationships are metadata; they don’t typically drive large memory usage compared to fact table columns.

Changing UnitCost to Integer would likely lose decimal precision, which violates the requirement to reduce size without losing information. While data types can affect compression, the correct approach is to use an appropriate numeric type (often Decimal Number or Fixed Decimal) and optimize by reducing cardinality/columns. Converting currency/price to integer is only valid if the source is truly integer-valued.

Removing MovementDate eliminates a redundant date column from a 25-million-row fact table. The model already has DateKey and a relationship to the Date dimension, so the actual date can be retrieved from the Date table when needed. This reduces memory usage significantly while preserving information, aligning with star schema best practices and VertiPaq optimization principles.

Question Analysis

Core concept: This question tests Power BI data model optimization for Import mode (VertiPaq). Model size is driven primarily by column cardinality and data types. Reducing the number of stored columns (especially high-cardinality ones) is one of the most effective ways to shrink the model without losing analytical capability.

Why the answer is correct: MovementDate is a full date value (likely stored as Date/DateTime) that duplicates the information already represented by DateKey and the relationship to the Date dimension. In a star schema, you typically keep a single date key in the fact table and use the Date dimension for all date attributes and filtering. Because MovementDate repeats across 25 million rows, it consumes significant memory. If you can derive MovementDate from DateKey via the Date table (or keep it as a calculated column in the Date table), you are not losing information, only removing redundant storage from the fact table.

Key features / best practices:
- VertiPaq compression benefits from fewer columns and lower cardinality. Removing redundant columns is a top optimization.
- Use surrogate keys (DateKey) in the fact table and a Date dimension for human-readable dates and attributes (year, month, etc.).
- Prefer a clean star schema: dimensions filter facts through relationships; avoid duplicating dimension attributes in the fact table.

Common misconceptions:
- Changing summarization (Option A) affects default visual aggregation behavior, not storage size.
- Removing relationships (Option B) can break filtering and time intelligence; it does not reduce the stored data volume meaningfully.
- Converting UnitCost to Integer (Option C) would lose precision (information) and is not an acceptable optimization if decimals matter.

Exam tips: When asked to reduce model size “without losing information,” look first for redundant columns (especially dates/text) that can be obtained through relationships or dimension tables. In large fact tables, removing a single high-cardinality column often yields a bigger reduction than tweaking formatting or summarization settings.
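Removing the redundant column is a one-step Power Query transformation. In this sketch, #"Product Inventory" stands in for whatever query loads the fact table; in practice you would apply the step inside that query itself.

```powerquery-m
let
    // Reference the existing fact-table query (name assumed for illustration)
    Source = #"Product Inventory",
    // Drop the redundant date column; the Date dimension supplies dates via DateKey
    Reduced = Table.RemoveColumns(Source, {"MovementDate"})
in
    Reduced
```

Dropping the column before load means VertiPaq never stores the 25 million duplicated date values.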

Question 8

HOTSPOT - You plan to create the Power BI model shown in the exhibit. (Click the Exhibit tab.)

diagram

The data has the following refresh requirements:
✑ Customer must be refreshed daily.
✑ Date must be refreshed once every three years.
✑ Sales must be refreshed in near real time.
✑ SalesAggregate must be refreshed once per week.

You need to select the storage modes for the tables. The solution must meet the following requirements:
✑ Minimize the load times of visuals.
✑ Ensure that the data is loaded to the model based on the refresh requirements.

Which storage mode should you select for each table? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

Customer: ______

Customer should be set to Dual. Customer is a dimension table that refreshes daily (so Import is feasible), but the model also contains a near real-time fact table (Sales) that must be DirectQuery. If Customer were Import only, queries that slice the DirectQuery Sales table by Customer could require cross-storage joins and may reduce performance or cause more DirectQuery activity. Dual allows Customer to act as Import for queries that can be answered from cached data (fast visuals) and as DirectQuery when it must interact with the DirectQuery Sales table, improving filter propagation and reducing query complexity. DirectQuery for Customer would meet refresh needs but would unnecessarily slow many visuals; Import alone risks poorer performance/behavior in composite scenarios.

Part 2:

Date: ______

Date should be set to Dual. Date changes extremely infrequently (once every three years), so Import would easily meet the refresh requirement and would be fast. However, because Sales is DirectQuery, Date is used to filter that DirectQuery fact table (e.g., by DueDateKey). Setting Date to Dual is a best practice in composite models: it behaves as Import for most queries (fast slicing, cached dimension) but can switch to DirectQuery behavior when needed to efficiently join/filter the DirectQuery Sales table. DirectQuery for Date would add unnecessary runtime queries for a small, stable dimension. Import-only can work, but Dual is preferred to minimize visual load times and avoid cross-storage penalties when interacting with DirectQuery facts.

Part 3:

Sales: ______

Sales should be DirectQuery because it must be refreshed in near real time. Import mode cannot satisfy near real-time requirements unless you use very frequent refresh (which is limited by capacity/licensing and still not truly real time). DirectQuery queries the source at report time, reflecting the latest data with minimal delay, which aligns with the requirement. Dual is not appropriate for a large fact table that needs real-time behavior; Dual is primarily intended for dimension tables in composite models, not for the main detailed transactional fact. While DirectQuery can slow visuals, the model mitigates this by using an imported aggregation table (SalesAggregate) and Dual dimensions to accelerate common queries.

Part 4:

SalesAggregate: ______

SalesAggregate should be Import. It only needs to refresh weekly, which aligns perfectly with scheduled refresh into the in-memory model. Import provides the best performance for visuals because VertiPaq can answer many queries quickly, and with the aggregations feature Power BI can hit the imported aggregate table for summarized queries instead of sending DirectQuery requests to the detailed Sales table. DirectQuery would not help because there is no near real-time requirement for the aggregate and it would reduce performance. Dual is unnecessary for a fact/aggregate table; Dual is mainly beneficial for shared dimensions. Therefore, Import best meets both requirements: weekly refresh and minimized visual load times.

Question 9

DRAG DROP - You plan to create a report that will display sales data from the last year for multiple regions. You need to restrict access to individual rows of the data on a per-region basis by using roles. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Select the correct answer(s) in the image below.

question-image

A is correct because the required sequence for RLS is well-defined:
1. Import the data to Power BI Desktop (build the dataset/model).
2. Create a role definition (Modeling > Manage roles) using a DAX filter such as [Region] = "West" or a dynamic rule using USERPRINCIPALNAME() with a mapping table.
3. Publish the report (and dataset) to the Power BI service.
4. Assign users to the role in the Power BI service (Dataset > Security), ideally using Azure AD groups.

The option “Add a filter to the report” is incorrect for this requirement because report filters are not security; they do not prevent a user from accessing underlying rows if they have permissions, and they do not enforce dataset-level restrictions. RLS roles are the correct mechanism when the question explicitly says “restrict access to individual rows… by using roles.”
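The role definition itself is a DAX table filter expression. A static rule and a dynamic variant might look like the following sketch; the UserRegion mapping table and its columns are hypothetical, not named in the question.

```dax
-- Static rule on the Region table for a role such as "West Region"
[Region] = "West"

-- Dynamic rule: look up the signed-in user's region in a mapping table
-- (UserRegion[UserEmail] and UserRegion[Region] are assumed columns)
[Region] =
    LOOKUPVALUE(
        UserRegion[Region],
        UserRegion[UserEmail], USERPRINCIPALNAME()
    )
```

The dynamic form lets one role serve every region, with access driven by the mapping table rather than by a role per region.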

Question 10

You have a Microsoft Power BI report. The size of the PBIX file is 550 MB. The report is accessed by using an App workspace in shared capacity of powerbi.com. The report uses an imported dataset that contains one fact table. The fact table contains 12 million rows. The dataset is scheduled to refresh twice a day at 08:00 and 17:00. The report is a single page that contains 15 AppSource visuals and 10 default visuals. Users say that the report is slow to load the visuals when they access and interact with the report. You need to recommend a solution to improve the performance of the report. What should you recommend?

Incorrect. Iterator functions (for example, SUMX, AVERAGEX) often increase CPU cost because they evaluate row-by-row over a table expression. While there are cases where iterators are necessary, recommending “change measures to use iterator functions” is not a general performance optimization and commonly makes measures slower compared to using native aggregations (SUM, COUNT, etc.) with proper model design.

Incorrect. Enabling visual interactions (cross-filtering/highlighting) typically increases the number of queries and re-render operations when users click or select data points. That can make a report feel slower, not faster—especially on a page already containing 25 visuals. If anything, limiting interactions can improve performance, but that is the opposite of this option.

Incorrect. AppSource visuals are custom visuals and can introduce additional rendering overhead and sometimes less-optimized query patterns compared to built-in visuals. Replacing default visuals with AppSource visuals would likely degrade performance further. A common best practice is to prefer built-in visuals where possible and use custom visuals sparingly.

This is the best recommendation because the report has 25 visuals on a single page, including 15 AppSource custom visuals, all of which must load, query, and render together. Splitting the visuals across multiple pages reduces the number of visuals rendered at one time, which lowers query concurrency, browser rendering overhead, and cross-filtering workload. This is especially important in shared capacity, where CPU and memory resources are limited and heavily shared. Reducing visuals per page is a standard Power BI performance optimization for slow-loading and slow-interacting reports.

Question Analysis

Core concept: This question is about report rendering and interaction performance in Power BI, especially when a single report page contains many visuals. In Import mode, each visual can trigger its own query and rendering workload, and custom AppSource visuals often add extra client-side overhead.

Why correct: Splitting 25 visuals across multiple pages reduces the number of visuals that must load, query, and render simultaneously, which improves initial page load and interaction responsiveness.

Key features: Fewer visuals per page means fewer concurrent queries, less cross-filtering activity, and less browser rendering work; this is especially important in shared capacity, where compute resources are limited and heavily shared.

Common misconceptions: Using more AppSource visuals does not improve performance, and iterator functions usually increase calculation cost rather than reduce it.

Exam tips: For PL-300, when a report page has many visuals and users complain about slow loading and interaction, first consider reducing the number of visuals on a page, limiting interactions, and preferring built-in visuals over custom ones.

Want to practice all questions on the go?

Download Cloud Pass — includes practice tests, progress tracking & more.

Question 11

HOTSPOT - You are creating a Microsoft Power BI imported data model to perform basket analysis. The goal of the analysis is to identify which products are usually bought together in the same transaction across and within sales territories. You import a fact table named Sales as shown in the exhibit. (Click the Exhibit tab.)

SalesRowID | ProductKey | OrderDateKey | OrderDate | CustomerKey | SalesTerritoryKey | SalesOrderNumber | SalesOrderLineNumber | OrderQuantity | LineTotal | TaxAmt | Freight | LastModified | AuditID
1 | 310 | 20101229 | 2010-12-29 00:00:00.000 | 21768 | 6 | SO43697 | 1 | 1 | 3578.27 | 286.2616 | 89.4568 | 2011-01-10 00:00:00.000 | 127
2 | 346 | 20101229 | 2010-12-29 00:00:00.000 | 28389 | 7 | SO43698 | 1 | 1 | 3399.99 | 271.9992 | 84.9998 | 2011-01-10 00:00:00.000 | 127
3 | 346 | 20101229 | 2010-12-29 00:00:00.000 | 25863 | 1 | SO43699 | 1 | 1 | 3399.99 | 271.9992 | 84.9992 | 2011-01-10 00:00:00.000 | 127
4 | 336 | 20101229 | 2010-12-29 00:00:00.000 | 14501 | 4 | SO43700 | 1 | 1 | 699.0982 | 55.9279 | 17.4775 | 2011-01-10 00:00:00.000 | 127
5 | 346 | 20101229 | 2010-12-29 00:00:00.000 | 11003 | 9 | SO43701 | 1 | 1 | 3399.99 | 271.9992 | 84.9998 | 2011-01-10 00:00:00.000 | 127
6 | 311 | 20101230 | 2010-12-30 00:00:00.000 | 27645 | 4 | SO43702 | 1 | 1 | 3578.27 | 286.2616 | 89.4568 | 2011-01-11 00:00:00.000 | 127
7 | 310 | 20101230 | 2010-12-30 00:00:00.000 | 16624 | 9 | SO43703 | 1 | 1 | 3578.27 | 286.2616 | 89.4568 | 2011-01-11 00:00:00.000 | 127

The related dimension tables are imported into the model. Sales contains the columns described in the following table:

SalesRowID (Integer): ID of the row from the source system, which represents a unique combination of SalesOrderNumber and SalesOrderLineNumber
ProductKey (Integer): Surrogate key that relates to the product dimension
OrderDateKey (Integer): Surrogate key that relates to the date dimension and is in the YYYYMMDD format
OrderDate (Datetime): Date and time an order was processed
CustomerKey (Integer): Surrogate key that relates to the customer dimension
SalesTerritoryKey (Integer): Surrogate key that relates to the sales territory dimension
SalesOrderNumber (Text): Unique identifier of an order
SalesOrderLineNumber (Integer): Unique identifier of a line within an order
OrderQuantity (Integer): Quantity of the product ordered
LineTotal (Decimal): Total sales amount of a line before tax
TaxAmt (Decimal): Amount of tax charged for the items on a specified line within an order
Freight (Decimal): Amount of freight charged for the items on a specified line within an order
LastModified (Datetime): The date and time that a row was last modified in the source system
AuditID (Integer): The ID of the data load process that last updated a row

You are evaluating how to optimize the model. For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

The SalesRowID and AuditID columns can be removed from the model without impeding the analysis goals.

Yes. SalesRowID and AuditID are technical/operational columns that do not help identify products bought together. Basket analysis requires grouping lines into a transaction (SalesOrderNumber) and identifying the products in that transaction (ProductKey), then optionally slicing by territory/date/customer. SalesRowID is a source-system row identifier (unique combination of SalesOrderNumber and SalesOrderLineNumber). Since SalesOrderNumber and SalesOrderLineNumber already exist, SalesRowID is redundant for analysis and not needed for relationships. AuditID tracks the ETL/load process that last updated the row; it is useful for data engineering troubleshooting but not for analytical grouping, filtering, or measures related to co-purchase behavior. Removing both columns reduces model size and can improve refresh/query performance without impeding the stated analysis goals.

Part 2:

Both the OrderDateKey and OrderDate columns are necessary to perform the basket analysis.

No. You do not need both OrderDateKey and OrderDate to perform the basket analysis. In a star schema, OrderDateKey is typically used to relate the Sales fact table to a Date dimension (which then provides year/quarter/month/day attributes for slicing). For basket analysis across time, that relationship is sufficient. The OrderDate (datetime) column is redundant if you already have a proper Date dimension and do not need time-of-day granularity. Basket analysis is usually performed at the transaction level (SalesOrderNumber) and may be filtered by date, but that can be done via the Date dimension using OrderDateKey. You would keep OrderDate only if you specifically needed the timestamp (hours/minutes) or lacked a Date dimension/relationship. Given that the prompt states the related dimension tables are imported, OrderDateKey alone is enough.

Part 3:

The TaxAmt column must retain the current number of decimal places to perform the basket analysis.

No. TaxAmt does not need to retain the current number of decimal places to perform basket analysis. Basket analysis primarily evaluates which products co-occur in the same order; it relies on transaction identifiers and product identifiers, and sometimes quantities. Tax amount precision is not part of determining whether two products were bought together. Even if you were to include value-based metrics (e.g., total basket value), you could typically round currency values to a sensible precision (often 2 decimal places) without changing the co-occurrence results. In many basket-analysis models, TaxAmt is not required at all and could be removed to optimize the model. Therefore, retaining the exact current decimal precision is not a requirement for achieving the stated analysis goal.
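To make the three answers concrete, a typical basket-analysis measure groups lines by SalesOrderNumber and checks product co-occurrence. The sketch below is hypothetical and not part of the question: it assumes a disconnected copy of the product dimension named 'Filter Product', whose selection is mapped onto Sales[ProductKey] with TREATAS.

```dax
-- Hypothetical sketch (assumes a disconnected 'Filter Product' table):
-- counts orders that contain both the product(s) in the current filter
-- context and the product(s) selected in 'Filter Product'.
Orders With Both Products :=
CALCULATE (
    DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
    KEEPFILTERS (
        CALCULATETABLE (
            VALUES ( Sales[SalesOrderNumber] ),
            TREATAS ( VALUES ( 'Filter Product'[ProductKey] ), Sales[ProductKey] )
        )
    )
)
```

Notice that this logic needs only SalesOrderNumber, ProductKey, and the slicing keys, which is why columns such as SalesRowID, AuditID, and precise TaxAmt decimals can be removed or reduced without impeding the analysis goal.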

Question 12

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are modeling data by using Microsoft Power BI. Part of the data model is a large Microsoft SQL Server table named Order that has more than 100 million records. During the development process, you need to import a sample of the data from the Order table. Solution: From Power Query Editor, you import the table and then add a filter step to the query. Does this meet the goal?

Yes is correct because adding a filter step in Power Query Editor can limit the rows imported from the SQL Server Order table. With SQL Server, simple filters usually support query folding, so Power Query sends a reduced query to the source instead of retrieving all 100 million rows. That means only a sample or subset of the data is imported during development. This is a common and valid way to work with large tables while building and testing a model.

No is incorrect because the proposed solution can meet the goal when the filter step folds back to SQL Server. In Power Query, transformations are not necessarily executed after a full table load; instead, they are often translated into source queries for foldable sources. Since SQL Server supports query folding for standard filters, the subset can be retrieved directly from the source. The answer would only be doubtful if the filter could not fold, but that is not the typical assumption for this exam scenario.
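For context, the applied steps might look like the following Power Query M sketch. The server, database, and filter values are illustrative assumptions, not from the question; the point is that Table.SelectRows over a SQL source typically folds into a WHERE clause.

```m
let
    // Assumed server and database names, for illustration only
    Source = Sql.Database("sqlserver01", "SalesDB"),
    Orders = Source{[Schema = "dbo", Item = "Order"]}[Data],
    // This filter step usually folds back to SQL Server, e.g.:
    // SELECT ... FROM dbo.[Order] WHERE OrderDate >= '2024-01-01'
    Filtered = Table.SelectRows(Orders, each [OrderDate] >= #datetime(2024, 1, 1, 0, 0, 0))
in
    Filtered
```

You can confirm folding by right-clicking the step in Power Query Editor and checking whether View Native Query is available.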

Question Analysis

Core concept: This question tests whether Power Query can be used to limit the amount of data imported from a large SQL Server table during development.

Why correct: Adding a filter step in Power Query against a SQL Server source will typically use query folding, meaning the filter is translated into a SQL WHERE or TOP-style restriction and only the filtered subset is retrieved.

Key features: SQL Server is a foldable source, and Power Query transformations such as simple row filters are commonly pushed down to the source.

Common misconceptions: Some candidates assume that importing the table first always loads the full table before later steps are applied, but in Power Query the applied steps form a single query plan and can be folded to the source.

Exam tips: For PL-300, when asked how to reduce imported data during development, think about source-side filtering and query folding rather than loading the full dataset and trimming it afterward in the model.

Question 13

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen. You are modeling data by using Microsoft Power BI. Part of the data model is a large Microsoft SQL Server table named Order that has more than 100 million records. During the development process, you need to import a sample of the data from the Order table. Solution: You write a DAX expression that uses the FILTER function. Does this meet the goal?

Yes is incorrect because DAX FILTER does not act at the data acquisition stage for an Import model. Even if you create a filtered calculated table with DAX, the source data must already exist in the model before DAX can evaluate it. That means the large SQL Server table would still be imported first, defeating the purpose of sampling during development. The correct approach is to sample in Power Query or in the source query so only a subset is loaded.

No is correct because writing a DAX expression that uses FILTER does not help import only a sample from the SQL Server Order table. DAX operates on data after it has already been loaded into the Power BI model, so it cannot limit the rows retrieved from the source during import. To sample data during development, you should use Power Query steps such as Keep Top Rows, filters that fold back to SQL Server, or a native SQL statement. Those approaches reduce the data volume before it is imported, which is the stated goal.
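To see why, consider what the DAX attempt would look like. The calculated table below is a hypothetical sketch (the column name is assumed): it can only be evaluated after 'Order' has been fully imported, so all 100 million rows still load first.

```dax
-- Hypothetical sketch: FILTER runs against data already in the model,
-- so this does NOT reduce what is imported from SQL Server.
Order Sample =
FILTER ( 'Order', 'Order'[OrderDate] >= DATE ( 2024, 1, 1 ) )
```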

Question Analysis

Core concept: This question tests how to retrieve only a sample of rows from a large source table during data import in Power BI. When importing data from SQL Server, sampling must be done in Power Query or at the source query level so that fewer rows are fetched before they enter the model. DAX is evaluated after data is loaded into the model, so a DAX FILTER expression does not reduce the amount of source data imported. A common misconception is thinking DAX can control extraction volume from the source, but DAX works on model data rather than source acquisition. For the exam, remember that row reduction during import is done with Power Query transformations, native SQL queries, or source-side filtering—not with DAX.

Question 14

HOTSPOT - You have a Power BI report. You have the following tables:

Balances: The table contains daily records of closing balances for every active bank account. The closing balances appear for every day the account is live, including the last day.
Date: The table contains a record per day for the calendar years 2000 to 2025. There is a hierarchy for financial year, quarter, month, and day.

You have the following DAX measure:

Accounts := CALCULATE ( DISTINCTCOUNT ( Balances[AccountID] ), LASTDATE ( 'Date'[Date] ) )

For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:

Part 1:

A table visual that displays the date hierarchy at the year level and the [Accounts] measure will show the total number of accounts that were live throughout the year.

At the year level, the filter context includes all dates in that year. LASTDATE('Date'[Date]) returns only the final date of that year (e.g., 31-Dec for calendar year, or the last day of the financial year if the hierarchy uses financial year boundaries). CALCULATE then evaluates DISTINCTCOUNT(Balances[AccountID]) only for that single last date. So the visual will show the number of accounts that were live on the last day of the year, not the number of accounts that were live throughout the year. “Live throughout the year” would require accounts to have Balances rows for every day in the year (or to meet a start/end date condition spanning the whole year), which this measure does not test. Therefore, the statement is false.

Part 2:

A table visual that displays the date hierarchy at the month level and the [Accounts] measure will show the total number of accounts that were live throughout the month.

At the month level, the filter context includes all days in the selected month. LASTDATE returns the last day of that month in the current context. The measure then counts distinct accounts present in Balances on that last day only. That result is not “the total number of accounts that were live throughout the month.” It is “accounts live on the last day of the month.” Accounts that were active earlier in the month but closed before month-end will be excluded, and accounts opened mid-month but still active at month-end will be included (even though they were not live for the full month). To compute “throughout the month,” you’d need logic ensuring the account appears on every day of the month (or spans the full month). Hence, No.

Part 3:

A table visual that displays the date hierarchy at the day level and the [Accounts] measure will show the total number of accounts that were live that day.

At the day level, the filter context is already a single date (each row in the table visual corresponds to one day). In that context, LASTDATE('Date'[Date]) returns that same date. Therefore, CALCULATE applies a filter that effectively keeps the same single day. Because Balances contains a daily record for every account that is live on that day, DISTINCTCOUNT(Balances[AccountID]) for that date returns the number of accounts live that day. This matches the statement exactly. Therefore, Yes.
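By contrast, a measure that counted only accounts live on every day of the period would need different logic than [Accounts]. The sketch below is hypothetical and not part of the question; it assumes the Balances table has a Date column related to 'Date'[Date].

```dax
-- Hypothetical sketch: counts accounts that have a balance row on EVERY
-- day of the period in the current filter context (Balances[Date] is an
-- assumed column name).
Accounts Live All Period :=
VAR DaysInPeriod = COUNTROWS ( 'Date' )
RETURN
    COUNTROWS (
        FILTER (
            VALUES ( Balances[AccountID] ),
            CALCULATE ( DISTINCTCOUNT ( Balances[Date] ) ) = DaysInPeriod
        )
    )
```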

Question 15
(Select 2)

You have a report that contains four pages. Each page contains slicers for the same four fields. Users report that when they select values in a slicer on one page, the selections are not persisted on other pages. You need to recommend a solution to ensure that users can select a value once to filter the results on all the pages. What are two possible recommendations to achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Incorrect. Bookmarks capture a saved state of a report page (including slicer selections), but they are static snapshots. Creating a bookmark for each slicer value is not practical for many values and does not allow users to freely select any value once and have it persist dynamically across pages. Bookmarks are better for navigation, storytelling, and toggling predefined views.

Correct. Report-level filters apply to every page in the report. Moving the four fields from slicers to report-level filters ensures a single selection filters all pages consistently. This fully meets the persistence requirement, though it changes the UX from on-canvas slicers to the Filters pane. It’s a valid complete solution when global filtering is the primary goal.

Correct. Sync slicers is specifically designed to keep slicer selections consistent across multiple pages. When you sync slicers for the same field, a selection made on one page is reflected on other pages. You can also control visibility per page, allowing one slicer to drive filtering everywhere without duplicating the visual experience on each page.

Incorrect. Page-level filters only affect visuals on the current page. Even if you configure the same page-level filters on each page, users would still need to set them repeatedly, and selections would not automatically persist when navigating between pages. This does not satisfy the requirement to select once and filter all pages.

Incorrect. Visual-level filters apply only to a single visual, not to all visuals on a page or across pages. Replacing slicers with visual-level filters would fragment the filtering experience and require repeated configuration per visual. It cannot provide a single selection that filters all pages’ content.

Question Analysis

Core concept: This question tests how to apply consistent filtering across multiple report pages in Power BI. Slicers are visuals, and by default their selections are page-scoped unless you explicitly synchronize them. Alternatively, filters can be applied at the report level so they affect every page.

Why the answers are correct: Syncing slicers across pages (C) is the most direct solution when you want users to interact with a slicer once and have that same selection propagate to slicers on other pages. Using the Sync slicers pane, you can choose which pages are synced and whether the slicer is visible on each page. This preserves the slicer experience and ensures the selection state is shared. Replacing slicers with report-level filters (B) is also a complete solution because report-level filters apply to all pages in the report. If the requirement is "select a value once to filter results on all pages," a report-level filter satisfies the functional need. While it changes the user experience (filter pane instead of on-canvas slicer), it guarantees persistence across pages without needing multiple slicer visuals.

Key features / configuration notes:
- Sync slicers: Use View > Sync slicers. Ensure the slicers are based on the same field and that sync is enabled for all relevant pages. You can keep the slicer visible on one page and hidden on others while still syncing.
- Report-level filters: Add the field(s) to the Filters pane under "Filters on all pages." This is especially useful when you want global constraints (e.g., Region, Fiscal Year) consistently applied.

Common misconceptions:
- Page-level filters (D) only affect a single page, so they do not persist across pages.
- Visual-level filters (E) only affect one visual, not the whole page or report.
- Bookmarks (A) can capture a state, but they are not a scalable way to persist arbitrary slicer selections across pages; they require predefining states and do not meet the "select once" interactive requirement.

Exam tips: Remember the scope hierarchy: visual < page < report. For slicers specifically, "Sync slicers" is the feature designed to share slicer selections across pages while keeping the slicer interaction model.


Question 16

DRAG DROP - You are using existing reports to build a dashboard that will be viewed frequently in portrait mode on mobile phones. You need to build the dashboard. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:

Part 1:

Pin items from the reports to the dashboard.

Yes. A dashboard in Power BI is composed of tiles, and those tiles typically come from existing report visuals (or live pages) that you pin in the Power BI service. Since the requirement is to use existing reports to build a dashboard, pinning is the foundational step that actually creates the dashboard content. Without pinning items, there is nothing to arrange in the dashboard (including in the mobile layout). Why not No: If you skip pinning, you would only be editing an empty dashboard or not have a dashboard at all. Even if a dashboard already exists, you still need to pin the specific visuals you want users to see frequently on mobile. Pinning is therefore required to meet the scenario’s goal of building a mobile-consumable dashboard from existing reports.

Part 2:

Open the dashboard.

Yes. After pinning items (which creates/populates the dashboard), you must open the dashboard to access its settings and edit experiences, including the Mobile view. The mobile layout editor is accessed from within the dashboard in the Power BI service. Why not No: You cannot edit a dashboard’s mobile view without being in the context of that dashboard. While you can pin directly from a report visual and choose an existing/new dashboard, the step of opening the dashboard is necessary to proceed to mobile-specific layout editing and to validate the final portrait phone experience.

Part 3:

Create a phone layout for the existing reports.

No. Creating a phone layout for existing reports is a report-level optimization done in Power BI Desktop (View > Mobile layout). That feature controls how report pages render in the Power BI mobile app, not how a dashboard’s tiles are arranged on a phone. Why not Yes: The question is specifically about building a dashboard from existing reports and having that dashboard viewed frequently on mobile phones in portrait mode. For dashboards, the correct feature is editing the dashboard Mobile view in the Power BI service. Report phone layout could still be useful if users also open the underlying reports on mobile, but it is not required to build and optimize the dashboard itself, so it is not part of the necessary sequence.

Part 4:

Edit the Dashboard mobile view.

Yes. To optimize a dashboard for portrait mode on mobile phones, you use the dashboard’s Mobile view editor in the Power BI service. This provides a dedicated canvas representing a phone screen where you choose which pinned tiles appear and how they are arranged for mobile consumption. Why not No: If you only rely on the default dashboard layout, the mobile experience may be suboptimal (tiny tiles, poor ordering, excessive scrolling). The requirement explicitly calls out frequent viewing on mobile in portrait mode, which strongly implies you should tailor the Mobile view to prioritize key KPIs and improve usability.

Part 5:

Rearrange, resize, or remove items from the mobile layout.

Yes. After entering the dashboard Mobile view editor, you must rearrange, resize, or remove items to create an effective portrait phone layout. This is the step where you implement the actual mobile-first design: placing the most important KPIs at the top, resizing tiles for readability, and removing nonessential visuals to reduce scrolling. Why not No: Simply opening Mobile view without adjusting the layout does not meet the requirement to build a dashboard that is optimized for frequent mobile viewing. The exam expects you to recognize that mobile optimization is an explicit design activity, not an automatic outcome of pinning tiles.

Question 17

You have a Power BI report. The report contains a visual that shows gross sales by date. The visual has anomaly detection enabled. No anomalies are detected. You need to increase the likelihood that anomaly detection will identify anomalies in the report. What should you do?

Increasing Expected range transparency only changes how the expected range band is displayed (more or less see-through). It does not change the statistical threshold or the algorithm’s behavior, so it will not increase the likelihood of anomalies being detected. This is a common trap: many formatting settings affect readability, not detection outcomes.

Adding a data field to the Legend creates multiple series (multiple lines) and anomaly detection is then applied per series. This does not inherently increase detection likelihood; it can even make detection harder if each series has fewer data points or more variability. Legend is useful for comparing categories, not for making anomaly detection more sensitive.

Increasing the Sensitivity setting makes anomaly detection more likely to flag deviations as anomalies. This is the correct control to adjust when no anomalies are detected but you expect some. Higher sensitivity typically results in more anomalies being identified (with a higher chance of false positives), which aligns with the requirement to increase likelihood.

Adding a data field to Secondary values changes the visual to include additional measures/series. This does not directly tune anomaly detection thresholds. Like using a Legend, it can complicate the chart and does not guarantee more anomalies will be detected. The correct approach is to adjust Sensitivity rather than adding more measures.

Question Analysis

Core concept: This question tests Power BI’s built-in anomaly detection on time series visuals (typically line charts). Anomaly detection uses statistical/ML techniques to model expected behavior over time and flags points that deviate beyond an expected range. The key control that affects how readily points are flagged is Sensitivity.

Why the answer is correct: If no anomalies are detected and you want to increase the likelihood that anomalies will be identified, you should increase the Sensitivity setting. Higher sensitivity makes the detection algorithm more "eager" to flag deviations from the expected pattern. In practice, this narrows the tolerance for variation around the expected range, so smaller departures from the forecasted/expected values are more likely to be marked as anomalies.

Key features and configuration notes:
- Sensitivity is the primary tuning knob for anomaly detection. Increasing it generally increases the number of detected anomalies (at the risk of more false positives).
- The expected range band is a visual representation of the model’s expected values and uncertainty; it helps interpret anomalies but is not the main driver for detection.
- Anomaly detection works best with a clean, continuous date axis and enough historical points. Missing dates, irregular intervals, or very noisy data can reduce detection quality.

Common misconceptions:
- Expected range transparency is only a formatting choice; it changes how visible the band is, not the underlying detection.
- Adding fields to Legend or Secondary values changes the visual’s series structure (multiple lines) and can complicate interpretation; it does not directly increase the probability of detecting anomalies and may even reduce it if each series has fewer points.

Exam tips: For PL-300, remember: when asked to "increase/decrease likelihood" of anomaly detection, look for Sensitivity. Formatting options (like transparency) affect appearance only. Also, distinguish between improving the model’s ability to detect anomalies (data quality, continuous time series, sufficient history) and simply making the visualization easier to read. (Reference: Power BI anomaly detection feature documentation and visual analytics behavior.)

Question 18
(Select 2)

You have a Power BI query named Sales that imports the columns shown in the following table.

(Exhibit: column diagram not reproduced)

Users only use the date part of the Sales_Date field. Only rows with a Status of Finished are used in analysis. You need to reduce the load times of the query without affecting the analysis. Which two actions achieve this goal? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.

Correct. Only rows where Status = Finished are used in analysis, so removing canceled rows does not change report results. This reduces the number of rows imported, transformed, and stored in the model, which improves refresh performance and memory usage. If the source supports query folding, the filter can be pushed down to the source system, making this an especially strong optimization.

Incorrect. Removing Sales_Date would eliminate the sale date entirely, but users still need the date portion for analysis. That would affect reporting, grouping, filtering, and potentially relationships to a calendar table. Because the date is required, the column cannot simply be removed.

Incorrect. Delivery_Time represents elapsed delivery time in hours and includes decimal precision, so converting it to Integer would change the meaning of the data. That could affect calculations, averages, and business interpretation. It is not a safe optimization because it alters analytical results rather than just improving performance.

Correct. Users only need the date part of Sales_Date, not the time portion. Splitting the datetime into separate date and time columns lets you keep the date column and remove the unneeded time column, which preserves the analysis requirements while reducing the data loaded. A date-only column also has lower cardinality than a full datetime value, which improves model compression and performance.

Incorrect. Removing Canceled_Date may reduce model size slightly because the column is not needed for finished sales analysis, but by itself it is not the best answer to the explicit requirements given. The question specifically highlights that only the date part of Sale_Date is used, so the more appropriate optimization is to reshape that field rather than focus on Canceled_Date. Therefore E is not one of the two intended complete solutions.
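Combined, the two correct actions might look like the following Power Query M sketch. The step and column names are assumptions based on the question's descriptions (Sales_Date, Status), not actual code from the scenario.

```m
let
    Source = Sales,  // assumed reference to the existing Sales query
    // Keep only the rows used in analysis; folds to the source when supported
    Finished = Table.SelectRows(Source, each [Status] = "Finished"),
    // Keep only the date part of Sales_Date, lowering cardinality
    DateOnly = Table.TransformColumns(Finished, {{"Sales_Date", DateTime.Date, type date}})
in
    DateOnly
```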

Question Analysis

Core concept: This question is about reducing Power BI query load time by eliminating unnecessary rows and unnecessary column content while preserving everything required for analysis. The best optimizations reduce the imported row count and avoid loading unused portions of columns. In this scenario, analysts use only rows where Status = Finished and only the date portion of Sales_Date, so the query should be shaped to match those requirements as closely as possible.

Why correct: Filtering out canceled rows directly reduces the number of records loaded and processed, which is one of the most effective ways to improve refresh performance. Splitting Sales_Date into date and time lets the model keep only the date information needed for analysis and discard the unnecessary time portion. Together, these changes reduce data volume without changing report meaning.

Key features: Row filtering can often fold back to the source, making it highly efficient. Reducing datetime content to date-only data lowers cardinality and can improve compression. Keeping only required data elements is a standard Power Query and data modeling best practice.

Common misconceptions: Removing Canceled_Date alone is helpful but does not address the explicit requirement that only the date part of Sales_Date is used. Changing Delivery_Time to integer would alter the data and could affect calculations. Removing Sales_Date entirely would break analysis because the sale date is still required.

Exam tips: For PL-300 performance questions, prioritize actions that remove unneeded rows first, then remove or simplify unneeded columns or column content. Distinguish optimizations that preserve semantics from those that change the meaning of the data. If a question explicitly says only part of a field is used, expect a transformation that keeps only that part.

Question 19

For the sales department at your company, you publish a Power BI report that imports data from a Microsoft Excel file located in a Microsoft SharePoint folder. The data model contains several measures. You need to create a Power BI report from the existing data. The solution must minimize development effort. Which type of data source should you use?

Power BI dataset is correct because it lets you build a new report directly on the existing published semantic model that already contains the imported data and measures. This minimizes effort by avoiding re-creating Power Query steps, relationships, and DAX measures. It also improves consistency and governance (one model, many reports) and supports the thin report pattern commonly tested on PL-300.

A SharePoint folder would require connecting to the Excel file(s) again and rebuilding the data model in the new report. You would need to recreate measures and any transformations, increasing development time and the chance of inconsistent logic. SharePoint is a valid source for ingestion, but it does not reuse the existing published semantic model.

Power BI dataflows are designed to reuse data preparation (Power Query transformations) across multiple datasets. However, dataflows do not store or reuse the existing dataset’s measures and model relationships. If you used a dataflow, you would still need to create a new dataset/semantic model and rebuild the measures, which does not minimize effort given the existing model.

An Excel workbook source would mean importing directly from the file again, then rebuilding the model and measures in the new report. This duplicates logic and refresh configuration and can lead to multiple versions of calculations. It’s appropriate when starting from scratch, but not when an existing published dataset with measures already exists and should be reused.

Question Analysis

Core concept: This question tests Power BI reuse patterns: building a new report from an existing semantic model (dataset) versus rebuilding the data acquisition and modeling layers. In Power BI, the dataset (semantic model) contains the imported data, relationships, calculated columns, and, critically here, measures.

Why the answer is correct: You already published a report that imports Excel data from SharePoint, and the model contains several measures. To create another report from the existing data while minimizing development effort, connect to the existing Power BI dataset (semantic model) in the Power BI service, or in Desktop via Get data > Power Platform > Power BI datasets. This reuses the existing measures and model logic without re-creating Power Query steps, relationships, or DAX measures. It also centralizes governance: one model, many reports.

Key features / best practices:
- Thin reports: Create multiple reports that connect live to a single dataset. This reduces duplication and improves consistency across departments.
- Single source of truth: Updates to measures or the model propagate to all connected reports.
- Manageability and security: Dataset permissions, RLS, and endorsement (Promoted/Certified) can be applied once, reducing configuration drift and simplifying change management.
- Performance: Reusing a curated model avoids repeated refresh configurations and redundant imports.

Common misconceptions:
- SharePoint folder or Excel workbook might seem "closest" to the original file, but using them would require rebuilding the model and measures, increasing effort and the risk of inconsistent calculations.
- Dataflows are great for reusable ETL, but they do not carry over the existing dataset's measures; you would still need to rebuild the semantic model.

Exam tips: When you see "existing report/dataset with measures" and "minimize development effort," the exam usually expects "Power BI dataset" (the thin report pattern). Choose dataflows when the goal is to reuse data preparation across multiple datasets, not when the goal is to reuse an existing semantic model and its DAX measures. Official guidance to review: Power BI semantic models (datasets), thin reports, and shared datasets in the Power BI service.

Question 20

DRAG DROP - You receive revenue data that must be included in Microsoft Power BI reports. You preview the data from a Microsoft Excel source in Power Query as shown in the following exhibit. The preview shows six columns named Column1 through Column6, and the column quality bar for each reports Valid: 100%, Error: 0%, Empty: 0%.

Department | Product | 2016 | 2017 | 2018 | 2019
Bikes | Carbon mountain bike | 1002815 | 1006617 | 1007814 | 1007239
Bikes | Aluminium road bike | 1007024 | 1001454 | 1005842 | 1007105
Bikes | Touring bike | 1003676 | 1005171 | 1001669 | 1003244
Accessories | Bell | 76713 | 10247 | 60590 | 52927
Accessories | Bottle holder | 26690 | 29613 | 67955 | 71466
Accessories | Satnav | 83189 | 40113 | 71684 | 24697
Accessories | Mobile phone holder | 68641 | 80336 | 58099 | 45706

You plan to create several visuals from the data, including a visual that shows revenue split by year and product. You need to transform the data to ensure that you can build the visuals. The solution must ensure that the columns are named appropriately for the data that they contain. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place:


The correct sequence is determinable from the scenario and the listed actions. The required end state is a long table with columns such as Department, Product, Year, and Revenue. To get there, you must (1) promote the first row to headers so the year columns are correctly named (2016, 2017, 2018, 2019) rather than Column1 through Column6; (2) select Department and Product and use Unpivot Other Columns to convert all year columns into rows (this is preferred over Unpivot Columns because it automatically includes any additional year columns that may appear later); and (3) rename Attribute to Year and Value to Revenue so the columns are semantically correct for modeling and visuals.
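The three Power Query steps map onto other tools as well. As a rough pandas analog (not exam content): the raw frame below mimics a preview where the real headers sit in the first row, slicing plus a column assignment stands in for Use First Row as Headers, and melt plays the role of Unpivot Other Columns. Only a subset of the exhibit rows is used, for brevity:

```python
import pandas as pd

# Raw preview: generic Column1..Column6 headers, real headers in row 0
# (subset of the exhibit data, for illustration).
raw = pd.DataFrame({
    "Column1": ["Department", "Bikes", "Accessories"],
    "Column2": ["Product", "Touring bike", "Bell"],
    "Column3": ["2016", "1003676", "76713"],
    "Column4": ["2017", "1005171", "10247"],
    "Column5": ["2018", "1001669", "60590"],
    "Column6": ["2019", "1003244", "52927"],
})

# Step 1: "promote" the first row to headers.
promoted = raw.iloc[1:].copy()
promoted.columns = raw.iloc[0]

# Step 2: unpivot everything except Department and Product
# (melt is the pandas counterpart of Unpivot Other Columns).
unpivoted = promoted.melt(
    id_vars=["Department", "Product"],
    var_name="Attribute",
    value_name="Value",
)

# Step 3: rename the generated columns so they describe the data.
tidy = unpivoted.rename(columns={"Attribute": "Year", "Value": "Revenue"})

print(tidy)
```

The result is one row per product per year, which is exactly the shape a "revenue split by year and product" visual needs.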

Practice Tests

Practice Test #1

50 Questions·100 min·Pass 700/1000

Practice Test #2

50 Questions·100 min·Pass 700/1000

Practice Test #3

50 Questions·100 min·Pass 700/1000

Practice Test #4

50 Questions·100 min·Pass 700/1000

Practice Test #5

50 Questions·100 min·Pass 700/1000

Other Microsoft Certifications

Microsoft AI-102 (Associate)
Microsoft AI-900 (Fundamentals)
Microsoft SC-200 (Associate)
Microsoft AZ-104 (Associate)
Microsoft AZ-900 (Fundamentals)
Microsoft SC-300 (Associate)
Microsoft DP-900 (Fundamentals)
Microsoft SC-900 (Fundamentals)
Microsoft AZ-305 (Expert)
Microsoft AZ-204 (Associate)
Microsoft AZ-500 (Associate)

Start Practicing Now

Download Cloud Pass and start practicing all PL-300: Microsoft Power BI Data Analyst exam questions.

Get it on Google Play · Download on the App Store
Cloud Pass

IT Certification Practice App


Certifications

AWS · GCP · Microsoft · Cisco · CompTIA · Databricks

Legal

FAQ · Privacy Policy · Terms of Service

Company

Contact · Delete Account

© Copyright 2026 Cloud Pass, All rights reserved.

Want to practice all questions on the go?

Get the app

Download Cloud Pass — includes practice tests, progress tracking & more.