Google Professional Cloud Security Engineer

Practice Test #1

Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions | 120 Minutes | 700/1000 Passing Score


Triple AI-Verified Answers & Explanations

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.


Practice Questions

Question 1

Your healthcare analytics startup is building a multi-region telemetry pipeline on Google Cloud that spans Compute Engine VMs, a GKE Autopilot cluster, Cloud Storage buckets, BigQuery datasets (~50 TB), and Pub/Sub topics processing ~80,000 messages per second. Under your GDPR data protection by design program, the security review mandates that: (1) you—not Google—must control key creation, 90-day rotation, and IAM-scoped usage of encryption keys; (2) keys must reside in Google Cloud KMS/HSM with no dependency on external key stores; and (3) a single key management approach must be supported uniformly across all listed services. Which option should you choose to meet these requirements?

Cloud External Key Manager (EKM) integrates Google Cloud services with keys stored in an external key management system (often on-prem or third-party HSM). While it can provide strong customer control and separation from Google, it violates the requirement of “no dependency on external key stores” and “keys must reside in Google Cloud KMS/HSM.” Therefore it is not suitable for this scenario.

Customer-managed encryption keys (CMEK) use Cloud KMS (optionally Cloud HSM-backed) keys that you create and control. You can enforce IAM-scoped usage, manage key versions, and configure a 90-day rotation policy. CMEK is supported across major data and messaging services including Cloud Storage, BigQuery, Pub/Sub, and Compute Engine, enabling a single, uniform approach while keeping keys entirely within Google Cloud.

Customer-supplied encryption keys (CSEK) require you to provide raw key material to Google Cloud services at request time. This increases operational complexity, complicates rotation, and is not uniformly supported across all the listed services (notably many managed services like BigQuery and Pub/Sub rely on CMEK rather than CSEK). It also does not meet the requirement that keys must reside in Cloud KMS/HSM.

Google default encryption uses Google-managed keys and provides encryption at rest automatically, but you do not control key creation, rotation cadence, or IAM-scoped key usage. This fails the explicit requirement that the customer—not Google—must control key lifecycle and access. It also does not satisfy compliance-driven “customer control” expectations common in regulated environments.

Question Analysis

Core Concept: This question tests encryption key management choices on Google Cloud—specifically the difference between Google-managed encryption, customer-managed encryption keys (CMEK) in Cloud KMS/Cloud HSM, customer-supplied encryption keys (CSEK), and External Key Manager (EKM). It also tests uniform applicability across multiple services (Compute Engine, GKE, Cloud Storage, BigQuery, Pub/Sub) and compliance-driven controls (GDPR “data protection by design”).

Why the Answer is Correct: Customer-managed encryption keys (CMEK) is the only option that satisfies all three requirements simultaneously: (1) you control key creation, rotation (including 90-day rotation), and IAM-scoped usage via Cloud KMS permissions; (2) keys reside in Google Cloud KMS and can be backed by Cloud HSM (still Google Cloud–native, with no external keystore dependency); and (3) CMEK is broadly supported across the listed services, enabling a single consistent approach.

Key Features / How to Implement:
- Use Cloud KMS key rings/keys (regional) aligned to your multi-region architecture; for higher assurance, use Cloud HSM-backed keys.
- Enforce IAM least privilege: grant services access to the CryptoKey via roles/cloudkms.cryptoKeyEncrypterDecrypter (or narrower where possible) and use separation of duties for key admins.
- Configure rotation: set the rotation period to 90 days and ensure new key versions are used automatically where supported.
- Service integrations: Cloud Storage bucket default KMS key, BigQuery dataset/table CMEK, Pub/Sub topic CMEK, Compute Engine disk/image/snapshot CMEK, and GKE (Autopilot) CMEK for supported encryption use cases (e.g., node boot disks and certain control-plane encryption features, depending on configuration).

Common Misconceptions:
- “External Key Manager” sounds like stronger control, but it explicitly depends on external key stores—disallowed here.
- “Customer-supplied encryption keys” can look like maximum control, but it is operationally brittle, not uniformly supported across all services, and does not meet the requirement to keep keys in Cloud KMS/HSM.
- Google default encryption is always on, but you do not control key lifecycle or IAM-scoped key usage.

Exam Tips: When requirements call for control of key creation/rotation, IAM-scoped key usage, keys in Cloud KMS/HSM, and broad service support, the exam almost always points to CMEK (Cloud KMS/Cloud HSM). If external key custody is required, then EKM or Cloud HSM with external systems may appear—but here external dependency is explicitly prohibited.
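As a concrete sketch of the rotation setup, the 90-day schedule maps to two fields on a Cloud KMS CryptoKey: rotationPeriod (a duration in seconds with an "s" suffix) and nextRotationTime (an RFC 3339 timestamp). The helper below only computes those values locally; it is an illustration, not a Google API call:

```python
from datetime import datetime, timedelta, timezone

ROTATION_DAYS = 90  # 90-day rotation mandated by the security review

def rotation_schedule(now: datetime) -> dict:
    """Compute the rotation fields a Cloud KMS CryptoKey accepts:
    rotationPeriod as a seconds string and nextRotationTime in RFC 3339."""
    period = timedelta(days=ROTATION_DAYS)
    return {
        "rotationPeriod": f"{int(period.total_seconds())}s",
        "nextRotationTime": (now + period).isoformat().replace("+00:00", "Z"),
    }

schedule = rotation_schedule(datetime(2025, 1, 1, tzinfo=timezone.utc))
print(schedule["rotationPeriod"])    # 7776000s (90 days in seconds)
print(schedule["nextRotationTime"])  # 2025-04-01T00:00:00Z
```

These two values would then be supplied when creating or updating the key, whether through the console, gcloud, or the KMS API.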

Question 2

Your logistics company runs a route-optimization model as a managed Vertex AI Batch Predictions job on Google Cloud. Twenty external carriers upload up to 1,000 CSV files per day (each <= 100 MB) to a dedicated Cloud Storage bucket via 15-minute signed URLs; a Cloud Function triggers the batch predictions and writes results to partner-specific buckets. You are conducting a configuration review with stakeholders and must clearly describe your security responsibilities for this managed AI workflow. What should you do?

Incorrect. While managed services reduce your operational burden, they do not eliminate your security responsibilities. Rate limits and budget alerts are cost/availability controls, not the core security posture for partner uploads and managed batch inference. You still must manage IAM, data access boundaries, and monitoring. This option misrepresents the shared responsibility model by implying most security shifts entirely to Google.

Incorrect. Securing CSV normalization code is important (input validation, safe parsing), but the question asks to describe security responsibilities for the managed workflow end-to-end. Limiting IAM discussion to “within the development team” ignores the biggest risks here: external partner access, service account permissions between Cloud Functions/Vertex AI/Cloud Storage, and auditability. It’s too narrow for a configuration review.

Correct. It accurately applies Google’s shared responsibility model: Google secures the underlying managed Vertex AI infrastructure, while you secure identities, permissions, and data access patterns. It highlights least-privilege IAM for service accounts and partners, secure upload/download via short-lived signed URLs and TLS, restrictive bucket policies, and operational monitoring using Cloud Audit Logs and Cloud Logging to detect misuse or malicious activity.

Incorrect. You generally cannot place custom network firewalls or deep IDS/IPS “around” Vertex AI’s managed service control plane in the way you would for self-managed VMs. The more appropriate controls are IAM, organization policies, VPC Service Controls, private access patterns, and logging/monitoring. Vulnerability scanning of a Google-managed runtime is also largely Google’s responsibility, not yours.

Question Analysis

Core Concept: This question tests Google Cloud’s shared responsibility model in a managed ML workflow (Vertex AI Batch Predictions) and how to articulate customer vs. Google responsibilities across IAM, data access, and operational monitoring.

Why the Answer is Correct: Even though Vertex AI Batch Predictions is a managed service, you still own security “in the cloud”: who can upload data, who can trigger jobs, what identities run the pipeline, where outputs are written, and how activity is monitored. Option C correctly frames the review around IAM for service accounts and partners, secure upload/download patterns (short-lived signed URLs, TLS), least-privilege bucket policies, and logging/monitoring for detection and response. This is exactly what stakeholders need to hear in a configuration review: clear boundaries of responsibility and concrete controls.

Key Features:
- Use dedicated service accounts for Cloud Functions and Vertex AI with minimal roles (e.g., storage.objectViewer/objectCreator on specific buckets, Vertex AI job permissions only where needed).
- Prefer uniform bucket-level access, disable public access, and scope signed URLs to object name, method, content type, and a short expiration.
- Ensure Cloud Storage and Vertex AI access is over TLS (the default) and consider VPC Service Controls to reduce data exfiltration risk.
- Enable and review Cloud Audit Logs (Admin Activity, and Data Access where appropriate) and Cloud Logging metrics/alerts for anomalous uploads, job triggers, and cross-bucket writes.

Common Misconceptions: Many assume “PaaS means Google handles security” (A), but customers still manage identities, permissions, and data governance. Others focus narrowly on application code (B) while ignoring partner access and service-to-service permissions. Designing custom firewalls/IDS around a managed runtime (D) misunderstands that you can’t insert traditional network appliances into Google-managed control planes; instead, you use IAM, organization policies, VPC Service Controls, and logging.

Exam Tips: For managed services, answer choices that mention shared responsibility, least-privilege IAM, secure data access patterns (signed URLs, bucket policies), and audit logging are usually correct. Be wary of options that overemphasize network perimeter controls for serverless/managed services or treat cost controls as “security responsibilities.”
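The signed-URL scoping described above can be enforced before a URL is ever minted. The sketch below mirrors the kind of V4 parameters a signed-URL generator (such as the google-cloud-storage client) accepts, but the guard function and its policy limits are our own illustration:

```python
from datetime import timedelta

MAX_TTL = timedelta(minutes=15)  # partner upload URLs must expire in 15 minutes

def upload_url_params(object_name: str, ttl: timedelta) -> dict:
    """Parameters we would hand to a V4 signed-URL generator for a
    partner CSV upload, with the 15-minute policy enforced up front."""
    if ttl > MAX_TTL:
        raise ValueError("signed URL TTL exceeds the 15-minute policy")
    if not object_name.endswith(".csv"):
        raise ValueError("partners may only upload CSV files")
    return {
        "version": "v4",
        "expiration": ttl,
        "method": "PUT",             # upload only; no reads or deletes
        "content_type": "text/csv",  # reject non-CSV payloads at the edge
    }

params = upload_url_params("carrier-07/routes-2025-01-01.csv",
                           timedelta(minutes=15))
print(params["method"])  # PUT
```

Scoping the URL to a single object, a single method, and a fixed content type keeps a leaked URL from becoming a general-purpose bucket credential.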

Question 3

Your fintech organization operates 12 Google Cloud projects under 2 folders and uses 25 service accounts. An internal review found some accounts assigned roles/editor, and external contractors from two partner domains with excessive access. You must gain detailed visibility within 5 minutes into IAM policy changes, user activity, service account key usage, and access to three restricted projects; retain these records for at least 400 days; and correlate them centrally with AWS and on-prem security events without deploying any agents on VMs. What should you do?

Cloud Functions on IAM policy changes is partial and brittle: it focuses on IAM changes only, not comprehensive user activity, Data Access, or service account key usage across services. It also depends on building/maintaining custom detection logic and storage, and it doesn’t inherently provide centralized, near-real-time export and 400-day retention for all relevant audit events. Policy Simulator is for analysis, not an audit log system.

Cloud Monitoring Metrics Explorer can alert on certain authentication/usage metrics, but metrics are not a complete, immutable audit trail of administrative actions and data reads/writes. It won’t reliably capture IAM policy deltas, detailed API caller identity, or service account key creation/usage events with the fidelity needed for compliance and forensics, nor is it designed for SIEM-grade cross-environment log correlation and long-term retention.

Cloud Audit Logs provide authoritative records for Admin Activity (IAM policy changes, role grants, key creation), System Events, and Data Access (reads/writes to sensitive resources). Aggregated sinks at org/folder level centralize logs from all projects and export them in near real time to Pub/Sub for SIEM ingestion, enabling correlation with AWS/on-prem events. Logging bucket retention or export destinations can meet 400+ day retention without VM agents.

OS Config requires agents on VMs, which violates the “no agents” constraint. Even if allowed, patch and configuration monitoring does not provide authoritative visibility into IAM policy changes, API-level user activity, or service account key usage across managed services. It’s a systems management tool, not a centralized cloud audit and compliance logging solution, and it won’t address cross-environment SIEM correlation requirements.

Question Analysis

Core Concept: This question tests centralized security visibility and auditability using Cloud Logging/Cloud Audit Logs, plus near-real-time export to an external SIEM without installing VM agents. It aligns with the Google Cloud Architecture Framework (Operational Excellence and Security) by emphasizing centralized logging, least-privilege verification, and continuous monitoring.

Why the Answer is Correct: Option C directly satisfies every requirement: visibility within ~5 minutes into IAM policy changes (Admin Activity logs), user activity and API calls (Admin Activity and Data Access logs), service account key usage (audit logs for IAM/service account key operations and relevant Data Access logs where applicable), and access to restricted projects (Data Access logs for sensitive services). It also enables central correlation with AWS and on-prem events by exporting logs in near real time via aggregated sinks to Pub/Sub (or other supported destinations) feeding an external SIEM—no agents required.

Key Features / Configurations:
1) Enable Cloud Audit Logs at the organization level: Admin Activity and System Event logs are on by default and should be retained; explicitly enable Data Access logs for the three restricted projects (and any critical services), because Data Access logging is not enabled by default and can be high volume/cost.
2) Use aggregated log sinks at the organization or folder level to capture logs from all 12 projects and both folders, simplifying management and ensuring coverage even as projects change.
3) Route to Pub/Sub for streaming to a SIEM (a common pattern), or to BigQuery/Cloud Storage for analytics/archival; then meet the >= 400-day requirement with Logging buckets using custom retention, or by exporting to Cloud Storage with lifecycle policies / BigQuery retention controls.
4) Use log views and exclusions carefully to control cost while preserving compliance-relevant events.

Common Misconceptions: It’s tempting to use Monitoring metrics (option B) or custom automation (option A), but those do not provide authoritative, comprehensive audit trails across IAM, data access, and key usage, nor do they meet long retention and cross-environment correlation requirements as cleanly as Audit Logs plus sinks.

Exam Tips: When you see “IAM policy changes,” “who did what,” “service account key usage,” “central correlation,” “no agents,” and “long retention,” think: Cloud Audit Logs + aggregated sinks + SIEM integration. Remember that Data Access logs must be explicitly enabled and can be scoped to sensitive projects/services to balance cost and compliance.
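As an illustration of the aggregated-sink idea, a sink's scope is set by its filter. The sketch below assembles a Cloud Logging filter that keeps Admin Activity audit logs from every project while limiting Data Access audit logs to the restricted projects; the project IDs and helper function are hypothetical:

```python
def audit_sink_filter(restricted_projects):
    """Build a Cloud Logging filter for an org-level aggregated sink:
    Admin Activity audit logs everywhere, plus Data Access audit logs
    only from the named restricted projects."""
    admin = 'logName:"cloudaudit.googleapis.com%2Factivity"'
    data_access = " OR ".join(
        f'logName="projects/{p}/logs/cloudaudit.googleapis.com%2Fdata_access"'
        for p in restricted_projects
    )
    return f"{admin} OR {data_access}"

print(audit_sink_filter(["prod-pay", "prod-ledger", "prod-risk"]))
```

A sink carrying this filter, created at the organization level with children included and a Pub/Sub topic as its destination, would stream matching entries toward the SIEM with no VM agents involved.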

Question 4

A regional engineering group at a healthcare company registered a separate Google Workspace with Cloud Identity and created a new Organization resource. Within 90 days, they launched 180 projects across 8 folders to host regulated analytics workloads and connected them to a shared VPC. Your centralized platform security team must assume control of who can grant permissions across this Organization and ensure the ability to audit access and configuration activity across all projects and folders. Which type of access should be granted to your team at the Organization level to meet this requirement?

Organization Administrator is the best answer because it provides centralized administrative control over the Organization resource, including the ability to manage IAM policies and govern access delegation across folders and projects. The requirement is not merely to define roles, but to assume control over who can grant permissions throughout the hierarchy. In a large, newly created organization with many projects and folders, this role enables the platform security team to establish and enforce centralized IAM governance. Auditability is preserved through Cloud Audit Logs, which record administrative and IAM changes at all levels of the hierarchy.

Security Reviewer is a read-only role intended for visibility into security posture, configurations, and findings. It can help a team inspect resources and support audits, but it does not allow them to change IAM policies or control who can grant permissions. Because the requirement explicitly calls for assuming control of permission-granting authority, a reviewer role is insufficient. It addresses observation, not administration.

Organization Role Administrator manages custom IAM roles at the organization level, such as creating, updating, or deleting custom role definitions. However, defining roles is different from granting permissions to principals through IAM policy bindings on organizations, folders, and projects. This role does not by itself provide the broad authority needed to control who can assign access across the hierarchy. Therefore it is too narrow for the stated requirement.

Organization Policy Administrator manages organization policy constraints, such as restricting resource locations or allowed services. These policies are useful for governance and compliance, but they do not directly control IAM permission grants or who can bind roles to users and service accounts. The question is specifically about controlling access delegation, which is an IAM administration function rather than an organization policy function. As a result, this role does not meet the core requirement.

Question Analysis

Core Concept: This question is about centralized IAM governance at the Google Cloud Organization level. The requirement is to take control of who can grant permissions across the entire resource hierarchy and to preserve auditability of access and configuration changes across all folders and projects. In Google Cloud, the ability to grant permissions is primarily controlled through managing IAM policies and administrative authority at the organization node.

Why the Answer is Correct: Organization Administrator is the best fit because it provides broad administrative control over the Organization resource, including the ability to manage IAM policies and delegate or restrict administrative access across folders and projects. That allows the centralized security team to assume control over who can grant permissions throughout the hierarchy. Auditability is then supported by Cloud Audit Logs, which automatically record IAM and administrative changes across the organization.

Key Features: Organization-level administration applies centrally to inherited resources, making it appropriate for a rapidly grown environment with many folders and projects. It supports centralized governance over IAM and resource administration. Cloud Audit Logs provide the audit trail for access and configuration activity, independent of which admin role is used.

Common Misconceptions: Organization Role Administrator sounds relevant because it includes the word “Role,” but it is focused on managing custom IAM roles, not broadly controlling IAM policy bindings across the hierarchy. Security Reviewer is read-only and cannot enforce governance. Organization Policy Administrator manages policy constraints, not permission grants.

Exam Tips: When a question asks who can grant permissions, think first about IAM policy administration rather than custom role definition. Distinguish between managing roles, managing policies, and reviewing security posture. On the exam, choose the role that actually enables centralized control of IAM delegation, even if it is broader than a specialized read-only or policy-focused role.
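In IAM policy terms, the grant discussed above is a binding at the organization node. A minimal sketch, using the documented role IDs; the group addresses are illustrative:

```python
ROLE_ORG_ADMIN = "roles/resourcemanager.organizationAdmin"

def security_team_bindings(admin_group, auditor_group):
    """IAM bindings set once at the organization node: the platform
    security team takes Organization Administrator (control over who
    can grant permissions), while a broader audit group gets the
    read-only Security Reviewer role. Group addresses are placeholders."""
    return [
        {"role": ROLE_ORG_ADMIN,
         "members": [f"group:{admin_group}"]},
        {"role": "roles/iam.securityReviewer",
         "members": [f"group:{auditor_group}"]},
    ]

bindings = security_team_bindings("platform-security@example.com",
                                  "security-audit@example.com")
print(bindings[0]["role"])  # roles/resourcemanager.organizationAdmin
```

Because bindings at the organization node are inherited, both grants flow down to all 8 folders and 180 projects without per-project changes.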

Question 5

Your biomedical analytics team is migrating a bursty batch-processing render farm to a Compute Engine cluster that uses autoscaling Managed Instance Groups (MIGs) across 3 zones and can scale from 8 to 200 VMs in under 5 minutes. Security requires that you retain full control of the boot disk encryption key lifecycle, including quarterly rotation, immediate disablement during incidents, and audit visibility. Which boot disk encryption solution should you configure to meet these requirements without slowing rapid instance creation?

CSEK requires you to provide the raw encryption key material with each API request that creates or attaches the disk. In a bursty autoscaling MIG (8 to 200 VMs in minutes), securely distributing and injecting keys at scale is operationally complex and error-prone, and it reduces agility. Auditability is also less centralized than Cloud KMS. While you “control” the key, it is not the best fit for rapid automated instance creation.

CMEK with Cloud KMS provides centralized key lifecycle control (rotation via key versions and rotation policies), strong incident response controls (disable/destroy key versions to block future decrypt operations), and robust audit visibility through Cloud KMS audit logs. It integrates cleanly with Compute Engine and MIG automation, supporting rapid VM provisioning without requiring you to pass key material per instance creation request.

Google-managed encryption keys (default encryption) provide minimal operational overhead and good baseline security, but they do not meet the requirement to retain full control of the key lifecycle. You cannot perform your own quarterly rotation policy, immediate disablement during incidents, or get the same level of customer-controlled governance over key usage decisions. This option fails the explicit security requirements.

Pre-encrypting files before upload can protect data at rest for application-level inputs/outputs, but it does not satisfy boot disk encryption key lifecycle control for Compute Engine instances. The requirement is specifically about boot disk encryption keys and the ability to rotate/disable/audit them. This approach also adds complexity to the pipeline and does not provide a centralized “kill switch” for VM boot disk access.

Question Analysis

Core Concept: This question tests Google Cloud disk encryption choices for Compute Engine at scale, specifically the difference between CSEK (customer-supplied keys) and CMEK (customer-managed keys in Cloud KMS) and how they affect operational control, auditability, and autoscaled Managed Instance Group (MIG) performance.

Why the Answer is Correct: Customer-managed encryption keys (CMEK) with Cloud KMS best matches the stated requirements: full control of the key lifecycle (quarterly rotation), immediate disablement during incidents, and audit visibility, while still supporting rapid VM creation in autoscaling MIGs. With CMEK, the boot disk is encrypted with a key you control in Cloud KMS. You can rotate keys on a schedule, disable or destroy a key version to prevent future disk attach/VM boot operations that require decrypt, and rely on Cloud Audit Logs for key usage visibility. This aligns with the Google Cloud Architecture Framework’s security and compliance principles: centralized key management, auditable controls, and operational resilience.

Key Features / Configurations / Best Practices:
- Use Cloud KMS key rings/keys in the same region as the disks; grant the Compute Engine service agent the required KMS permissions (e.g., cloudkms.cryptoKeyEncrypterDecrypter) on the key.
- Enable a rotation policy (e.g., 90 days) and manage key versions; note that rotation applies new key versions to new encrypt operations, and existing disks are not re-encrypted unless you run a re-encryption workflow.
- Incident response: disabling a key version immediately blocks new decrypt operations (impacting new instance boots/attaches that need decrypt), providing a strong “kill switch.”
- Audit: Cloud KMS generates detailed audit logs for encrypt/decrypt and key access, supporting compliance evidence.
- Performance: CMEK is designed for cloud-scale automation; MIG instance creation remains fast because key operations are handled by Google infrastructure with KMS integration, avoiding per-instance key material injection.

Common Misconceptions: CSEK can look like “more control” because you supply the key, but it shifts heavy operational burden to you (secure distribution to every create/attach call) and is poorly suited to bursty autoscaling. Google-managed keys are simplest but fail the “full control” and “immediate disablement” requirements. Pre-encrypting files doesn’t address boot disk encryption or VM lifecycle controls.

Exam Tips: For Compute Engine disks, choose CMEK (Cloud KMS) when you need centralized lifecycle management, audit logs, and the ability to disable access quickly at scale. Choose CSEK only when you must provide raw key material per request and can tolerate significant automation complexity—rare for autoscaling MIG scenarios.
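In the instance template that the MIG stamps out, CMEK appears as the diskEncryptionKey.kmsKeyName field on the boot disk. A minimal sketch of that stanza, with all resource names as placeholders:

```python
def cmek_boot_disk(project, region, key_ring, key):
    """Boot disk stanza for a Compute Engine instance template.
    diskEncryptionKey.kmsKeyName points every MIG-created VM's boot
    disk at the customer-managed key; all names are placeholders."""
    kms_key_name = (
        f"projects/{project}/locations/{region}/"
        f"keyRings/{key_ring}/cryptoKeys/{key}"
    )
    return {
        "boot": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
        },
        "diskEncryptionKey": {"kmsKeyName": kms_key_name},
    }

disk = cmek_boot_disk("render-prod", "us-central1", "render-ring", "boot-key")
print(disk["diskEncryptionKey"]["kmsKeyName"])
```

Because the key reference lives in the template, every autoscaled instance inherits it with no per-instance key material to distribute, which is exactly why CMEK scales where CSEK does not.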


Question 6

You are designing the key management strategy for a U.S.-based fintech launching a payment tokenization API on Google Cloud. Requirements:

  • Rotate the root (key-encryption) key at least every 45 days via a managed schedule.
  • The system that stores the root key must be FIPS 140-2 Level 3 validated.
  • Keep the root key within multiple regions in the United States for redundancy (US-only residency).

Which option best satisfies these requirements?

Customer-managed encryption keys with Cloud Key Management Service (software-backed CMEK) supports IAM-controlled key usage and automated rotation schedules. However, software-backed Cloud KMS does not meet the explicit requirement for FIPS 140-2 Level 3 validated key storage/operations. It can be multi-region (e.g., "us") and can rotate on a schedule, but it fails the Level 3 HSM requirement, which is the deciding constraint.

Customer-managed encryption keys with Cloud HSM is the best fit because Cloud HSM provides FIPS 140-2 Level 3 validated hardware protection for key material and cryptographic operations. It is managed through Cloud KMS, so you can configure automatic rotation (e.g., every 45 days). By creating the key in the US multi-region location ("us"), you keep residency within the United States while achieving multi-region redundancy.

Customer-supplied encryption keys (CSEK) require you to generate, store, distribute, and rotate keys yourself. This conflicts with the requirement for a managed rotation schedule. Additionally, CSEK does not guarantee that the system storing the root key is FIPS 140-2 Level 3 validated on Google’s side, because the key material is supplied at request time and you bear the burden of compliant storage and operational controls.

Google-managed encryption keys provide the least operational overhead, but you do not control rotation frequency (certainly not a guaranteed 45-day schedule) and you cannot assert that your specific root/KEK is stored in a FIPS 140-2 Level 3 validated system under your control. While Google-managed encryption is secure and compliant for many workloads, it does not satisfy explicit fintech requirements for customer-managed, HSM Level 3 root key control.

Question Analysis

Core Concept: This question tests selecting the right key management service for regulatory-grade protection of a key-encryption key (KEK/root key): FIPS 140-2 Level 3 validation, automated rotation, and US-only multi-region redundancy. In Google Cloud, the key decision is between software-backed Cloud KMS (generally a FIPS 140-2 Level 1/2 boundary) and hardware-backed Cloud HSM (FIPS 140-2 Level 3).

Why the Answer is Correct: Cloud HSM provides customer-managed keys whose cryptographic operations and key material are protected by FIPS 140-2 Level 3 validated HSMs. This directly satisfies the requirement that “the system that stores the root key must be FIPS 140-2 Level 3 validated.” Cloud HSM keys are created and managed through the Cloud KMS APIs, so you still get managed lifecycle features such as scheduled rotation. For US-only residency with redundancy, you can create the key in a US multi-region location (e.g., "us") so key material is replicated across multiple US regions, aligning with data residency and availability goals.

Key Features / Configurations:
- Create a key ring and key in a US multi-region location ("us") to keep key material within the United States while providing multi-region redundancy.
- Use a Cloud HSM-protected key (protection level: HSM) for the KEK/root key.
- Configure a rotation period of 45 days or less and set the next rotation time; Cloud KMS will manage rotation on schedule.
- Apply least privilege with IAM (e.g., separation of duties between key admins and key users) and enable Cloud Audit Logs for key access/administration.

Common Misconceptions: Many assume “CMEK with Cloud KMS” automatically meets Level 3 because it’s a managed key service. However, Cloud KMS software-backed keys do not provide FIPS 140-2 Level 3 HSM protection. Another trap is choosing customer-supplied keys (CSEK) to “control” keys; CSEK shifts rotation and secure storage to you and does not inherently provide HSM Level 3 validation.

Exam Tips:
- If the question explicitly requires FIPS 140-2 Level 3, think Cloud HSM (or External Key Manager with a compliant HSM, but that’s not an option here).
- For “multiple regions in the US,” look for multi-region locations like "us" rather than a single region.
- A “managed rotation schedule” points to Cloud KMS/Cloud HSM rotation features, not CSEK.
- Map requirements to: protection level (HSM), location (multi-region US), and lifecycle (rotation policy).
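Putting the three requirements together, the root key's CryptoKey body would carry the HSM protection level and a 45-day rotation period, while the enclosing key ring would be created in the "us" multi-region. A local sketch of that request body, using the documented field names:

```python
ROTATION_DAYS = 45  # "at least every 45 days" per the requirement

def hsm_root_key_body():
    """CryptoKey body for the root/KEK: HSM protection level (the
    FIPS 140-2 Level 3 boundary) with a managed 45-day rotation.
    The key ring holding it would live in the "us" multi-region."""
    return {
        "purpose": "ENCRYPT_DECRYPT",
        "versionTemplate": {
            "protectionLevel": "HSM",
            "algorithm": "GOOGLE_SYMMETRIC_ENCRYPTION",
        },
        "rotationPeriod": f"{ROTATION_DAYS * 24 * 3600}s",
    }

body = hsm_root_key_body()
print(body["rotationPeriod"])  # 3888000s (45 days in seconds)
```

In practice you would also set nextRotationTime at creation so the first managed rotation lands within 45 days.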

Question 7

You are the security lead for a fintech company with PCI DSS scope operating across 3 Google Cloud projects under a single organization. An external assessor requests downloadable evidence (CSV or JSON) listing who currently has which permissions to all 58 Cloud Storage buckets and 12 BigQuery datasets in the prod folder, including access granted via groups and inherited roles, as of today; you must produce this access review without changing any policies and be able to filter by principals and permissions for the audit. Which Google Cloud tool should you use?

Policy Troubleshooter answers targeted questions like “Does principal P have permission Z on resource R right now?” and provides an explanation path (which binding grants/denies access). It is excellent for debugging access issues, but it is not intended to generate a comprehensive, exportable inventory of all principals and permissions across many Cloud Storage buckets and BigQuery datasets for an audit access review.

Policy Analyzer is built for access reviews and compliance reporting. It analyzes IAM policies to determine who has access to what across a scope (org/folder/project), including access via Google Groups and inherited roles. It supports filtering by principal, role/permission, and resource, and it can export/download results (CSV/JSON), making it the best tool to produce auditor-ready evidence without changing policies.

IAM Recommender (IAM Recommender / Policy Intelligence recommendations) identifies overly broad permissions and suggests role reductions based on observed usage. While useful for least privilege and hardening, it is not a reporting tool for producing a point-in-time list of effective access across resources. It also implicitly drives policy changes, which the question explicitly says you must not do.

Policy Simulator is used to test the impact of proposed IAM policy changes (“what if we add/remove this binding?”) before applying them. It is valuable for change management and preventing outages, but it does not primarily serve as an access review/export mechanism for current-state evidence. The question requires reporting existing effective permissions without changing policies, which aligns better with Policy Analyzer.

Question Analysis

Core Concept: This question tests IAM visibility for compliance evidence: producing an access review that enumerates effective access (including group membership and inheritance) across many resources, exportable for auditors, without modifying policies. In Google Cloud, this is addressed by IAM Policy Analyzer (part of Policy Intelligence), which can analyze who has access to what across a scope (project/folder/org) and supports filtering and exporting results.

Why the Answer is Correct: Policy Analyzer is designed for access reviews and compliance reporting. It can answer "who has what access" to resources such as Cloud Storage buckets and BigQuery datasets by evaluating IAM policies, including access granted through Google Groups and roles inherited from higher in the resource hierarchy (organization/folder/project). It supports querying by principal, role/permission, and resource, and it can produce downloadable outputs (CSV/JSON) suitable as point-in-time evidence for PCI DSS audits. Importantly, it is read-only analysis: it does not require changing IAM policies.

Key Features:
- Organization/folder/project scoping: ideal for a fintech with multiple projects under one org and a "prod" folder.
- Effective access analysis: includes direct bindings, group-derived access, and inherited bindings.
- Filtering: auditors often ask to filter by principal (user/service account/group) and by permission/role; Policy Analyzer supports these dimensions.
- Exportable evidence: results can be exported/downloaded (CSV/JSON) for audit artifacts.
- Aligns with the Google Cloud Architecture Framework (Security, Governance, Risk & Compliance): continuous visibility, least-privilege validation, and auditability without operational disruption.

Common Misconceptions: Policy Troubleshooter is often confused with Policy Analyzer because both are in Policy Intelligence. Troubleshooter is optimized for answering a single "can principal X access resource Y" question and explaining why, not for generating a comprehensive inventory across dozens of buckets/datasets. IAM Recommender suggests least-privilege changes (and would imply policy changes), and Policy Simulator is for testing hypothetical policy changes; neither is appropriate for producing current-state evidence.

Exam Tips: When the prompt emphasizes "downloadable evidence," "access review," "who currently has which permissions," "including groups and inherited roles," and "no policy changes," think Policy Analyzer. Use Troubleshooter for one-off access debugging, Simulator for what-if testing, and Recommender for rightsizing permissions. Also note the compliance framing (PCI DSS) and the need to operate at folder/org scope: both are strong signals for Policy Analyzer.
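As a rough sketch of how such an access review could be run, the gcloud commands below query and export effective IAM access with the Cloud Asset API (which backs Policy Analyzer). The folder number, bucket name, and output path are hypothetical placeholders, and resource expansion at folder scope is subject to documented limits; treat this as an illustration, not a prescribed procedure.

```shell
# Sketch only: hypothetical IDs (folders/123, prod-bucket-01, gs://audit-evidence).
# Requires the Cloud Asset API enabled and Cloud Asset Viewer on the folder.

# Interactive query: who can access one specific bucket, expanding group
# memberships and resolving roles inherited from the folder/org.
gcloud asset analyze-iam-policy \
  --folder=123 \
  --full-resource-name="//storage.googleapis.com/projects/_/buckets/prod-bucket-01" \
  --expand-groups \
  --expand-roles

# Long-running analysis across the whole folder, expanded to descendant
# resources and written to Cloud Storage as downloadable JSON evidence.
gcloud asset analyze-iam-policy-longrunning \
  --folder=123 \
  --full-resource-name="//cloudresourcemanager.googleapis.com/folders/123" \
  --expand-resources \
  --gcs-output-path="gs://audit-evidence/iam-review.json"
```

Both commands are read-only: they analyze existing bindings without modifying any IAM policy, which is exactly what the auditor scenario requires.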

8
Question 8

Your company is onboarding a construction subcontractor for a 4-month engagement; the subcontractor uses an external SAML 2.0/OIDC IdP (e.g., Okta) for its 180 users, and you must provide them least-privilege access to two Google Cloud projects via both the Google Cloud Console and gcloud while strictly avoiding creation or synchronization of any subcontractor identities in Cloud Identity/Google Workspace, preventing any password storage/replication in your environment, and allowing the subcontractor to retain full user lifecycle control; what is the most secure way to enable SSO under these constraints?

Incorrect. Google Cloud Directory Sync imports/synchronizes identities into Cloud Identity/Google Workspace, violating the requirement to strictly avoid creation or synchronization of subcontractor identities. It also shifts lifecycle management into your environment (or at least duplicates it), increasing operational overhead and risk. While it can enable SSO, it does not meet the stated constraints and is not the least-identity approach.

Incorrect. Building a custom authentication/provisioning service is unnecessary and risky. It increases attack surface, requires ongoing maintenance, and typically results in some form of account provisioning or token brokering that can violate the “no identity creation” and “no password replication” intent. It also deviates from Google-recommended patterns and is unlikely to be considered the most secure approach on an exam.

Correct. Workforce Identity Federation allows trusting an external SAML/OIDC IdP and granting IAM roles to federated principals without creating Cloud Identity/Workspace users or storing passwords. Access uses short-lived credentials for both Cloud Console and gcloud, and the subcontractor retains full lifecycle control in their IdP. IAM bindings can be applied to groups/attributes for least privilege across the two projects.

Incorrect. Creating individual Google accounts and synchronizing passwords directly violates the requirement to prevent password storage/replication in your environment and increases credential compromise risk. It also creates local identities you must manage, undermining the subcontractor’s full lifecycle control. This is an anti-pattern compared to federation with short-lived tokens.

Question Analysis

Core Concept: This question tests identity federation for external users without creating local identities, specifically Google Cloud Workforce Identity Federation (WIF) for console and gcloud access. It aligns with the Google Cloud Architecture Framework security principles of centralized identity, least privilege, and reduced credential risk.

Why the Answer is Correct: Workforce Identity Federation lets you trust an external SAML 2.0 or OIDC IdP (such as Okta) and allow users to obtain short-lived Google credentials to access Google Cloud resources. Crucially, you do not create or sync subcontractor users into Cloud Identity/Google Workspace, and you do not store or replicate passwords in your environment. The subcontractor retains full lifecycle control (joiner/mover/leaver) in their IdP; when a user is disabled there, federation immediately prevents new token issuance. You then grant IAM roles directly to federated principals (workforce pool subjects/groups/attributes) at the project level for least privilege across the two projects.

Key Features / Configuration Notes:
1) Create a workforce identity pool and provider that trusts the subcontractor IdP (SAML or OIDC) and configure attribute mapping (for example, map the IdP subject to google.subject and IdP groups to google.groups or custom attributes).
2) Use IAM principal identifiers like principalSet://iam.googleapis.com/locations/global/workforcePools/POOL_ID/group/… or attribute-based access to bind roles to only the needed users/groups.
3) Users access the Cloud Console via the workforce federation sign-in URL and use gcloud via the workforce identity federation login flow, receiving short-lived tokens rather than long-lived passwords or keys.
4) Apply least privilege with predefined roles, IAM Conditions if needed, and separate bindings per project.

Common Misconceptions: Many assume SSO requires Cloud Identity/Workspace accounts. That is true for some legacy patterns, but WIF is designed specifically to avoid local account provisioning and password handling. Another misconception is that service accounts are needed for human access; for external workforce users, workforce federation is the correct pattern.

Exam Tips: If requirements include no user provisioning/sync, no password replication, an external IdP as the source of truth, and access via both the console and gcloud, think "Workforce Identity Federation." If the users are from another Google Cloud organization (partners), you might consider cross-organization IAM, but for non-Google identities with SAML/OIDC, WIF is the secure, modern approach.
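The configuration steps above could be sketched with gcloud roughly as follows. The pool and provider names, organization number, Okta issuer URL, client ID, and group name are all hypothetical placeholders; claim names depend on how the subcontractor's IdP is configured.

```shell
# Sketch only: hypothetical IDs and URLs throughout.

# 1. Create a workforce identity pool at the organization level.
gcloud iam workforce-pools create subcontractor-pool \
  --organization=123456789012 \
  --location=global \
  --display-name="Subcontractor workforce pool" \
  --session-duration=3600s

# 2. Trust the subcontractor's OIDC IdP and map its claims to Google
#    attributes (subject and groups).
gcloud iam workforce-pools providers create-oidc okta-provider \
  --workforce-pool=subcontractor-pool \
  --location=global \
  --issuer-uri="https://example.okta.com" \
  --client-id="okta-client-id" \
  --web-sso-response-type=code \
  --web-sso-assertion-claims-behavior=merge-user-info-over-id-token-claims \
  --attribute-mapping="google.subject=assertion.sub,google.groups=assertion.groups"

# 3. Bind a least-privilege role to a federated group in one project;
#    repeat per project with only the roles that project needs.
gcloud projects add-iam-policy-binding project-a \
  --role=roles/compute.viewer \
  --member="principalSet://iam.googleapis.com/locations/global/workforcePools/subcontractor-pool/group/builders"
```

No Cloud Identity users are created at any point: the only artifacts on the Google side are the pool, the provider trust, and IAM bindings on federated principals.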

9
Question 9

You are investigating 403 access denied errors when Compute Engine instances in a service project (svc-proj-200) attached to a Shared VPC in a host project (host-proj-100) attempt to read objects from a Cloud Storage bucket (gs://org-logs-bucket) located in a data project (data-proj-300). The data project is protected by a VPC Service Controls service perimeter named perimeter-data that currently includes only data-proj-300 and restricts Cloud Storage and BigQuery. The instances have roles/storage.objectViewer, the subnet has Private Google Access enabled, and egress firewall rules allow Google APIs. You must resolve the errors without weakening the perimeter's protections. What should you do?

Adding only the Shared VPC host project (host-proj-100) does not address the core VPC-SC check. VPC-SC evaluates the project associated with the API caller (typically the service project where the workload and service account are used), not the network host project providing subnets. The VMs still run under svc-proj-200 context, so requests would continue to be seen as originating outside the perimeter and remain denied.

Correct. Add svc-proj-200 to perimeter-data so the Compute Engine workloads’ API calls to Cloud Storage are considered intra-perimeter. This resolves the 403s while maintaining the perimeter’s protections because access remains restricted to projects inside the perimeter. This aligns with VPC-SC’s design: allow trusted projects/workloads inside the perimeter to access protected services without creating broad exceptions.

Creating a new separate perimeter including host and service projects does not automatically grant access to resources in perimeter-data. Two separate perimeters are isolated from each other by default; access between them would still be blocked unless you introduce bridging or other exceptions. This adds complexity and operational overhead and does not directly solve access to the bucket protected by the existing perimeter.

A perimeter bridge is used to allow controlled communication between two distinct service perimeters (e.g., perimeter-A and perimeter-B). Here, svc-proj-200 is not in any perimeter; bridging does not apply to “outside-to-inside” access. Even if you created another perimeter for svc-proj-200, bridging would expand the trust boundary between perimeters and is generally a broader change than simply adding the legitimate caller project to the existing perimeter.

Question Analysis

Core Concept: This question tests VPC Service Controls (VPC-SC) service perimeters and how they enforce data exfiltration boundaries for Google-managed services like Cloud Storage. VPC-SC evaluates the "project of the caller" (the project associated with the credentials/access token) and whether that project is inside the same perimeter as the protected resource.

Why the Answer is Correct: The Cloud Storage bucket is in data-proj-300, which is inside perimeter-data. The Compute Engine VMs run in svc-proj-200 and access the bucket using identities (the VM's attached service account) that belong to the service project context. Even though networking uses a Shared VPC from host-proj-100 and Private Google Access is enabled, VPC-SC is not a network firewall; it is an API-level boundary. If svc-proj-200 is outside the perimeter, requests to Cloud Storage in the perimeter are treated as coming from outside and are denied (403) unless an explicit perimeter exception applies. To fix access without weakening protections, add svc-proj-200 to the existing perimeter so the caller project and the protected bucket project are within the same perimeter.

Key Features / Best Practices:
- Service perimeters protect supported services (here: Cloud Storage and BigQuery) by restricting access based on perimeter membership and Access Context Manager policies.
- Shared VPC host project membership does not automatically make service projects "inside" the perimeter for API access decisions.
- Private Google Access and egress firewall rules only ensure connectivity to Google APIs; they do not satisfy VPC-SC perimeter checks.
- Adding the service project to the same perimeter preserves the boundary while enabling legitimate intra-perimeter access.

Common Misconceptions: A common trap is assuming the Shared VPC host project controls the security boundary for API access (option A). Another is thinking a perimeter bridge is needed for any cross-project access (option D); bridges are for connecting two separate perimeters, not for allowing access from outside into a perimeter.

Exam Tips: When you see 403s with VPC-SC and a protected resource, identify (1) which project holds the resource, (2) which project the caller identity is associated with, and (3) whether both are in the same perimeter. If not, the typical fix is to add the caller project to the perimeter (or redesign perimeters), rather than changing IAM, routes, or firewall rules.
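The fix described above could be applied with gcloud roughly as follows. The access policy number and project number are hypothetical placeholders; perimeters belong to the organization's access policy, and perimeter resources are referenced by project number, not project ID.

```shell
# Sketch only: hypothetical access policy (987654321) and project number.

# Look up the organization's access policy that owns the perimeter.
gcloud access-context-manager policies list --organization=123456789012

# Add the service project (by project number) to the existing perimeter
# so its workloads become intra-perimeter callers.
gcloud access-context-manager perimeters update perimeter-data \
  --policy=987654321 \
  --add-resources=projects/456789012345

# Verify the perimeter now contains both the data and service projects.
gcloud access-context-manager perimeters describe perimeter-data \
  --policy=987654321
```

Note that this changes only perimeter membership; the restricted services list, IAM bindings, firewall rules, and Private Google Access configuration are all left untouched.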

10
Question 10

You administer 8 production projects under a parent folder named prod-services in organization org-123. Compliance requires centralizing all audit and application logs from those projects with a 400-day retention period. Analysts must query these logs using Logs Explorer from a dedicated project named sec-logging without being granted direct access to the production projects. What should you do?

A Cloud Monitoring workspace is for metrics/uptime/SLO-style observability across projects, not for centralizing Cloud Logging data with custom retention. Adding projects to a workspace doesn’t route logs into a single log bucket, doesn’t enforce 400-day retention centrally, and doesn’t solve the requirement to let analysts query logs without access to production projects. It addresses monitoring visibility, not compliant log aggregation and retention.

Querying at the organization level can provide broad visibility, but it generally requires granting analysts permissions that effectively allow log viewing across the production projects (or at least at org/folder scope), which violates the requirement to avoid direct access to production projects. It also doesn’t explicitly implement centralized retention; retention would remain per-project unless separately configured, making compliance harder to enforce consistently.

An aggregated sink exporting to a Cloud Storage bucket can satisfy long-term retention, but Cloud Storage exports are not queryable with Logs Explorer. Analysts would need to use other tools (BigQuery, Cloud Storage + external processing, or SIEM ingestion) to analyze those logs. This option fails the explicit requirement that analysts must query using Logs Explorer, and it also complicates structured log analytics and access patterns.

This is the correct pattern: create an aggregated sink at the prod-services folder so all child-project logs are routed automatically, and send them to a centralized Cloud Logging log bucket in sec-logging. Configure the bucket’s retention to 400 days to meet compliance. Grant analysts access on the destination bucket/project so they can use Logs Explorer to query centralized logs without any IAM bindings on the production projects, satisfying least privilege and separation of duties.

Question Analysis

Core Concept: This question tests centralized logging architecture using Cloud Logging: aggregated sinks (folder/org scope), centralized log buckets with Log Analytics, retention configuration, and IAM that enables analysts to query logs without granting access to source projects.

Why the Answer is Correct: An aggregated sink at the prod-services folder captures logs from all descendant projects (the 8 production projects) without needing per-project sinks. Routing to a centralized log bucket in the sec-logging project provides a single place to store and analyze logs. Setting the log bucket retention to 400 days satisfies compliance. Analysts can be granted permissions (for example, roles/logging.viewer on the destination project, or roles/logging.viewAccessor scoped to a log view on the bucket) to use Logs Explorer against the centralized bucket, without any IAM bindings on the production projects. This meets the requirement: centralized audit and application logs, long retention, and query access isolated from production.

Key Features / Configurations:
- Aggregated sink at folder scope (prod-services) to collect from all child projects.
- Destination: a Cloud Logging log bucket in sec-logging (not just a storage export).
- Bucket retention configured to 400 days (bucket-level retention is the correct control for log retention in Cloud Logging).
- Analysts granted least-privilege access on the destination (the sec-logging project and/or a specific log view) so they can query via Logs Explorer.
- Inclusion of Admin Activity, Data Access (if required), and application logs as needed; some audit log types have special considerations, but routing to a central bucket is the standard approach.

Common Misconceptions:
- Confusing Monitoring workspaces with Logging centralization: Monitoring workspaces do not centralize log storage or retention.
- Assuming org-level Logs Explorer access is sufficient: it typically requires broad permissions across production projects, violating the isolation requirement.
- Exporting to Cloud Storage for retention: Cloud Storage exports are not queryable in Logs Explorer and shift analysis to other tools.

Exam Tips: When you see "centralize logs," "long retention," and "query in Logs Explorer without access to source projects," think: aggregated sink (folder/org) -> centralized log bucket with configured retention -> IAM on destination only. This aligns with Google Cloud Architecture Framework principles of centralized governance, least privilege, and auditability for compliance.
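The sink-plus-bucket pattern above could be set up with gcloud roughly as follows. The folder number, bucket name, and sink name are hypothetical placeholders; the folder ID would come from the real prod-services folder.

```shell
# Sketch only: hypothetical folder number (345678901234) and names.

# 1. Create a centralized log bucket in sec-logging with 400-day retention.
gcloud logging buckets create central-audit-bucket \
  --project=sec-logging \
  --location=global \
  --retention-days=400

# 2. Create an aggregated sink on the prod-services folder that routes
#    logs from all child projects into that bucket.
gcloud logging sinks create prod-central-sink \
  logging.googleapis.com/projects/sec-logging/locations/global/buckets/central-audit-bucket \
  --folder=345678901234 \
  --include-children

# 3. The sink writes with a generated service account; print it, then
#    grant that identity roles/logging.bucketWriter in sec-logging.
gcloud logging sinks describe prod-central-sink \
  --folder=345678901234 \
  --format='value(writerIdentity)'
```

Analysts then receive IAM bindings only in sec-logging (on the project or a log view), and the production projects never appear in their grants.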

Success Stories(6)

P
P***********Nov 25, 2025

Study period: 2 months

I used Cloud Pass during my last week of study, and it helped reinforce everything from BeyondCorp principles to securing workloads. It’s straightforward, easy to use, and genuinely helps you understand security trade-offs.

길
길**Nov 23, 2025

Study period: 1 month

I worked through all the questions before taking the exam, and I passed right away! A little over 40% of the exam questions were similar to these, and I answered the unfamiliar types based on my understanding of the concepts.

D
D***********Nov 12, 2025

Study period: 1 month

I would like to thank the Cloud Pass team for these great materials. They helped me pass the exam last week. Most of the questions in the exam were the same as the sample questions, and some were almost identical. Thank you again, Cloud Pass.

O
O**********Oct 29, 2025

Study period: 1 month

Absolutely invaluable resource to prepare for the exam. Explanations and questions are spot on to give you a sense of what is expected from you on the actual test.

O
O**********Oct 29, 2025

Study period: 1 month

I realized I was weak in log-based alerts and access boundary configurations. Solving questions here helped me quickly identify and fix those gaps. The question style wasn’t identical to the exam, but the concepts were spot-on.

Other Practice Tests

Practice Test #2

50 Questions·120 min·Pass 700/1000

Practice Test #3

50 Questions·120 min·Pass 700/1000
← View All Google Professional Cloud Security Engineer Questions

Start Practicing Now

Download Cloud Pass and start practicing all Google Professional Cloud Security Engineer exam questions.

Get it on Google PlayDownload on the App Store
© Copyright 2026 Cloud Pass, All rights reserved.
