Google Professional Cloud Developer

Practice Test #3

Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions · 120 Minutes · 700/1000 Passing Score


Practice Questions

Question 1

Your IoT analytics company runs a multi-tenant data pipeline on a Google Kubernetes Engine (GKE) Autopilot cluster in us-central1 for 120 production customers, and you promote releases with Cloud Deploy. During a 2-week refactor of a telemetry aggregator service (container image <250 MB), developers will edit code every 5–10 minutes on laptops with 8 GB RAM and limited CPU, and must validate changes locally before pushing to the remote repository.

Requirements:
• Automatically rebuild and redeploy on local code changes (hot-reload loop ≤ 10 seconds).
• Local Kubernetes deployment should closely emulate the production GKE manifests and deployment flow.
• Use minimal local resources and avoid requiring a remote container registry for inner-loop builds.

Which tools should you choose to build and run the container locally on a developer laptop while meeting these constraints?

Docker Compose and dockerd can provide a fast local rebuild/restart loop, but they do not closely emulate Kubernetes manifests or the Kubernetes deployment flow used with GKE and Cloud Deploy. Compose uses a different configuration model (docker-compose.yml) and lacks native concepts like Deployments, Services, ConfigMaps, and RBAC. This fails the requirement to closely emulate production Kubernetes manifests and deployment behavior.

Terraform and kubeadm are oriented toward provisioning infrastructure and bootstrapping Kubernetes clusters, not rapid inner-loop development. kubeadm is heavyweight and operationally complex for a laptop workflow, and Terraform adds infrastructure management overhead rather than enabling hot-reload. Neither provides an automatic file-watching rebuild/redeploy loop comparable to Skaffold, so the ≤10-second iteration requirement is not met.

Minikube provides a lightweight local Kubernetes cluster suitable for laptops and can use a local image store so developers don’t need to push to a remote registry. Skaffold automates the inner loop by watching for code changes and rebuilding/redeploying to Kubernetes using the same manifests (or Kustomize/Helm) as production. Together they best satisfy fast hot-reload, Kubernetes fidelity, and minimal local/remote dependencies.

kaniko and Tekton are primarily designed for building images and running pipelines in Kubernetes (CI/CD), not for a fast local developer inner loop on resource-constrained laptops. kaniko commonly pushes to a registry, and Tekton requires a Kubernetes cluster and pipeline setup overhead. This combination is better suited to remote build systems (e.g., in-cluster CI) than local hot-reload development without a registry.

Question Analysis

Core concept: This question tests inner-loop developer workflows for Kubernetes apps: fast local build+deploy on file changes, Kubernetes-manifest fidelity, and avoiding remote dependencies. In Google Cloud–aligned workflows, Skaffold is the canonical tool for continuous local development with Kubernetes, and a lightweight local cluster (Minikube) provides a close approximation of GKE manifests.

Why the answer is correct: Minikube runs a single-node Kubernetes cluster locally with relatively low overhead and supports using the local Docker daemon (or containerd), so images can be built and used without pushing to a remote registry. Skaffold provides an automated "edit-build-deploy" loop: it watches source files, rebuilds the container, and redeploys to the local cluster. With file sync (manual sync rules) and/or fast incremental builds, Skaffold can achieve sub-10-second iteration for many code changes, meeting the hot-reload/inner-loop requirement. Skaffold also deploys using the same Kubernetes manifests (or Kustomize/Helm) you use in production, closely emulating the GKE deployment shape.

Key features / best practices:
- Skaffold "dev" mode: file watching + continuous build + deploy.
- Local builds without registry: use Minikube's Docker environment (e.g., building directly into the cluster's daemon) or configure Skaffold's local builder with "push: false".
- Manifest fidelity: reuse production YAML/Kustomize overlays; keep environment-specific differences in overlays rather than separate tooling.
- Resource constraints: Minikube can be tuned (CPU/memory limits) and is typically lighter than running a full multi-node local cluster.

Common misconceptions:
- Docker Compose is fast, but it does not emulate Kubernetes objects (Deployments, Services, ConfigMaps, RBAC) or the same deployment flow.
- kaniko and Tekton are primarily CI/CD build primitives; they are not optimized for laptop inner-loop hot reload and usually assume a registry and cluster-based execution.
- Terraform/kubeadm are for provisioning clusters/infrastructure, not rapid local iteration.

Exam tips: For Professional Cloud Developer, map requirements to the developer lifecycle: inner loop (Skaffold, local K8s like Minikube/kind) vs. outer loop (Cloud Build/Cloud Deploy). When you see "hot reload", "watch files", "Kubernetes manifests", and "no remote registry", Skaffold + a local Kubernetes cluster is the standard answer. Also note that Autopilot is production; local emulation focuses on Kubernetes API compatibility, not Autopilot-specific scheduling behavior.
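As a sketch of the inner-loop setup described above, a minimal skaffold.yaml might look like the following. The image name, Dockerfile path, and manifest glob are placeholders (not from the question), and the apiVersion varies by Skaffold release:

```yaml
# Hypothetical skaffold.yaml sketch for a registry-free inner loop.
apiVersion: skaffold/v4beta6
kind: Config
build:
  local:
    push: false               # keep inner-loop images out of remote registries
  artifacts:
    - image: telemetry-aggregator   # placeholder image name
      docker:
        dockerfile: Dockerfile
manifests:
  rawYaml:
    - k8s/*.yaml              # reuse the same manifests promoted to GKE
```

A developer would then run `minikube start` once and `skaffold dev` for the session; Skaffold watches files and rebuilds/redeploys on each change.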

Question 2

Your mobile health platform currently stores per-user workout telemetry and personalized settings in a single PostgreSQL instance; records vary widely by user and evolve frequently as new device firmware adds fields (e.g., heart-rate variability, sleep stages), resulting in weekly schema migrations, downtime risks, and high operational overhead; you expect up to 8 million users, peak 45,000 writes/second concentrated by userId, simple key-based reads per user, and only per-user transactional consistency is required, not multi-user joins or complex cross-entity transactions. To simplify development and scaling while accommodating highly user-specific, evolving state without rigid schemas, which Google Cloud storage option should you choose?

Cloud SQL (PostgreSQL) is a managed relational database but still requires schema management and careful capacity planning. Weekly schema migrations and downtime risk are exactly the pain points described. At 45,000 writes/second, a single instance will not suffice; scaling typically requires read replicas (not for writes) or application-level sharding, increasing operational overhead. It also doesn’t naturally support highly variable per-user records without frequent migrations.

Cloud Storage is an object store for blobs (files) with very high durability and throughput, but it is not a database optimized for low-latency key-based reads/writes with per-user transactional consistency. You could store per-user JSON objects, but you’d lose efficient querying, atomic updates across multiple related items, and database-like concurrency controls. It’s better for raw telemetry archives, exports, or batch analytics inputs.

Cloud Spanner provides horizontally scalable relational SQL with strong consistency and high availability, and it can handle very high write throughput. However, it is schema-based and best when you need relational modeling, SQL joins, and cross-row/table transactions at scale. The scenario explicitly wants to avoid rigid schemas and frequent migrations, and only needs per-user transactional consistency with simple key-based reads—making Spanner unnecessarily complex and costly.

Cloud Datastore/Firestore is purpose-built for scalable, low-ops NoSQL document storage with flexible schemas. It fits per-user partitioning (userId as document key), supports high write rates with automatic scaling, and avoids weekly schema migrations because documents can evolve field-by-field. It also supports transactions/batched writes for per-user consistency without requiring multi-entity relational joins. This directly addresses scaling, agility, and operational overhead concerns.

Question Analysis

Core concept: This question tests choosing a storage system for massive scale, high write throughput, and rapidly evolving per-entity data without frequent schema migrations. In Google Cloud, the primary fit is a schemaless (or schema-flexible) document/NoSQL database with automatic scaling and per-entity transactional semantics.

Why the answer is correct: Cloud Datastore/Firestore (Firestore in Native mode) is designed for key-based access patterns and document-style data that can evolve over time (new fields can be added without migrations). Your workload is strongly partitionable by userId, needs simple reads/writes per user, and only requires transactional consistency within a user's data. Firestore supports atomic operations and transactions within a set of documents, and it naturally models "per-user" documents/subcollections. It also scales horizontally to very high request rates without managing instances, reducing operational overhead and downtime risk.

Key features / best practices:
- Flexible schema: documents can contain varying fields per user and evolve as firmware adds telemetry attributes.
- High scalability: automatic sharding/partitioning and managed operations align with 8M users and bursty writes.
- Data modeling: use a top-level users collection keyed by userId; store settings in a user document and telemetry in subcollections (e.g., users/{userId}/telemetry/{eventId}).
- Consistency/transactions: use transactions/batched writes for per-user invariants; avoid cross-user transactional requirements.
- Architecture Framework alignment: operational excellence (managed service), reliability (multi-zone replication), and performance efficiency (low-latency key lookups).

Common misconceptions:
- Spanner is often chosen for "scale," but it's relational and schema-based; frequent schema changes and document-like variability increase friction.
- Cloud SQL is familiar but won't meet the scaling/ops goals at 45k writes/sec without significant sharding and operational complexity.
- Cloud Storage is durable and cheap but not a low-latency database for key-based reads/writes with transactional semantics.

Exam tips: When you see (1) rapidly changing fields, (2) per-entity access by key, (3) massive scale with minimal ops, and (4) only entity-level transactions, think Firestore/Datastore. Choose Spanner when you need relational modeling, SQL, and strong consistency across rows/tables at global scale; choose Cloud SQL for traditional relational workloads with moderate scale; choose Cloud Storage for blobs/objects, not operational database access patterns.
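To make the schema-flexibility point concrete, here is a minimal, self-contained Python sketch. Plain dicts stand in for Firestore documents, and merge_update mimics the shallow-merge behavior of Firestore's set(..., merge=True) for top-level fields; field names and the user ID are illustrative:

```python
def merge_update(doc: dict, patch: dict) -> dict:
    """Shallow-merge new fields into a document, as Firestore's
    set(..., merge=True) does for top-level fields."""
    merged = dict(doc)
    merged.update(patch)
    return merged

# Firmware v1 wrote only heart_rate; v2 starts sending HRV and sleep
# stage. No migration runs, and other users' documents are untouched.
user_doc = {"userId": "u-1001", "heart_rate": 62}
user_doc = merge_update(user_doc, {"hrv_ms": 48, "sleep_stage": "REM"})
```

In real Firestore code the document would live at users/{userId}, with telemetry events written to a subcollection such as users/{userId}/telemetry/{eventId}.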

Question 3

You are rolling out an internal reporting service on a fleet of e2-standard-4 Compute Engine VMs in us-central1-a using Terraform; one VM restored from a snapshot has been stuck in 'Starting' for 12 minutes and the serial console shows repeated boot attempts—what two investigations should you prioritize to resolve the launch failure? (Choose two.)

Correct. Repeated boot attempts with serial console output strongly suggests an OS-level boot failure. A common cause after restoring from a snapshot is filesystem inconsistency/corruption (especially if the snapshot was taken while the source VM was running). Attaching the boot disk to a rescue VM to inspect logs and run fsck is a standard, high-signal investigation and aligns with Compute Engine troubleshooting best practices.

Incorrect. If the VM is stuck in “Starting” and repeatedly rebooting, the OS likely never reaches a stable multi-user state where sshd is running and accepting logins. Changing the Linux username does not address bootloader/kernel/filesystem failures. SSH troubleshooting is appropriate when the instance is RUNNING and serial console indicates the OS booted successfully.

Incorrect. VPC firewall rules and routes affect network connectivity to/from an instance, but they do not prevent the guest OS from booting. A VM can fully boot even with no network access. The symptom described (serial console shows repeated boot attempts and the instance remains in Starting) points to disk/OS boot issues rather than routing or firewall configuration.

Correct. A completely full boot disk can prevent essential boot-time writes (logs, temp files, systemd state, cloud-init, package scripts), causing services to fail and potentially triggering reboots/boot loops. A 10-GB boot disk is relatively small and increases the likelihood of filling up due to logs or application artifacts captured in the snapshot. Checking disk usage via rescue attach is a high-priority investigation.

Incorrect. Dropped network traffic (firewall/policy) can explain inability to SSH or reach the application, but it does not explain repeated boot attempts shown in the serial console or a VM stuck in “Starting.” Network policies operate after the OS boots and the NIC is configured; they do not typically cause kernel/init failures or reboot loops.

Question Analysis

Core concept: This scenario tests Compute Engine VM boot troubleshooting, especially failures after restoring from a snapshot. When an instance is stuck in "Starting" and the serial console shows repeated boot attempts, the problem is almost always inside the guest OS boot path (disk, filesystem, bootloader, init), not networking. The serial console is the primary signal because it works even when the OS never reaches a state where SSH or agents start.

Why the answer is correct: Root filesystem corruption is a top-priority investigation because snapshot restores can capture an inconsistent filesystem if the source VM wasn't cleanly shut down or if there were pending writes. Corruption can cause a kernel panic, initramfs drops, or systemd failing and rebooting in a loop, which matches "repeated boot attempts." The standard remediation is to detach the boot disk, attach it to a known-good rescue VM, inspect logs, and run filesystem repair (fsck) or recover from backups. A full boot disk is also a high-probability cause of boot loops: if the root partition is 100% full, critical boot-time writes can fail (journald, systemd units, package triggers, cloud-init, log rotation), leading to services failing and watchdog/reboot behavior. A small 10-GB boot disk increases this risk, especially for internal reporting services that may write logs or caches.

Key features / best practices:
- Use serial console output and "get-serial-port-output" to pinpoint the failure stage.
- Use a rescue workflow: stop the VM, detach the boot disk, attach it to another VM, mount read-only first, check free space, run fsck, and review /var/log and journal files.
- Prefer larger boot disks or separate data disks; ensure clean shutdowns before snapshotting; consider filesystem types and journaling.

Common misconceptions: Networking/firewall issues can block SSH and application traffic but do not prevent the VM from booting; the platform "Starting" state with boot loops is not explained by dropped packets. Trying a different SSH user is irrelevant if the OS never reaches sshd startup.

Exam tips: When you see "stuck in Starting" plus serial console boot loops, prioritize disk/OS integrity checks (filesystem corruption, full disk, bad fstab/UUID, bootloader). Network checks are secondary and mainly apply when the VM is RUNNING but unreachable. Map symptoms to layers: platform/boot vs. connectivity vs. application.
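The rescue workflow described above might look like the following gcloud sequence; the instance and disk names (report-vm-1, rescue-vm) are placeholders, not from the question:

```shell
# 1. Capture the boot log to confirm where boot fails.
gcloud compute instances get-serial-port-output report-vm-1 \
    --zone=us-central1-a

# 2. Stop the broken VM and detach its boot disk.
gcloud compute instances stop report-vm-1 --zone=us-central1-a
gcloud compute instances detach-disk report-vm-1 \
    --disk=report-vm-1-boot --zone=us-central1-a

# 3. Attach the disk to a healthy rescue VM as a secondary disk.
gcloud compute instances attach-disk rescue-vm \
    --disk=report-vm-1-boot --zone=us-central1-a

# 4. On the rescue VM: mount read-only to inspect, check free space,
#    then repair the filesystem (fsck only on an unmounted filesystem).
#    sudo df -h /mnt/rescue
#    sudo fsck -f /dev/sdb1
```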

Question 4

You are building a Rust-based microservice for a logistics analytics platform on Google Cloud that must be packaged as a container image; the service links against two in-house native .so libraries and requires OpenSSL 1.1 during build, exposes HTTP on port 8080, must autoscale from 0 instances to handle bursts up to 400 requests per second, and needs cold starts under 2 seconds; your team does not want to provision, patch, or manage any servers or clusters. How should you deploy the microservice?

Cloud Functions is serverless and can autoscale, but it is primarily event/function oriented and historically constrained around runtimes and packaging. Even with newer container support, the question explicitly frames a containerized microservice with native .so libraries and OpenSSL 1.1 build requirements, which is a more natural fit for Cloud Run’s “run any container” model and HTTP service semantics (port 8080, request routing, concurrency controls).

Cloud Build can build a custom container image using a Dockerfile that installs OpenSSL 1.1 during build and links/copies the in-house .so libraries. Deploying that image to Cloud Run satisfies the requirements: fully managed (no servers/clusters), HTTP on port 8080, autoscaling from 0, and rapid horizontal scaling to handle bursts (configure concurrency and max instances). Cold starts can be optimized via image size and runtime tuning.

A Container-Optimized OS VM on Compute Engine can run containers, but it still requires provisioning and managing VMs (patching strategy, instance groups, autoscaling configuration, OS lifecycle, monitoring, and capacity planning). It also will not naturally scale to zero without additional orchestration and would typically have slower operational iteration than Cloud Run. This violates the “does not want to manage servers” requirement.

GKE is powerful for container orchestration, but it introduces cluster management overhead (node pools, upgrades, patching, scaling, networking policies), even in Autopilot mode. While GKE can autoscale, scaling to zero for an HTTP service and achieving consistently low cold starts is more complex and typically involves Knative/KEDA patterns. The question explicitly asks to avoid managing clusters, making Cloud Run the better match.

Question Analysis

Core Concept: This question tests selecting a fully managed, container-based compute platform that supports HTTP microservices, scales to zero, and avoids server/cluster management. The key services are Cloud Build (to build container images with custom build dependencies like OpenSSL 1.1 and native .so libraries) and Cloud Run (to run containers serverlessly with autoscaling).

Why the Answer is Correct: Cloud Run is designed for stateless HTTP containers, exposes a single HTTP port (8080 by default), and can autoscale from 0 instances to many instances based on incoming requests. It meets the requirement of "no servers or clusters to provision/patch/manage" because Google manages the underlying infrastructure. Building with Cloud Build allows you to define a Dockerfile that installs OpenSSL 1.1 during the build stage and copies in/links your in-house .so libraries, producing a deployable container image. For bursts up to ~400 RPS, Cloud Run scales horizontally; you tune concurrency and max instances to meet throughput and latency goals.

Key Features / Configurations:
- Cloud Run: scale-to-zero, request-based autoscaling, configurable concurrency (requests per instance), min instances (set to 0 for scale-to-zero), max instances, CPU/memory sizing.
- Cold starts: choose appropriate CPU/memory, keep the container lean (multi-stage builds), and optimize Rust startup. If a strict sub-2s cold start is hard under all conditions, consider Cloud Run startup CPU boost and/or a small min-instances value (though that conflicts with "from 0").
- Cloud Build: reproducible builds, private dependencies, Artifact Registry integration, and CI triggers.
- Architecture Framework alignment: operational excellence (managed platform), reliability (autoscaling), performance (right-sizing, concurrency), and security (Artifact Registry, IAM).

Common Misconceptions: Cloud Functions is serverless but not ideal when you must package and run a custom container with native shared libraries and specific build-time dependencies; while newer generations support containers, the question's emphasis on a containerized microservice and native linking aligns more directly with Cloud Run. GKE/Compute Engine can run containers but require cluster/VM management, patching, and capacity planning, which are explicitly disallowed.

Exam Tips: When you see "container image," "HTTP on 8080," "scale to zero," and "no server/cluster management," default to Cloud Run. Pair it with Cloud Build (and Artifact Registry) when custom build steps or native dependencies are required. Always mention concurrency/max instances for RPS scaling and note cold-start tradeoffs with scale-to-zero.
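As a sketch of the build stage, a multi-stage Dockerfile along these lines could install OpenSSL 1.1 at build time and ship the native libraries with a lean runtime image. The base image tags, the vendor/ library paths, and the binary name telemetry-svc are all assumptions for illustration:

```dockerfile
# Hypothetical multi-stage build; names and paths are placeholders.
FROM rust:1.70-bullseye AS builder
# Debian bullseye provides OpenSSL 1.1 headers (libssl-dev) for the build.
RUN apt-get update && apt-get install -y --no-install-recommends \
    libssl-dev pkg-config && rm -rf /var/lib/apt/lists/*
WORKDIR /src
COPY . .
# In-house shared libraries copied into the link path before building.
COPY vendor/libgeo.so vendor/libroute.so /usr/local/lib/
RUN cargo build --release

FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
    libssl1.1 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/lib/libgeo.so /usr/local/lib/libroute.so /usr/local/lib/
RUN ldconfig
COPY --from=builder /src/target/release/telemetry-svc /usr/local/bin/
ENV PORT=8080
EXPOSE 8080
CMD ["/usr/local/bin/telemetry-svc"]
```

The resulting image could then be built by Cloud Build, pushed to Artifact Registry, and deployed with gcloud run deploy, tuning --min-instances=0, --max-instances, and --concurrency for the burst profile.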

Question 5

You inherit a public-facing marketing microsite hosted on a managed instance group of 3 Compute Engine VMs behind an external HTTPS load balancer (https://promo.example.com), and before a campaign launch you must automatically crawl up to 500 pages to verify whether any bundled client-side libraries have known vulnerabilities and whether the site is susceptible to reflected or stored XSS; which Google Cloud service should you use to run this security scan and generate a findings report?

Google Cloud Armor is a security enforcement service (WAF and DDoS protection) for applications behind Cloud Load Balancing. It helps mitigate threats using preconfigured WAF rules (e.g., OWASP CRS), custom rules, and rate limiting. However, it does not crawl your site or generate a vulnerability findings report from active scanning. It’s for prevention and mitigation, not discovery/assessment.

Cloud Debugger (part of Cloud Operations) helps inspect application state in production by capturing snapshots and adding logpoints without stopping the app. It is used for troubleshooting and debugging code issues, not for security scanning. It does not crawl web pages, test for XSS, or report vulnerable client-side libraries.

Web Security Scanner is Google Cloud’s DAST solution that crawls web applications and scans for common vulnerabilities, including reflected and stored XSS. It is designed to run against public endpoints (like an HTTPS load balancer URL) and produces structured findings reports. This directly matches the requirement to automatically crawl up to hundreds of pages and generate a security findings report before launch.

Error Reporting aggregates and groups application errors/exceptions and provides notifications and dashboards to help developers prioritize fixes. It is valuable for reliability and debugging production issues, but it does not perform web vulnerability scanning, does not crawl pages, and will not assess XSS exposure or vulnerable client-side libraries.

Question Analysis

Core Concept: This question tests knowledge of Google Cloud's application security testing tools for web apps, specifically dynamic application security testing (DAST) that crawls a site and checks for common web vulnerabilities such as XSS.

Why the Answer is Correct: Web Security Scanner is the Google Cloud service designed to automatically crawl public web applications (including those behind an external HTTPS Load Balancer) and scan for vulnerabilities. It can crawl up to a configured limit (the question mentions up to 500 pages) and produces a findings report identifying issues such as reflected and stored cross-site scripting (XSS) and the use of vulnerable client-side libraries (when detectable via the scanner's checks). This matches the requirement to "automatically crawl" pages and "generate a findings report" prior to a campaign launch.

Key Features / How You'd Use It: You configure a scan target (e.g., https://promo.example.com), set authentication if needed (not required here since the site is public-facing), define the maximum crawl depth/limits, and schedule or run scans on demand. Findings are reported in the Web Security Scanner UI and can be exported/consumed via Security Command Center integrations or APIs for tracking and remediation workflows. From an Architecture Framework perspective, this supports the Security, Reliability, and Operational Excellence pillars by proactively identifying exploitable issues before traffic spikes.

Common Misconceptions: Cloud Armor is often confused as a "scanner," but it is a protection/control plane (WAF, DDoS defense) rather than a vulnerability discovery tool. Debugger and Error Reporting are observability tools; they help diagnose code behavior and crashes, not security posture. The key phrases "crawl pages" and "scan for XSS" point directly to Web Security Scanner.

Exam Tips: On the Professional Cloud Developer exam, map requirements to service intent: scanning/crawling for OWASP-style web vulnerabilities → Web Security Scanner (DAST). Blocking/mitigating attacks at the edge → Cloud Armor. Code-level troubleshooting without redeploy → Debugger. Aggregating exceptions and stack traces → Error Reporting. Also note that Web Security Scanner targets externally reachable apps; internal-only apps typically require different approaches (e.g., private scanning setups or third-party tools).
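A scan is defined by a ScanConfig resource; a minimal sketch for this site might look like the following (the display name and QPS value are illustrative assumptions, and crawl limits are largely managed by the service rather than set field-by-field here):

```json
{
  "displayName": "promo-prelaunch-scan",
  "startingUrls": ["https://promo.example.com"],
  "maxQps": 15
}
```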


Question 6

You operate a Python microservice on Cloud Run in us-central1 that writes time-series telemetry to Firestore (Native mode) and must sustain 8,000 document writes per minute with p95 write latency under 40 ms while using Application Default Credentials, retries with backoff, deadlines, and connection pooling per Google best practices. To optimize performance and minimize boilerplate, how should the service write to Firestore?

Cloud Client Libraries are the recommended approach for Firestore in Python. They integrate seamlessly with ADC on Cloud Run, use optimized transports (typically gRPC), and provide built-in retry and timeout controls consistent with Google best practices. They also reduce boilerplate while enabling performance tuning (client reuse, batching, concurrency). This best matches the requirement for low p95 latency and sustained write throughput.

Google API Client Libraries are more generic and often oriented around REST/HTTP with discovery-based APIs. While they can authenticate with ADC, they typically require more manual configuration to achieve the same retry, deadline, and connection reuse behavior as the purpose-built Cloud Client Libraries. For Firestore performance-sensitive workloads, they add unnecessary boilerplate and may not deliver optimal latency characteristics.

A custom gRPC client could theoretically achieve high performance, but it significantly increases implementation complexity and risk. You would need to correctly implement authentication (ADC token plumbing), retries with proper idempotency semantics, deadlines, channel pooling, and error handling. This contradicts the requirement to minimize boilerplate and is not the recommended approach for exam scenarios unless a feature is unavailable in official libraries.

A third-party HTTP client library would require you to manually implement Firestore’s API calls, authentication, retries/backoff, deadlines, and connection pooling. It is easy to get retry behavior wrong (e.g., retrying non-idempotent writes) and to miss Google’s recommended client configurations. This option increases boilerplate and operational risk and is unlikely to be the best path to consistent low-latency writes.

Question Analysis

Core Concept: This question tests how to integrate a Cloud Run Python service with Firestore (Native mode) using Google-recommended client patterns: Application Default Credentials (ADC), efficient transport (gRPC), retries with exponential backoff, deadlines/timeouts, and connection pooling/channel reuse to meet throughput and latency SLOs.

Why the Answer is Correct: The Cloud Client Libraries (google-cloud-firestore) are the intended, high-level SDKs for Firestore. They natively support ADC on Cloud Run, use the optimized underlying transport (gRPC for Firestore), and provide built-in retry and timeout configuration aligned with Google API best practices. They also manage channel pooling/reuse via the underlying gRPC stack, which is critical for keeping p95 latency low under sustained write load. Using the Cloud Client Libraries minimizes boilerplate while still allowing performance tuning (batching, async/concurrency, and per-call deadlines).

Key Features / Best Practices:
- ADC: on Cloud Run, the library automatically uses the service account identity without manual key handling.
- Retries and deadlines: client libraries expose per-method retry/timeout settings; set reasonable deadlines (e.g., tens to hundreds of ms depending on SLO) and rely on idempotency-aware retries.
- Connection management: gRPC channels are reused; avoid creating a new client per request. Instantiate the Firestore client once per container instance.
- Throughput: 8,000 writes/min (~133 writes/sec) is achievable with concurrent requests and/or batched writes (WriteBatch/BatchWrite) while respecting Firestore limits and document hot-spotting considerations.

Common Misconceptions: Google API Client Libraries (generic discovery-based REST clients) may seem "official," but they are lower-level, often REST-oriented, and typically require more manual work to match gRPC performance characteristics and to correctly implement retries/timeouts/channel reuse. Custom gRPC or third-party HTTP clients can be tuned, but they increase risk and boilerplate and can easily violate best practices (incorrect retry semantics, poor pooling, missing auth flows).

Exam Tips: For Google-managed services (Firestore, Pub/Sub, Storage), prefer Cloud Client Libraries unless you have a specific unsupported feature need. In Cloud Run, create clients globally (outside request handlers) to reuse connections. Meeting latency SLOs often depends as much on transport/channel reuse and correct timeouts as on raw service capacity. Also remember Firestore best practices for high-write workloads: avoid hot documents/collections and consider batching where appropriate.
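The "instantiate the client once per container instance" advice can be sketched as follows. FakeFirestoreClient is a deliberate stand-in for google.cloud.firestore.Client so the snippet runs without credentials; only the caching/reuse pattern is the point:

```python
from functools import lru_cache

class FakeFirestoreClient:
    """Stand-in for google.cloud.firestore.Client; counts constructions."""
    instances_created = 0

    def __init__(self):
        FakeFirestoreClient.instances_created += 1

@lru_cache(maxsize=1)
def get_client() -> "FakeFirestoreClient":
    # Constructed lazily on the first request; every later call reuses
    # the same client (and, with the real library, its gRPC channels).
    return FakeFirestoreClient()

def handle_request():
    db = get_client()  # reuse per container instance, not per request
    return db

first = handle_request()
second = handle_request()
```

In a real service the same effect is achieved by creating the client at module scope, outside the request handler.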

Question 7

Your microservices application named fintrack-ui runs on three GKE clusters—fintrack-ui-dev, fintrack-ui-uat, and fintrack-ui-prod—in us-central1, and your security team requires that only container images signed by a designated release attestor (using a Cloud KMS key in projects/123456/locations/us/keyRings/prod/cryptoKeys/release) are allowed to be deployed to the fintrack-ui-prod cluster while dev and uat remain permissive; following Google-recommended practices, how should you implement this so that enforcement applies only to the production cluster?

Correct. Binary Authorization is the native GKE admission control for enforcing image attestations. Using clusterAdmissionRules lets you apply strict REQUIRE_ATTESTATION (with the release attestor backed by the specified Cloud KMS key) only to fintrack-ui-prod, while setting dev/uat to ALWAYS_ALLOW or dryrun. This is the recommended, policy-driven, cluster-scoped enforcement model for production-only controls.

Incorrect. Binary Authorization policies are evaluated at deployment time by the cluster’s admission controller; you don’t practically “exempt images not deployed to prod” because the same image could be deployed to any cluster. The requirement is cluster-specific enforcement, not image-specific exemptions. The correct approach is to scope the rule to the prod cluster via clusterAdmissionRules.

Incorrect. Vulnerability scanning and tagging images as untrusted is not the same as cryptographic signing/attestation enforcement. Scanning can inform risk decisions, but by itself it does not block deployments to GKE. To enforce “only signed by release attestor,” you need Binary Authorization with an attestor and an admission policy, not just Artifact Registry scanning metadata.

Incorrect. A Cloud Functions-based gate in the build pipeline is not the recommended control for cluster admission, and it doesn’t guarantee enforcement if someone deploys an image from another pipeline or registry path. The requirement is deploy-time enforcement on the prod cluster. Binary Authorization provides centralized, auditable, cluster-enforced policy independent of how images were built or stored.

Question Analysis

Core Concept: This question tests Binary Authorization (BinAuthZ) for GKE: a deploy-time policy enforcement control that only allows images meeting specified attestation requirements to be admitted to a cluster. It integrates with Cloud KMS-backed attestors and is the Google-recommended approach for enforcing “only signed images run in prod.”

Why the Answer is Correct: You must enforce signature/attestation only on fintrack-ui-prod while keeping dev/uat permissive. Binary Authorization supports per-cluster policy via clusterAdmissionRules, allowing different enforcement modes and attestation requirements by cluster (identified by cluster location/name). You create an attestor whose signing key is the specified Cloud KMS key (projects/123456/locations/us/keyRings/prod/cryptoKeys/release), then configure the policy so the prod cluster requires that attestation, while dev/uat use an always-allow (or dryrun) rule.

Key Features / Configuration:
- Attestor: configured to use the Cloud KMS key version(s) for signing/verification.
- Policy: admissionWhitelistPatterns (optional) and defaultAdmissionRule plus clusterAdmissionRules.
- For fintrack-ui-prod: set enforcement to REQUIRE_ATTESTATION (or equivalent) and require the release attestor.
- For dev/uat: set enforcement to ALWAYS_ALLOW (or use dryrun for gradual rollout).

This aligns with the Google Cloud Architecture Framework security principle of policy-based, automated controls at deploy time, reducing human error.

Common Misconceptions: Many confuse vulnerability scanning (Artifact Analysis/Container Scanning) with admission control. Scanning identifies issues but does not prevent deployment unless coupled with an admission controller like Binary Authorization. Another misconception is trying to “exempt images not deployed to prod”; policies are evaluated at admission time per cluster, not by predicting deployment destinations.
Exam Tips: When you see “only signed/attested images can be deployed to GKE,” think Binary Authorization. When the requirement is “only in prod,” look specifically for clusterAdmissionRules (per-cluster overrides) rather than global defaults. Also note that KMS key location does not need to match the cluster region, but IAM on the key and attestor must allow the signing/verification workflow. Consider rollout patterns: start with dryrun in non-prod, then enforce in prod.
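As a sketch, a policy scoped this way might look like the following YAML (the project ID and attestor name are illustrative; the cluster key uses the location.cluster-name format). It would typically be applied with `gcloud container binauthz policy import policy.yaml`:

```yaml
# Illustrative Binary Authorization policy (project/attestor names are hypothetical).
defaultAdmissionRule:
  evaluationMode: ALWAYS_ALLOW          # dev and uat stay permissive
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
clusterAdmissionRules:
  us-central1.fintrack-ui-prod:         # location.cluster-name
    evaluationMode: REQUIRE_ATTESTATION # prod requires the release attestation
    enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
    requireAttestationsBy:
      - projects/my-project/attestors/release-attestor
globalPolicyEvaluationMode: ENABLE
```

Switching the prod rule's enforcementMode to DRYRUN_AUDIT_LOG_ONLY is the usual way to stage the rollout before blocking deployments.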

8
Question 8

You are monitoring a media transcoding microservice written in Node.js that runs on Cloud Run (fully managed). Each revision is configured with 2 vCPUs and 1.5 GiB memory, and Cloud Monitoring shows sustained spikes of ~90% CPU and ~1.3 GiB memory for 15-minute intervals during peak traffic (~800 RPS). You must identify which function is consuming the most CPU cycles and heap memory with minimal overhead (<1% CPU) and without adding significant latency in production. What should you do?

Incorrect. Adding console.log before/after every function call is extremely intrusive: it adds synchronous/async overhead, increases CPU usage, and can significantly increase latency and log volume/cost at 800 RPS. It also creates noisy data and operational risk in production. This violates the requirement for minimal overhead (<1% CPU) and “without adding significant latency.”

Incorrect. Aggregating request logs and computing handler durations can identify slow requests, but it cannot reliably attribute CPU cycles or heap usage to specific functions. Latency can be dominated by I/O waits, network calls, or downstream dependencies. Additionally, request logs typically don’t provide function-level granularity, and inferring CPU hotspots from timestamps is imprecise and misleading.

Incorrect. OpenTelemetry + Cloud Trace is excellent for distributed tracing and identifying high-latency spans across services, but it is not a CPU/heap profiler. Tracing shows where time is spent (including waiting), not which functions consume CPU cycles or allocate heap. Also, high-cardinality tracing at 800 RPS can add overhead unless carefully sampled, and still won’t produce flame graphs for CPU/heap.

Correct. Cloud Profiler (via @google-cloud/profiler) provides sampling-based CPU and heap profiling suitable for production with low overhead. It generates CPU and heap flame graphs in the Cloud console, directly showing the hottest functions and memory allocation/retention patterns. This meets the requirement to identify CPU- and heap-intensive functions with minimal overhead and minimal added latency.

Question Analysis

Core Concept: This question is about choosing the right observability tool to identify function-level CPU hotspots and heap memory usage in a production Node.js service with very low overhead. The requirement is not just to find slow requests, but to determine which functions consume the most CPU cycles and heap memory.

Why correct: A sampling profiler is the correct tool for this job because it periodically samples stack traces and memory usage instead of instrumenting every function call. In Google Cloud, the Profiler agent for Node.js is designed to surface CPU and heap flame graphs that show which functions are hottest over time. That directly answers both parts of the question while keeping runtime overhead low enough for production use.

Key features: Cloud Profiler provides CPU and heap profiling, visualizes results as flame graphs, and is intended for continuous profiling rather than one-off debugging. Sampling-based profiling is much lighter than pervasive logging or custom timing instrumentation. It is specifically suited to finding code-level hotspots that are not visible from request logs alone.

Common misconceptions: Tracing and logging can help explain latency, but they do not directly measure CPU consumption or heap allocation by function. A slow span may be waiting on I/O rather than burning CPU, and request logs only show handler-level timing unless you add heavy custom instrumentation. Manual logging around every function call is especially unsuitable in a high-throughput production service.

Exam tips: When a question asks for hottest functions, CPU cycles, heap memory, flame graphs, or low-overhead production diagnostics, think Profiler. When it asks for request path latency across services, think Trace. When it asks for events or debugging output, think Logging.
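Enabling the agent is a small bootstrap step. A minimal sketch, assuming a hypothetical service name and version (on Cloud Run, the project ID and credentials are picked up from the environment):

```javascript
// Start the Cloud Profiler agent before any other requires, so that
// sampling covers all subsequently loaded code paths.
require('@google-cloud/profiler').start({
  serviceContext: {
    service: 'media-transcoder', // hypothetical service name
    version: '1.4.2',            // lets you compare profiles across releases
  },
});
```

Because the agent samples rather than instruments, it is designed to stay within the low single-digit-percent overhead range that the question's constraint demands.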

9
Question 9

You are rolling out a reporting microservice on a Compute Engine VM (10.10.2.4) in the analytics-vpc (CIDR 10.10.0.0/16, region us-central1) that must connect to a Cloud SQL for PostgreSQL instance via the Cloud SQL Auth Proxy. The Cloud SQL instance resides in a separate db-vpc (CIDR 10.20.0.0/16) and has both public and private IPs; its private address is 10.20.3.5. For compliance, all database traffic must use the private IP. In testing, connections to the instance’s public IP succeed, but connections to 10.20.3.5 time out even though firewall rules in both VPCs allow TCP:5432 from their respective CIDR ranges and the proxy is started with --private-ip. How should you fix this issue?

Running the Cloud SQL Auth Proxy as a background service can improve reliability and startup behavior, but it does not change network reachability. The timeout to 10.20.3.5 is caused by lack of L3 routing between analytics-vpc and db-vpc, not by how the proxy process is managed. Even as a daemon, the proxy still cannot reach a private IP in another VPC without peering/VPN/Interconnect.

The scenario already states the proxy is started with --private-ip. That flag only instructs the proxy to connect to the instance’s private address instead of the public one. It does not create connectivity between VPCs. If the client VM cannot route to 10.20.0.0/16, using --private-ip will simply result in timeouts, exactly as observed.

VPC Network Peering between analytics-vpc (10.10.0.0/16) and db-vpc (10.20.0.0/16) is required so the VM can route traffic to the Cloud SQL instance’s private IP (10.20.3.5). Firewall rules alone don’t provide reachability; peering provides the necessary routes and private connectivity. This aligns with the compliance requirement to keep database traffic on private IP rather than using the public endpoint.

The Cloud SQL Client IAM role is required for the Auth Proxy to obtain credentials and connect to Cloud SQL, but the evidence shows public IP connections already succeed, indicating IAM is likely correct. IAM permissions do not affect VPC routing to a private IP. Without peering (or VPN/Interconnect), the VM still cannot reach 10.20.3.5, so granting roles will not resolve the timeout.

Question Analysis

Core Concept: This question tests Cloud SQL private IP connectivity across networks. A Cloud SQL instance with private IP is reachable only from networks that have private connectivity to the VPC network where the instance’s private services access (PSA) range is allocated. The Cloud SQL Auth Proxy does not “tunnel” private IP traffic across unrelated VPCs; it simply chooses which instance address (public vs private) to connect to.

Why the Answer is Correct: Connections to the public IP succeed because public IP access uses Google’s public endpoint and does not require L3 routing between analytics-vpc and db-vpc. Connections to 10.20.3.5 time out because the VM (10.10.2.4) has no route to the 10.20.0.0/16 network. Even with firewall rules allowing TCP:5432 and the proxy started with --private-ip, packets cannot reach the private IP without network connectivity between the VPCs. Configuring VPC Network Peering between analytics-vpc and db-vpc establishes private RFC1918 routing between 10.10.0.0/16 and 10.20.0.0/16, enabling the proxy (or direct clients) to reach 10.20.3.5.

Key Features / Best Practices:
- Cloud SQL private IP requires private network connectivity (routing) from the client VPC to the instance’s VPC.
- VPC Network Peering provides low-latency, private connectivity without transitive routing; CIDR ranges must not overlap.
- Firewall rules are necessary but not sufficient; you also need routes. Peering creates the routes automatically.
- Ensure the Cloud SQL instance is configured for private IP in its VPC (PSA already exists since it has a private IP).

Common Misconceptions: Many assume --private-ip on the Auth Proxy “forces private connectivity” regardless of network topology. In reality, it only selects the private address; it cannot overcome missing routes. Similarly, opening firewalls in both VPCs doesn’t help if there is no peering/VPN/Interconnect path.
Exam Tips: When you see “private IP times out” across different VPCs, think routing first (peering/VPN/Interconnect), then firewall, then IAM/proxy flags. Public IP working is a strong clue that IAM and proxy basics are fine, and the missing piece is private network connectivity. Also remember peering is non-transitive; if more networks are involved, you may need additional peering or a hub-and-spoke with Cloud VPN/Interconnect and Cloud Router.
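As a sketch, the peering could be established with gcloud; both sides must be created before the peering becomes ACTIVE, and the project IDs below are hypothetical:

```shell
# In the project that owns analytics-vpc (project ID is illustrative):
gcloud compute networks peerings create analytics-to-db \
  --network=analytics-vpc \
  --peer-project=db-project \
  --peer-network=db-vpc

# In the project that owns db-vpc (the reverse side):
gcloud compute networks peerings create db-to-analytics \
  --network=db-vpc \
  --peer-project=analytics-project \
  --peer-network=analytics-vpc
```

Once both sides show ACTIVE, subnet routes for 10.10.0.0/16 and 10.20.0.0/16 are exchanged automatically and the existing TCP:5432 firewall rules take effect.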

10
Question 10

Your company runs a fleet-telemetry API on Cloud Run in the us-central1 region under the production project fleet-prod. For every release candidate, you must spin up an ephemeral QA environment that is fully isolated from production (separate Google Cloud project for networking/IAM/billing), is created and torn down automatically by your CI pipeline on each pull request, mirrors production settings (region: us-central1, CPU: 1, min instances: 0), and completes provisioning in under 10 minutes without sending any traffic to production. You want the approach that provides full automation with the least ongoing effort while enabling automated end-to-end tests. What should you do?

Correct. Terraform in Cloud Build can create a new project and deploy an identical Cloud Run service in us-central1 with CPU=1 and min-instances=0, then destroy it after tests. This meets strict isolation (IAM/networking/billing) and avoids any production traffic because the QA service has its own URL in a different project. IaC reduces drift and ongoing maintenance compared to imperative scripts.

Incorrect. Deploying a new revision in the existing production project and using traffic splitting does not provide full isolation. IAM, networking, quotas, and potentially data access remain shared with production, and misconfiguration can send traffic to production. Even if you route test traffic to a new revision, you still risk interacting with production dependencies and violate the “separate project” requirement.

Partially viable but not best. gcloud commands can create a project and deploy Cloud Run, but this is imperative automation that tends to be harder to maintain, less reusable, and more prone to configuration drift than Terraform. As requirements evolve (IAM bindings, API enablement, org policies, networking), scripts become brittle. The question asks for least ongoing effort, which favors Terraform-based IaC.

Incorrect. Like option B, this keeps everything in the production project and relies on traffic splitting. It fails the explicit requirement for a separate project and increases risk of accidental production impact. It also doesn’t guarantee “without sending any traffic to production,” because shared routing/configuration mistakes or shared backends could still affect production systems.

Question Analysis

Core concept: This question tests automated environment provisioning for CI/CD using Infrastructure as Code (IaC) and Cloud Run deployment patterns. The key requirements are full isolation (separate project for IAM/networking/billing), fast ephemeral creation/teardown per PR, mirroring production Cloud Run settings, and ensuring no traffic reaches production.

Why the answer is correct: Option A uses Cloud Build to run Terraform that (1) creates a brand-new Google Cloud project and (2) deploys the Cloud Run service into that project, then runs end-to-end tests and destroys everything. This directly satisfies “fully isolated from production” because project boundaries isolate IAM policies, VPC/networking, quotas, and billing. Terraform provides repeatable, declarative provisioning that is easy to automate and maintain over time, which minimizes ongoing effort compared to imperative scripting. With proper module design, provisioning a project plus a small set of resources (APIs enablement, service accounts, Cloud Run service) can fit within the 10-minute goal.

Key features / best practices: Terraform supports Google Cloud project creation, API enablement, IAM bindings, and Cloud Run resources, enabling a single pipeline to create and tear down environments consistently. You can parameterize region (us-central1), CPU (1), and min instances (0) to mirror production. Using separate projects aligns with the Google Cloud Architecture Framework’s security and governance principles (strong isolation, least privilege, clear resource boundaries). Cloud Build triggers per PR provide full automation.

Common misconceptions: Traffic splitting (options B/D) can look attractive for “quick QA,” but it violates the requirement of full isolation and risks accidental production interaction (shared IAM, shared networking, shared quotas, and potential data access). Imperative gcloud scripting (option C) can work, but it typically increases long-term maintenance burden and drift risk compared to IaC, especially as environments evolve.

Exam tips: When you see “fully isolated” and “separate project,” prefer project-per-environment plus IaC (Terraform) over in-project revisions/traffic splitting. Also, “least ongoing effort” usually points to declarative IaC rather than custom command scripts. Finally, ensure the approach prevents any production traffic by deploying to a separate endpoint in a separate project and running tests against that endpoint only.
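The project-per-PR pattern can be sketched in Terraform as follows. All identifiers (variable names, project naming scheme, billing account) are illustrative; a real module would also need API enablement for additional services and IAM bindings for the test runner:

```hcl
# Illustrative Terraform sketch of an ephemeral QA environment.
# `terraform apply` per PR creates it; `terraform destroy` tears it down.
variable "pr_number" {}
variable "billing_account" {}
variable "image" {}

resource "google_project" "qa" {
  name            = "fleet-qa-${var.pr_number}"
  project_id      = "fleet-qa-${var.pr_number}"
  billing_account = var.billing_account
}

resource "google_project_service" "run" {
  project = google_project.qa.project_id
  service = "run.googleapis.com"
}

resource "google_cloud_run_v2_service" "api" {
  project             = google_project.qa.project_id
  name                = "fleet-telemetry-api"
  location            = "us-central1"     # mirrors production region
  deletion_protection = false             # environment is ephemeral

  template {
    scaling {
      min_instance_count = 0              # mirrors production
    }
    containers {
      image = var.image
      resources {
        limits = { cpu = "1" }            # mirrors production
      }
    }
  }

  depends_on = [google_project_service.run]
}
```

The CI pipeline then runs its end-to-end tests against the new service's URL (exposed as a Terraform output) and finishes with `terraform destroy`, so nothing persists between pull requests.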

Success Stories (6)

V
V***********Nov 24, 2025

Study period: 2 months

The scenarios in this app were extremely useful. The explanations made even the tricky deployment questions easy to understand. Definitely worth using.

B
B************Nov 21, 2025

Study period: 2 months

The questions weren’t just easy recalls — they taught me how to approach real developer scenarios. I passed this week thanks to these practice sets.

철
철**Nov 17, 2025

Study period: 1 month

Subscribing for one month gave me a healthy sense of pressure to work through the questions quickly, so I ended up studying harder. Fortunately the questions were similar to the real ones, so I could answer them easily.

이
이**Nov 15, 2025

Study period: 1 month

The questions in this app were very similar to the actual exam questions, so they were easy to solve! It feels great to pass on my very first attempt.

R
R***********Nov 6, 2025

Study period: 1 month

I prepared for three weeks using Cloud Pass and the improvement was huge. The difficulty level was close to the real Cloud Developer exam, and the explanations helped me fill in my knowledge gaps quickly.

Other Practice Tests

Practice Test #1

50 Questions·120 min·Pass 700/1000

Practice Test #2

50 Questions·120 min·Pass 700/1000

Practice Test #4

50 Questions·120 min·Pass 700/1000
← View All Google Professional Cloud Developer Questions

Start Practicing Now

Download Cloud Pass and start practicing all Google Professional Cloud Developer exam questions.

Get it on Google Play · Download on the App Store

© Copyright 2026 Cloud Pass, All rights reserved.
