Cloud Pass
Google Professional Cloud Developer

Practice Test #1

Simulate the real exam experience with 50 questions and a 120-minute time limit. Practice with AI-verified answers and detailed explanations.

50 Questions | 120 Minutes | 700/1000 Passing Score

AI-Powered

Triple AI-Verified Answers & Explanations

Every answer is cross-verified by 3 leading AI models to ensure maximum accuracy. Get detailed per-option explanations and in-depth question analysis.

GPT Pro
Claude Opus
Gemini Pro
Per-option explanations
In-depth question analysis
3-model consensus accuracy

Practice Questions

Question 1

You are designing a tablet app for municipal tree inspectors that must store hierarchical observations (city -> district -> park -> tree -> inspection) with up to 5 nested levels and support offline work for up to 72 hours; upon reconnect, the app must automatically sync local changes and handle conflicts gracefully. A backend on Cloud Run will use a dedicated service account to enrich the same records (e.g., geocoding, policy tags) directly in the database, performing up to 5,000 writes per minute at peak. The solution must scale securely to 250,000 monthly active users in the first quarter and provide client SDKs with built-in offline caching and synchronization. Which database and IAM role should you assign to the backend service account?

Cloud SQL supports relational modeling for hierarchies, but it does not provide Google client SDKs with built-in offline persistence and automatic sync for mobile/tablet apps. You would need to implement local storage, change tracking, conflict resolution, and sync logic yourself. roles/cloudsql.editor also doesn’t directly grant database-level DML permissions; it manages instances and is broader than needed for app-level writes.

Bigtable can handle high write throughput and large scale, but it is a wide-column store optimized for time-series/analytics-style access patterns, not hierarchical document navigation with offline-first mobile sync. There are no Firestore-like client SDKs for offline caching/synchronization. Additionally, roles/bigtable.viewer is read-only, so it cannot support the backend requirement to enrich records with writes.

Firestore in Native mode matches the requirements for hierarchical observations and offline-first operation via Firebase/Firestore client SDKs (offline persistence, local cache, automatic sync on reconnect). It scales to large user bases and supports high write rates when designed to avoid hot spots. roles/datastore.user provides the backend service account the necessary read/write access to Firestore to perform enrichment updates securely.

Firestore in Datastore mode is primarily for Datastore API compatibility and does not align as directly with Firebase-style offline synchronization expectations for client apps. Even if the database choice were acceptable, roles/datastore.viewer is read-only and cannot support the Cloud Run backend’s requirement to perform up to 5,000 writes per minute to enrich records.

Question Analysis

Core concept: This question tests selecting a database that supports hierarchical data modeling plus mobile, offline-first synchronization using Google-provided client SDKs, and choosing the least-privilege IAM role for a Cloud Run service account that must write to that database.

Why the answer is correct: Firestore in Native mode is the best fit because it is the database behind the Firebase/Firestore client SDKs, which provide built-in offline persistence, local caching, and automatic synchronization when connectivity returns, exactly matching the 72-hour offline requirement and the "sync + conflict handling" requirement. Firestore's document/collection model naturally represents hierarchical entities (city/district/park/tree/inspection) via subcollections or by storing references/IDs, and it supports up to 100 levels of subcollections, so 5 nested levels is well within limits. For backend enrichment from Cloud Run, the service account needs read/write access to Firestore; roles/datastore.user grants the ability to read and write entities/documents (and is the common least-privilege baseline for server-side access), enabling the backend to perform up to 5,000 writes per minute.

Key features and best practices:
- Offline-first: Firestore SDKs (Android/iOS/Web) support offline persistence and automatic sync; conflicts are typically handled via "last write wins" semantics and can be improved with transactions, server timestamps, and custom merge logic.
- Scale: Firestore is serverless and designed for high concurrency and large user bases; it aligns with Google Cloud Architecture Framework principles for scalability and operational excellence.
- Security: Use Firebase Authentication/Identity Platform for end-user auth and Firestore Security Rules for client access; use a dedicated service account for Cloud Run with IAM-based access.
- Throughput: 5,000 writes/min (~83 writes/sec) is generally feasible; design for hot-spot avoidance (spread writes across document keys, avoid single-document contention).

Common misconceptions:
- Cloud SQL can store hierarchical data but does not provide offline sync SDKs; you would need to build custom sync and conflict resolution.
- Bigtable is highly scalable but lacks mobile offline-sync SDKs and is not ideal for hierarchical document access patterns.
- Firestore in Datastore mode is oriented toward Datastore APIs and does not align as directly with Firebase offline-sync expectations.

Exam tips: When you see "client SDKs with offline caching and automatic synchronization," think Firestore (Native mode) / Firebase. Then choose an IAM role that enables the required operations (read/write) for the backend service account; viewer roles won't work for writes, and overly broad roles should be avoided.
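As a concrete illustration of the subcollection modeling described above, the five-level hierarchy maps onto an alternating collection/document path. The collection names below (cities, districts, and so on) are hypothetical examples, not prescribed by the question:

```python
def inspection_doc_path(city_id: str, district_id: str, park_id: str,
                        tree_id: str, inspection_id: str) -> str:
    """Build a Firestore document path for a nested inspection record.

    Firestore paths alternate collection/document segments, so this
    hierarchy uses 5 subcollection levels, well under the 100-level limit.
    """
    return (f"cities/{city_id}/districts/{district_id}/parks/{park_id}"
            f"/trees/{tree_id}/inspections/{inspection_id}")

# A backend enrichment write might then look like this (sketch; requires
# google-cloud-firestore and credentials for a service account that holds
# roles/datastore.user):
#   from google.cloud import firestore
#   db = firestore.Client()
#   db.document(inspection_doc_path("nyc", "bk", "prospect", "t42", "i1")) \
#     .set({"geocoded": True}, merge=True)
```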

Question 2

Your team uses Cloud Build to run CI for a Go microservice stored in a private GitHub repository that is mirrored to Cloud Source Repositories. One build step requires a specific static analysis tool (version 3.7.2) that is not present in the default Cloud Build environment. The tool is ~120 MB and must be available within 5 seconds of step start to keep total build time under a 10-minute SLA, outbound internet access during builds is restricted, and you need reproducible results across ~50 builds per day. What should you do?

Downloading the binary during the build is fragile and typically fails the stated constraints. Outbound internet access is restricted, so the download may be blocked entirely. Even if allowed, network variability and upstream availability can easily exceed the 5-second startup requirement and threaten the 10-minute SLA. It also reduces reproducibility unless you pin checksums and handle mirrors, which adds complexity and risk.

A custom Cloud Build builder image is the best fit: it packages the exact tool version (3.7.2) into the container image so it is available immediately when the step starts. This avoids outbound internet access, improves performance (no runtime download), and ensures reproducible results by pinning the image tag or, better, the image digest. Storing the image in Artifact Registry supports controlled, auditable CI dependencies.

Committing a 120 MB binary into the source repository can make builds work without internet, but it is a poor practice for CI/CD. It bloats the repo, slows cloning and mirroring, complicates code review, and can create supply-chain governance issues (binary provenance, scanning, licensing). It also couples tool distribution to source changes and can be error-prone when multiple services or pipelines need the same tool.

Filing a feature request does not solve the immediate requirement and is not a reliable strategy for meeting SLAs. Default Cloud Build environments change on Google’s schedule and may not include niche tools or specific versions. Even if added, you still need version pinning for reproducibility. On exams, “wait for the platform to add it” is almost never the correct operational answer when you have clear constraints.

Question Analysis

Core Concept: This question tests Cloud Build execution environments and how to supply deterministic, fast, offline dependencies for build steps. In Cloud Build, each step runs in a container image. If a required tool is not in the default builders, you can either fetch it at build time or package it into a custom builder image.

Why the Answer is Correct: A custom Cloud Build builder image that already contains the static analysis tool (v3.7.2) ensures the tool is available immediately when the step starts (meeting the 5-second requirement) and avoids outbound internet access (which is restricted). It also guarantees reproducibility across ~50 builds/day because the tool version is pinned in the image. This aligns with CI best practices: immutable, versioned build environments that reduce variability and external dependencies.

Key Features / Configurations / Best Practices:
- Create a custom builder image (e.g., based on a minimal Linux image or an official Go builder) and bake the 120 MB binary into an image layer.
- Store the image in Artifact Registry (or Container Registry) and reference it in cloudbuild.yaml steps via the image name.
- Pin by immutable digest (e.g., image@sha256:...) for maximum reproducibility.
- Use Cloud Build private pools if you need tighter network egress control; even with restricted egress, prepackaged images avoid runtime downloads.
- This approach supports Google Cloud Architecture Framework principles: reliability (consistent builds), operational excellence (repeatable pipelines), and performance efficiency (fast step startup).

Common Misconceptions: Downloading during the build (Option A) seems simple, but it violates the restricted-outbound-internet requirement and introduces latency/availability risk that can break the 10-minute SLA. Committing binaries (Option C) can appear reproducible, but it bloats repos, complicates reviews, and is generally an anti-pattern for supply-chain hygiene and maintainability. Requesting default support (Option D) is not an actionable solution for an exam scenario and does not meet immediate SLA needs.

Exam Tips: For Cloud Build questions involving "tool not available," "no internet," "fast startup," and "reproducible builds," the expected pattern is: build a custom builder image, store it in Artifact Registry, and reference it in build steps (often pinned by digest). Prefer immutable, versioned artifacts over runtime downloads or committing large binaries into source control.
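The pattern described above can be sketched in cloudbuild.yaml. The project, repository, and image names below are hypothetical, and sha256:<digest> is a placeholder for the real digest of your builder image:

```yaml
steps:
  # Static analysis runs in a custom builder that already contains
  # analyzer v3.7.2, pinned by immutable digest -- no runtime download.
  - id: static-analysis
    name: us-central1-docker.pkg.dev/my-proj/builders/go-analyzer@sha256:<digest>
    args: ['analyze', './...']
  # Subsequent steps (tests, build) use standard builders.
  - id: build
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'us-central1-docker.pkg.dev/my-proj/apps/svc:$SHORT_SHA', '.']
```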

Question 3

You are building an external review portal for a film festival that stores high-bitrate video dailies in Cloud Storage. You must let reviewers, some of whom do not have Google accounts, securely access only their assigned files, with the ability to read, upload replacements, or delete them during a strict 24-hour window. How should you provide access to the objects?

Correct. V4 signed URLs let your application grant temporary access to specific Cloud Storage objects without requiring the reviewer to authenticate with a Google account. This matches the requirement for external users and supports least-privilege access by generating URLs only for the assigned files. A 24-hour expiration is supported, making signed URLs the best fit for secure, time-bound object access in this scenario.

Incorrect. Granting Service Account Token Creator enables impersonation, which is powerful and risky for external users. It effectively gives reviewers a pathway to obtain credentials and potentially access beyond assigned objects, depending on permissions. It also assumes reviewers can use Google IAM-based flows and identities, conflicting with “some do not have Google accounts,” and is not least privilege.

Incorrect. Distributing a service account key to external reviewers is a major security anti-pattern: keys can be copied, reused, and exfiltrated. Also, service account keys do not support a native “expire after 24 hours” setting in the way implied; you would need to rotate/delete keys manually. This approach grants broad access and violates best practices for credential management.

Incorrect. IAM roles (even with IAM Conditions that expire) require binding to a principal identity. Reviewers without Google accounts cannot be directly granted Storage Object User on the bucket. Additionally, bucket-level role assignment is broader than per-object access unless combined with complex conditional logic; signed URLs are simpler and more precise for object-level, time-bound external access.

Question Analysis

Core Concept: This question tests secure, time-bound access to Cloud Storage objects for external users who may not have Google identities. The key concept is using Cloud Storage signed URLs to delegate temporary access to specific objects without granting IAM permissions directly to the end user.

Why the Answer is Correct: Because some reviewers do not have Google accounts, IAM role bindings are not a good fit. Signed URLs let your application grant access to individual objects for a limited time window, which aligns with the requirement to restrict each reviewer to only assigned files for 24 hours. This is the standard Google Cloud pattern for temporary external access to Cloud Storage objects.

Key Features / Best Practices:
- Use V4 signed URLs with a 24-hour expiration to provide temporary object access without requiring Google authentication.
- Generate URLs per object so each reviewer receives access only to the files assigned to them.
- Use signed URLs for object retrieval and uploads, and avoid distributing service account keys or broad IAM permissions.
- Prefer application-controlled signing using managed service account credentials rather than exporting long-lived keys.

Common Misconceptions: A common trap is assuming IAM Conditions solve all temporary-access problems; they still require a valid principal identity. Another misconception is that sharing service account credentials is an acceptable shortcut for external access, when it is actually a major security risk. Also, bucket-level IAM is usually too broad when the requirement is object-specific delegation.

Exam Tips: When a question mentions external users without Google accounts and temporary access to Cloud Storage, signed URLs are usually the best answer. Focus on least privilege, object-level scope, and short-lived access. Eliminate options that require distributing credentials or assigning broad IAM roles to external parties.
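A minimal sketch of the signing flow, assuming the google-cloud-storage library and signing-capable application credentials (the bucket and object names are hypothetical):

```python
import datetime

REVIEW_WINDOW = datetime.timedelta(hours=24)  # the strict 24-hour window

def make_review_url(bucket_name: str, object_name: str,
                    method: str = "GET") -> str:
    """Generate a V4 signed URL for one assigned object.

    Requires google-cloud-storage and credentials that can sign
    (e.g., a service account usable via signBlob).
    """
    from google.cloud import storage  # deferred import; library assumed installed
    blob = storage.Client().bucket(bucket_name).blob(object_name)
    return blob.generate_signed_url(
        version="v4", expiration=REVIEW_WINDOW, method=method)

# Issue per-reviewer URLs for read, replace, and delete:
#   make_review_url("festival-dailies", "reviewer7/day3.mov", "GET")
#   make_review_url("festival-dailies", "reviewer7/day3.mov", "PUT")
#   make_review_url("festival-dailies", "reviewer7/day3.mov", "DELETE")
```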

Question 4

You are setting up a new workstation to provision Google Cloud infrastructure with Terraform for a video analytics project at AuroraStream. Company policy requires that all resources be created by a dedicated deployment service account (tf-deployer@proj.example.iam.gserviceaccount.com) and forbids downloading long-lived service account keys. Your Cloud Identity user has the iam.serviceAccountTokenCreator role on that service account and the necessary project permissions to run Terraform. You will configure the Terraform Google provider to use impersonate_service_account pointing to the deployment service account. Following Google-recommended best practices, what should you do on your workstation to authenticate Terraform?

Incorrect. Downloading a service account JSON key violates the stated policy (no long-lived keys) and is explicitly discouraged by Google best practices due to key exfiltration risk and operational overhead (rotation, revocation). With impersonation available, Terraform does not need a key file for the deployment service account; it should mint short-lived tokens instead.

Incorrect. While gcloud auth/impersonate_service_account and exporting GOOGLE_OAUTH_ACCESS_TOKEN can work in some ad-hoc scenarios, it is fragile for Terraform because tokens are short-lived and must be refreshed manually. It also bypasses the standard ADC flow that Terraform and Google libraries expect, making automation and repeatability worse than using ADC + provider impersonation.

Correct. gcloud auth application-default login creates user-based Application Default Credentials on the workstation. Terraform can use these ADC as the source credentials and then impersonate the deployment service account via impersonate_service_account, ensuring all resource operations occur as tf-deployer@… without ever downloading a service account key. This aligns with Google’s recommended keyless, short-lived credential approach.

Incorrect. Even if Vault delivers the key dynamically, it still depends on a service account JSON key, which is a long-lived credential and explicitly forbidden by policy. This approach adds complexity and operational burden (Vault integration, secret lifecycle) without addressing the core requirement. Google’s preferred solution is keyless impersonation using short-lived tokens.

Question Analysis

Core concept: This question tests Google Cloud authentication best practices for Infrastructure as Code tools (Terraform) when using service account impersonation and avoiding long-lived keys. The key concepts are Application Default Credentials (ADC), IAM service account impersonation (iam.serviceAccountTokenCreator), and the Terraform Google provider's impersonate_service_account setting.

Why the answer is correct: When Terraform is configured with impersonate_service_account, it still needs an initial "source" credential to call the IAM Credentials API (generateAccessToken) and mint short-lived tokens for the target deployment service account. Google-recommended best practice on a developer workstation is to use user-based ADC via gcloud auth application-default login. This creates local ADC that Terraform can automatically discover; Terraform then uses those user credentials only to impersonate tf-deployer@… for all resource creation. This satisfies the policy (resources created by the dedicated service account) and avoids downloading long-lived service account keys.

Key features / configurations:
- ADC provides a standard credential-discovery mechanism used by Google client libraries and the Terraform Google provider.
- Service account impersonation produces short-lived access tokens, reducing blast radius and aligning with the Google Cloud Architecture Framework security principles of short-lived credentials and least privilege.
- Required IAM: your user needs iam.serviceAccountTokenCreator on the target service account, and the target service account needs the project roles required to create and manage resources.

Common misconceptions:
- People often think Terraform must authenticate directly as the service account via a JSON key; it does not need to when impersonation is available.
- Exporting access tokens manually seems workable but is brittle and not the recommended workflow for Terraform; tokens expire and automation becomes error-prone.
- Storing keys in Vault still relies on long-lived keys, which the policy explicitly forbids.

Exam tips: For workstation-based Terraform with impersonation, remember the pattern: (1) obtain ADC as a user (gcloud auth application-default login), (2) configure the provider's impersonate_service_account, (3) ensure iam.serviceAccountTokenCreator and the required project permissions are in place. Prefer ADC + impersonation over service account keys whenever possible.
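The provider wiring described above can be sketched as follows (the project ID and region are placeholders; run gcloud auth application-default login first so user ADC exists as the source credential):

```hcl
provider "google" {
  project = "proj"
  region  = "us-central1"

  # Terraform discovers the user ADC, then calls the IAM Credentials API
  # to mint short-lived tokens for the deployment service account.
  impersonate_service_account = "tf-deployer@proj.example.iam.gserviceaccount.com"
}
```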

Question 5

You are preparing nightly releases of a serverless event-processing platform on Cloud Run across two regions. Each day at 18:00 UTC, your CI/CD pipeline builds and pushes 30–40 distinct Linux-based container images to a single Artifact Registry Docker repository, and the production rollout begins at 20:00 UTC. Security requires that you be alerted to any known OS-level vulnerabilities in the newly pushed images before the rollout, and you want to follow Google‑recommended best practices without adding custom scanning code to your pipeline. What should you do?

Incorrect. Using the gcloud CLI to invoke scans introduces an explicit operational step to trigger scanning for each image, which is unnecessary when Artifact Registry is already integrated with automatic vulnerability scanning. That approach adds avoidable pipeline complexity and does not align with the requirement to avoid custom scanning logic. Google best practice here is to rely on managed scanning on image push rather than manually orchestrating scans.

Correct. Enabling Container Analysis for Artifact Registry allows supported images to be scanned automatically when they are pushed, which directly fits the nightly build-and-push workflow. This satisfies the requirement to identify known OS-level vulnerabilities before the 20:00 rollout without adding custom scanning code or explicit scan-triggering steps to the CI/CD pipeline. Reviewing the reported vulnerability results before deployment is the managed, Google-recommended approach for release validation in this scenario.

Incorrect. Although enabling Container Analysis and relying on automatic scanning is the right foundation, reviewing only CRITICAL vulnerabilities is too limited for the stated requirement. The prompt says you must be alerted to any known OS-level vulnerabilities in the newly pushed images before rollout, which implies considering all reported vulnerability findings rather than filtering only the highest severity. Severity-based triage can be useful operationally, but it does not fully satisfy the wording of this question.

Incorrect. Calling the Container Analysis REST API to trigger scans per image requires custom integration and orchestration logic in the pipeline, which the question explicitly wants to avoid. It is also unnecessary because Artifact Registry can automatically produce vulnerability findings for pushed images through its managed integration. For exam purposes, manual API-driven triggering is less preferable than enabling the built-in scanning workflow.

Question Analysis

Core concept: Artifact Registry integrates with Container Analysis to automatically scan supported container images for OS package vulnerabilities when images are pushed.

Why the answer is correct: The requirement is to detect known OS-level vulnerabilities in newly pushed images before deployment, while avoiding custom scanning code, so the managed automatic-scanning workflow is the recommended approach.

Key features: Automatic scanning on push, vulnerability metadata stored and viewable through Google Cloud tooling, and no need to manually trigger scans through gcloud or REST.

Common misconceptions: You do not need to invoke scans yourself, and limiting review only to CRITICAL findings does not satisfy a requirement to catch any known vulnerabilities.

Exam tips: When a question emphasizes Google-recommended best practices and no custom pipeline logic, prefer the fully managed integration between Artifact Registry and Container Analysis over manual API or CLI orchestration.
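In practice, enabling the managed integration is a one-time setup, and findings can be reviewed before rollout with standard gcloud commands (the image path below is hypothetical):

```shell
# Enable automatic vulnerability scanning for Artifact Registry
gcloud services enable containerscanning.googleapis.com

# After the nightly push, review findings for an image before the rollout
gcloud artifacts docker images describe \
  us-central1-docker.pkg.dev/my-proj/apis/worker:nightly \
  --show-package-vulnerability
```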


Question 6

You are configuring a Cloud Build trigger for a Node.js REST service that builds a Docker image and pushes it to Artifact Registry (us-central1, repository 'apis') with the tag $SHORT_SHA. Compliance requires the pipeline to first run unit tests and then run integration tests against a disposable test database before the image is pushed. If any test fails, you must be able to tell from the Cloud Build history exactly which stage (unit or integration) failed without reading through logs. What should you do?

Running tests via RUN in the Dockerfile is a poor fit for CI stage reporting. Cloud Build would typically show a failure in the “docker build” step, not “unit” vs “integration,” so you’d need logs to know which test failed. It also bloats build time and can leak test tooling into image layers unless carefully multi-staged. Compliance wants explicit pipeline stages before pushing.

A single Cloud Build step that runs both unit and integration tests (e.g., one bash command) will only report one step status in Cloud Build history. If it fails, you cannot tell from the history whether unit or integration tests failed without reading logs or parsing exit codes. This violates the requirement for stage-level failure visibility.

Triggering separate Cloud Build pipelines for unit and integration tests increases complexity (chaining builds, passing source/metadata, handling approvals) and can fragment audit trails. It also doesn’t automatically guarantee clearer “which stage failed” within a single build history entry for the original trigger. The requirement is best met within one pipeline using multiple steps.

Separate Cloud Build steps with distinct IDs (e.g., id: unit-tests, id: integration-tests) provide clear, step-level status in Cloud Build history. Steps run in order by default, enforcing unit tests first, then integration tests against an ephemeral database, and only then building/pushing the Docker image to Artifact Registry with $SHORT_SHA. This directly satisfies compliance and observability requirements.

Question Analysis

Core Concept: This question tests Cloud Build pipeline design for CI quality gates: structuring build steps so test phases are enforced in order and failures are clearly attributable from Cloud Build history. It also touches Artifact Registry publishing best practice (only push artifacts after validation).

Why the Answer is Correct: Using separate Cloud Build steps with distinct step IDs for unit tests and integration tests (ordered before the Docker build/push) provides explicit stage boundaries in the Cloud Build UI and build history. Cloud Build shows each step's status (success/failure) and the failing step ID without requiring log inspection. Because steps run sequentially by default (unless you use waitFor), placing "unit-tests" first and "integration-tests" second guarantees the required order. Only after both succeed should you run the Docker build and push to Artifact Registry with the $SHORT_SHA tag.

Key Features / Best Practices:
- Step granularity: Each step is a first-class entity in Cloud Build history; distinct IDs make compliance auditing and troubleshooting faster.
- Ordered execution: Default sequential execution enforces unit then integration tests; you can also explicitly set waitFor to avoid accidental parallelism.
- Disposable test database: Integration tests can spin up ephemeral dependencies (e.g., Cloud SQL via the Auth Proxy, a temporary database/schema, or a containerized DB) within a dedicated step, then tear them down in the same step to control cost and avoid state leakage.
- Artifact promotion: Building and pushing only after tests pass aligns with supply-chain and release-hygiene practices.

Common Misconceptions:
- Putting tests in a Dockerfile (option A) conflates image build with CI validation and makes failures appear as a generic "docker build failed," obscuring which test stage failed.
- Combining tests into one step (option B) still hides which phase failed at the Cloud Build step level; you would need logs to distinguish unit vs. integration failures.
- Splitting into separate pipelines (option C) adds orchestration complexity and does not inherently improve per-build stage visibility; it can also complicate passing artifacts/SHAs and enforcing a single audited flow.

Exam Tips: For Cloud Build questions requiring visibility into which phase failed, prefer multiple steps with clear IDs. For "don't publish if tests fail," ensure push steps occur after test steps. Remember that Cloud Build history surfaces step-level status, not sub-command granularity inside a single step.
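The stage-visible pipeline described above might look like the following cloudbuild.yaml (the test commands and helper script are illustrative; the region and repository come from the question):

```yaml
steps:
  - id: unit-tests
    name: node:20
    entrypoint: npm
    args: ['test']
  - id: integration-tests          # fails as its own step in build history
    name: node:20
    entrypoint: bash
    args: ['-c', './scripts/start-test-db.sh && npm run test:integration']
  - id: build-and-push             # runs only after both test steps pass
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t',
           'us-central1-docker.pkg.dev/$PROJECT_ID/apis/rest-svc:$SHORT_SHA', '.']
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/apis/rest-svc:$SHORT_SHA'
```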

Question 7

Your organization runs a Cloud Build CI/CD pipeline with 4 build steps for a Python API: (1) run unit tests, (2) generate a 6 KB text report containing the commit SHA and changed files, (3) build and push a container image, and (4) run a security gate that consumes the report. The report must be accessible to steps 3 and 4 within the same build and must not be persisted after the build, and the pipeline executes up to 30 times per hour. How should you store the report so that all required build steps can access it?

Compute Engine instance metadata is associated with a VM instance, not with Cloud Build step-to-step artifact sharing. Cloud Build runs managed build steps in containers and does not provide instance metadata as a supported scratch space for passing generated files between steps. Metadata is also intended for configuration and bootstrap information, not for repeatedly writing transient CI artifacts. Using it here would be unsupported, operationally awkward, and contrary to normal Cloud Build design patterns.

Correct. In Cloud Build, /workspace is a shared filesystem path mounted into each step of the same build, so a file written there in step 2 can be read directly by steps 3 and 4. This directly satisfies the requirement that the report be available to multiple later steps without any extra transfer mechanism. Because the report is only needed during the current build, using /workspace avoids unnecessary persistence in external systems such as Cloud Storage. It is also the simplest and most reliable design because it minimizes latency, IAM dependencies, and cleanup logic.

Cloud Storage is an external object store, so uploading the report there would persist it outside the build unless you add explicit deletion or lifecycle management. That conflicts with the requirement that the report should not be persisted after the build as part of the normal design. It also adds extra network operations, IAM permissions, and possible failure points for a very small file that only needs to be shared within one build. Cloud Storage is better suited for artifacts that must be retained or shared beyond the current build execution.

Posting the report to an external web service introduces an unnecessary dependency for a simple intra-build handoff. The service would need authentication, availability, and possibly its own storage semantics, all of which increase complexity and failure modes. It also risks persisting the report outside the build, which does not align with the requirement. For a small temporary file needed only by later steps in the same build, Cloud Build’s shared workspace is the intended and much simpler solution.

Question Analysis

Core Concept: Cloud Build executes a build as a sequence of steps that share a common working directory mounted at /workspace. Files written to /workspace by one step are available to subsequent steps in the same build, but are ephemeral and discarded when the build finishes. This is the standard pattern for passing small artifacts between steps without persisting them externally. Why the Answer is Correct: The requirement is that the 6 KB report must be accessible to steps 3 and 4 within the same build and must not be persisted after the build. Writing the report to /workspace in step 2 and reading it from the same path in steps 3 and 4 satisfies both: it is shared across steps, requires no external service, and is automatically cleaned up at build completion. It also scales naturally to 30 executions per hour because each build has its own isolated /workspace, avoiding cross-build contention. Key Features / Best Practices: - /workspace is the shared volume across all build steps; use it for intermediate artifacts (reports, compiled assets, generated configs). - Keep artifacts small and local when they are only needed within the build; this reduces latency, cost, and failure modes. - This aligns with Google Cloud Architecture Framework principles (reliability and operational excellence): fewer external dependencies and simpler data flow improves pipeline robustness. Common Misconceptions: A common mistake is to use Cloud Storage for any artifact. While Cloud Storage works, it persists data unless you add lifecycle rules or explicit deletion, which violates “must not be persisted after the build” and adds extra steps and potential cleanup failures. Another misconception is that instance metadata can be used as a general scratchpad; Cloud Build steps are containers, not a long-lived VM you control. 
Exam Tips: For Cloud Build questions, remember: (1) /workspace is the shared filesystem across steps, (2) each step runs in its own container but shares /workspace, and (3) use external storage (Cloud Storage/Artifact Registry) only when artifacts must outlive the build or be shared across builds/projects. If the prompt emphasizes “within the same build” and “not persisted,” /workspace is usually the intended solution.
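To make the handoff concrete, here is a minimal cloudbuild.yaml sketch of the /workspace pattern. The step images, script names, and report path are illustrative assumptions, not part of the original question:

```yaml
steps:
  # Step 2 (illustrative): generate the small report into the shared workspace.
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: ['-c', './generate-report.sh > /workspace/report.json']
  # Step 3 (illustrative): read the report from the same shared path.
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: 'bash'
    args: ['-c', './validate.sh /workspace/report.json']
```

Each step runs in its own container, but /workspace is mounted into all of them, so no external service or cleanup step is needed; the file disappears when the build ends.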

8
Question 8

You operate a city-wide food delivery platform on Google Cloud. An order-processing microservice currently invokes an HTTP Cloud Function to send SMS status updates (order accepted, courier en route) through a third-party SMS gateway. After launching a 30%-off promotion, peak load spiked to about 18,000 notifications per minute, and the Cloud Function intermittently returns HTTP 500 errors. Some customers report missing SMS updates, and logs show the sender aborts on 500 responses without persisting the messages. You need to change how SMS messages are handled to minimize message loss without significantly increasing operational complexity. What should you do?

Increasing the Cloud Function timeout does not address the core reliability problem: the sender is making synchronous HTTP calls and drops messages when it receives a 500 response. Even if some failures are caused by slow downstream responses, a longer timeout does not provide durable buffering, retry semantics, or decoupling during traffic spikes. The system would still lose messages if the function fails transiently or the sender gives up before persisting the payload. This option treats a symptom of overload rather than redesigning the flow for reliable delivery.

Publishing SMS payloads to Pub/Sub decouples the sender from the Cloud Function and provides durable buffering during traffic spikes. Pub/Sub offers at-least-once delivery with retries/redelivery when the function fails, greatly reducing message loss. A Pub/Sub-triggered Cloud Function scales automatically and can be paired with dead-letter topics and idempotent processing to handle repeated failures safely with minimal operational complexity.

Using Memorystore (Redis) as an intermediate store/queue increases operational complexity and risk. Redis is primarily an in-memory cache; while it can be used as a queue, you must manage persistence settings, eviction policies, failover behavior, and consumer coordination. It’s not as straightforward or durable for event ingestion as Pub/Sub, and it introduces additional components to operate and monitor.

Retrying the HTTP Cloud Function every second can create a retry storm under load, increasing concurrency and making 500 errors more likely. It still doesn’t provide durable storage if the sender crashes or is redeployed mid-retry, and it can cause duplicate sends without careful idempotency. Client-side retries are useful, but they are not a substitute for a managed queue when reliability and spike buffering are required.

Question Analysis

Core Concept: This question tests designing reliable, scalable, cloud-native integrations by decoupling producers and consumers using asynchronous messaging (Pub/Sub) instead of synchronous HTTP calls to Cloud Functions.

Why the Answer is Correct: At ~18,000 notifications/min (~300/sec), the HTTP-invoked Cloud Function intermittently returns 500s (often due to transient scaling limits, downstream latency, or concurrency pressure). Because the sender aborts on 500 and does not persist messages, notifications are lost. Publishing each SMS payload to a Pub/Sub topic makes the sender’s responsibility “enqueue and move on,” while Pub/Sub durably stores messages and delivers them to subscribers with at-least-once delivery. The Cloud Function becomes an event-driven consumer triggered by a Pub/Sub subscription, allowing automatic scaling and buffering during spikes. If the function or SMS gateway fails transiently, Pub/Sub redelivers, dramatically reducing message loss without adding significant operational burden.

Key Features / Best Practices: Pub/Sub provides durable message retention, backpressure buffering, and retry/redelivery with acknowledgement deadlines. Cloud Functions triggered by Pub/Sub integrate natively and scale horizontally. You can configure dead-letter topics for messages that repeatedly fail, and use idempotency (e.g., message IDs) to handle at-least-once delivery. This aligns with the Google Cloud Architecture Framework’s reliability principles: decouple components, design for failure, and use managed services to reduce ops.

Common Misconceptions: It’s tempting to “tune” the function (timeout) or add client retries. However, timeouts don’t address lost messages, and aggressive retries can amplify load (retry storms) and still lose messages if the sender crashes. Using Redis as a queue adds operational complexity and requires careful durability/eviction handling.
Exam Tips: When you see: (1) synchronous HTTP calls, (2) spikes, (3) intermittent 5xx, and (4) message loss because the producer doesn’t persist—choose an asynchronous, durable queue (Pub/Sub) between services. For notification pipelines, also think about idempotency, dead-lettering, and monitoring subscription backlog/age as SLO indicators.
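Because Pub/Sub delivery is at-least-once, the consumer must tolerate redelivery. The sketch below illustrates idempotent processing keyed on the Pub/Sub message ID; `processed_ids` and `send_sms` are hypothetical stand-ins (a production system would use a durable dedup store and the real SMS-gateway client):

```python
# Minimal idempotent-consumer sketch for at-least-once delivery.
# processed_ids stands in for a durable dedup store (e.g., Firestore/Redis);
# send_sms is a placeholder for the third-party SMS gateway call.
processed_ids = set()

def send_sms(payload):
    # Placeholder: invoke the SMS gateway here.
    return f"sent: {payload['text']}"

def handle_message(message_id, payload):
    """Process a Pub/Sub message at most once per message_id."""
    if message_id in processed_ids:
        # Redelivery of a message we already handled: acknowledge, do not resend.
        return "duplicate-skipped"
    result = send_sms(payload)
    processed_ids.add(message_id)  # record only after a successful send
    return result
```

Calling `handle_message` twice with the same message ID sends the SMS once and treats the second delivery as a no-op, which is exactly what redelivery-safe consumers need.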

9
Question 9

You are launching a single App Engine Standard application (service: default, region: us-central1) and must make it accessible only at http://www.northwind.news/; your DNS is hosted in Cloud DNS (zone: public-zone, TTL: 300s), the domain is not yet verified, and you do not require any path- or service-based routing—what should you do?

Correct. App Engine custom domains require domain ownership verification first. For a subdomain like www.northwind.news, the standard DNS configuration is a CNAME pointing to ghs.googlehosted.com. This leverages Google’s managed front end and avoids IP management. With only the default service and no routing needs, no dispatch.yaml rules are required.

Incorrect. App Engine custom domain mapping for subdomains is typically done with a CNAME to ghs.googlehosted.com, not an A record to a “single global App Engine IP.” A-record-to-single-IP patterns align more with using an external HTTP(S) load balancer with a reserved global static IP, not direct App Engine domain mapping.

Incorrect. While a CNAME to ghs.googlehosted.com is the right DNS approach, dispatch.yaml is unnecessary here. dispatch.yaml is used for routing requests across multiple App Engine services (or versions) based on host/path rules. With only one service (default) and no routing requirements, adding dispatch rules adds complexity without benefit.

Incorrect. It combines two wrong ideas for this scenario: dispatch.yaml is not needed for a single-service app with no routing requirements, and using an A record to a single global App Engine IP is not the recommended/typical way to map a subdomain to App Engine. The correct approach is verification + CNAME to ghs.googlehosted.com.

Question Analysis

Core Concept: This question tests how to map a custom domain to an App Engine Standard app using Cloud DNS, including the required domain verification step and the correct DNS record type. It also checks when dispatch.yaml is needed (host/path routing) versus when App Engine’s default service mapping is sufficient.

Why the Answer is Correct: To serve an App Engine app at a custom hostname like www.northwind.news, you must (1) verify ownership of the domain with Google (commonly via Google Search Console / Webmaster Central verification) and (2) configure DNS to point the hostname to App Engine. For App Engine, the standard and recommended approach for a subdomain (www) is a CNAME record pointing to ghs.googlehosted.com. App Engine then terminates the request and routes it to the correct application based on the verified domain mapping. Because you are using only the default service and do not need host- or path-based routing, you do not need dispatch.yaml.

Key Features / Best Practices:
- Domain verification is mandatory before App Engine will accept the custom domain mapping; it prevents hijacking.
- For subdomains, use a CNAME to ghs.googlehosted.com (Google-managed front end). This avoids managing IPs and supports Google’s global edge.
- The Cloud DNS TTL (300s) affects propagation speed; expect up to several minutes plus external resolver caching.
- App Engine Standard is globally served even if the region is us-central1; the custom domain mapping is not tied to a single regional IP.

Common Misconceptions: A frequent trap is thinking you can point an A record to “a single global App Engine IP.” App Engine’s recommended configuration for custom domains is a CNAME for subdomains; fixed-IP A records are typically associated with external HTTP(S) load balancers using reserved global static IPs, not direct App Engine custom domain mapping. Another misconception is that dispatch.yaml is required for hostnames; it’s only needed when you have multiple services and want routing rules.

Exam Tips:
- If the hostname is a subdomain (www), expect a CNAME to ghs.googlehosted.com.
- If the requirement includes a stable IP or advanced routing, think “External HTTP(S) Load Balancer + serverless NEG,” not direct App Engine A records.
- If there’s no path/service routing requirement and only one service, dispatch.yaml is unnecessary.
- Always include domain verification as a prerequisite step when the domain is not yet verified.
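After verification, the CNAME record could be created in the Cloud DNS zone from the prompt with a command along these lines (a sketch; confirm the exact flags against your gcloud version before running):

```shell
# Create a CNAME for www pointing at Google's managed front end
gcloud dns record-sets create www.northwind.news. \
  --zone="public-zone" \
  --type="CNAME" \
  --ttl="300" \
  --rrdatas="ghs.googlehosted.com."
```

Note the trailing dots on the fully qualified names; Cloud DNS requires them for record data and record names.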

10
Question 10

You are the lead developer for a city transit incident dashboard running on Cloud Run (512 MiB memory per instance, max 80 concurrent requests) backed by Firestore in Native mode; the web UI provides infinite scroll so users can browse all incident reports, and three months after launch you observe that during 07:30–09:30 peak hours several Cloud Run instances return HTTP 500 with out-of-memory errors while Firestore read throughput spikes to ~250 QPS. You need to stop Cloud Run from crashing and reduce Firestore reads using a performance-optimized approach without merely increasing resource limits; what should you do?

Cursor-based pagination (startAfter/startAt) with a stable orderBy and a fixed limit is the performance-optimized Firestore pattern for infinite scroll. Each page reads only the next N documents, bounding memory usage in Cloud Run and preventing read amplification as users scroll deeper. Using a stable sort key (e.g., timestamp plus doc ID) avoids duplicates and ensures consistent paging under concurrent writes.

A composite index may be required for certain multi-field filters/orderBy combinations and can reduce query latency or enable the query at all. However, it does not address the core issue described: unbounded pagination causing large responses and repeated reads. If the app is already able to query, adding an index alone won’t stop OOM or reduce read QPS driven by inefficient paging.

Offset-based pagination with limit is inefficient in Firestore because the backend must still scan/skip documents up to the offset. As the offset grows, each request becomes more expensive (more reads, higher latency), which can spike Firestore throughput and increase Cloud Run memory/CPU due to larger processing overhead. This is a common anti-pattern for deep pagination in Firestore.

Increasing Cloud Run memory from 512 MiB to 1 GiB can reduce immediate OOM crashes, but it’s explicitly disallowed by the prompt (“without merely increasing resource limits”) and it doesn’t reduce Firestore reads. It also increases cost per instance and may still fail under peak concurrency if requests continue to build large in-memory result sets or payloads.

Question Analysis

Core Concept: This question tests scalable data-access patterns for serverless apps (Cloud Run) backed by Firestore, specifically how pagination strategy affects memory usage, latency, and Firestore read amplification. It aligns with the Google Cloud Architecture Framework’s performance optimization and cost optimization pillars.

Why the Answer is Correct: Infinite scroll often leads to repeated “load more” calls. If the backend uses offset-based pagination or otherwise re-reads large portions of the collection, each subsequent request can scan and return increasingly large result sets, driving Firestore reads up and causing the Cloud Run handler to allocate large in-memory arrays/JSON payloads. Cursor-based pagination (startAfter/startAt) with a stable sort key (for example, orderBy(timestamp, docId) and limit 50) ensures each request reads only the next page. This bounds per-request memory and response size, reduces Firestore reads per user action, and stabilizes performance during peak traffic.

Key Features / Best Practices: Use Firestore query cursors with orderBy on an indexed field and a deterministic tie-breaker (often document ID) to avoid duplicates/holes when multiple incidents share the same timestamp. Keep a fixed limit (e.g., 50) and return a “nextPageToken” (the cursor values) to the UI. This pattern is the recommended approach for large collections and infinite scroll because it avoids O(n) work as users scroll deeper.

Common Misconceptions: Creating indexes (option B) can improve query feasibility/latency but does not inherently reduce reads caused by poor pagination. Offset pagination (option C) is easy conceptually but is inefficient in Firestore: larger offsets require scanning/skipping many documents, increasing read operations and latency. Increasing Cloud Run memory (option D) treats the symptom (OOM) but not the cause (unbounded result processing and read amplification), and it increases cost without guaranteeing peak stability.
Exam Tips: When you see Firestore + “infinite scroll” + spikes in reads/latency/OOM, think “cursor-based pagination with limit.” Avoid offsets in Firestore for deep pagination. Also remember Cloud Run concurrency means one instance can handle many simultaneous requests; if each request builds large responses, memory pressure multiplies quickly. Prefer designs that bound per-request work and payload size rather than scaling resources.
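The paging logic can be illustrated independently of the Firestore client. The plain-Python sketch below simulates cursor-based pagination over records sorted by a stable (timestamp, doc_id) key; in Firestore this corresponds roughly to orderBy on the timestamp plus document ID with startAfter(cursor) and limit(page_size), and the filtering happens server-side via the index rather than in application code. Record and field names are illustrative:

```python
# Sketch of cursor-based pagination with a stable (timestamp, doc_id) sort key.
# The cursor is the sort-key tuple of the last document on the previous page,
# mirroring Firestore's startAfter semantics.
def fetch_page(records, cursor=None, page_size=50):
    """Return (page, next_cursor) for one page of records."""
    ordered = sorted(records, key=lambda r: (r["timestamp"], r["doc_id"]))
    if cursor is not None:
        # startAfter: skip everything up to and including the cursor position.
        ordered = [r for r in ordered if (r["timestamp"], r["doc_id"]) > cursor]
    page = ordered[:page_size]
    next_cursor = (page[-1]["timestamp"], page[-1]["doc_id"]) if page else None
    return page, next_cursor
```

Each call touches only one page worth of results, so memory per request stays bounded no matter how deep the user scrolls; the deterministic doc_id tie-breaker prevents duplicates when timestamps collide.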

Success Stories (6)

V
V***********Nov 24, 2025

Study period: 2 months

The scenarios in this app were extremely useful. The explanations made even the tricky deployment questions easy to understand. Definitely worth using.

B
B************Nov 21, 2025

Study period: 2 months

The questions weren’t just easy recalls — they taught me how to approach real developer scenarios. I passed this week thanks to these practice sets.

철
철**Nov 17, 2025

Study period: 1 month

Subscribing for one month gave me a sense of urgency to work through the questions quickly, which pushed me to study harder. Fortunately, the questions were similar to the real exam, so I could solve them easily.

이
이**Nov 15, 2025

Study period: 1 month

The questions in this app were very similar to the actual exam questions, so I found them easy! Passing on my first attempt feels great.

R
R***********Nov 6, 2025

Study period: 1 month

I prepared for three weeks using Cloud Pass and the improvement was huge. The difficulty level was close to the real Cloud Developer exam, and the explanations helped me fill in my knowledge gaps quickly.

Other Practice Tests

Practice Test #2

50 Questions·120 min·Pass 700/1000

Practice Test #3

50 Questions·120 min·Pass 700/1000

Practice Test #4

50 Questions·120 min·Pass 700/1000
← View All Google Professional Cloud Developer Questions

Start Practicing Now

Download Cloud Pass and start practicing all Google Professional Cloud Developer exam questions.

Get it on Google Play · Download on the App Store

© Copyright 2026 Cloud Pass, All rights reserved.
