Google runs two overlapping platforms under one auth system: Google Cloud (GCP) for infrastructure, Google Workspace for productivity APIs. The same service-account identity that grants BigQuery access can also, with domain-wide delegation, act on behalf of every Gmail user in your org. In the 2nth stack, GCP is the heavy-lift tier beside AWS (Vertex AI for Gemini + Claude, BigQuery for analytics, Cloud Run for ERPNext), while Workspace powers Penny briefings, intake flows, and diary orchestration.
Seven skill nodes under tech/google/cloud/*. Three at production depth (Compute, AI, Data): the reasons most teams actually reach for GCP instead of AWS. Four still stubs (Security, Storage, Database, Networking) with fast-start snippets and the right decision trees. The pattern: Cloudflare at the edge → Cloud Run for the heavy lift → BigQuery / Vertex AI for data + AI.
Cloud Run as the default (pay-per-request, scale to zero), Cloud Functions v2 for glue, GKE Autopilot when K8s is warranted, GCE for stateful or GPU. Concurrency as the single biggest cost lever, cold-wake patterns for 2nth_erp, SigV4-equivalent Google-OIDC ID-token signing from Cloudflare Workers.
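Why concurrency dominates the bill: Little's law says average in-flight requests equal request rate times latency, and Cloud Run divides those across instances by the per-instance concurrency setting. A minimal back-of-envelope sketch, under assumed traffic numbers (real billing also counts CPU/memory allocation and min-instances):

```python
import math

def billable_instances(rps: float, latency_s: float, concurrency: int) -> int:
    """Rough Cloud Run sizing: in-flight requests (Little's law) divided
    by per-instance concurrency. Illustrative only."""
    in_flight = rps * latency_s            # average concurrent requests
    return max(1, math.ceil(in_flight / concurrency))

# Raising concurrency from 1 to 80 collapses the instance count,
# which is why it is the single biggest cost lever.
print(billable_instances(200, 0.4, 1))    # 80 instances at concurrency=1
print(billable_instances(200, 0.4, 80))   # 1 instance at concurrency=80
```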
Vertex AI for Gemini (2.5 Pro/Flash) plus Claude via Model Garden, so Anthropic models bill to the GCP invoice when vendor consolidation matters. Function calling loop, embeddings with task-type asymmetry, grounding via Google Search or a private Vertex AI Search datastore, context caching past 32k tokens.
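The function-calling loop reduces to: send the prompt, execute any tool call the model returns, feed the result back, repeat until the model emits plain text. A control-flow sketch with the model stubbed out so it runs standalone; `fake_model` and `get_exchange_rate` are illustrative stand-ins, not Vertex AI SDK surface:

```python
def get_exchange_rate(base: str, quote: str) -> dict:
    return {"base": base, "quote": quote, "rate": 18.5}   # stub tool

TOOLS = {"get_exchange_rate": get_exchange_rate}

def fake_model(messages: list[dict]) -> dict:
    # Stub: first turn requests a tool call, second turn answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"function_call": {"name": "get_exchange_rate",
                                  "args": {"base": "USD", "quote": "ZAR"}}}
    return {"text": "1 USD is about 18.5 ZAR."}

def run_loop(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        call = reply.get("function_call")
        if call is None:
            return reply["text"]                      # model is done
        result = TOOLS[call["name"]](**call["args"])  # execute locally
        messages.append({"role": "tool", "name": call["name"],
                         "content": result})          # feed result back

print(run_loop("What is USD/ZAR?"))
```

The real loop swaps `fake_model` for a GenerateContent call carrying tool declarations; the dispatch-and-append shape is the same.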
BigQuery with require_partition_filter as the cost guardrail, zero-ETL Pub/Sub → BigQuery subscriptions (no Dataflow, no code), Pub/Sub push + pull with dead-letter and exactly-once delivery, Dataflow templates for the common shapes, Datastream CDC from Cloud SQL.
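require_partition_filter makes BigQuery reject any query on the table that does not constrain the partition column, turning accidental full scans into errors instead of invoices. A crude client-side sketch of the same check, useful as a pre-submit lint; the real enforcement is server-side, and this regex is illustrative, not a SQL parser:

```python
import re

def has_partition_filter(sql: str, column: str = "_PARTITIONDATE") -> bool:
    """Heuristic mirror of require_partition_filter: does the WHERE
    clause mention the partition column at all? Illustrative only."""
    where = re.search(r"\bwhere\b(.*)", sql, re.IGNORECASE | re.DOTALL)
    return bool(where and re.search(rf"\b{re.escape(column)}\b",
                                    where.group(1), re.IGNORECASE))

assert not has_partition_filter("SELECT * FROM ds.events")
assert has_partition_filter(
    "SELECT * FROM ds.events WHERE _PARTITIONDATE = '2024-06-01'")
```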
Cloud IAM, Secret Manager, KMS with CMEK, Cloud Armor, VPC Service Controls. Decision trees for identity, secrets, encryption, edge defence, and data-perimeter controls.
Cloud Storage classes (Standard → Archive), lifecycle rules, signed URLs, Filestore for NFS. The R2 alternative when egress to other GCP services needs to be free and in-region.
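Lifecycle rules are a bucket-level JSON policy. A minimal Standard → Nearline → Archive → delete progression, with assumed ages, in the file format `gsutil lifecycle set` accepts:

```json
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 30}},
    {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
     "condition": {"age": 365}},
    {"action": {"type": "Delete"},
     "condition": {"age": 2555}}
  ]
}
```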
Cloud SQL for most relational work (Postgres / MySQL for Frappe), AlloyDB when Postgres-compat OLTP needs ~4x perf, Spanner for global writes, Firestore for serverless docs, Bigtable for petabyte wide-column.
Custom-mode VPCs, global external HTTPS LB with Cloud CDN + Armor, Cloud DNS, Cloud NAT. Direct VPC Egress from Cloud Run: the newer, cheaper alternative to Serverless VPC Access.
Five skill nodes under tech/google/workspace/*. Gmail is at production depth: the engine behind the Penny briefing pattern (Watch → Pub/Sub → classify → draft reply → label). Drive, Sheets, Calendar, and Admin SDK are stubs with the common operations and the auth gotchas. The whole subtree runs on OAuth 2.0 for user-delegated access or service-account domain-wide delegation (DWD) for server jobs that act as any user in the tenant.
List, search, fetch with correct MIME parsing (base64url, recursive multipart/* walk). Send with threaded replies that actually thread in Outlook and iOS. Watch → Pub/Sub → historyId diff for real-time inbox events. The full Penny briefing loop with Workers AI classification and Claude draft generation.
Drive API v3: files, folders, permissions, shared drives. drive.file as the scope-minimisation win (no verification audit for full-Drive access).
Sheets API v4: values.get / values.append / batchUpdate. The Sheets-as-CMS pattern for ops dashboards and client-editable config tables.
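The Sheets-as-CMS read path is mostly one transform: turn the `values` array from a spreadsheets.values.get response into records keyed by the header row. One wrinkle worth encoding: the API omits trailing empty cells, so short rows need padding. A minimal sketch over an assumed config-table layout:

```python
def rows_to_records(values: list[list[str]]) -> list[dict]:
    """values.get range -> list of dicts keyed by the header row.
    Short rows are padded, since the API drops trailing empty cells."""
    header, *rows = values
    return [dict(zip(header, row + [""] * (len(header) - len(row))))
            for row in rows]

resp_values = [["feature", "enabled"], ["beta_ui", "TRUE"], ["dark_mode"]]
print(rows_to_records(resp_values))
```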
Events with auto-generated Meet links (conferenceDataVersion: 1), freebusy.query for multi-attendee availability, push notifications for change streams.
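A Meet link only appears when two things line up: the events.insert call is made with `conferenceDataVersion=1`, and the event body carries a `conferenceData.createRequest` with a unique `requestId`. A sketch of the request body (the helper name and example values are illustrative):

```python
import uuid

def meet_event(summary: str, start_iso: str, end_iso: str, tz: str) -> dict:
    """events.insert body that auto-creates a Meet link. Only takes
    effect when the call passes conferenceDataVersion=1; requestId
    must be unique per conference-creation attempt."""
    return {
        "summary": summary,
        "start": {"dateTime": start_iso, "timeZone": tz},
        "end": {"dateTime": end_iso, "timeZone": tz},
        "conferenceData": {"createRequest": {
            "requestId": str(uuid.uuid4()),
            "conferenceSolutionKey": {"type": "hangoutsMeet"},
        }},
    }

body = meet_event("Standup", "2025-01-06T09:00:00", "2025-01-06T09:15:00",
                  "Africa/Johannesburg")
```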
Directory API for users + groups, Reports API for audit (login, admin, drive, gmail activities). Super-admin DWD is required for writes; suspend → transfer → delete is the offboarding pattern.
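The offboarding order matters: suspension blocks sign-in immediately, and Drive ownership must be transferred before deletion or the files go with the account. A sketch of the sequence as an ordered plan; the dotted method names point at the Admin SDK and Data Transfer API surfaces, but the bodies are abbreviated illustrations (transfers, for instance, actually key on user IDs, not emails):

```python
def offboarding_plan(user: str, manager: str) -> list[tuple[str, dict]]:
    """Ordered suspend -> transfer -> delete steps. Bodies are
    abbreviated sketches, not full request payloads."""
    return [
        ("directory.users.update",
         {"userKey": user, "body": {"suspended": True}}),
        ("datatransfer.transfers.insert",
         {"body": {"oldOwnerUserId": user, "newOwnerUserId": manager}}),
        ("directory.users.delete",
         {"userKey": user}),
    ]

steps = offboarding_plan("leaver@example.com", "manager@example.com")
```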
Cloud and Workspace look like different platforms. They're not: both sit on Cloud IAM, both use OAuth 2.0 or service accounts, both grant access through role bindings against principals. The difference is what's being accessed (project-owned infra vs user-owned data) and who consents (the end user, or a Workspace admin once, via domain-wide delegation).
Pick the credential by what you're accessing and who's present. The same service account can wear all four hats depending on how it's wired.
# Decision table: which credential for what

User OAuth 2.0        # Access ONE user's Gmail / Drive (they click "Allow")
                      # Stores refresh_token per user; access_token ~1h

Service account       # Server code touching PROJECT-owned GCP resources
                      # (BigQuery, GCS, Vertex AI, Pub/Sub)
                      # JSON key locally OR Workload Identity at runtime

SA + DWD              # Server job acting as ANY user in a Workspace domain
                      # (Gmail, Drive, Calendar, Admin SDK)
                      # Workspace admin grants scopes to the SA's client_id
                      # JWT sub = user@domain.com → impersonation token

Workload Identity     # GKE / Cloud Run / Functions reach GCP APIs with
Federation            # NO static JSON key. External IdPs (AWS, GitHub OIDC,
                      # Okta) federate into an SA via short-lived tokens.

# The same service account, with the right bindings, can do all four.
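The DWD row hinges on one claim set: the service account signs a JWT whose `sub` names the user to impersonate, and exchanges it at Google's token endpoint. A stdlib-only sketch of the unsigned assertion; the RS256 signature over it (with the SA private key) and the actual token exchange are deliberately omitted:

```python
import base64, json, time

def dwd_claims(sa_email: str, subject: str, scopes: list[str]) -> dict:
    """Claim set for a domain-wide-delegation grant: iss is the service
    account, sub is the impersonated user, aud is always Google's token
    endpoint, lifetime is capped at one hour."""
    now = int(time.time())
    return {
        "iss": sa_email,
        "sub": subject,                       # the impersonated user
        "scope": " ".join(scopes),            # space-separated scopes
        "aud": "https://oauth2.googleapis.com/token",
        "iat": now,
        "exp": now + 3600,
    }

def b64url(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

claims = dwd_claims("svc@proj.iam.gserviceaccount.com",
                    "alice@example.com",
                    ["https://www.googleapis.com/auth/gmail.readonly"])
unsigned = b64url({"alg": "RS256", "typ": "JWT"}) + "." + b64url(claims)
```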
The cross-cloud trick: a Cloudflare Worker with one service-account private key stored as a Worker secret mints JWTs via Web Crypto, exchanges them for Google tokens, and reaches Gmail, Drive, Sheets, Calendar, and BigQuery in the trusted tenant โ all without Node SDKs. That's the workspace-bridge pattern documented in the skill node. One secret, one Worker, any Workspace API in a trusted domain.
Google opened a Johannesburg region (africa-south1) in 2024. For SA-based clients it's the POPIA-fit default: Cloud Run, Compute Engine, GCS, Cloud SQL, GKE, BigQuery regional, Pub/Sub all land there. Service-availability gaps are the thing to watch: Vertex AI Gemini endpoints, some Dataflow templates, and a handful of newer managed services still ship to europe-west2 (London) or us-central1 first. The design pattern: SA data lives in africa-south1, inference calls hop to europe-west2 with a documented transfer basis, and cross-continent egress stays async via Pub/Sub or scheduled Dataflow, never per-request.
Google sits beside AWS as the second compute substrate, next to Cloudflare as the edge. Vertex AI is the reason many builds pick GCP over AWS; Workspace is the reason Penny briefings and intake automations have anywhere to live. Every leaf here has a twin in the AWS branch and an edge-side counterpart in Cloudflare.