Mocking and emulating AWS and GCP services
Unstopped
Modern software-as-a-service (SaaS) offerings depend heavily on external cloud providers to run and work reliably, but nothing comes free or without conditions, especially public cloud services. Nowadays, public cloud services are company cost centers, with strict measures in place to control costs.
Like all modern distributed systems, all major cloud service providers suffer major downtime now and then. Both cost-control measures and downtime hamper internal development, quality assurance (QA), and other dependent stakeholders by blocking productivity. These reasons are enough to consider emulating the required cloud services to unblock internal teams, keep them working on their goals with full control, and avoid unnecessary blockers such as budget, access policies, and so on. Cloud providers have begun to recognize these needs and now give users the capability to launch or mock important services locally.
Local GCP with Emulators
Google Cloud Platform (GCP) provides libraries and tools to interact with its products and services through the Cloud software development kit (SDK). The Pub/Sub, Spanner, Bigtable, Datastore, and Firestore cloud services have official emulators, exposed through the Google Cloud command-line interface (gcloud CLI) that ships with the Cloud SDK.
These emulators can run natively on your laptop if the required dependencies are installed first, but I find that Docker containers are the best way to run them out of the box, because GCP provides various flavors of the official cloud-sdk base image. Therefore, the next sections show code snippets that run the various GCP cloud service emulators, and their respective quick test routines, in Docker containers.
Google Cloud Pub/Sub
Modern mass-scale web applications are mostly built around an asynchronous publish-subscribe pattern that requires some message-oriented middleware or broker service. With the help of such middleware or services, a producer can publish events and a consumer can subscribe to them and take the necessary actions to process them. Google Cloud Pub/Sub is such a hosted service, offering very low latency for building modern microservices-based products.
To run the GCP Pub/Sub service locally through its emulator and exercise its basic functionality, without leaving your laptop or spending a single dime on a GCP bill, first create a Docker network:
docker network create gcpemulators-demo
To begin, create the Dockerfile in Listing 1 [1] and build a Docker image that launches the Pub/Sub service locally:
docker build . -f Dockerfile_GCPCPSEMU -t gcpcpsemu
Listing 1
Dockerfile_GCPCPSEMU
FROM google/cloud-sdk:alpine
EXPOSE 8085
RUN apk add --no-cache openjdk8-jre && \
    gcloud components install beta pubsub-emulator --quiet
ENTRYPOINT ["gcloud","beta","emulators","pubsub","start","--project=demo"]
CMD ["--host-port=0.0.0.0:8085"]
Now, create the script in Listing 2 for the Pub/Sub service to create a topic and a subscription to the topic, publish messages to the topic, and retrieve messages from the topic. Please note that, because of the current limitations of the Pub/Sub emulator, the test logic is implemented with the Cloud Client Libraries in Python, not gcloud pubsub commands (a condensed sketch of what those sample scripts do follows the listing).
Listing 2
gcpemu_cpstst.sh
#! /bin/sh
echo
echo ' <Start of Cloud PubSub Quick Test>'
dockerize -wait tcp://gcpcpsemu:8085
cd python-pubsub/samples/snippets
python3 publisher.py demo create demo
python3 subscriber.py demo create demo demo
python3 publisher.py demo publish demo
python3 subscriber.py demo receive demo 20
echo ' <End of Cloud PubSub Quick Test>'
echo
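If you are curious what the publisher.py and subscriber.py samples roughly do against the emulator, the following is a minimal sketch, not the official samples, assuming the google-cloud-pubsub Python library and the PUBSUB_EMULATOR_HOST and PUBSUB_PROJECT_ID environment variables set as in Listing 4:

# Condensed sketch of the quick test in Listing 2, assuming the
# google-cloud-pubsub library and PUBSUB_EMULATOR_HOST/PUBSUB_PROJECT_ID
# pointing at the emulator (e.g., gcpcpsemu:8085 and demo).
from google.cloud import pubsub_v1

project, topic_id, sub_id = "demo", "demo", "demo"

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()
topic_path = publisher.topic_path(project, topic_id)
sub_path = subscriber.subscription_path(project, sub_id)

# Create a topic and a subscription to it.
publisher.create_topic(request={"name": topic_path})
subscriber.create_subscription(request={"name": sub_path, "topic": topic_path})

# Publish a message and wait for the emulator to confirm it.
publisher.publish(topic_path, b"Hello from the emulator").result()

# Pull the message back and acknowledge it.
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 10})
for msg in response.received_messages:
    print(msg.message.data.decode())
if response.received_messages:
    subscriber.acknowledge(request={
        "subscription": sub_path,
        "ack_ids": [m.ack_id for m in response.received_messages],
    })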
Next, create the Dockerfile in Listing 3 and execute the command
docker build . -f Dockerfile_GCPCPSTST -t gcpcpstst
Listing 3
Dockerfile_GCPCPSTST
FROM alpine:3.19 AS dockerize
ENV DOCKERIZE_VERSION v0.7.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
    tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
    rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
    echo "**** fix for host id mapping error ****" && \
    chown root:root /usr/local/bin/dockerize

FROM google/cloud-sdk:alpine
SHELL ["/bin/ash", "-o", "pipefail", "-c"]
RUN apk add --no-cache --virtual .build-deps alpine-sdk libffi-dev openssl-dev python3-dev py3-pip && \
    gcloud config configurations create emulator --quiet && \
    git clone https://github.com/googleapis/python-pubsub.git && \
    cd python-pubsub/samples/snippets && \
    pip3 install -r requirements.txt
COPY --from=dockerize /usr/local/bin/dockerize /usr/local/bin/
COPY gcpemu_cpstst.sh /usr/local/bin/run.sh
ENTRYPOINT ["run.sh"]
to create a Docker image containing the previously shown test logic to run against the locally running Pub/Sub service. Finally, create the compose file in Listing 4 to start up the local Pub/Sub service and execute the test logic against it:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./gcp_emulators_cpsemu.yml:/etc/compose/gcp_emulators_cpsemu.yml:ro \
  docker docker compose -f /etc/compose/gcp_emulators_cpsemu.yml up -d
Listing 4
gcp_emulators_cpsemu.yml
services:
  gcpcpsemu:
    image: gcpcpsemu
    container_name: gcpcpsemu
    hostname: gcpcpsemu
    ports:
      - "28085:8085"
    restart: unless-stopped
  gcpcpstst:
    image: gcpcpstst
    container_name: gcpcpstst
    hostname: gcpcpstst
    environment:
      - "PUBSUB_EMULATOR_HOST=gcpcpsemu:8085"
      - "PUBSUB_PROJECT_ID=demo"
      - "CLOUDSDK_AUTH_DISABLE_CREDENTIALS=true"
      - "CLOUDSDK_CORE_PROJECT=demo"
    depends_on:
      - gcpcpsemu
networks:
  default:
    name: gcpemulators-demo
    external: true
You can see the log messages from the local Pub/Sub service and from the test logic that makes use of it with the respective commands in Listing 5. The screenshots in Figures 1 and 2 were taken on my laptop after executing these logging commands.
Listing 5
Logging Commands
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./gcp_emulators_cpsemu.yml:/etc/compose/gcp_emulators_cpsemu.yml:ro \
  docker docker compose -f /etc/compose/gcp_emulators_cpsemu.yml logs gcpcpsemu

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./gcp_emulators_cpsemu.yml:/etc/compose/gcp_emulators_cpsemu.yml:ro \
  docker docker compose -f /etc/compose/gcp_emulators_cpsemu.yml logs gcpcpstst
Now you have everything you need to make use of the local Pub/Sub service; just point your application at local port 28085 and reap the benefits of the emulator (a minimal client sketch follows the cleanup command below). Please feel free to explore the Pub/Sub emulator further through the online documentation [2]. Once done, use the command
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./gcp_emulators_cpsemu.yml:/etc/compose/gcp_emulators_cpsemu.yml:ro \
  docker docker compose -f /etc/compose/gcp_emulators_cpsemu.yml down
to clean up.
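To illustrate what pointing your own application at the emulator looks like from the host while the stack from Listing 4 is still running, here is a minimal sketch, assuming the google-cloud-pubsub Python library, the 28085 port mapping, and the demo topic created by the quick test:

import os

# Route the client library to the emulator published on the host (Listing 4
# maps container port 8085 to host port 28085); no GCP credentials are needed.
os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:28085"
os.environ["PUBSUB_PROJECT_ID"] = "demo"

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("demo", "demo")  # topic created by the quick test
publisher.publish(topic_path, b"ping from the host").result()
print("published to", topic_path)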
Google Cloud Bigtable
Data services, especially databases, are the bread and butter of modern web-scale cloud-based products. On the other hand, the cloud-hosted versions of various kinds of databases are a huge cost center and dependency for any company running in the public cloud, so you should be interested in the options the GCP emulators provide for running database services.
Indeed, the emulators provide local functionality to run Cloud Bigtable and Cloud Spanner services, as well. Cloud Bigtable is a key-value and wide-column store, ideal for low latency and fast access to structured, semi-structured, or unstructured data. Cloud Spanner, on the other hand, is an enterprise-grade, globally distributed, and strongly consistent database service built for the cloud specifically to combine the benefits of the relational database structure with non-relational horizontal scale.
First things first: Create the Dockerfile in Listing 6 and execute the command
docker build . -f Dockerfile_GCPCBTEMU -t gcpcbtemu
Listing 6
Dockerfile_GCPCBTEMU
FROM google/cloud-sdk:alpine
EXPOSE 8086
RUN gcloud components install beta bigtable --quiet
ENTRYPOINT ["gcloud","beta","emulators","bigtable","start","--quiet"]
CMD ["--host-port=0.0.0.0:8086"]
to create a Docker image for running a local Cloud Bigtable service.
Next, create the script in Listing 7 containing the quick test logic to execute against the locally running Bigtable service. You can clearly see that the cbt command-line utility is fully capable of exercising the various functions against the local Bigtable service, unlike the local Pub/Sub limitation of using a language library only.
Listing 7
gcpemu_cbttst.sh
#! /bin/sh
(
echo ' <Start of Cloud Big Table Quick Test>'
dockerize -wait tcp://gcpcbtemu:8086
cbt createtable cbt-run-demo
cbt ls
cbt createfamily cbt-run-demo cf1
cbt ls cbt-run-demo
cbt set cbt-run-demo r1 cf1:c1=test-value
cbt read cbt-run-demo
cbt deletetable cbt-run-demo
cbt deleteinstance demo-instance
echo ' <End of Cloud Big Table Quick Test>'
echo
) 2>/dev/null
After creating the Dockerfile in Listing 8, create the test image with
docker build . -f Dockerfile_GCPCBTTST -t gcpcbttst
Listing 8
Dockerfile_GCPCBTTST
FROM alpine:3.19 AS dockerize
ENV DOCKERIZE_VERSION v0.7.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
    tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
    rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
    echo "**** fix for host id mapping error ****" && \
    chown root:root /usr/local/bin/dockerize

FROM google/cloud-sdk:alpine
SHELL ["/bin/ash", "-o", "pipefail", "-c"]
RUN gcloud components install cbt --quiet && \
    printf "%s\n%s\n" "project=demo" "instance=demo" | tee ~/.cbtrc && \
    gcloud config configurations create emulator --quiet
COPY --from=dockerize /usr/local/bin/dockerize /usr/local/bin/
COPY gcpemu_cbttst.sh /usr/local/bin/run.sh
ENTRYPOINT ["run.sh"]
Finally, create the YAML file in Listing 9 to bring up the local Bigtable service, as well as the quick test routine, with the command
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./gcp_emulators_cbtemu.yml:/etc/compose/gcp_emulators_cbtemu.yml:ro \
  docker docker compose -f /etc/compose/gcp_emulators_cbtemu.yml up -d
Listing 9
gcp_emulators_cbtemu.yml
services:
  gcpcbtemu:
    image: gcpcbtemu
    container_name: gcpcbtemu
    hostname: gcpcbtemu
    ports:
      - "28086:8086"
    restart: unless-stopped
  gcpcbttst:
    image: gcpcbttst
    container_name: gcpcbttst
    hostname: gcpcbttst
    environment:
      - "BIGTABLE_EMULATOR_HOST=gcpcbtemu:8086"
      - "CLOUDSDK_AUTH_DISABLE_CREDENTIALS=true"
      - "CLOUDSDK_CORE_PROJECT=demo"
    depends_on:
      - gcpcbtemu
networks:
  default:
    name: gcpemulators-demo
    external: true
You can see the log messages for the local Bigtable service and the test logic making use of it with the command
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./gcp_emulators_cbtemu.yml:/etc/compose/gcp_emulators_cbtemu.yml:ro \
  docker docker compose -f /etc/compose/gcp_emulators_cbtemu.yml logs
The screenshot in Figure 3 was taken on my laptop after executing this logging command. Now you have everything you need to make use of the local Bigtable service; just point your application at local port 28086 and reap the benefits of the emulator. Please feel free to explore Cloud Bigtable further through the documentation [3].
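As with Pub/Sub, pointing an application at the local Bigtable service only takes an environment variable. The following is a minimal sketch, assuming the google-cloud-bigtable Python library and the 28086 port mapping from Listing 9; the table and column family names are just examples:

import os

# Route the client library to the emulator published on the host (Listing 9
# maps container port 8086 to host port 28086); no GCP credentials are needed.
os.environ["BIGTABLE_EMULATOR_HOST"] = "localhost:28086"

from google.cloud import bigtable
from google.cloud.bigtable import column_family

client = bigtable.Client(project="demo", admin=True)
instance = client.instance("demo")

# Create a table with one column family, write a cell, and read it back.
table = instance.table("app-demo")
table.create(column_families={"cf1": column_family.MaxVersionsGCRule(1)})

row = table.direct_row(b"r1")
row.set_cell("cf1", b"c1", b"test-value")
row.commit()

for partial_row in table.read_rows():
    print(partial_row.row_key, partial_row.cell_value("cf1", b"c1"))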