
What you are aiming for is a build triggered by a commit to the GitHub repository. However, you can also trigger a build manually with the gcloud CLI. The build still runs on Google Cloud, but it uses your local source code as input. There used to be an option to run a build entirely locally using a Cloud Build Emulator, but this has been deprecated.
When you submit a build from local code, $COMMIT_SHA and other similar variables will not be set automatically. Instead, you can pass them in using the --substitutions flag, the same mechanism used for providing custom substitutions in the build configuration file.
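For example, a manual submission might look something like the following sketch; the substitution values shown are placeholders, so use whatever values your cloudbuild.yaml expects:

```sh
# Submit the local source to Cloud Build, supplying values that would normally
# come from the GitHub trigger. The values here are examples only.
gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_REPOSITORY=my-repository,_REGION=us-central1,_SERVICE_NAME=my-service,_IMAGE_NAME=my-service,COMMIT_SHA=$(git rev-parse HEAD)
```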
Here _REPOSITORY, _REGION, _SERVICE_NAME, and _IMAGE_NAME are custom substitution variables, and COMMIT_SHA overrides a built-in variable. Submitting a build from local code in this way is useful for testing that your cloudbuild.yaml does what you expect.
Adding Continuous Deployment to the Cloud Build Pipeline
At the moment, the pipeline builds the container and pushes it to Artifact Registry. Although a new container is ready, nothing deploys it to Cloud Run. The next steps handle that deployment, turning the pipeline from a continuous integration (CI) pipeline into a continuous deployment (CD) pipeline.
The final step in the pipeline deploys the container to Cloud Run. The step that does this from Cloud Build simply uses a builder image that includes the gcloud CLI (cloud-builders/gcloud) and runs the same gcloud run deploy command you used to deploy manually in previous chapters.
As the deployment will be to a different project from the one running the build, you will need to pass the --project flag to the gcloud run deploy command.
You will also need to set the TARGET_PROJECT_ID environment variable to the ID of the project you want to deploy to. This is the project you worked with in previous chapters, for example, skillsmapper-development.
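A sketch of what this deployment step might look like in cloudbuild.yaml is shown below. It assumes the target project ID is passed in as a _TARGET_PROJECT_ID substitution and that the image was pushed to Artifact Registry in the build project; the variable names are illustrative, not necessarily the exact ones used in the accompanying code:

```yaml
# Deploy the image built and pushed by the earlier steps to Cloud Run in the
# target project. All substitution variable names here are illustrative.
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - 'run'
    - 'deploy'
    - '${_SERVICE_NAME}'
    - '--image=${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE_NAME}:$COMMIT_SHA'
    - '--region=${_REGION}'
    - '--project=${_TARGET_PROJECT_ID}'
```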
As the build runs in the management project, you will need to grant the Cloud Build service account in the management project the Cloud Run Admin role in the target project.
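For example, assuming the management project is skillsmapper-management (a placeholder; substitute your actual project IDs) and the pipeline uses the default Cloud Build service account, the grant could be made like this:

```sh
# The default Cloud Build service account is named
# PROJECT_NUMBER@cloudbuild.gserviceaccount.com, so look up the project number
# of the management project first.
MANAGEMENT_PROJECT_NUMBER=$(gcloud projects describe skillsmapper-management \
  --format='value(projectNumber)')

# Grant that service account the Cloud Run Admin role on the target project.
gcloud projects add-iam-policy-binding skillsmapper-development \
  --member="serviceAccount:${MANAGEMENT_PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/run.admin"
```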
Tricks like this are useful for automating tasks you would otherwise perform in the console.
For completeness, cloudbuild.yaml configurations are also included for the fact service and profile service in their respective directories in the code that accompanies this book.
Deploying Infrastructure
In Appendix A, there are instructions for deploying a complete version of the entire SkillMapper application using Terraform, an infrastructure as code tool. Almost everything that can be achieved with the gcloud CLI can be defined in code and applied automatically.
It is also possible to automate that deployment using Cloud Build, so that any changes to the Terraform configuration are applied automatically. This technique is known as GitOps, as operations effectively become controlled by the contents of a Git repository.
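As an illustration only, and not the configuration used in Appendix A, a minimal GitOps-style cloudbuild.yaml that applies the Terraform configuration on each commit could look something like this; it assumes remote state and the necessary IAM permissions are already set up, and uses the public hashicorp/terraform image:

```yaml
# Minimal sketch: run terraform init and apply against the configuration in
# the repository root. Pin a specific image tag in practice.
steps:
  - name: 'hashicorp/terraform'
    args: ['init']
  - name: 'hashicorp/terraform'
    args: ['apply', '-auto-approve']
```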
How Much Will This Cost?
Cloud Build has a free tier where builds are performed on a machine with 1 vCPU and 4 GB RAM. At the time of writing, builds are free for the first 120 minutes per day and $0.003 per minute after that. If you would like to speed up your builds, you can use a machine with more CPU and RAM. The cost of this will depend on the machine type you choose. You can find more information on the pricing page for Cloud Build.
Artifact Registry has a free tier where you can store up to 0.5 GB of data and transfer a certain amount of data. After that, there are monthly costs for storage and data transfer. You can find more information on the pricing page for Artifact Registry.
Summary
In this chapter, you created a Cloud Build pipeline that builds a container and pushes it to Artifact Registry. You also added a step to deploy the container to Cloud Run.
To create this facility, you used the following services directly:
- Cloud Build is used to create pipelines for building and deploying services
- Artifact Registry is used to store the container images
While you now have the factory for building and deploying a Cloud Run service from a GitHub repo, you should also understand how it is running and how to debug it. In Chapter 13, you will learn how to monitor and debug services by adding the observatory.