
For the past few weeks I’ve been working to complete the Cloud Resume Challenge. The challenge is intended for people (such as myself) who are looking to move into a more technical, Cloud-focused role and who have technical skills but not the most techie of CVs! The challenge outlines a number of technologies and practices you must use in order to build a resume site with a functioning visitor counter. It’s designed to encompass a number of the features and principles that characterise modern cloud-based applications; the idea being that you’re showing what you know, demonstrating your ability to learn quickly & Google well, and are left with something tangible at the end.
The last step of the challenge is to write a blog post about the process; I guess this is technically that but, as I believe the best way to ensure you’ve learned something well is to write about it, I’ve taken this a step further and have been blogging about each step. Here’s the approach I took, with links to each post:
- Set up Source Control
- Built a static site with HTML
- Styled it with CSS
- Hosted it with Google Cloud Storage
- Secured it with HTTPS
- Pointed a domain name to it with Cloud DNS
- Created a database (Firestore in Datastore mode) to store visit counts
- Built a visitor counter web service in Python and deployed it to Cloud Run
- Defined and secured the API on Cloud API Gateway
- Wrote the JavaScript to call the API and display the visitor counter on-page
- Created e2e tests for both the front and back ends, with Cypress
- Created Terraform configs for both the back and front end infrastructure
- Built a CI/CD pipeline for the front and back ends with GitHub Actions
(The other step is to pass one of the GCP certs but I’d done that already!)
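To give a flavour of the visitor counter step, here’s a rough sketch of the logic involved. This is illustrative only, not my actual code: an in-memory dict stands in for Firestore so it runs locally, and all the names are made up. The real service would do the increment inside a Firestore transaction to avoid lost updates under concurrent visits.

```python
class CounterStore:
    """In-memory stand-in for the Firestore/Datastore entity holding the count."""

    def __init__(self):
        self.counts = {}

    def increment(self, key: str) -> int:
        # The real service would wrap this read-modify-write in a
        # Firestore transaction so concurrent visits don't lose updates.
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key]


def handle_visit(store: CounterStore, page: str = "resume") -> dict:
    """Shape of the JSON the Cloud Run endpoint returns to the front-end JS."""
    return {"visits": store.increment(page)}


if __name__ == "__main__":
    store = CounterStore()
    print(handle_visit(store))  # {'visits': 1}
    print(handle_visit(store))  # {'visits': 2}
```

The front-end JavaScript then just calls the API and drops the returned count into the page.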
What I’ve Learned
I was already pretty comfortable with some of technologies used above and had at least a passing familiarity with most of them. I hadn’t used Python much before and found learnpython.org to be a great resource to get comfortable with the syntax and some of the Python-specific ways of doing things. Cypress was new to me as well, and it’s so easy to set up and intuitive to use that I actually really enjoyed creating my tests.
The main revelation to me was Terraform; apparently I love Terraform! 😃 It’s a bit intimidating at first, learning the syntax and how to think of your project resources in terms of IaC but, once it clicks, it’s just an incredibly powerful tool. Having all the disparate technologies, permissions, and settings that comprise a project exist as a single easily navigable, editable, portable, and deployable entity makes you wonder how you ever managed without it. I will admit to destroying and recreating all my resources more often than was absolutely necessary, just for fun (OK, that did get boring eventually!). I would maybe describe the way it handles state as finicky but it’s fine once you get used to the way Terraform wants you to work (and much easier to manage once you move the state file to a remote backend).
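For anyone curious, moving the state file to a remote backend is only a few lines of config. This is a sketch using a GCS bucket (the bucket name and prefix are made up; the bucket needs to exist already):

```hcl
terraform {
  backend "gcs" {
    bucket = "my-tf-state-bucket"  # pre-existing bucket that holds the state file
    prefix = "resume/terraform"    # path within the bucket
  }
}
```

After adding this, `terraform init` offers to migrate your existing local state across.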
One other thing that occurred to me at several points was the many ways there are to accomplish the same thing with Cloud, and how often security and access play into the eventual decision. I was in two minds, for example, on whether to have a single Artifact Registry for my containers (and whether to keep it in either my QA or Prod environments, or else in its own project entirely) or one per environment. I came to the decision that having a separate one each for QA and Prod was the right way to go, primarily because it meant not having to unnecessarily grant access between different projects.
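In Terraform terms, the per-environment approach just means declaring the repository once per project. A hedged sketch, with all names illustrative:

```hcl
# Declared once in each environment's config (QA and Prod),
# so neither project needs cross-project registry access.
resource "google_artifact_registry_repository" "containers" {
  project       = "my-qa-project"
  location      = "europe-west2"
  repository_id = "containers"
  format        = "DOCKER"
}
```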
Finally, it’s worth taking the time to parameterise any project-specific values in your GitHub Actions YAML workflow files, even for values that aren’t credentials or other secrets. The reason is that, once you have a CI/CD pipeline and methodology built out that you’re happy with, these files become highly reusable, so it’s worth making them generic enough to be copy-paste-able into future projects (such that you just need to add the required secrets and environment variables).
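As a sketch of what that parameterisation might look like in a workflow file (the variable and secret names here are invented for illustration, not taken from my actual pipeline):

```yaml
env:
  GCP_PROJECT: ${{ vars.GCP_PROJECT }}  # repository/environment variable, not hard-coded
  REGION: ${{ vars.GCP_REGION }}

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}  # the only truly secret value
```

With everything project-specific lifted into `vars` and `secrets`, the same workflow file can be dropped into a new repo unchanged.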
Tips
Use the gcloud / gsutil command line tools as much as possible – This might slow you down at first but it’s much quicker than using the console once you learn the core commands. As a bonus, while you’re creating resources from the command line, you’re unknowingly learning Terraform, Mr Miyagi style. As I say, Terraform can be a bit imposing to begin with but there’s almost always a 1:1 relationship between the gcloud command to create a resource and the equivalent Terraform code. Creating resources in the console UI is often more intuitive but it adds a layer of abstraction that lets you implement things without necessarily understanding what you did. That’s fine but it’s the opposite of the approach required by IaC.
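To illustrate that 1:1 relationship, here’s a hypothetical storage bucket both ways (names made up):

```hcl
# From the command line:  gsutil mb -l EU gs://my-resume-site-bucket
# ...and the same bucket expressed in Terraform:
resource "google_storage_bucket" "site" {
  name     = "my-resume-site-bucket"
  location = "EU"
}
```

Once you’ve created a few resources from the CLI, the Terraform arguments map almost directly onto the flags you’ve already been typing.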
Learn Identity and Access Management – There isn’t an explicit step in the CRC around IAM but it’s at the core of how GCP works. Within a project, GCP resources will often use the Compute Engine default service account (which has broad Editor permissions) to authenticate with each other. This lends a convenient “plug and play” aspect to building apps, but as soon as you need to interact with them from external sources (like Terraform & GitHub Actions) you’ll run into issues if you don’t know how to assign the permissions that allow those sources to authenticate with your GCP projects. It’s tempting to circumvent this by just maxing out the permissions granted to everything, but that is neither secure nor best practice.
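For example, granting an external identity just the role it needs (rather than Editor on everything) is a one-resource affair in Terraform. A sketch with made-up names:

```hcl
# Allow a deployer service account to manage Cloud Run services, and nothing more.
resource "google_project_iam_member" "deployer_run_admin" {
  project = "my-qa-project"
  role    = "roles/run.admin"
  member  = "serviceAccount:deployer@my-qa-project.iam.gserviceaccount.com"
}
```

Keeping grants this narrow is a little more typing up front, but it means a leaked key for one identity can’t take your whole project with it.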
Being stuck is usually about finding the right question – Generally speaking, any issue you might run into has probably already been solved by somebody somewhere and if you can’t find the answer, it’s likely that you just haven’t yet understood the problem well enough to know what the right question is! Keep breaking the problem down into smaller parts and you’ll eventually get to the root of what you’re really trying to find out. The answer will be out there somewhere.
Don’t be tempted to cheat! – I’m sure the challenge is popular enough that I could have been sneaky and copied what someone else did but I’m certain I learned more from the definitely-non-zero amount of times I had to crawl my way through The Desert of Despair than I would have from any explicit guide to the project. I can recommend (and did shell out for) the official Cloud Resume Challenge book. The book is great for things like discussing what technology might be best for a particular step, but never actually tells you how to do things (its approach is a bit like if you asked someone how to get to London and they pointed vaguely south and told you to build a car. This is not a criticism!)
Next Steps
I’m eager to keep learning and I’m definitely interested in investigating other cloud platforms but I’m sticking with GCP for now. Having technically touched on Kubernetes via Cloud Run, I’m keen to try deploying and managing a cluster on GKE, and I have a couple of ideas for projects to do that with. I’ll definitely also keep blogging about it, so please keep coming back if you’re interested to see how I get on 🙂
In the meantime, I can heartily recommend the Cloud Resume Challenge to anyone looking to learn more about architecting with Cloud and who wants to have something to show for it when they’re done. Please feel free to check out my completed resume site here, and the GitHub repos for my front and back ends.
Happy clouding!