In this step of the challenge, I’m required to host the newly minted resume online, specifically by using Google Cloud Storage.
Cloud Storage doesn’t keep data in a file-and-folder hierarchy (i.e. file storage), or as chunks on a disk (block storage.) Rather, data is stored as objects in a packaged format, where each package contains the binary form of the data itself, plus the relevant associated metadata (creation date, type, author, etc.) Each object is accessible via a globally unique identifier, in the form of a URL. Objects are stored in “buckets”, each of which has a globally unique name and is physically located in a specific geographic region. Because objects and buckets can be made externally accessible via URL, Cloud Storage is a great way to easily serve website content, which is exactly what I’m doing today 🙂
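To give a rough idea, that identifier is simply built from the bucket name and the object name (both are placeholders below), and it’s the same URL we’ll end up serving the resume from later on:
https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME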
GCP offers various methods of transferring objects into Cloud Storage; from drag & drop in the GCP Console UI, to a physical appliance you can lease, load up with as much as 300TB of data, and ship back to Google. Since we’re only moving two tiny files at the moment, the latter would be somewhat excessive and, to be honest, the former is needlessly fiddly. Instead, we’ll use the gcloud and gsutil command-line interface (CLI) tools.
Once you get used to using the CLI tools provided by GCP, they really are the fastest and most efficient way to do most things with the platform. I won’t get into how to set these tools up (not least because it’ll be different depending on which platform you use) but there’s great documentation for that process here. Assuming we’re already set up, my first step is to ensure I’m creating the bucket in the correct project, using:
$ gcloud config set project PROJECT_ID
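If, like me, you tend to have a few projects on the go, it’s worth a quick sanity check that the right one is now active; the following simply prints the currently configured project:
$ gcloud config get-value project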
Then it’s a simple matter of using the gsutil “make bucket” (mb) command to create the bucket itself:
$ gsutil mb -l europe-west2 gs://my-bucket-name
It’s possible to specify various options at bucket-creation time, including how long the data should be retained and the “class” of storage (which reflects how often the data is expected to be accessed, and affects storage costs.) It’s also possible to set the project the bucket should be created in (but I’d already done that.) Here, I’ve just used the -l option to set the bucket’s physical location to the europe-west2 region (which is London.)
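Purely as an illustrative sketch (the storage class and retention period here are made up for the example, not what I actually used), a creation command combining a few of those options might look like this:
$ gsutil mb -p PROJECT_ID -c standard -l europe-west2 --retention 30d gs://my-bucket-name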
Transferring files into the bucket works in much the same way as copying files locally, using the cp command. Whilst in the local folder containing my HTML and CSS files, I can use gsutil to transfer them directly into the cloud:
$ gsutil cp index.html style.css gs://my-bucket-name
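To confirm the files actually landed, a quick listing of the bucket’s contents does the trick:
$ gsutil ls -l gs://my-bucket-name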
Note that multiple files can be uploaded at the same time. Now, as long as you know the bucket and file names, the webpage should be accessible via a URL in the standard format:
https://storage.googleapis.com/my-bucket-name/index.html
But there’s one more thing I need to do. By default, buckets and their objects are not accessible externally. In order to make them publicly available on the internet, we need to use Identity and Access Management (IAM.) IAM is GCP’s method of allowing admins to control exactly who can do exactly what with each GCP product and feature. It can offer incredibly fine-grained access control (right down to individual files in a Cloud Storage bucket) but here it’s easier, and safe enough, to make the whole bucket readable (but obviously not writeable!) by anyone. I can do this using gsutil, like so:
$ gsutil iam ch allUsers:objectViewer gs://my-bucket-name
…in other words “use IAM to grant viewer access to objects in this bucket to everyone”.
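If you’d like to double-check that the change took, you can inspect the bucket’s IAM policy, or just request the page directly (the curl check is only a convenience; any browser will do):
$ gsutil iam get gs://my-bucket-name
$ curl -I https://storage.googleapis.com/my-bucket-name/index.html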
Great! I now have a static web page with my resume on it that can be viewed by anyone. The next step is to ensure that visitors can reach my site securely, via HTTPS…