Checkpoint #2, 11/07/2024 - My first 'DevOps' project - Part 2

Learning how to upload files to the cloud

After creating a GitHub repo, the next step is to test uploading and downloading files to and from Azure's servers, i.e. cloud storage. The goal is to learn how to do this using the Azure CLI.

  • First, we install the Azure CLI using the command

      $ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
    

    The curl command is used to transfer data from or to a server over various protocols such as HTTPS or FTP. The -sL option combines two flags: -s, which runs curl silently (i.e. with no progress bar or error messages), and -L, which follows any HTTP redirects.

    Data is transferred from aka.ms/InstallAzureCLIDeb. The data transferred is a shell script, and the pipe symbol feeds this script as input to the sudo bash command, which executes it in order to install the Azure CLI.

  • We can confirm the Azure CLI installation using

      $ which az
    

    Which outputs the full path of the az executable.
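
    That path is usually something like /usr/bin/az, though it may differ depending on how the CLI was installed. As an extra, optional sanity check, the CLI can also report its own version:

      $ az --version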

  • Following this, we log in using

      $ az login --use-device-code
    

    This displays a code and asks the user to visit microsoft.com/devicelogin, enter the code, and then sign in with their Azure account. The user should have a subscription, either the free trial or Pay-As-You-Go.
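
    As an optional extra check, once the login completes we can ask the CLI which subscription is currently active:

      $ az account show --query name --output tsv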

  • To upload a file, first we need to create a container, which requires a storage account, which requires a resource group. To create a resource group, we need to specify a location i.e. a region. We can query the list of regions using

      $ az account list-locations --query '[].name'
    

    By default, az account list-locations returns a detailed list of locations with multiple parameters in JSON format. But all we need is the name of each location, so the --query '[].name' argument (a JMESPath expression) extracts just the names, giving us a plain list of locations.
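
    The same --query idea can be taken a little further. For example (purely illustrative), displayName is another field in the same output, and combining it with --output table gives a friendlier listing:

      $ az account list-locations --query '[].{Name:name, DisplayName:displayName}' --output table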

  • Now, we can create a resource group within the subscription. To do so, we use the command

      $ az group create \
      >--name <Enter desired name of resource group here> \
      >--location <Enter desired location from list>
    

    The backslash may be used to break up a command into multiple lines.
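
    For instance, with hypothetical values filled in (any name, and any region from the earlier list, would do), the command could look like:

      $ az group create --name myDevopsRG --location eastus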

  • Thus the resource group is created. We can confirm the same using the command

      $ az group list
    
  • Next, we need to create a storage account using the command

      $ az storage account create \
      >--name <Desired storage account name> --resource-group <resource group from before> \
      >--location <location entered before> --encryption-services blob
    

    Here we hit a roadblock: the command failed with the error SubscriptionNotFound, even though querying the subscriptions using

      $ az account list
    

    showed a Pay-As-You-Go subscription. The issue turned out to be that the Microsoft.Storage resource provider was not registered. This was confirmed with:

      $ az provider list --query "[?registrationState=='Registered']" \
      >--output table
    

    This command queries the list of providers and returns the ones whose 'registrationState' field contains the value 'Registered', and the --output table flag organizes the result as a table for easier readability. Microsoft.Storage was not part of this table, hence confirming it was not registered.
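
    As a side note, a single provider's state can also be checked directly, which amounts to the same check as scanning the table:

      $ az provider show --namespace Microsoft.Storage --query registrationState --output tsv

    The missing registration was then taken care of with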

      $ az provider register -n 'Microsoft.Storage'
    

    The -n flag is mandatory; it stands for namespace, i.e. the name of the resource provider we wish to register. This process took a few minutes, but after that the registration could be confirmed by running the above query again. We were then finally able to create a storage account using the creation command above, and confirm its creation by running the command

      $ az storage account list
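
    As a concrete, hypothetical example, the creation command with the placeholders filled in might look like the following (storage account names must be globally unique and can only contain 3-24 lowercase letters and numbers):

      $ az storage account create --name mydevopsstorage01 \
      >--resource-group myDevopsRG --location eastus --encryption-services blob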
    
  • Next, we must create a container, as blobs (unstructured files of any type) are stored in containers. A container is similar to a box in the real world holding different kinds of things such as books, electronic devices, stationery etc. Before that, we must first explicitly assign ourselves the role of Storage Blob Data Contributor. This is done via

      $ az ad signed-in-user show --query id -o tsv | az role assignment create \
      >--role "Storage Blob Data Contributor" --assignee @- \
      >--scope "/subscriptions/<Subscription ID>/resourceGroups/<RG name>/providers/Microsoft.Storage/storageAccounts/<Storage Account name>"
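
    To unpack that pipeline a little: the first command prints the object ID of the signed-in user, and, as I understand it, --assignee @- tells the second command to read that ID from its standard input. The <Subscription ID> needed for the scope can be looked up with, for example:

      $ az account show --query id --output tsv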
    
  • Now that the role assignment has been done, we can create a container using

      $ az storage container create --account-name <Storage Account name> \
      >--name <Enter desired container name here> --auth-mode login
    

    And the same is confirmed using

      $ az storage container list --account-name <Storage Account name>
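
    For instance, with the hypothetical names used earlier, the creation command would be:

      $ az storage container create --account-name mydevopsstorage01 \
      >--name mycontainer --auth-mode login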
    
  • Now we can test whether we can upload a blob to this container. First we'll create a small sample file using the Vim editor:

      $ vim HelloWorld.txt
    

    We press 'i' to enter insert mode and type HelloWorld, then press Escape to exit insert mode and type 'ZZ' (important to hold Shift) to save the file and exit Vim.
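
    As a small aside (not part of the original steps), the same one-line file could also be created directly from the shell, without opening an editor:

      $ echo "HelloWorld" > HelloWorld.txt

    Either way, we then upload the file using the command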

      $ az storage blob upload --account-name <storage-account> \
      >--container-name <container> --name HelloWorld.txt --file HelloWorld.txt \
      >--auth-mode login
    

    The --name flag above determines the name the file will have once it is uploaded to the cloud, while --file points to the local file being uploaded.
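
    Since --name and --file are independent, the blob can be given a different name in the cloud than it has locally. For example, with hypothetical names again:

      $ az storage blob upload --account-name mydevopsstorage01 \
      >--container-name mycontainer --name greeting.txt --file HelloWorld.txt \
      >--auth-mode login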

  • Now, we can confirm the files in the container using

      $ az storage blob list --account-name <storage account> \
      >--container-name <container name> --output table --auth-mode login
    
  • Now we check whether our file can be downloaded:

      $ az storage blob download --account-name <Storage Account name> \
      >--container-name <Container name> --name HelloWorld.txt \
      >--file <Desired destination directory>/<Desired file name after downloading> \
      >--auth-mode login
    

    For example, if we rename the file to HelloWorld2.txt and choose to save it in the ~/Downloads directory, the --file value, i.e. the destination, would be ~/Downloads/HelloWorld2.txt.
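
    Putting that together as one concrete command, with the same hypothetical account and container names as before:

      $ az storage blob download --account-name mydevopsstorage01 \
      >--container-name mycontainer --name HelloWorld.txt \
      >--file ~/Downloads/HelloWorld2.txt --auth-mode login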

  • And after switching to that directory and checking the contents of the file using the cat command:

      $ cat HelloWorld2.txt
    

Hence, we confirm that the download worked, and also that we are able to create a storage account and upload and download files. Next step: use everything we learned here to create a bash script that will automate this entire process.
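
As a teaser for that next step, here is a rough sketch of what such a script might look like. All the names are hypothetical placeholders, and it assumes the Azure CLI is already installed, we are already logged in, and the role assignment from earlier is in place:

      #!/usr/bin/env bash
      # Rough sketch only: hypothetical names, minimal error handling
      set -euo pipefail

      RESOURCE_GROUP="myDevopsRG"          # hypothetical resource group name
      LOCATION="eastus"                    # any region from 'az account list-locations'
      STORAGE_ACCOUNT="mydevopsstorage01"  # must be globally unique, 3-24 lowercase letters/numbers
      CONTAINER="mycontainer"              # hypothetical container name
      FILE="HelloWorld.txt"

      # Create the resource group, storage account and container
      az group create --name "$RESOURCE_GROUP" --location "$LOCATION"
      az storage account create --name "$STORAGE_ACCOUNT" --resource-group "$RESOURCE_GROUP" \
          --location "$LOCATION" --encryption-services blob
      az storage container create --account-name "$STORAGE_ACCOUNT" --name "$CONTAINER" --auth-mode login

      # Upload the test file, then download it again under a new name
      az storage blob upload --account-name "$STORAGE_ACCOUNT" --container-name "$CONTAINER" \
          --name "$FILE" --file "$FILE" --auth-mode login
      az storage blob download --account-name "$STORAGE_ACCOUNT" --container-name "$CONTAINER" \
          --name "$FILE" --file "downloaded_$FILE" --auth-mode login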

Summary:

After creating a GitHub repository, we explored how to upload and download files using Azure CLI. We installed Azure CLI with `curl`, logged in using `az login`, and created a resource group, storage account, and container. Challenges included registering the `Microsoft.Storage` resource provider. After resolving that, we tested file uploads and downloads using blob storage commands, confirming success through various `az` commands. This process laid the foundation for automating these tasks with a bash script.