
Adopting F# Snippets, Part 2 – Challenges


In Part 1, we looked at the state of FSSnip when we took over. In this post, we'll look at the challenges we faced and how we solved them.

Before I started getting my hands dirty with migrating and upgrading the site, I created a parallel copy of the fssnip-website repo in our Azure DevOps, which would be my scratchpad (I also knew I was going to set up deployment pipelines to experiment with later).

Challenge 1 - Automating the infrastructure

At Compositional IT we use Farmer to define our Azure resources in code; we wanted to bring the same approach to FSSnip to simplify future deployments.

  • Deploy.fsx contains the Farmer deployment script. It largely replicates the Azure deployment that had been manually created for the original FSSnip site, with some minor changes (like moving from classic Application Insights to workspace-based Application Insights) as well as some more noteworthy ones, such as switching to a Linux-based app service plan. A minimal sketch of what such a script looks like follows this list.
  • The solution now also contains upload-blobs.fsx, an F# script that can download the snippet data from the fssnip-data repository, extract it into a local folder and then upload any changes into Azure blob storage (also sketched below).
  • The global.json file needs to pin dotnet to .NET Core 3.1 for the app build, but it needs to make itself scarce before the next step, the Azure deployment script. This is done by renaming the file (when building/deploying locally) or by deleting it (in Azure DevOps pipelines).
  • fssnip.ps1 is a PowerShell script that is handy for running the different commands in a development environment. It takes a single parameter selecting one of four tasks: build, run, deploy or upload (blob data). Anyone wanting to use it will first need to populate some keys/secrets in the environment variables section, as those were not committed, for obvious reasons.
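
To give a feel for the Farmer script, here's a minimal, illustrative sketch along the lines of Deploy.fsx - the resource names and region are placeholders, not the real values:

```fsharp
// Illustrative sketch only - names and region are hypothetical, not the real Deploy.fsx.
open Farmer
open Farmer.Builders

let storage = storageAccount {
    name "fssnipstorage" // hypothetical name
}

let app = webApp {
    name "fssnip-web" // hypothetical name
    operating_system OS.Linux // the Linux-based app service plan mentioned above
    setting "StorageConnectionString" storage.Key
}

let deployment = arm {
    location Location.WestEurope // assumed region
    add_resource storage
    add_resource app
    // surface the storage connection string as a deployment output
    output "storageConnectionString" storage.Key
}

deployment
|> Deploy.execute "fssnip-rg" Deploy.NoParameters
|> ignore
```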

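The upload-blobs script is conceptually simple. Here's a stripped-down sketch of just the upload step using the Azure.Storage.Blobs SDK - the container name and local folder are made up for illustration:

```fsharp
// Sketch of the upload step only - container and folder names are assumptions.
#r "nuget: Azure.Storage.Blobs"

open System
open System.IO
open Azure.Storage.Blobs

let connectionString = Environment.GetEnvironmentVariable "FSSNIP_STORAGE_CONNECTION_STRING"
let container = BlobContainerClient(connectionString, "data") // hypothetical container name

// Upload every file under the extracted data folder, overwriting existing blobs.
for file in Directory.EnumerateFiles("data", "*", SearchOption.AllDirectories) do
    let blobName = Path.GetRelativePath("data", file).Replace('\\', '/')
    container.GetBlobClient(blobName).Upload(file, overwrite = true) |> ignore
```
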
Achieving a working deployment took a while. Apart from the switch to Linux, the fact that the app was going to run in a more containerised manner on an out-of-support .NET Core 3.1 image made me a bit nervous. Sure enough, the app worked fine locally but would not "just run" up in Azure. The cause turned out to be something else entirely, though. Peering into the App Service logs I found exceptions related to disk write access, which came down to a particular App Service configuration setting: we were using run_from_package, which means the zip-deployed app code runs from a read-only file share. That's totally fine for most cloud-native apps, which don't write to the local file system, but FSSnip does need local write access. Nothing to do with Linux or the outdated .NET Core runtime after all!
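
In Farmer terms, the fix is small. Assuming the standard WEBSITE_RUN_FROM_PACKAGE app setting is the lever (it's the setting App Service uses for this behaviour), something like the following in the web app definition keeps the file system writable:

```fsharp
// Sketch: make sure the app is NOT run from a read-only package mount,
// so FSSnip can write to its local file system.
let app = webApp {
    name "fssnip-web" // hypothetical name
    operating_system OS.Linux
    setting "WEBSITE_RUN_FROM_PACKAGE" "0" // disable run-from-package
}
```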

Challenge 2 - Deployment pipelines

I hadn’t done a lot of pipeline scripting before this, so it was a welcome opportunity for some hands-on learning. After the update was merged into the original repo, I created two Azure DevOps pipelines:

  • Pipeline 1: Build and Deploy
    This pipeline checks out the GitHub repo, builds the app and deploys it to our Azure subscription. A pipeline run is triggered by a push to the master branch. It’s actually a bit more complicated than that, and by complicated I may or may not mean hacky 🙂

    • Stage #1 of the pipeline cancels any run of the same pipeline currently in progress. It could be argued that this stage is next to obsolete now that the period of intense pipeline experimentation/testing on my part is over.
    • Stage #2 takes care of the checkout, restore, build and deploy steps. The deploy step executes the Farmer deploy script, which returns as output the connection string for the storage account the deployment creates in Azure; this is written to a text file before the Farmer script exits. A PowerShell script then reads the connection string from the text file and updates the group variable FSSNIP_STORAGE_CONNECTION_STRING with this value (see the sketch after this list).
  • Pipeline 2: Provision Blob Data
    As you may have guessed, this one runs the upload-blobs.fsx script mentioned earlier. The script needs a working connection string for the target blob store if it is to succeed - but this is a discrete pipeline. What if the Azure deployment was nuked and redeployed in the meantime, using the Build and Deploy pipeline? Incremental deployment of the same-ish Azure resources over their existing instances (we’re concerned about the storage account specifically here) will not change the storage keys or connection string, but if the resources get deleted and redeployed, the storage keys - and with them the connection strings - will change. If I defined the value as a fixed variable on the Provision Blob Data pipeline, it would stop working after any such redeployment.
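
The hand-off at the end of Stage #2 can be as simple as capturing the deployment outputs from Farmer and writing the one we care about to a file for the PowerShell step to pick up. A sketch, reusing the deployment value and output name from the earlier Farmer example (the file name is hypothetical):

```fsharp
// Sketch: Deploy.execute returns the ARM deployment outputs as a Map<string, string>;
// write the storage connection string to a text file for the next pipeline step.
let outputs =
    deployment
    |> Deploy.execute "fssnip-rg" Deploy.NoParameters

System.IO.File.WriteAllText("storage-connection-string.txt", outputs.["storageConnectionString"])
```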

We're planning to replace the upload-blobs script (which effectively acts as a backup/restore mechanism) with Azure Backup, simplifying things further.

At this point we need to talk about Variable Groups in Azure DevOps. As the Microsoft documentation explains, “Variable groups store values and secrets that you might want to be passed into a YAML pipeline or make available across multiple pipelines.” This was the only way I could find to pass state between pipelines. The current setup ensures that any run of the Build and Deploy pipeline leaves the group variable updated with the correct connection string, and that the Provision Blob Data pipeline will always find the correct connection string in that variable, whether it runs chained in sequence after Build and Deploy or independently at any other time.
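
Updating a group variable from inside a pipeline run isn't automatic. One way to do it - and roughly the shape of what our PowerShell step does, though the exact mechanism shown here is an assumption - is via the azure-devops extension for the Azure CLI:

```fsharp
// Sketch: push the fresh connection string into the variable group via the
// Azure DevOps CLI extension. Group id, organisation and project are hypothetical.
open System.Diagnostics
open System.IO

let connectionString = File.ReadAllText "storage-connection-string.txt"

let args =
    String.concat " " [
        "pipelines variable-group variable update"
        "--group-id 1" // hypothetical group id
        "--name FSSNIP_STORAGE_CONNECTION_STRING"
        sprintf "--value \"%s\"" connectionString
        "--org https://dev.azure.com/myorg" // hypothetical organisation
        "--project MyProject" // hypothetical project
    ]

Process.Start(ProcessStartInfo(FileName = "az", Arguments = args)).WaitForExit()
```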

Custom domain / SSL bindings

Once the app was successfully deployed and working in our subscription, we let Tomas know it was a good time to redirect his registered domain, fssnip.net, from his web app to the one hosted by Compositional IT. When that was done, we were able to add the SSL bindings on our end.
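
As an aside, Farmer can also describe a custom domain and its SSL binding declaratively. If we wanted to fold this into Deploy.fsx, it would look roughly like this (hedged: this relies on Farmer's custom_domain support, and isn't necessarily how our bindings were added):

```fsharp
// Sketch: declare the custom domain on the web app; Farmer's custom_domain
// uses a free App Service managed certificate for the SSL binding by default.
let app = webApp {
    name "fssnip-web" // hypothetical name
    custom_domain "fssnip.net"
}
```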

Stay tuned for Part 3, where we will discuss the future of F# Snippets and the role that you can play in it!