
Machine Learning in .NET pt 2 – TensorFlow

Ryan is back with part two of his adventures in machine learning with .NET. This time he takes a look at TensorFlow.


As an absolute beginner in the world of machine learning, and someone who loves F#, I have been considering the various libraries available in .NET.

In part one of this blog we went 'full Microsoft' and tried ML.NET.

We have been using a common introductory problem, that of recognising handwritten digits.

This time around, we are going to try moving towards the broader ML community and experiment with a .NET binding of TensorFlow with its Keras API.


TensorFlow is, in its own words, "an end-to-end open source platform for machine learning".

Originally developed at Google, it is free to use and has a rich library of learning resources and documentation.

It provides a comprehensive suite of tools and has some very useful features such as the ability to train models on your GPU rather than the CPU, as the hardware is much more suited to the task.

This requires an NVIDIA GPU with CUDA support enabled.


Keras is a high-level machine learning API that sits on top of TensorFlow.

It provides abstractions and implementations of common ML processes and algorithms, and a framework to easily combine and execute them.

Of particular note, it has two forms - you can use its Sequential model, which provides a fluent-style interface not dissimilar in nature to ML.NET.

It also provides, however, a Functional API. This is a much better fit for languages such as F# as it is designed to work in a pipelined fashion.
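To see why a pipelined API is such a natural fit for F#, here is a minimal sketch in plain F# - the 'layers' below are made-up functions for illustration, not real Keras layers. A model built with the Functional API reads just like composing ordinary functions with the pipe operator:

```fsharp
// A 'layer' is just a function from input to output, so a model
// is a pipeline of layers composed with |>.
let rescale (xs: float list) = xs |> List.map (fun x -> x / 255.0)
let addBias bias (xs: float list) = xs |> List.map (fun x -> x + bias)

// Building a 'model' reads top-to-bottom, like the Keras Functional API.
let model input =
    input
    |> rescale
    |> addBias 0.5

let output = model [ 255.0; 0.0 ]
// output = [1.5; 0.5]
```

The Sequential model's fluent style maps less cleanly onto this, which is why the Functional API is the more comfortable choice from F#.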

The library includes access to many common datasets 'out of the box', and this helpfully includes our handwritten digit set.

Usage in .NET

Whilst the most commonly used language binding is Python, the libraries can be used from a variety of languages, including F# and C# with TensorFlow.NET.

There are ports of all the common examples for both C# and F# in the GitHub repo.


Whilst installation and usage are ostensibly simple, I did hit a couple of stumbling blocks that cost me a few hours and gained me a few grey hairs, so I'll point them out here to save you the same.

  1. You can't use it in a .NET Interactive VSCode notebook or script. Trying to do so provides glitchy and inconsistent output. As a result, you have to use a console app to run the code. I think this is related to, but probably not limited to, the way the library prints to the console as a side effect rather than a function output.

  2. There is a bug with .NET 6.

If you use .NET 6, the code will execute and your model will get trained (i.e. no crashes), but it will always be completely broken - as in, it will have a very poor score and no predictive ability.

This confused me for a good while, as I was convinced I had done something wrong in my code.

So long as you start with a .NET 5 console app, you should be good to go!


Note: The solution below isn't exactly the same model as the one I demonstrated with ML.NET, but it is pretty close.

You will need to add three packages to your project via NuGet: TensorFlow.NET, TensorFlow.Keras and SciSharp.TensorFlow.Redist.

open Tensorflow
open Tensorflow.Keras.Layers
open type Tensorflow.KerasApi

[<EntryPoint>]
let main argv =

    // Load the handwritten digit dataset directly.
    // It is already split into training and test data for us.
    let datasets = keras.datasets.mnist.load_data()
    let struct (image_train, label_train) = datasets.Train
    let struct (image_test, label_test) = datasets.Test

    // The training images are 28 * 28 pixels, and there are 60,000 of them.
    // We need to reshape them into a single row of 784 pixels.
    // We also need to scale them from a range of 0-255 to a range of 0-1.
    let image_train_flat = 
        (image_train.reshape(Shape [| 60000; 784 |])) / 255f

    // Same here for the test images, although there are 10,000 of them.
    let image_test_flat = 
        (image_test.reshape(Shape [| 10000; 784 |])) / 255f

    // Here we define our model input shape
    let inputs = new Tensors([|keras.Input(Shape [| 784 |])|])

    let layers = LayersApi()

    // We are going to have a single layer of processing.
    // This will divide our inputs into one of ten groups.
    let outputs = 
        inputs
        // You could insert many layers here, for example
        // |> layers.Dense(1024, activation = keras.activations.Relu).Apply
        |> layers.Dense(10).Apply

    // This is an alternative way to flatten and normalise the data as part of the model 
    // pipeline. 
    // If you took this approach you wouldn't need the reshape commands, you could use the
    // unflattened data directly.
    //let outputs = 
    //    inputs
    //    |> layers.Flatten().Apply
    //    |> layers.Rescaling(1.0f / 255f).Apply
    //    |> layers.Dense(10).Apply

    // We create our model by combining the inputs and outputs.
    let model = keras.Model(inputs, outputs, name = "Digit Recognition")

    // This will print a nice summary of the model structure to the console
    model.summary()

    // Here we decide which optimiser and cost functions we wish to use
    model.compile(
        // Stochastic Gradient Descent with a learning rate (alpha) of 0.1
        // https://keras.io/api/optimizers/
        keras.optimizers.SGD(0.1f),
        // Categorisation specific loss function, with a flag to enable logistic normalisation.
        // https://keras.io/api/losses/probabilistic_losses/
        keras.losses.SparseCategoricalCrossentropy(from_logits = true),
        metrics = [| "accuracy" |])

    // Train the model
    model.fit(
        image_train_flat.numpy(), // .numpy() converts the tensor to an NDArray.
        label_train,
        epochs = 1) // One pass over the data set

    // Check the model accuracy using the test data.
    model.evaluate(image_test_flat.numpy(), label_test, verbose = 2)

    0 // Exit code
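As an aside, the `from_logits = true` flag tells the loss function that our final Dense layer emits raw scores ('logits') rather than probabilities, so the loss applies the softmax normalisation itself. Here is a plain-F# sketch of that idea - this is for intuition only, not TF.NET code:

```fsharp
// Softmax turns raw scores ('logits') into probabilities that sum to 1.
let softmax (logits: float list) =
    let exps = logits |> List.map exp
    let total = List.sum exps
    exps |> List.map (fun e -> e / total)

// The predicted digit is simply the index of the highest probability.
let predict probabilities =
    probabilities
    |> List.mapi (fun i p -> i, p)
    |> List.maxBy snd
    |> fst

// Ten scores from the model, one per digit 0-9.
let logits = [ 0.1; 0.2; 3.5; 0.1; 0.0; 0.3; 0.1; 0.2; 0.1; 0.4 ]
let prediction = predict (softmax logits)
// prediction = 2
```

Running the full program produces output like the log below.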



Model: Digit Recognition
Layer (type)                  Output Shape              Param #
input_1 (InputLayer)          (None, 784)               0
dense (Dense)                 (None, 10)                7850
Total params: 7850
Trainable params: 7850
Non-trainable params: 0
Epoch: 001/001, Step: 0001/1875, loss: 2.318694, accuracy: 0.062500
Epoch: 001/001, Step: 0002/1875, loss: 2.264834, accuracy: 0.171875


Epoch: 001/001, Step: 1874/1875, loss: 0.416384, accuracy: 0.886389
Epoch: 001/001, Step: 1875/1875, loss: 0.416256, accuracy: 0.886417

iterator: 1, loss: 0.31056058, accuracy: 0.9151
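The 7850 parameters in the summary come straight from the layer shapes: the Dense layer learns one weight per input pixel for each of the ten outputs, plus one bias per output. A quick sanity check:

```fsharp
let inputPixels = 784  // 28 * 28 pixels, flattened
let classes = 10       // one output per digit

let weights = inputPixels * classes  // 7840
let biases = classes                 // 10
let totalParams = weights + biases
// totalParams = 7850, matching the model summary
```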


The experience of using TensorFlow was really interesting.

It took me a little while to understand the landscape of tools and how they fit together, and the issues getting started I mentioned earlier did cause a bit of frustration.

However, once those hurdles were crossed, the Keras API was really nice to use. The fact it has a functional-first approach really is a big plus when using it from F#. It would be great if ML.NET offered something similar.

There is also the huge bonus that TensorFlow has a massive community around it, along with loads of learning resources.

With all of that in mind I think I will probably stick with TensorFlow, rather than ML.NET, for the next part of my learning journey and see how it goes.

As usual I have popped an example project in the GitHub repo I use for these ML blogs.