Utilizing the OpenAI API with openai-fsharp

Dragos takes a look at the openai-fsharp library, used for interacting with the OpenAI API in a convenient way!

ChatGPT has been making significant waves recently, and its popularity isn't waning! As F# developers, how might we leverage it? The answer is the openai-fsharp library!

openai-fsharp is a nifty tool that lets you talk to the OpenAI API with minimal ceremony.

Overview

The library offers two ways to use the API: through function calls chained with the pipe operator, as illustrated below:

let client =
    Config(
        { Endpoint = "https://api.openai.com/v1"
          ApiKey = "your-api-key" },
        HttpRequester()
    )

let result = client |> models |> list

Or by employing a computation expression builder object, as demonstrated below:

let result =
    openAI {
        endPoint "https://api.openai.com/v1"
        apiKey "your-api-key"
        models
        list
    }

For further information, check out the OpenAI API reference.

The API is powered by a variety of models with different capabilities and price points. It also lets you tailor the base models to your own use case through fine-tuning.
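For instance, you can retrieve the details of a single model to see what it offers. A minimal sketch using the computation expression form and the models/retrieve operations covered in the code samples below:

let gpt35Turbo =
    openAI {
        endPoint "https://api.openai.com/v1"
        apiKey "your-api-key"
        models
        retrieve "gpt-3.5-turbo"
    }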

Code samples

Let's dive into some code to better understand how it operates!

type OpenAIComputed() =
    // Required - creates default "starting" values
    member _.Yield _ =
        Config({ Endpoint = ""; ApiKey = "" }, HttpRequester())

    [<CustomOperation "endPoint">]
    // Sets OpenAI end point
    member _.EndPoint(config: Config, endPoint: string) =
        Config(
            { Endpoint = endPoint
              ApiKey = config.ApiConfig.ApiKey },
            config.HttpRequester
        )

    [<CustomOperation "apiKey">]
    // Sets OpenAI API Key
    member _.ApiKey(config: Config, apiKey: string) =
        Config(
            { Endpoint = config.ApiConfig.Endpoint
              ApiKey = apiKey },
            config.HttpRequester
        )

    [<CustomOperation "models">]
    // Start OpenAI Models resource handling
    member _.Models(config: Config) = models config

    [<CustomOperation "list">]
    // Models List Endpoint
    member _.List(config: ConfigWithModelContext) : Models.ListModelsResponse = Models.list config

    [<CustomOperation "retrieve">]
    // Models Retrieve Endpoint
    member _.Retrieve(config: ConfigWithModelContext, modelName: string) : Models.ModelResponse =
        Models.retrieve modelName config

    [<CustomOperation "completions">]
    // Start OpenAI Completions resource handling
    member _.Completions(config: Config) = completions config

    [<CustomOperation "create">]
    // Completions Create Endpoint
    member _.Create
        (
            config: ConfigWithCompletionContext,
            request: Completions.CreateRequest
        ) : Completions.CreateResponse =
        Completions.create request config

    [<CustomOperation "chat">]
    // Start OpenAI Chat resource handling
    member _.Chat(config: Config) = chat config

    [<CustomOperation "create">]
    // Chat Create Endpoint
    member _.Create(config: ConfigWithChatContext, request: Chat.CreateRequest) : Chat.CreateResponse =
        Chat.create request config

    ...

    [<CustomOperation "listEvents">]
    // Fine-Tunes List Events Endpoint
    member _.ListEvents(config: ConfigWithFineTuneContext, fineTuneId: string) : FineTunes.ListFineTuneEventsResponse =
        FineTunes.listEvents fineTuneId config

module Client =
    let sendRequest (config: Config) (data: (string * string) list) =
        config.HttpRequester.postRequest config.ApiConfig data

    let openAI = OpenAIComputed()

The OpenAI.fs file defines the computation expression that allows us to interact with the OpenAI API.

It provides a range of custom operations - denoted by the [<CustomOperation "operationName">] attribute - each of which defines a different interaction with the OpenAI API.
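To get a feel for what that means, here is roughly how the compiler rewrites the Overview example into plain method calls on the builder (a simplified sketch for illustration, not the exact compiler output):

// Each custom operation becomes a method call on OpenAIComputed, threading
// the Config value through from one operation to the next.
let builder = OpenAIComputed()

let expanded =
    builder.List(
        builder.Models(
            builder.ApiKey(
                builder.EndPoint(builder.Yield(()), "https://api.openai.com/v1"),
                "your-api-key"
            )
        )
    )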

Config is a type with two properties: ApiConfig and HttpRequester. ApiConfig is an instance of the ApiConfig record, which holds the endpoint and the API key. HttpRequester is an instance of the IHttpRequester interface, a contract for a type that can perform HTTP requests. Look here for the full definition of Config.
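As a rough sketch, those types look something like this (the shapes below are inferred from the code in this post; the authoritative definitions are in the library's source):

// The record holding the endpoint and API key.
type ApiConfig =
    { Endpoint: string
      ApiKey: string }

// A contract for a type that can POST to the configured endpoint with a
// key/value payload; the exact return type may differ in the library.
type IHttpRequester =
    abstract member postRequest: ApiConfig -> (string * string) list -> string

// Bundles the API configuration with the requester used to send requests.
type Config(apiConfig: ApiConfig, httpRequester: IHttpRequester) =
    member _.ApiConfig = apiConfig
    member _.HttpRequester = httpRequester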

Usage

Install it with:

dotnet add package OpenAI.Client

or Paket:

paket add OpenAI.Client

If you are writing a script, use this:

#r "nuget: OpenAI.Client, 0.2.0"

Replace the version with the latest one available on NuGet.

Make sure you get an API key by visiting this link.
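Rather than hard-coding the key, you may prefer to read it from an environment variable. A small sketch (the variable name OPENAI_API_KEY is just an example) that builds the client used in the snippets below:

open System
open OpenAI.Client

let client =
    Config(
        { Endpoint = "https://api.openai.com/v1"
          ApiKey = Environment.GetEnvironmentVariable "OPENAI_API_KEY" },
        HttpRequester()
    )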

OpenAI provides different models that differ in price and performance. You can see more here, and you can also use the library to get a list of models, like this:

open OpenAI.Client

let result = client |> models |> list

You will get back an object with a property containing an array of models, each looking like this:

        { Id = "gpt-3.5-turbo"
         Object = "model"
         Created = 1677610602
         OwnedBy = "openai"
         Permission = [|{ Id = "modelperm-zy5TOjnE2zVaicIcKO9bQDgX"
                          Object = "model_permission"
                          Created = 1690864883
                          AllowCreateEngine = false
                          AllowSampling = true
                          AllowLogprobs = true
                          AllowSearchIndices = false
                          AllowView = true
                          AllowFineTuning = false
                          Organization = "*"
                          Group = None
                          IsBlocking = false }|]
         Root = "gpt-3.5-turbo"
         Parent = None }
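If you only care about a subset, you can filter that array with the usual collection functions. A small sketch, assuming the list response exposes the models through a Data property (mirroring the data field in the API's JSON):

// Keep only the GPT-3.5 models and project out their ids.
let gpt35Models =
    result.Data
    |> Array.filter (fun model -> model.Id.StartsWith "gpt-3.5")
    |> Array.map (fun model -> model.Id)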

Fun fact: ChatGPT uses the gpt-3.5 family of models.

Let's ask gpt-3.5-turbo a question:

open OpenAI.Chat
let answer =
    client
    |> chat
    |> create
        { Model = "gpt-3.5-turbo"
            Messages = [| {Role = "user"; Content = "What is F#?"} |] }

This is what we get back:

answer: CreateResponse =
  { Id = "chatcmpl-7osdGFd2jE6ODgiY8I6alXQJiSCaw"
    Object = "chat.completion"
    Created = 1692360762
    Model = "gpt-3.5-turbo-0613"
    Choices =
     [|{ Message =
          { Role = "assistant"
            Content =
             "F# is a functional-first programming language developed by Mi"+[580 chars] }
         Index = 0
         FinishReason = Some "stop" }|]
    Usage = { PromptTokens = 12
              CompletionTokens = Some 114
              TotalTokens = 126 } }

If you wish to focus on the answer:

printfn($"{answer.Choices[0].Message.Content}")

We can also give it instructions, this time using the Edits endpoint:

let edited =
    client
    |> Edits.edits
    |> Edits.create
        { Model = "text-davinci-edit-001"
          Input = "What day of the wek is it?"
          Instruction = "Fix the spelling mistakes" }

And we get back:

  { Object = "edit"
    Created = 1692361505
    Choices = [|{ Text = "What day of the week is it?"
                  Index = 0 }|]
    Usage = { PromptTokens = 25
              CompletionTokens = Some 28
              TotalTokens = 53 } }
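And, as before, the corrected text sits in the first choice:

printfn $"{edited.Choices[0].Text}"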

Conclusion

In this blog post, we explored and dissected the openai-fsharp library: we saw how its computation expression is put together and how it sends HTTP requests. I trust this dive into the code has been enlightening for you. 🙂