
Training a Simple LSTM

In this tutorial, we will use a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial, you will be able to:

  1. Create custom Lux models.

  2. Become familiar with the Lux recurrent neural network API.

  3. Train models using Optimisers.jl and Zygote.jl.

Package Imports

Note: If you wish to use AutoZygote() for automatic differentiation, add Zygote to your project dependencies and load it with using Zygote.

julia
using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random
Precompiling Reactant...
  94419.4 ms  ✓ Enzyme
   6897.4 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
  88888.9 ms  ✓ Reactant
  3 dependencies successfully precompiled in 191 seconds. 77 already precompiled.
Precompiling LuxEnzymeExt...
   6803.6 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
   6925.7 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
  14680.1 ms  ✓ Enzyme → EnzymeStaticArraysExt
  14741.6 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
   7964.3 ms  ✓ Lux → LuxEnzymeExt
  5 dependencies successfully precompiled in 22 seconds. 145 already precompiled.
Precompiling OptimisersReactantExt...
  16817.8 ms  ✓ Reactant → ReactantStatisticsExt
  19684.9 ms  ✓ Optimisers → OptimisersReactantExt
  2 dependencies successfully precompiled in 20 seconds. 88 already precompiled.
Precompiling LuxCoreReactantExt...
  17199.2 ms  ✓ LuxCore → LuxCoreReactantExt
  1 dependency successfully precompiled in 17 seconds. 85 already precompiled.
Precompiling MLDataDevicesReactantExt...
  17126.5 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
  1 dependency successfully precompiled in 17 seconds. 82 already precompiled.
Precompiling LuxLibReactantExt...
  17452.2 ms  ✓ Reactant → ReactantKernelAbstractionsExt
  17571.9 ms  ✓ Reactant → ReactantSpecialFunctionsExt
  17732.2 ms  ✓ LuxLib → LuxLibReactantExt
  16830.4 ms  ✓ Reactant → ReactantArrayInterfaceExt
  4 dependencies successfully precompiled in 35 seconds. 158 already precompiled.
Precompiling WeightInitializersReactantExt...
  16895.0 ms  ✓ WeightInitializers → WeightInitializersReactantExt
  1 dependency successfully precompiled in 17 seconds. 96 already precompiled.
Precompiling ReactantNNlibExt...
  19664.7 ms  ✓ Reactant → ReactantNNlibExt
  1 dependency successfully precompiled in 20 seconds. 103 already precompiled.
Precompiling LuxReactantExt...
  12141.4 ms  ✓ Lux → LuxReactantExt
  1 dependency successfully precompiled in 13 seconds. 180 already precompiled.

Dataset

We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data, we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary label indicating whether the sequence is clockwise or anticlockwise.

julia
function get_dataloaders(; dataset_size=1000, sequence_length=50)
    # Create the spirals
    data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
    # Get the labels
    labels = vcat(repeat([0.0f0], dataset_size ÷ 2), repeat([1.0f0], dataset_size ÷ 2))
    clockwise_spirals = [
        reshape(d[1][:, 1:sequence_length], :, sequence_length, 1) for
        d in data[1:(dataset_size ÷ 2)]
    ]
    anticlockwise_spirals = [
        reshape(d[1][:, (sequence_length + 1):end], :, sequence_length, 1) for
        d in data[((dataset_size ÷ 2) + 1):end]
    ]
    x_data = Float32.(cat(clockwise_spirals..., anticlockwise_spirals...; dims=3))
    # Split the dataset
    (x_train, y_train), (x_val, y_val) = splitobs((x_data, labels); at=0.8, shuffle=true)
    # Create DataLoaders
    return (
        # Use DataLoader to automatically minibatch and shuffle the data
        DataLoader(
            collect.((x_train, y_train)); batchsize=128, shuffle=true, partial=false
        ),
        # Don't shuffle the validation data
        DataLoader(collect.((x_val, y_val)); batchsize=128, shuffle=false, partial=false),
    )
end
get_dataloaders (generic function with 1 method)
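
As a quick sanity check, we can peek at a single batch (a hypothetical REPL snippet; the shapes follow from the defaults above):

julia
train_loader, val_loader = get_dataloaders()
x, y = first(train_loader)
size(x), size(y)  # should be ((2, 50, 128), (128,))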

Creating a Classifier

We will be extending the Lux.AbstractLuxContainerLayer type for our custom model since it will contain an LSTM block and a classifier head.

We pass the field names lstm_cell and classifier to the type to ensure that the parameters and states are automatically populated, so we don't have to define Lux.initialparameters and Lux.initialstates ourselves.

To understand more about container layers, please look at Container Layer.

julia
struct SpiralClassifier{L,C} <: AbstractLuxContainerLayer{(:lstm_cell, :classifier)}
    lstm_cell::L
    classifier::C
end

We won't define the model from scratch; instead, we will compose it from Lux.LSTMCell and Lux.Dense.

julia
function SpiralClassifier(in_dims, hidden_dims, out_dims)
    return SpiralClassifier(
        LSTMCell(in_dims => hidden_dims), Dense(hidden_dims => out_dims, sigmoid)
    )
end
Main.var"##230".SpiralClassifier
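
Since SpiralClassifier is a container layer, Lux.setup automatically builds nested parameters and states keyed by the field names we declared. A quick hypothetical check:

julia
rng = Random.default_rng()
model = SpiralClassifier(2, 8, 1)
ps, st = Lux.setup(rng, model)
keys(ps)  # (:lstm_cell, :classifier)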

We could use stock Lux blocks – Recurrence(LSTMCell(in_dims => hidden_dims)) – instead of defining the cell iteration ourselves, but we will write it out manually for illustration.
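
For reference, here is a sketch of that stock-block version (assuming Recurrence's default behavior of returning only the final hidden state):

julia
# Untested sketch: `Recurrence` iterates the cell over the time dimension and
# returns the final hidden state, which the head maps to a probability.
model_alt = Chain(
    Recurrence(LSTMCell(2 => 8)),
    Dense(8 => 1, sigmoid),
    vec,  # flatten the (1, batch) output to a vector, matching our model below
)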

Now we need to define how the classifier behaves when it is invoked.

julia
function (s::SpiralClassifier)(
    x::AbstractArray{T,3}, ps::NamedTuple, st::NamedTuple
) where {T}
    # First we will have to run the sequence through the LSTM Cell
    # The first call to LSTM Cell will create the initial hidden state
    # Note that the parameters and states are automatically populated into fields called
    # `lstm_cell` and `classifier`. We use `eachslice` to get the elements in the sequence
    # without copying, and `Iterators.peel` to split out the first element for LSTM
    # initialization.
    x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
    (y, carry), st_lstm = s.lstm_cell(x_init, ps.lstm_cell, st.lstm_cell)
    # Now that we have the hidden state and memory in `carry` we will pass the input and
    # `carry` jointly
    for x in x_rest
        (y, carry), st_lstm = s.lstm_cell((x, carry), ps.lstm_cell, st_lstm)
    end
    # After running through the sequence we will pass the output through the classifier
    y, st_classifier = s.classifier(y, ps.classifier, st.classifier)
    # Finally remember to create the updated state
    st = merge(st, (classifier=st_classifier, lstm_cell=st_lstm))
    return vec(y), st
end
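
Continuing the hypothetical snippet from above, a single forward pass on a dummy batch returns one probability per sequence:

julia
x = rand(Float32, 2, 50, 3)  # (features, seq_len, batch)
ŷ, st_ = model(x, ps, st)
size(ŷ)  # (3,)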

Using the @compact API

We can also define the model using the Lux.@compact API, which is a more concise way of defining models. This macro automatically handles the boilerplate for you, so we recommend it for defining custom layers.

julia
function SpiralClassifierCompact(in_dims, hidden_dims, out_dims)
    lstm_cell = LSTMCell(in_dims => hidden_dims)
    classifier = Dense(hidden_dims => out_dims, sigmoid)
    return @compact(; lstm_cell, classifier) do x::AbstractArray{T,3} where {T}
        x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
        y, carry = lstm_cell(x_init)
        for x in x_rest
            y, carry = lstm_cell((x, carry))
        end
        @return vec(classifier(y))
    end
end
SpiralClassifierCompact (generic function with 1 method)

Defining Accuracy, Loss and Optimiser

Now let's define the binary cross-entropy loss. Typically it is recommended to use logitbinarycrossentropy since it is more numerically stable, but for the sake of simplicity we will use binarycrossentropy here (a sketch of the logit variant follows the code block below).

julia
const lossfn = BinaryCrossEntropyLoss()

function compute_loss(model, ps, st, (x, y))
    ŷ, st_ = model(x, ps, st)
    loss = lossfn(ŷ, y)
    return loss, st_, (; y_pred=ŷ)
end

matches(y_pred, y_true) = sum((y_pred .> 0.5f0) .== y_true)
accuracy(y_pred, y_true) = matches(y_pred, y_true) / length(y_pred)
accuracy (generic function with 1 method)
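
As an aside, the stabler logit formulation mentioned above would look roughly like this (a sketch, not used in the rest of this tutorial):

julia
# Sketch: emit raw logits from the head (no `sigmoid`) and let the loss apply
# the sigmoid internally in a numerically stable fashion.
logit_classifier = Dense(8 => 1)
logit_lossfn = BinaryCrossEntropyLoss(; logits=Val(true))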

Training the Model

julia
function main(model_type)
    dev = reactant_device()
    cdev = cpu_device()

    # Get the dataloaders
    train_loader, val_loader = dev(get_dataloaders())

    # Create the model
    model = model_type(2, 8, 1)
    ps, st = dev(Lux.setup(Random.default_rng(), model))

    train_state = Training.TrainState(model, ps, st, Adam(0.01f0))
    model_compiled = if dev isa ReactantDevice
        Reactant.with_config(;
            dot_general_precision=PrecisionConfig.HIGH,
            convolution_precision=PrecisionConfig.HIGH,
        ) do
            @compile model(first(train_loader)[1], ps, Lux.testmode(st))
        end
    else
        model
    end
    ad = dev isa ReactantDevice ? AutoEnzyme() : AutoZygote()

    for epoch in 1:25
        # Train the model
        total_loss = 0.0f0
        total_samples = 0
        for (x, y) in train_loader
            (_, loss, _, train_state) = Training.single_train_step!(
                ad, lossfn, (x, y), train_state
            )
            total_loss += loss * length(y)
            total_samples += length(y)
        end
        @printf "Epoch [%3d]: Loss %4.5f\n" epoch (total_loss / total_samples)

        # Validate the model
        total_acc = 0.0f0
        total_loss = 0.0f0
        total_samples = 0

        st_ = Lux.testmode(train_state.states)
        for (x, y) in val_loader
            ŷ, st_ = model_compiled(x, train_state.parameters, st_)
            ŷ, y = cdev(ŷ), cdev(y)
            total_acc += accuracy(ŷ, y) * length(y)
            total_loss += lossfn(ŷ, y) * length(y)
            total_samples += length(y)
        end

        @printf "Validation:\tLoss %4.5f\tAccuracy %4.5f\n" (total_loss / total_samples) (
            total_acc / total_samples
        )
    end

    return cpu_device()((train_state.parameters, train_state.states))
end

ps_trained, st_trained = main(SpiralClassifier)
┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
2025-07-14 00:08:15.575896: I external/xla/xla/service/service.cc:153] XLA service 0x11b87690 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2025-07-14 00:08:15.575930: I external/xla/xla/service/service.cc:161]   StreamExecutor device (0): NVIDIA A100-PCIE-40GB MIG 1g.5gb, Compute Capability 8.0
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1752451695.576756 2724921 se_gpu_pjrt_client.cc:1370] Using BFC allocator.
I0000 00:00:1752451695.576836 2724921 gpu_helpers.cc:136] XLA backend allocating 3825205248 bytes on device 0 for BFCAllocator.
I0000 00:00:1752451695.576904 2724921 gpu_helpers.cc:177] XLA backend will use up to 1275068416 bytes on device 0 for CollectiveBFCAllocator.
I0000 00:00:1752451695.590949 2724921 cuda_dnn.cc:471] Loaded cuDNN version 90800
Epoch [  1]: Loss 0.59114
Validation:	Loss 0.53502	Accuracy 1.00000
Epoch [  2]: Loss 0.50149
Validation:	Loss 0.43914	Accuracy 1.00000
Epoch [  3]: Loss 0.41522
Validation:	Loss 0.34856	Accuracy 1.00000
Epoch [  4]: Loss 0.33697
Validation:	Loss 0.27969	Accuracy 1.00000
Epoch [  5]: Loss 0.27521
Validation:	Loss 0.22387	Accuracy 1.00000
Epoch [  6]: Loss 0.22145
Validation:	Loss 0.17628	Accuracy 1.00000
Epoch [  7]: Loss 0.17292
Validation:	Loss 0.13671	Accuracy 1.00000
Epoch [  8]: Loss 0.13349
Validation:	Loss 0.10568	Accuracy 1.00000
Epoch [  9]: Loss 0.10233
Validation:	Loss 0.08232	Accuracy 1.00000
Epoch [ 10]: Loss 0.08034
Validation:	Loss 0.06535	Accuracy 1.00000
Epoch [ 11]: Loss 0.06389
Validation:	Loss 0.05309	Accuracy 1.00000
Epoch [ 12]: Loss 0.05202
Validation:	Loss 0.04424	Accuracy 1.00000
Epoch [ 13]: Loss 0.04367
Validation:	Loss 0.03760	Accuracy 1.00000
Epoch [ 14]: Loss 0.03725
Validation:	Loss 0.03238	Accuracy 1.00000
Epoch [ 15]: Loss 0.03207
Validation:	Loss 0.02826	Accuracy 1.00000
Epoch [ 16]: Loss 0.02806
Validation:	Loss 0.02501	Accuracy 1.00000
Epoch [ 17]: Loss 0.02498
Validation:	Loss 0.02244	Accuracy 1.00000
Epoch [ 18]: Loss 0.02251
Validation:	Loss 0.02035	Accuracy 1.00000
Epoch [ 19]: Loss 0.02049
Validation:	Loss 0.01861	Accuracy 1.00000
Epoch [ 20]: Loss 0.01881
Validation:	Loss 0.01710	Accuracy 1.00000
Epoch [ 21]: Loss 0.01728
Validation:	Loss 0.01575	Accuracy 1.00000
Epoch [ 22]: Loss 0.01594
Validation:	Loss 0.01449	Accuracy 1.00000
Epoch [ 23]: Loss 0.01475
Validation:	Loss 0.01333	Accuracy 1.00000
Epoch [ 24]: Loss 0.01366
Validation:	Loss 0.01237	Accuracy 1.00000
Epoch [ 25]: Loss 0.01278
Validation:	Loss 0.01162	Accuracy 1.00000

We can also train the compact model with the exact same code!

julia
ps_trained2, st_trained2 = main(SpiralClassifierCompact)
┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [  1]: Loss 0.48525
Validation:	Loss 0.45275	Accuracy 1.00000
Epoch [  2]: Loss 0.38605
Validation:	Loss 0.37046	Accuracy 1.00000
Epoch [  3]: Loss 0.31090
Validation:	Loss 0.30490	Accuracy 1.00000
Epoch [  4]: Loss 0.25221
Validation:	Loss 0.24660	Accuracy 1.00000
Epoch [  5]: Loss 0.19908
Validation:	Loss 0.19161	Accuracy 1.00000
Epoch [  6]: Loss 0.15079
Validation:	Loss 0.14143	Accuracy 1.00000
Epoch [  7]: Loss 0.11099
Validation:	Loss 0.10279	Accuracy 1.00000
Epoch [  8]: Loss 0.08207
Validation:	Loss 0.07589	Accuracy 1.00000
Epoch [  9]: Loss 0.05977
Validation:	Loss 0.05753	Accuracy 1.00000
Epoch [ 10]: Loss 0.04676
Validation:	Loss 0.04517	Accuracy 1.00000
Epoch [ 11]: Loss 0.03735
Validation:	Loss 0.03680	Accuracy 1.00000
Epoch [ 12]: Loss 0.03096
Validation:	Loss 0.03081	Accuracy 1.00000
Epoch [ 13]: Loss 0.02614
Validation:	Loss 0.02597	Accuracy 1.00000
Epoch [ 14]: Loss 0.02175
Validation:	Loss 0.02160	Accuracy 1.00000
Epoch [ 15]: Loss 0.01804
Validation:	Loss 0.01743	Accuracy 1.00000
Epoch [ 16]: Loss 0.01460
Validation:	Loss 0.01393	Accuracy 1.00000
Epoch [ 17]: Loss 0.01175
Validation:	Loss 0.01156	Accuracy 1.00000
Epoch [ 18]: Loss 0.01007
Validation:	Loss 0.01005	Accuracy 1.00000
Epoch [ 19]: Loss 0.00885
Validation:	Loss 0.00901	Accuracy 1.00000
Epoch [ 20]: Loss 0.00804
Validation:	Loss 0.00820	Accuracy 1.00000
Epoch [ 21]: Loss 0.00731
Validation:	Loss 0.00753	Accuracy 1.00000
Epoch [ 22]: Loss 0.00679
Validation:	Loss 0.00696	Accuracy 1.00000
Epoch [ 23]: Loss 0.00624
Validation:	Loss 0.00645	Accuracy 1.00000
Epoch [ 24]: Loss 0.00579
Validation:	Loss 0.00601	Accuracy 1.00000
Epoch [ 25]: Loss 0.00542
Validation:	Loss 0.00562	Accuracy 1.00000

Saving the Model

We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to the CPU before saving. Additionally, we recommend that you don't save the model struct itself and only save the parameters and states.

julia
@save "trained_model.jld2" ps_trained st_trained

Let's try loading the model back:

julia
@load "trained_model.jld2" ps_trained st_trained
2-element Vector{Symbol}:
 :ps_trained
 :st_trained
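
To run inference with the restored parameters, we can rebuild the model structure and call it in test mode (a sketch mirroring the training setup above):

julia
model = SpiralClassifier(2, 8, 1)
x = rand(Float32, 2, 50, 4)  # a dummy batch of 4 sequences
ŷ, _ = model(x, ps_trained, Lux.testmode(st_trained))
ŷ .> 0.5f0  # thresholded class predictions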

Appendix

julia
using InteractiveUtils
InteractiveUtils.versioninfo()

if @isdefined(MLDataDevices)
    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
        println()
        CUDA.versioninfo()
    end

    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
        println()
        AMDGPU.versioninfo()
    end
end
Julia Version 1.11.6
Commit 9615af0f269 (2025-07-09 12:58 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 48 × AMD EPYC 7402 24-Core Processor
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
Environment:
  JULIA_CPU_THREADS = 2
  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
  JULIA_PKG_SERVER = 
  JULIA_NUM_THREADS = 48
  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
  JULIA_PKG_PRECOMPILE_AUTO = 0
  JULIA_DEBUG = Literate
  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6

This page was generated using Literate.jl.