Training a Simple LSTM
In this tutorial, we will train a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:
Create custom Lux models.
Become familiar with the Lux recurrent neural network API.
Train models using Optimisers.jl and Zygote.jl.
Package Imports
Note: If you wish to use AutoZygote() for automatic differentiation, add Zygote to your project dependencies and include using Zygote.
using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random

Dataset
We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data, we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary value indicating whether the sequence is clockwise or anticlockwise.
function create_dataset(; dataset_size=1000, sequence_length=50)
    # Create the spirals
    data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
    # Get the labels
    labels = vcat(repeat([0.0f0], dataset_size ÷ 2), repeat([1.0f0], dataset_size ÷ 2))
    clockwise_spirals = [
        reshape(d[1][:, 1:sequence_length], :, sequence_length, 1) for
        d in data[1:(dataset_size ÷ 2)]
    ]
    anticlockwise_spirals = [
        reshape(d[1][:, (sequence_length + 1):end], :, sequence_length, 1) for
        d in data[((dataset_size ÷ 2) + 1):end]
    ]
    x_data = Float32.(cat(clockwise_spirals..., anticlockwise_spirals...; dims=3))
    return x_data, labels
end
function get_dataloaders(; dataset_size=1000, sequence_length=50)
    x_data, labels = create_dataset(; dataset_size, sequence_length)
    # Split the dataset
    (x_train, y_train), (x_val, y_val) = splitobs((x_data, labels); at=0.8, shuffle=true)
    # Create DataLoaders
    return (
        # Use DataLoader to automatically minibatch and shuffle the data
        DataLoader(
            collect.((x_train, y_train)); batchsize=128, shuffle=true, partial=false
        ),
        # Don't shuffle the validation data
        DataLoader(collect.((x_val, y_val)); batchsize=128, shuffle=false, partial=false),
    )
end
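As a quick sanity check, we can materialize one training batch and confirm the layout described earlier; a minimal sketch, assuming the default keyword arguments above:

train_loader, val_loader = get_dataloaders()
x, y = first(train_loader)
size(x)   # (2, 50, 128): 2 features × sequence_length × batchsize
size(y)   # (128,): one binary label per sequence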
Creating a Classifier

We will be extending the Lux.AbstractLuxContainerLayer type for our custom model, since it will contain an LSTM block and a classifier head.

We pass the field names lstm_cell and classifier to the type to ensure that the parameters and states are automatically populated and we don't have to define Lux.initialparameters and Lux.initialstates.

To understand more about container layers, please look at Container Layer.
struct SpiralClassifier{L,C} <: AbstractLuxContainerLayer{(:lstm_cell, :classifier)}
    lstm_cell::L
    classifier::C
end
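To see this auto-population in action, we can set up a throwaway instance via the default struct constructor; a minimal sketch, with arbitrary layer sizes:

model = SpiralClassifier(LSTMCell(2 => 8), Dense(8 => 1, sigmoid))
ps, st = Lux.setup(Random.default_rng(), model)
keys(ps)   # (:lstm_cell, :classifier), one entry per declared field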
We won't define the model from scratch; rather, we will use Lux.LSTMCell and Lux.Dense.

function SpiralClassifier(in_dims, hidden_dims, out_dims)
    return SpiralClassifier(
        LSTMCell(in_dims => hidden_dims), Dense(hidden_dims => out_dims, sigmoid)
    )
end

We could use the stock Lux block Recurrence(LSTMCell(in_dims => hidden_dims)) instead of defining the recurrence ourselves (a sketch follows below), but we will spell it out for the sake of illustration.
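For reference, a minimal sketch of that stock-block version (the name SpiralClassifierStock is ours; by default, Recurrence returns only the hidden state after the last timestep, which is exactly what the classifier head needs):

function SpiralClassifierStock(in_dims, hidden_dims, out_dims)
    return Chain(
        Recurrence(LSTMCell(in_dims => hidden_dims)),  # run the cell over the time dimension
        Dense(hidden_dims => out_dims, sigmoid),       # classifier head on the final hidden state
        vec,                                           # flatten the 1 × batch_size output
    )
end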
Now we need to define the behavior of the Classifier when it is invoked.
function (s::SpiralClassifier)(
    x::AbstractArray{T,3}, ps::NamedTuple, st::NamedTuple
) where {T}
    # First we will have to run the sequence through the LSTM Cell.
    # The first call to LSTM Cell will create the initial hidden state.
    # See that the parameters and states are automatically populated into a field called
    # `lstm_cell`. We use `eachslice` to get the elements in the sequence without copying,
    # and `Iterators.peel` to split out the first element for LSTM initialization.
    x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
    (y, carry), st_lstm = s.lstm_cell(x_init, ps.lstm_cell, st.lstm_cell)
    # Now that we have the hidden state and memory in `carry`, we will pass the input and
    # `carry` jointly.
    for x in x_rest
        (y, carry), st_lstm = s.lstm_cell((x, carry), ps.lstm_cell, st_lstm)
    end
    # After running through the sequence, we will pass the output through the classifier.
    y, st_classifier = s.classifier(y, ps.classifier, st.classifier)
    # Finally, remember to return the updated state.
    st = merge(st, (classifier=st_classifier, lstm_cell=st_lstm))
    return vec(y), st
end
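With the forward pass defined, a quick smoke test confirms the expected shapes; a minimal sketch, where the random input is an arbitrary stand-in for a real batch:

model = SpiralClassifier(2, 8, 1)
ps, st = Lux.setup(Random.default_rng(), model)
x = rand(Float32, 2, 50, 16)   # 2 features × 50 timesteps × batch of 16
y, st_ = model(x, ps, st)
size(y)   # (16,): one probability per sequence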
Using the @compact API

We can also define the model using the Lux.@compact API, which is a more concise way of defining models. This macro automatically handles the boilerplate code for you, and as such we recommend it for defining custom layers.
function SpiralClassifierCompact(in_dims, hidden_dims, out_dims)
    lstm_cell = LSTMCell(in_dims => hidden_dims)
    classifier = Dense(hidden_dims => out_dims, sigmoid)
    return @compact(; lstm_cell, classifier) do x::AbstractArray{T,3} where {T}
        x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
        y, carry = lstm_cell(x_init)
        for x in x_rest
            y, carry = lstm_cell((x, carry))
        end
        @return vec(classifier(y))
    end
end

Defining Accuracy, Loss and Optimiser
Now let's define the binary cross-entropy loss. Typically it is recommended to use logitbinarycrossentropy since it is more numerically stable, but for the sake of simplicity we will use binarycrossentropy.
const lossfn = BinaryCrossEntropyLoss()
function compute_loss(model, ps, st, (x, y))
    ŷ, st_ = model(x, ps, st)
    loss = lossfn(ŷ, y)
    return loss, st_, (; y_pred=ŷ)
end
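If you do want the stabler logit formulation mentioned above, two pieces change: the classifier head loses its sigmoid and the loss is constructed in logit mode. A hedged sketch, not used in the rest of this tutorial (logit_classifier and logit_lossfn are illustrative names):

logit_classifier = Dense(8 => 1)                                  # no sigmoid: emits raw logits
const logit_lossfn = BinaryCrossEntropyLoss(; logits=Val(true))   # applies the sigmoid internally, stably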
matches(y_pred, y_true) = sum((y_pred .> 0.5f0) .== y_true)
accuracy(y_pred, y_true) = matches(y_pred, y_true) / length(y_pred)
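For instance, on a toy batch of four predictions (numbers made up for illustration):

matches([0.9f0, 0.4f0, 0.7f0, 0.2f0], [1, 0, 0, 0])    # 3
accuracy([0.9f0, 0.4f0, 0.7f0, 0.2f0], [1, 0, 0, 0])   # 0.75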
Training the Model

function main(model_type)
    dev = reactant_device()
    cdev = cpu_device()

    # Get the dataloaders
    train_loader, val_loader = get_dataloaders() |> dev

    # Create the model
    model = model_type(2, 8, 1)
    ps, st = Lux.setup(Random.default_rng(), model) |> dev

    train_state = Training.TrainState(model, ps, st, Adam(0.01f0))
    model_compiled = if dev isa ReactantDevice
        @compile model(first(train_loader)[1], ps, Lux.testmode(st))
    else
        model
    end
    ad = dev isa ReactantDevice ? AutoEnzyme() : AutoZygote()

    for epoch in 1:25
        # Train the model
        total_loss = 0.0f0
        total_samples = 0
        for (x, y) in train_loader
            (_, loss, _, train_state) = Training.single_train_step!(
                ad, lossfn, (x, y), train_state
            )
            total_loss += loss * length(y)
            total_samples += length(y)
        end
        @printf("Epoch [%3d]: Loss %4.5f\n", epoch, total_loss / total_samples)

        # Validate the model
        total_acc = 0.0f0
        total_loss = 0.0f0
        total_samples = 0
        st_ = Lux.testmode(train_state.states)
        for (x, y) in val_loader
            ŷ, st_ = model_compiled(x, train_state.parameters, st_)
            ŷ, y = cdev(ŷ), cdev(y)
            total_acc += accuracy(ŷ, y) * length(y)
            total_loss += lossfn(ŷ, y) * length(y)
            total_samples += length(y)
        end
        @printf(
            "Validation:\tLoss %4.5f\tAccuracy %4.5f\n",
            total_loss / total_samples,
            total_acc / total_samples
        )
    end

    return (train_state.parameters, train_state.states) |> cdev
end
ps_trained, st_trained = main(SpiralClassifier)

┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore ~/work/Lux.jl/Lux.jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [ 1]: Loss 0.49186
Validation: Loss 0.44366 Accuracy 1.00000
Epoch [ 2]: Loss 0.40700
Validation: Loss 0.35775 Accuracy 1.00000
Epoch [ 3]: Loss 0.31977
Validation: Loss 0.26817 Accuracy 1.00000
Epoch [ 4]: Loss 0.23175
Validation: Loss 0.19112 Accuracy 1.00000
Epoch [ 5]: Loss 0.16322
Validation: Loss 0.13463 Accuracy 1.00000
Epoch [ 6]: Loss 0.11699
Validation: Loss 0.09471 Accuracy 1.00000
Epoch [ 7]: Loss 0.08252
Validation: Loss 0.06946 Accuracy 1.00000
Epoch [ 8]: Loss 0.06104
Validation: Loss 0.05038 Accuracy 1.00000
Epoch [ 9]: Loss 0.04478
Validation: Loss 0.03754 Accuracy 1.00000
Epoch [ 10]: Loss 0.03391
Validation: Loss 0.02972 Accuracy 1.00000
Epoch [ 11]: Loss 0.02735
Validation: Loss 0.02443 Accuracy 1.00000
Epoch [ 12]: Loss 0.02273
Validation: Loss 0.02055 Accuracy 1.00000
Epoch [ 13]: Loss 0.01915
Validation: Loss 0.01735 Accuracy 1.00000
Epoch [ 14]: Loss 0.01613
Validation: Loss 0.01459 Accuracy 1.00000
Epoch [ 15]: Loss 0.01353
Validation: Loss 0.01227 Accuracy 1.00000
Epoch [ 16]: Loss 0.01145
Validation: Loss 0.01044 Accuracy 1.00000
Epoch [ 17]: Loss 0.00984
Validation: Loss 0.00909 Accuracy 1.00000
Epoch [ 18]: Loss 0.00866
Validation: Loss 0.00811 Accuracy 1.00000
Epoch [ 19]: Loss 0.00778
Validation: Loss 0.00735 Accuracy 1.00000
Epoch [ 20]: Loss 0.00710
Validation: Loss 0.00674 Accuracy 1.00000
Epoch [ 21]: Loss 0.00652
Validation: Loss 0.00622 Accuracy 1.00000
Epoch [ 22]: Loss 0.00603
Validation: Loss 0.00578 Accuracy 1.00000
Epoch [ 23]: Loss 0.00562
Validation: Loss 0.00540 Accuracy 1.00000
Epoch [ 24]: Loss 0.00525
Validation: Loss 0.00505 Accuracy 1.00000
Epoch [ 25]: Loss 0.00492
Validation: Loss 0.00475 Accuracy 1.00000

We can also train the compact model with the exact same code!
ps_trained2, st_trained2 = main(SpiralClassifierCompact)

┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore ~/work/Lux.jl/Lux.jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [ 1]: Loss 0.57609
Validation: Loss 0.49708 Accuracy 1.00000
Epoch [ 2]: Loss 0.46201
Validation: Loss 0.40478 Accuracy 1.00000
Epoch [ 3]: Loss 0.37232
Validation: Loss 0.33042 Accuracy 1.00000
Epoch [ 4]: Loss 0.29955
Validation: Loss 0.26404 Accuracy 1.00000
Epoch [ 5]: Loss 0.23541
Validation: Loss 0.20433 Accuracy 1.00000
Epoch [ 6]: Loss 0.18004
Validation: Loss 0.15316 Accuracy 1.00000
Epoch [ 7]: Loss 0.13447
Validation: Loss 0.11283 Accuracy 1.00000
Epoch [ 8]: Loss 0.09976
Validation: Loss 0.08405 Accuracy 1.00000
Epoch [ 9]: Loss 0.07567
Validation: Loss 0.06496 Accuracy 1.00000
Epoch [ 10]: Loss 0.05958
Validation: Loss 0.05220 Accuracy 1.00000
Epoch [ 11]: Loss 0.04842
Validation: Loss 0.04303 Accuracy 1.00000
Epoch [ 12]: Loss 0.04030
Validation: Loss 0.03616 Accuracy 1.00000
Epoch [ 13]: Loss 0.03418
Validation: Loss 0.03098 Accuracy 1.00000
Epoch [ 14]: Loss 0.02964
Validation: Loss 0.02712 Accuracy 1.00000
Epoch [ 15]: Loss 0.02615
Validation: Loss 0.02416 Accuracy 1.00000
Epoch [ 16]: Loss 0.02345
Validation: Loss 0.02179 Accuracy 1.00000
Epoch [ 17]: Loss 0.02126
Validation: Loss 0.01980 Accuracy 1.00000
Epoch [ 18]: Loss 0.01933
Validation: Loss 0.01803 Accuracy 1.00000
Epoch [ 19]: Loss 0.01763
Validation: Loss 0.01634 Accuracy 1.00000
Epoch [ 20]: Loss 0.01595
Validation: Loss 0.01464 Accuracy 1.00000
Epoch [ 21]: Loss 0.01430
Validation: Loss 0.01303 Accuracy 1.00000
Epoch [ 22]: Loss 0.01283
Validation: Loss 0.01166 Accuracy 1.00000
Epoch [ 23]: Loss 0.01157
Validation: Loss 0.01047 Accuracy 1.00000
Epoch [ 24]: Loss 0.01040
Validation: Loss 0.00942 Accuracy 1.00000
Epoch [ 25]: Loss 0.00942
Validation: Loss 0.00851 Accuracy 1.00000

Saving the Model
We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct itself and only save the parameters and states.
@save "trained_model.jld2" ps_trained st_trained

Let's try loading the model:
@load "trained_model.jld2" ps_trained st_trained

2-element Vector{Symbol}:
 :ps_trained
 :st_trained
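Since only the parameters and states were saved, inference after a reload just rebuilds the model struct and calls it; a minimal sketch, where x_test is a hypothetical batch with the usual 2 × sequence_length × batch_size layout:

model = SpiralClassifier(2, 8, 1)   # must match the architecture that was trained
x_test = rand(Float32, 2, 50, 4)    # hypothetical test batch
y_prob, _ = model(x_test, ps_trained, Lux.testmode(st_trained))
y_prob .> 0.5f0                     # predictions above 0.5 correspond to label 1 (anticlockwise)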
Appendix

using InteractiveUtils
InteractiveUtils.versioninfo()

if @isdefined(MLDataDevices)
    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
        println()
        CUDA.versioninfo()
    end
    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
        println()
        AMDGPU.versioninfo()
    end
end

Julia Version 1.11.8
Commit cf1da5e20e3 (2025-11-06 17:49 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 4 × AMD EPYC 7763 64-Core Processor
  WORD_SIZE: 64
  LLVM: libLLVM-16.0.6 (ORCJIT, znver3)
Threads: 4 default, 0 interactive, 2 GC (on 4 virtual cores)
Environment:
  JULIA_DEBUG = Literate
  LD_LIBRARY_PATH =
  JULIA_NUM_THREADS = 4
  JULIA_CPU_HARD_MEMORY_LIMIT = 100%
  JULIA_PKG_PRECOMPILE_AUTO = 0

This page was generated using Literate.jl.