Training a Simple LSTM
In this tutorial we will use a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:
Create custom Lux models.
Become familiar with the Lux recurrent neural network API.
Train models using Optimisers.jl and Zygote.jl.
Package Imports
Note: If you wish to use AutoZygote() for automatic differentiation, add Zygote to your project dependencies and include using Zygote.
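For example (a sketch, assuming you manage dependencies with Pkg):

using Pkg; Pkg.add("Zygote")  # one-time: add Zygote to the active project
using Zygote                  # load it so AutoZygote() can be used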
using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random

Dataset
We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary label indicating whether the sequence is clockwise or anticlockwise.
function create_dataset(; dataset_size=1000, sequence_length=50)
    # Create the spirals
    data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
    # Get the labels
    labels = vcat(repeat([0.0f0], dataset_size ÷ 2), repeat([1.0f0], dataset_size ÷ 2))
    clockwise_spirals = [
        reshape(d[1][:, 1:sequence_length], :, sequence_length, 1) for
        d in data[1:(dataset_size ÷ 2)]
    ]
    anticlockwise_spirals = [
        reshape(d[1][:, (sequence_length + 1):end], :, sequence_length, 1) for
        d in data[((dataset_size ÷ 2) + 1):end]
    ]
    x_data = Float32.(cat(clockwise_spirals..., anticlockwise_spirals...; dims=3))
    return x_data, labels
end
function get_dataloaders(; dataset_size=1000, sequence_length=50)
    x_data, labels = create_dataset(; dataset_size, sequence_length)
    # Split the dataset
    (x_train, y_train), (x_val, y_val) = splitobs((x_data, labels); at=0.8, shuffle=true)
    # Create DataLoaders
    return (
        # Use DataLoader to automatically minibatch and shuffle the data
        DataLoader(
            collect.((x_train, y_train)); batchsize=128, shuffle=true, partial=false
        ),
        # Don't shuffle the validation data
        DataLoader(collect.((x_val, y_val)); batchsize=128, shuffle=false, partial=false),
    )
end
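As a quick sanity check, we can inspect one training batch (an illustrative sketch, not part of the tutorial pipeline; the shapes follow from the construction above):

train_loader, val_loader = get_dataloaders()
x, y = first(train_loader)
size(x)  # (2, 50, 128): features × sequence_length × batch_size
size(y)  # (128,): one binary label per sequence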
Creating a Classifier

We will be extending the Lux.AbstractLuxContainerLayer type for our custom model since it will contain an LSTM block and a classifier head.
We pass the field names lstm_cell and classifier to the type to ensure that the parameters and states are automatically populated and we don't have to define Lux.initialparameters and Lux.initialstates.
To understand more about container layers, please look at Container Layer.
struct SpiralClassifier{L,C} <: AbstractLuxContainerLayer{(:lstm_cell, :classifier)}
    lstm_cell::L
    classifier::C
end

We won't define the model from scratch but rather use Lux.LSTMCell and Lux.Dense.
function SpiralClassifier(in_dims, hidden_dims, out_dims)
    return SpiralClassifier(
        LSTMCell(in_dims => hidden_dims), Dense(hidden_dims => out_dims, sigmoid)
    )
end

We could use the built-in Lux blocks – Recurrence(LSTMCell(in_dims => hidden_dims)) – instead of defining the forward pass ourselves, but we will write it out here for illustration.
Now we need to define the behavior of the Classifier when it is invoked.
function (s::SpiralClassifier)(
    x::AbstractArray{T,3}, ps::NamedTuple, st::NamedTuple
) where {T}
    # First we run the sequence through the LSTM cell.
    # The first call to the LSTM cell creates the initial hidden state.
    # Note that the parameters and states are automatically populated into fields called
    # `lstm_cell` and `classifier`. We use `eachslice` to get the elements in the sequence
    # without copying, and `Iterators.peel` to split out the first element for LSTM
    # initialization.
    x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
    (y, carry), st_lstm = s.lstm_cell(x_init, ps.lstm_cell, st.lstm_cell)
    # Now that we have the hidden state and memory in `carry`, we pass the input and
    # `carry` jointly
    for x in x_rest
        (y, carry), st_lstm = s.lstm_cell((x, carry), ps.lstm_cell, st_lstm)
    end
    # After running through the sequence, we pass the final output through the classifier
    y, st_classifier = s.classifier(y, ps.classifier, st.classifier)
    # Finally, remember to create the updated state
    st = merge(st, (classifier=st_classifier, lstm_cell=st_lstm))
    return vec(y), st
end
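Before training, we can smoke-test the forward pass (a sketch; the seed, input sizes, and variable names here are arbitrary choices for illustration):

rng = Xoshiro(0)                  # arbitrary fixed seed (Xoshiro is from Random)
model = SpiralClassifier(2, 8, 1)
ps, st = Lux.setup(rng, model)
x = rand(rng, Float32, 2, 50, 4)  # features × sequence_length × batch_size
ŷ, st_ = model(x, ps, st)
size(ŷ)                           # (4,): one sigmoid output per sequence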
Using the @compact API

We can also define the model using the Lux.@compact API, which is a more concise way of defining models. This macro automatically handles the boilerplate code for you, and as such we recommend it for defining custom layers.
function SpiralClassifierCompact(in_dims, hidden_dims, out_dims)
    lstm_cell = LSTMCell(in_dims => hidden_dims)
    classifier = Dense(hidden_dims => out_dims, sigmoid)
    return @compact(; lstm_cell, classifier) do x::AbstractArray{T,3} where {T}
        x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
        y, carry = lstm_cell(x_init)
        for x in x_rest
            y, carry = lstm_cell((x, carry))
        end
        @return vec(classifier(y))
    end
end
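The compact model is a drop-in replacement, called with the same (x, ps, st) convention (a sketch, reusing rng and x from the smoke test above):

model2 = SpiralClassifierCompact(2, 8, 1)
ps2, st2 = Lux.setup(rng, model2)
ŷ2, _ = model2(x, ps2, st2)  # same output shape and semantics as SpiralClassifier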
Defining Accuracy, Loss and Optimiser

Now let's define the binary cross-entropy loss. Typically it is recommended to use logitbinarycrossentropy since it is more numerically stable, but for the sake of simplicity we will use binarycrossentropy.
const lossfn = BinaryCrossEntropyLoss()
function compute_loss(model, ps, st, (x, y))
    ŷ, st_ = model(x, ps, st)
    loss = lossfn(ŷ, y)
    return loss, st_, (; y_pred=ŷ)
end
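If you do want the numerically stable formulation, one option (a sketch; this assumes you also remove the sigmoid from the classifier head so the model emits raw logits) is the logit form of the loss:

# Assumes classifier = Dense(hidden_dims => out_dims), i.e. WITHOUT sigmoid
const logit_lossfn = BinaryCrossEntropyLoss(; logits=Val(true))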
matches(y_pred, y_true) = sum((y_pred .> 0.5f0) .== y_true)
accuracy(y_pred, y_true) = matches(y_pred, y_true) / length(y_pred)
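For example, on made-up predictions (purely illustrative values):

ŷ = Float32[0.9, 0.2, 0.7, 0.4]  # hypothetical predicted probabilities
y = Float32[1, 0, 0, 0]          # hypothetical ground-truth labels
matches(ŷ, y)                    # 3: thresholded predictions are [1, 0, 1, 0]
accuracy(ŷ, y)                   # 0.75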
Training the Model

function main(model_type)
    dev = reactant_device()
    cdev = cpu_device()

    # Get the dataloaders
    train_loader, val_loader = get_dataloaders() |> dev

    # Create the model
    model = model_type(2, 8, 1)
    ps, st = Lux.setup(Random.default_rng(), model) |> dev

    train_state = Training.TrainState(model, ps, st, Adam(0.01f0))
    model_compiled = if dev isa ReactantDevice
        @compile model(first(train_loader)[1], ps, Lux.testmode(st))
    else
        model
    end
    ad = dev isa ReactantDevice ? AutoEnzyme() : AutoZygote()

    for epoch in 1:25
        # Train the model
        total_loss = 0.0f0
        total_samples = 0
        for (x, y) in train_loader
            (_, loss, _, train_state) = Training.single_train_step!(
                ad, lossfn, (x, y), train_state
            )
            total_loss += loss * length(y)
            total_samples += length(y)
        end
        @printf("Epoch [%3d]: Loss %4.5f\n", epoch, total_loss / total_samples)

        # Validate the model
        total_acc = 0.0f0
        total_loss = 0.0f0
        total_samples = 0
        st_ = Lux.testmode(train_state.states)
        for (x, y) in val_loader
            ŷ, st_ = model_compiled(x, train_state.parameters, st_)
            ŷ, y = cdev(ŷ), cdev(y)
            total_acc += accuracy(ŷ, y) * length(y)
            total_loss += lossfn(ŷ, y) * length(y)
            total_samples += length(y)
        end
        @printf(
            "Validation:\tLoss %4.5f\tAccuracy %4.5f\n",
            total_loss / total_samples,
            total_acc / total_samples
        )
    end

    return (train_state.parameters, train_state.states) |> cdev
end

ps_trained, st_trained = main(SpiralClassifier)

┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore ~/work/Lux.jl/Lux.jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [ 1]: Loss 0.87522
Validation: Loss 0.78992 Accuracy 0.00000
Epoch [ 2]: Loss 0.75719
Validation: Loss 0.71213 Accuracy 0.36719
Epoch [ 3]: Loss 0.68375
Validation: Loss 0.65675 Accuracy 0.46094
Epoch [ 4]: Loss 0.62879
Validation: Loss 0.60832 Accuracy 0.46094
Epoch [ 5]: Loss 0.57977
Validation: Loss 0.56161 Accuracy 1.00000
Epoch [ 6]: Loss 0.52951
Validation: Loss 0.50644 Accuracy 1.00000
Epoch [ 7]: Loss 0.46817
Validation: Loss 0.44978 Accuracy 1.00000
Epoch [ 8]: Loss 0.41097
Validation: Loss 0.39480 Accuracy 1.00000
Epoch [ 9]: Loss 0.35755
Validation: Loss 0.34201 Accuracy 1.00000
Epoch [ 10]: Loss 0.30762
Validation: Loss 0.29316 Accuracy 1.00000
Epoch [ 11]: Loss 0.26208
Validation: Loss 0.25332 Accuracy 1.00000
Epoch [ 12]: Loss 0.22626
Validation: Loss 0.22281 Accuracy 1.00000
Epoch [ 13]: Loss 0.19777
Validation: Loss 0.19735 Accuracy 1.00000
Epoch [ 14]: Loss 0.17590
Validation: Loss 0.17602 Accuracy 1.00000
Epoch [ 15]: Loss 0.15807
Validation: Loss 0.15768 Accuracy 1.00000
Epoch [ 16]: Loss 0.14291
Validation: Loss 0.14193 Accuracy 1.00000
Epoch [ 17]: Loss 0.12718
Validation: Loss 0.12795 Accuracy 1.00000
Epoch [ 18]: Loss 0.11439
Validation: Loss 0.11482 Accuracy 1.00000
Epoch [ 19]: Loss 0.10161
Validation: Loss 0.10133 Accuracy 1.00000
Epoch [ 20]: Loss 0.08974
Validation: Loss 0.08561 Accuracy 1.00000
Epoch [ 21]: Loss 0.07398
Validation: Loss 0.06834 Accuracy 1.00000
Epoch [ 22]: Loss 0.05864
Validation: Loss 0.05537 Accuracy 1.00000
Epoch [ 23]: Loss 0.04873
Validation: Loss 0.04711 Accuracy 1.00000
Epoch [ 24]: Loss 0.04207
Validation: Loss 0.04080 Accuracy 1.00000
Epoch [ 25]: Loss 0.03617
Validation: Loss 0.03522 Accuracy 1.00000

We can also train the compact model with the exact same code!
ps_trained2, st_trained2 = main(SpiralClassifierCompact)

┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore ~/work/Lux.jl/Lux.jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [ 1]: Loss 0.64236
Validation: Loss 0.60504 Accuracy 0.50000
Epoch [ 2]: Loss 0.57763
Validation: Loss 0.54052 Accuracy 1.00000
Epoch [ 3]: Loss 0.50355
Validation: Loss 0.44802 Accuracy 1.00000
Epoch [ 4]: Loss 0.40310
Validation: Loss 0.34010 Accuracy 1.00000
Epoch [ 5]: Loss 0.29222
Validation: Loss 0.22399 Accuracy 1.00000
Epoch [ 6]: Loss 0.18376
Validation: Loss 0.13786 Accuracy 1.00000
Epoch [ 7]: Loss 0.11304
Validation: Loss 0.08601 Accuracy 1.00000
Epoch [ 8]: Loss 0.07478
Validation: Loss 0.06107 Accuracy 1.00000
Epoch [ 9]: Loss 0.05445
Validation: Loss 0.04633 Accuracy 1.00000
Epoch [ 10]: Loss 0.04223
Validation: Loss 0.03708 Accuracy 1.00000
Epoch [ 11]: Loss 0.03453
Validation: Loss 0.03095 Accuracy 1.00000
Epoch [ 12]: Loss 0.02909
Validation: Loss 0.02658 Accuracy 1.00000
Epoch [ 13]: Loss 0.02516
Validation: Loss 0.02331 Accuracy 1.00000
Epoch [ 14]: Loss 0.02229
Validation: Loss 0.02078 Accuracy 1.00000
Epoch [ 15]: Loss 0.01992
Validation: Loss 0.01876 Accuracy 1.00000
Epoch [ 16]: Loss 0.01811
Validation: Loss 0.01709 Accuracy 1.00000
Epoch [ 17]: Loss 0.01649
Validation: Loss 0.01569 Accuracy 1.00000
Epoch [ 18]: Loss 0.01523
Validation: Loss 0.01449 Accuracy 1.00000
Epoch [ 19]: Loss 0.01406
Validation: Loss 0.01345 Accuracy 1.00000
Epoch [ 20]: Loss 0.01308
Validation: Loss 0.01254 Accuracy 1.00000
Epoch [ 21]: Loss 0.01219
Validation: Loss 0.01173 Accuracy 1.00000
Epoch [ 22]: Loss 0.01143
Validation: Loss 0.01100 Accuracy 1.00000
Epoch [ 23]: Loss 0.01073
Validation: Loss 0.01034 Accuracy 1.00000
Epoch [ 24]: Loss 0.01009
Validation: Loss 0.00974 Accuracy 1.00000
Epoch [ 25]: Loss 0.00952
Validation: Loss 0.00920 Accuracy 1.00000

Saving the Model
We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to the CPU before saving. Additionally, we recommend that you don't save the model struct; save only the parameters and states.
@save "trained_model.jld2" ps_trained st_trainedLet's try loading the model
@load "trained_model.jld2" ps_trained st_trained2-element Vector{Symbol}:
:ps_trained
:st_trainedAppendix
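To run inference with the loaded parameters, rebuild the model with the same architecture and call it in test mode (a sketch; the dummy input and the label reading follow the dataset construction above):

model = SpiralClassifier(2, 8, 1)  # must match the trained architecture
x = rand(Float32, 2, 50, 1)        # one dummy sequence
ŷ, _ = model(x, ps_trained, Lux.testmode(st_trained))
ŷ[1] > 0.5f0                       # true ⇒ predicted anticlockwise (label 1.0f0)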
Appendix

using InteractiveUtils
InteractiveUtils.versioninfo()

if @isdefined(MLDataDevices)
    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
        println()
        CUDA.versioninfo()
    end
    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
        println()
        AMDGPU.versioninfo()
    end
end

Julia Version 1.11.7
Commit f2b3dbda30a (2025-09-08 12:10 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 4 × AMD EPYC 7763 64-Core Processor
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, znver3)
Threads: 4 default, 0 interactive, 2 GC (on 4 virtual cores)
Environment:
JULIA_DEBUG = Literate
LD_LIBRARY_PATH =
JULIA_NUM_THREADS = 4
JULIA_CPU_HARD_MEMORY_LIMIT = 100%
JULIA_PKG_PRECOMPILE_AUTO = 0

This page was generated using Literate.jl.