Training a Simple LSTM
In this tutorial we will use a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:
Create custom Lux models.
Become familiar with the Lux recurrent neural network API.
Train models using Optimisers.jl and Zygote.jl.
Package Imports
Note: If you wish to use AutoZygote() for automatic differentiation, add Zygote to your project dependencies and include using Zygote.
using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random
Dataset
We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals, and wrap this data in an MLUtils.DataLoader. The dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary label indicating whether each sequence is clockwise or anticlockwise.
function create_dataset(; dataset_size=1000, sequence_length=50)
# Create the spirals
data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
# Get the labels
labels = vcat(repeat([0.0f0], dataset_size ÷ 2), repeat([1.0f0], dataset_size ÷ 2))
clockwise_spirals = [
reshape(d[1][:, 1:sequence_length], :, sequence_length, 1) for
d in data[1:(dataset_size ÷ 2)]
]
anticlockwise_spirals = [
reshape(d[1][:, (sequence_length + 1):end], :, sequence_length, 1) for
d in data[((dataset_size ÷ 2) + 1):end]
]
x_data = Float32.(cat(clockwise_spirals..., anticlockwise_spirals...; dims=3))
return x_data, labels
end
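As a quick sanity check, the returned array should follow the 2 × seq_len × batch layout described above. A minimal sketch, assuming the default arguments of create_dataset:
x_data, labels = create_dataset(; dataset_size=1000, sequence_length=50)
# features × sequence length × number of samples
@assert size(x_data) == (2, 50, 1000)
@assert length(labels) == 1000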
function get_dataloaders(; dataset_size=1000, sequence_length=50)
x_data, labels = create_dataset(; dataset_size, sequence_length)
# Split the dataset
(x_train, y_train), (x_val, y_val) = splitobs((x_data, labels); at=0.8, shuffle=true)
# Create DataLoaders
return (
# Use DataLoader to automatically minibatch and shuffle the data
DataLoader(
collect.((x_train, y_train)); batchsize=128, shuffle=true, partial=false
),
# Don't shuffle the validation data
DataLoader(collect.((x_val, y_val)); batchsize=128, shuffle=false, partial=false),
)
end
Creating a Classifier
We will extend the Lux.AbstractLuxContainerLayer type for our custom model, since it will contain an LSTM block and a classifier head.
We pass the field names lstm_cell and classifier to the type to ensure that the parameters and states are automatically populated and we don't have to define Lux.initialparameters and Lux.initialstates.
To understand more about container layers, please look at Container Layer.
struct SpiralClassifier{L,C} <: AbstractLuxContainerLayer{(:lstm_cell, :classifier)}
lstm_cell::L
classifier::C
end
We won't define the model from scratch; instead, we will compose Lux.LSTMCell and Lux.Dense.
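Since we passed the field names to the type, Lux.setup should return parameter and state NamedTuples keyed by those names. A minimal sketch of that check (it relies on the constructor defined just below):
# Hypothetical check: parameters and states are keyed by the registered field names
ps, st = Lux.setup(Random.default_rng(), SpiralClassifier(2, 8, 1))
@assert haskey(ps, :lstm_cell) && haskey(ps, :classifier)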
function SpiralClassifier(in_dims, hidden_dims, out_dims)
return SpiralClassifier(
LSTMCell(in_dims => hidden_dims), Dense(hidden_dims => out_dims, sigmoid)
)
end
We could use the stock Lux block Recurrence(LSTMCell(in_dims => hidden_dims)) instead of writing the recurrence by hand below, but let's still do it for the sake of learning the API.
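For reference, here is a sketch of what that stock-block version might look like, assuming Recurrence's default behavior of iterating over the time dimension of a (features, seq_len, batch) array and returning only the last output:
# Hypothetical equivalent built from stock blocks; `vec` flattens the
# 1 × batch_size output to a vector, mirroring `vec(y)` in the manual version below
model_builtin = Chain(
    Recurrence(LSTMCell(2 => 8)),
    Dense(8 => 1, sigmoid),
    vec,
)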
Now we need to define the behavior of the Classifier when it is invoked.
function (s::SpiralClassifier)(
x::AbstractArray{T,3}, ps::NamedTuple, st::NamedTuple
) where {T}
# First we will have to run the sequence through the LSTM Cell
# The first call to LSTM Cell will create the initial hidden state
# See that the parameters and states are automatically populated into a field called
# `lstm_cell`. We use `eachslice` to get the elements in the sequence without copying,
# and `Iterators.peel` to split out the first element for LSTM initialization.
x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
(y, carry), st_lstm = s.lstm_cell(x_init, ps.lstm_cell, st.lstm_cell)
# Now that we have the hidden state and memory in `carry` we will pass the input and
# `carry` jointly
for x in x_rest
(y, carry), st_lstm = s.lstm_cell((x, carry), ps.lstm_cell, st_lstm)
end
# After running through the sequence we will pass the output through the classifier
y, st_classifier = s.classifier(y, ps.classifier, st.classifier)
# Finally remember to create the updated state
st = merge(st, (classifier=st_classifier, lstm_cell=st_lstm))
return vec(y), st
end
Using the @compact API
We can also define the model using the Lux.@compact API, which is a more concise way of defining models. This macro automatically handles the boilerplate code for you, and as such we recommend it for defining custom layers.
function SpiralClassifierCompact(in_dims, hidden_dims, out_dims)
lstm_cell = LSTMCell(in_dims => hidden_dims)
classifier = Dense(hidden_dims => out_dims, sigmoid)
return @compact(; lstm_cell, classifier) do x::AbstractArray{T,3} where {T}
x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
y, carry = lstm_cell(x_init)
for x in x_rest
y, carry = lstm_cell((x, carry))
end
@return vec(classifier(y))
end
end
Defining Accuracy, Loss and Optimiser
Now let's define the binary cross-entropy loss. Typically it is recommended to use logitbinarycrossentropy since it is more numerically stable, but for the sake of simplicity we will use binarycrossentropy.
const lossfn = BinaryCrossEntropyLoss()
function compute_loss(model, ps, st, (x, y))
ŷ, st_ = model(x, ps, st)
loss = lossfn(ŷ, y)
return loss, st_, (; y_pred=ŷ)
end
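If you do want the numerically stable formulation mentioned above, the loss can operate on raw logits instead. The sketch below assumes BinaryCrossEntropyLoss accepts a logits keyword; it would pair with a classifier head that omits the sigmoid activation:
# Hypothetical logit-space variant: use Dense(hidden_dims => out_dims) without
# `sigmoid` in the model, and let the loss apply the stable log-sigmoid internally
const logit_lossfn = BinaryCrossEntropyLoss(; logits=Val(true))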
matches(y_pred, y_true) = sum((y_pred .> 0.5f0) .== y_true)
accuracy(y_pred, y_true) = matches(y_pred, y_true) / length(y_pred)
Training the Model
function main(model_type)
dev = reactant_device()
cdev = cpu_device()
# Get the dataloaders
train_loader, val_loader = get_dataloaders() |> dev
# Create the model
model = model_type(2, 8, 1)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
train_state = Training.TrainState(model, ps, st, Adam(0.01f0))
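# On a Reactant device we compile the forward pass ahead of time; it is used
# in the validation loop below. On other backends we call the model directly.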
model_compiled = if dev isa ReactantDevice
@compile model(first(train_loader)[1], ps, Lux.testmode(st))
else
model
end
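# Reactant uses Enzyme for automatic differentiation; otherwise we fall back to Zygote.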
ad = dev isa ReactantDevice ? AutoEnzyme() : AutoZygote()
for epoch in 1:25
# Train the model
total_loss = 0.0f0
total_samples = 0
for (x, y) in train_loader
(_, loss, _, train_state) = Training.single_train_step!(
ad, lossfn, (x, y), train_state
)
total_loss += loss * length(y)
total_samples += length(y)
end
@printf("Epoch [%3d]: Loss %4.5f\n", epoch, total_loss / total_samples)
# Validate the model
total_acc = 0.0f0
total_loss = 0.0f0
total_samples = 0
st_ = Lux.testmode(train_state.states)
for (x, y) in val_loader
ŷ, st_ = model_compiled(x, train_state.parameters, st_)
ŷ, y = cdev(ŷ), cdev(y)
total_acc += accuracy(ŷ, y) * length(y)
total_loss += lossfn(ŷ, y) * length(y)
total_samples += length(y)
end
@printf(
"Validation:\tLoss %4.5f\tAccuracy %4.5f\n",
total_loss / total_samples,
total_acc / total_samples
)
end
return (train_state.parameters, train_state.states) |> cdev
end
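Once main returns, you can run the trained model on fresh data. A minimal, hypothetical inference sketch (x stands for any 2 × seq_len × batch Float32 array; Lux.testmode switches the states out of training mode):
# Hypothetical helper, not part of the training script
function predict(model, ps, st, x)
    y, _ = model(x, ps, Lux.testmode(st))
    return y .> 0.5f0   # same 0.5 threshold used by `matches` above
end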
ps_trained, st_trained = main(SpiralClassifier)
┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore ~/work/Lux.jl/Lux.jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [ 1]: Loss 0.58885
Validation: Loss 0.49334 Accuracy 0.96875
Epoch [ 2]: Loss 0.47016
Validation: Loss 0.40542 Accuracy 1.00000
Epoch [ 3]: Loss 0.38202
Validation: Loss 0.32570 Accuracy 1.00000
Epoch [ 4]: Loss 0.30518
Validation: Loss 0.25052 Accuracy 1.00000
Epoch [ 5]: Loss 0.22850
Validation: Loss 0.18812 Accuracy 1.00000
Epoch [ 6]: Loss 0.17281
Validation: Loss 0.14390 Accuracy 1.00000
Epoch [ 7]: Loss 0.13229
Validation: Loss 0.11088 Accuracy 1.00000
Epoch [ 8]: Loss 0.10230
Validation: Loss 0.08679 Accuracy 1.00000
Epoch [ 9]: Loss 0.08068
Validation: Loss 0.06939 Accuracy 1.00000
Epoch [ 10]: Loss 0.06480
Validation: Loss 0.05613 Accuracy 1.00000
Epoch [ 11]: Loss 0.05242
Validation: Loss 0.04520 Accuracy 1.00000
Epoch [ 12]: Loss 0.04169
Validation: Loss 0.03600 Accuracy 1.00000
Epoch [ 13]: Loss 0.03317
Validation: Loss 0.02925 Accuracy 1.00000
Epoch [ 14]: Loss 0.02698
Validation: Loss 0.02422 Accuracy 1.00000
Epoch [ 15]: Loss 0.02248
Validation: Loss 0.02057 Accuracy 1.00000
Epoch [ 16]: Loss 0.01918
Validation: Loss 0.01775 Accuracy 1.00000
Epoch [ 17]: Loss 0.01658
Validation: Loss 0.01543 Accuracy 1.00000
Epoch [ 18]: Loss 0.01448
Validation: Loss 0.01365 Accuracy 1.00000
Epoch [ 19]: Loss 0.01292
Validation: Loss 0.01231 Accuracy 1.00000
Epoch [ 20]: Loss 0.01174
Validation: Loss 0.01125 Accuracy 1.00000
Epoch [ 21]: Loss 0.01075
Validation: Loss 0.01036 Accuracy 1.00000
Epoch [ 22]: Loss 0.00993
Validation: Loss 0.00960 Accuracy 1.00000
Epoch [ 23]: Loss 0.00924
Validation: Loss 0.00894 Accuracy 1.00000
Epoch [ 24]: Loss 0.00860
Validation: Loss 0.00836 Accuracy 1.00000
Epoch [ 25]: Loss 0.00806
Validation: Loss 0.00784 Accuracy 1.00000
We can also train the compact model with the exact same code!
ps_trained2, st_trained2 = main(SpiralClassifierCompact)
┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
└ @ LuxCore ~/work/Lux.jl/Lux.jl/lib/LuxCore/src/LuxCore.jl:18
Epoch [ 1]: Loss 0.50287
Validation: Loss 0.40875 Accuracy 1.00000
Epoch [ 2]: Loss 0.37375
Validation: Loss 0.31609 Accuracy 1.00000
Epoch [ 3]: Loss 0.28381
Validation: Loss 0.23291 Accuracy 1.00000
Epoch [ 4]: Loss 0.20631
Validation: Loss 0.16998 Accuracy 1.00000
Epoch [ 5]: Loss 0.15135
Validation: Loss 0.12787 Accuracy 1.00000
Epoch [ 6]: Loss 0.11325
Validation: Loss 0.09596 Accuracy 1.00000
Epoch [ 7]: Loss 0.08399
Validation: Loss 0.07084 Accuracy 1.00000
Epoch [ 8]: Loss 0.06123
Validation: Loss 0.05113 Accuracy 1.00000
Epoch [ 9]: Loss 0.04433
Validation: Loss 0.03767 Accuracy 1.00000
Epoch [ 10]: Loss 0.03299
Validation: Loss 0.02869 Accuracy 1.00000
Epoch [ 11]: Loss 0.02556
Validation: Loss 0.02289 Accuracy 1.00000
Epoch [ 12]: Loss 0.02067
Validation: Loss 0.01900 Accuracy 1.00000
Epoch [ 13]: Loss 0.01746
Validation: Loss 0.01632 Accuracy 1.00000
Epoch [ 14]: Loss 0.01518
Validation: Loss 0.01438 Accuracy 1.00000
Epoch [ 15]: Loss 0.01338
Validation: Loss 0.01290 Accuracy 1.00000
Epoch [ 16]: Loss 0.01215
Validation: Loss 0.01172 Accuracy 1.00000
Epoch [ 17]: Loss 0.01108
Validation: Loss 0.01076 Accuracy 1.00000
Epoch [ 18]: Loss 0.01021
Validation: Loss 0.00995 Accuracy 1.00000
Epoch [ 19]: Loss 0.00942
Validation: Loss 0.00925 Accuracy 1.00000
Epoch [ 20]: Loss 0.00884
Validation: Loss 0.00864 Accuracy 1.00000
Epoch [ 21]: Loss 0.00823
Validation: Loss 0.00811 Accuracy 1.00000
Epoch [ 22]: Loss 0.00777
Validation: Loss 0.00763 Accuracy 1.00000
Epoch [ 23]: Loss 0.00732
Validation: Loss 0.00720 Accuracy 1.00000
Epoch [ 24]: Loss 0.00692
Validation: Loss 0.00681 Accuracy 1.00000
Epoch [ 25]: Loss 0.00654
Validation: Loss 0.00646 Accuracy 1.00000
Saving the Model
We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct itself; save only the parameters and states.
@save "trained_model.jld2" ps_trained st_trainedLet's try loading the model
@load "trained_model.jld2" ps_trained st_trained2-element Vector{Symbol}:
:ps_trained
:st_trainedAppendix
using InteractiveUtils
InteractiveUtils.versioninfo()
if @isdefined(MLDataDevices)
if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
println()
CUDA.versioninfo()
end
if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
println()
AMDGPU.versioninfo()
end
end
Julia Version 1.11.8
Commit cf1da5e20e3 (2025-11-06 17:49 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 4 × AMD EPYC 7763 64-Core Processor
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, znver3)
Threads: 4 default, 0 interactive, 2 GC (on 4 virtual cores)
Environment:
JULIA_DEBUG = Literate
LD_LIBRARY_PATH =
JULIA_NUM_THREADS = 4
JULIA_CPU_HARD_MEMORY_LIMIT = 100%
JULIA_PKG_PRECOMPILE_AUTO = 0
This page was generated using Literate.jl.