
Compiling Lux Models using Reactant.jl

Quoting the Reactant.jl README:

Reactant takes Julia functions and compiles them into MLIR, runs fancy optimizations on top, including using EnzymeMLIR for automatic differentiation, and creates relevant executables for CPU/GPU/TPU via XLA. It presently operates as a tracing system: compiled functions will assume the same control flow pattern as was originally taken by the objects used at compile time, and control flow (e.g. if, for) as well as any type instabilities will be removed. The benefit of this approach is that all such code is immediately made available for advanced optimization with little developer effort.
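To make the tracing caveat concrete, consider a hedged sketch (the function f below is purely illustrative and not part of this tutorial):

```julia
# Illustrative only: a function whose control flow depends on its input.
function f(x)
    if sum(x) > 0   # this branch is decided once, while tracing
        return x .+ 1
    else
        return x .- 1
    end
end
```

If such a function were compiled with Reactant.@compile using an example input for which sum(x) > 0, the compiled thunk would, as described above, retain only that branch; later calls take the same branch regardless of the new input's values.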

Experimental

Reactant compilation is a very new feature and is currently experimental. Certain models might not be compilable yet, but we are actively working on it. Open an issue if you encounter any problems.

julia
using Lux, Reactant, Enzyme, Random, Zygote
using Functors, Optimisers, Printf

Using the TrainState API

If you are using the Training.TrainState API, skip to the bottom of this page to see how to train the model without any of this boilerplate.

We start by defining a simple MLP model:

julia
model = Chain(
    Dense(2 => 32, gelu),
    Dense(32 => 32, gelu),
    Dense(32 => 2)
)
ps, st = Lux.setup(Random.default_rng(), model)
((layer_1 = (weight = Float32[-1.2228831 -0.87702435; 0.5031421 -0.15133555; … ; -0.31550723 -0.7672513; 0.111552626 0.6064619], bias = Float32[-0.63795453, 0.62450767, -0.014877922, 0.25385493, -0.20188306, 0.21950458, 0.109203495, 0.23021114, -0.26657984, 0.16187939  …  -0.6409691, 0.4391564, 0.14488737, 0.49998975, -0.04566476, -0.56069607, -0.33442986, -0.1549292, -0.42669478, 0.636308]), layer_2 = (weight = Float32[0.293211 0.19084926 … 0.2464001 0.2913357; -0.116796836 0.09926938 … -0.26311737 -0.15802455; … ; -0.2042089 -0.22406094 … 0.13504265 0.09289699; 0.25389904 0.28355134 … 0.28725442 0.13343152], bias = Float32[0.12992674, 0.14568081, -0.10754459, -0.15686738, -0.14118214, 0.088205874, -0.06301335, 0.06027697, 0.14445141, 0.08791955  …  0.053627778, -0.06618893, 0.1124609, 0.037500158, 0.12827216, -0.13913931, -0.17048413, -0.1032465, -0.15493166, -0.0069942693]), layer_3 = (weight = Float32[-0.031503614 -0.23162955 … 0.097182155 -0.099906564; 0.05729505 0.28042415 … 0.1293236 -0.18089005], bias = Float32[-0.16409892, 0.042256515])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

We then create a random input and output data:

julia
x = randn(Float32, 2, 32)
y = x .^ 2
2×32 Matrix{Float32}:
 0.203036   0.362593  0.354464   0.0320963  …  0.0954186  0.713316  0.438519
 0.0155126  1.13864   0.0187668  0.142251      2.24169    4.16407   0.415858

We will use xla_device, similar to gpu_device, to move the arrays to the Reactant device.

julia
const xdev = xla_device()

x_ra = x |> xdev
y_ra = y |> xdev
ps_ra = ps |> xdev
st_ra = st |> xdev
nothing

First, let's run the model as we would normally:

julia
pred_lux, _ = model(x, ps, Lux.testmode(st))
(Float32[-0.20053944 -0.8147778 … -2.3903124 -0.15544322; 0.1585735 0.4981351 … 1.2586653 0.27545732], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

To run it using XLA, we first need to compile the model. We can do this using the Reactant.@compile macro. Note that the inputs must already be on the device, i.e. moved there via xla_device.

julia
model_compiled = @compile model(x_ra, ps_ra, Lux.testmode(st_ra))
Reactant.Compiler.Thunk{Symbol("##Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing)_reactant#1069")}()

Now we can test the difference between the results:

julia
pred_compiled, _ = model_compiled(x_ra, ps_ra, Lux.testmode(st_ra))

pred_lux .- Array(pred_compiled)
2×32 Matrix{Float32}:
 0.0  -1.19209f-7  -2.98023f-8  2.98023f-8  …  2.98023f-8  0.0  -1.49012f-8
 0.0   1.19209f-7   8.9407f-8   1.78814f-7     1.49012f-8  0.0   2.98023f-8

The difference is very small, as we would expect. Now, let's try to differentiate the output of the model; for the compiled version we need to use Enzyme.jl.

julia
function loss_function(model, ps, st, x, y)
    pred, _ = model(x, ps, st)
    return MSELoss()(pred, y)
end
loss_function (generic function with 1 method)

We will use Zygote.jl to compute the gradient of the loss function for the vanilla model.

julia
loss_function(model, ps, st, x, y)

∂ps_zyg = only(Zygote.gradient(ps -> loss_function(model, ps, st, x, y), ps))
(layer_1 = (weight = Float32[-0.011611392 -0.12556516; -0.09724939 0.11515345; … ; 0.08667634 -0.2689521; -0.09643307 0.030881835], bias = Float32[0.048133414, -0.106884085, 0.097701035, 0.105524555, -0.039647065, -0.018338889, -0.019115759, -0.15107606, 0.013992601, -0.014150472  …  0.0041674753, 0.032615878, 0.031403527, 0.13760866, -0.04225484, 0.049417753, -0.00059220614, -0.03242131, 0.18807876, -0.07640441]), layer_2 = (weight = Float32[-0.004287243 0.028275706 … -0.0073489705 0.0028297475; 0.016479947 0.030926052 … -0.0036810301 0.019791333; … ; 0.010637202 -0.002057937 … 0.010218928 -0.047897488; 0.13518015 0.25378025 … 0.0903271 0.048811335], bias = Float32[0.018884761, 0.053747915, -0.17435724, -0.059518166, -0.10950818, 0.13725635, -0.048533253, -0.11365668, -0.3891182, 0.26477236  …  0.2236399, 0.1377298, -0.027226413, -0.09919551, -0.12902719, 0.0072498624, -0.012183794, 0.066751055, -0.017432783, 0.26700422]), layer_3 = (weight = Float32[-2.5994074 0.07425845 … 0.08953094 -0.9130077; -1.1187928 0.0062888456 … -0.032405674 -0.4112945], bias = Float32[-1.6541586, -0.61384505]))

Now we will compile the gradient function using Reactant.@compile.

julia
function enzyme_gradient(model, ps, st, x, y)
    dps = Enzyme.make_zero(ps)
    Enzyme.autodiff(Enzyme.Reverse, Const(loss_function), Active, Const(model),
        Duplicated(ps, dps), Const(st), Const(x), Const(y))
    return dps
end

enzyme_gradient_compiled = @compile enzyme_gradient(model, ps_ra, st_ra, x_ra, y_ra)

∂ps_enzyme = enzyme_gradient_compiled(model, ps_ra, st_ra, x_ra, y_ra)
(layer_1 = (weight = Float32[-0.011611394 -0.12556516; -0.09724942 0.11515346; … ; 0.08667634 -0.2689521; -0.09643307 0.030881835], bias = Float32[0.048133414, -0.10688411, 0.097701035, 0.10552457, -0.039647065, -0.018338893, -0.01911576, -0.15107603, 0.013992591, -0.01415047  …  0.0041674743, 0.03261588, 0.031403534, 0.13760868, -0.042254835, 0.049417756, -0.0005922087, -0.0324213, 0.18807876, -0.076404385]), layer_2 = (weight = Float32[-0.004287243 0.02827571 … -0.0073489705 0.0028297466; 0.016479947 0.030926049 … -0.0036810287 0.019791335; … ; 0.010637204 -0.0020579377 … 0.0102189295 -0.047897488; 0.13518013 0.25378025 … 0.0903271 0.048811335], bias = Float32[0.018884756, 0.053747915, -0.17435724, -0.059518166, -0.1095082, 0.13725637, -0.04853325, -0.11365668, -0.38911813, 0.26477236  …  0.22363997, 0.1377298, -0.027226416, -0.0991955, -0.12902719, 0.007249862, -0.012183795, 0.06675106, -0.017432783, 0.26700416]), layer_3 = (weight = Float32[-2.5994072 0.07425846 … 0.089530945 -0.9130077; -1.1187928 0.006288857 … -0.032405667 -0.4112944], bias = Float32[-1.6541584, -0.6138451]))

Now we check the difference:

julia
fmap(Broadcast.BroadcastFunction(-), ∂ps_zyg, ∂ps_enzyme)
(layer_1 = (weight = Float32[1.8626451f-9 0.0; 2.9802322f-8 -1.4901161f-8; … ; 0.0 0.0; 0.0 0.0], bias = Float32[0.0, 2.2351742f-8, 0.0, -1.4901161f-8, 0.0, 3.7252903f-9, 1.8626451f-9, -2.9802322f-8, 1.0244548f-8, -2.7939677f-9  …  9.313226f-10, -3.7252903f-9, -7.450581f-9, -1.4901161f-8, -3.7252903f-9, -3.7252903f-9, 2.561137f-9, -1.1175871f-8, 0.0, -2.2351742f-8]), layer_2 = (weight = Float32[0.0 -3.7252903f-9 … 0.0 9.313226f-10; 0.0 3.7252903f-9 … -1.3969839f-9 -1.8626451f-9; … ; -1.8626451f-9 6.9849193f-10 … -1.8626451f-9 0.0; 1.4901161f-8 0.0 … 0.0 0.0], bias = Float32[5.5879354f-9, 0.0, 0.0, 0.0, 2.2351742f-8, -1.4901161f-8, -3.7252903f-9, 0.0, -5.9604645f-8, 0.0  …  -5.9604645f-8, 0.0, 3.7252903f-9, -7.450581f-9, 0.0, 4.656613f-10, 9.313226f-10, -7.450581f-9, 0.0, 5.9604645f-8]), layer_3 = (weight = Float32[-2.3841858f-7 -1.4901161f-8 … -7.450581f-9 0.0; 0.0 -1.1641532f-8 … -7.450581f-9 -8.940697f-8], bias = Float32[-2.3841858f-7, 5.9604645f-8]))
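If you prefer a single summary number over scanning these matrices, here is a small sketch assuming only Functors.jl (already loaded above) and its fleaves helper; max_abs_diff is a hypothetical name, not part of the Lux API:

```julia
using Functors  # fleaves flattens a nested structure into its leaf arrays

# Largest absolute elementwise difference between two matching
# parameter structures (e.g. two gradient NamedTuples).
max_abs_diff(a, b) = maximum(
    maximum(abs, x .- y) for (x, y) in zip(fleaves(a), fleaves(b))
)
```

For example, max_abs_diff(∂ps_zyg, fmap(Array, ∂ps_enzyme)) reduces the comparison above to one Float32, which should be on the order of 1f-7 given the differences printed here.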

Using the TrainState API

Now that we have seen the low-level API, let's see how to train the model without any of this boilerplate. Simply follow these steps:

  1. Create a device using xla_device. Remember to load Reactant.jl before doing this.

  2. Similar to other device functions, move the model, parameters, states, and data to the device. Note that you might want to use DeviceIterator to lazily transfer the data to the device while iterating over the data loader.

  3. Construct a TrainState using Training.TrainState.

  4. Most importantly, use AutoEnzyme when calling Training.single_train_step! or Training.single_train_step.

julia
model = Chain(
    Dense(2 => 4, gelu),
    Dense(4 => 4, gelu),
    Dense(4 => 2)
)
ps, st = Lux.setup(Random.default_rng(), model)

x_ra = [randn(Float32, 2, 32) for _ in 1:32]
y_ra = [xᵢ .^ 2 for xᵢ in x_ra]
ps_ra = ps |> xdev
st_ra = st |> xdev

dataloader = DeviceIterator(xdev, zip(x_ra, y_ra))

function train_model(model, ps, st, dataloader)
    train_state = Training.TrainState(model, ps, st, Adam(0.001f0))
    loss = nothing  # declared here so the value survives the inner loop's scope

    for iteration in 1:1000
        for (xᵢ, yᵢ) in dataloader
            grads, loss, stats, train_state = Training.single_train_step!(
                AutoEnzyme(), MSELoss(), (xᵢ, yᵢ), train_state)
        end
        if iteration % 100 == 0 || iteration == 1
            # Scalar outputs are currently expressed as zero-dim arrays, so
            # fetch the value back to the host before printing
            loss = Array(loss)[]
            @printf("Iter: [%4d/%4d]\tLoss: %.8f\n", iteration, 1000, loss)
        end
    end

    return train_state
end

train_model(model, ps_ra, st_ra, dataloader)
Iter: [   1/1000]	Loss: 3.07964921
Iter: [ 100/1000]	Loss: 1.06519687
Iter: [ 200/1000]	Loss: 0.44807646
Iter: [ 300/1000]	Loss: 0.24150778
Iter: [ 400/1000]	Loss: 0.14340512
Iter: [ 500/1000]	Loss: 0.09299411
Iter: [ 600/1000]	Loss: 0.06612328
Iter: [ 700/1000]	Loss: 0.04551310
Iter: [ 800/1000]	Loss: 0.03070261
Iter: [ 900/1000]	Loss: 0.02143306
Iter: [1000/1000]	Loss: 0.01542492