Compiling Lux Models using Reactant.jl
Quoting the Reactant.jl Readme:
Reactant takes a Julia function and compiles it into MLIR, runs fancy optimizations on top of it (including using EnzymeMLIR for automatic differentiation), and creates relevant executables for CPU/GPU/TPU via XLA. It presently operates as a tracing system: compiled functions assume the same control flow pattern as was originally taken by the objects used at compile time, and control flow (e.g. if, for) as well as any type instabilities are removed. The benefit of this approach is that it immediately makes all such code available for advanced optimization with little developer effort.
Experimental
Reactant compilation is a very new feature and is currently experimental. Certain models might not be compilable yet, but we are actively working on it. Open an issue if you encounter any problems.
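To make the "control flow is removed" point concrete, here is a minimal sketch (the poly function and variable names are purely illustrative, not part of this tutorial). The Julia-level for loop runs while the function is traced, so the compiled executable contains only the unrolled arithmetic:
using Reactant

# `poly` is illustrative: the Julia `for` loop executes during tracing, so the
# compiled code contains the unrolled arithmetic and no loop at all.
function poly(x)
    acc = x
    for _ in 1:3
        acc = acc .* x .+ 1
    end
    return acc
end

x_ra = Reactant.to_rarray(randn(Float32, 4))
poly_compiled = @compile poly(x_ra)
poly_compiled(x_ra)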
using Lux, Reactant, Enzyme, Random, Zygote
using Functors, Optimisers, Printf
Running on alternate accelerators
Reactant.set_default_backend("gpu")
sets the default backend to CUDA and Reactant.set_default_backend("tpu")
sets the default backend to TPU.
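For example, to target a GPU you would typically select the backend once, before constructing the device (a minimal sketch; gdev is just an illustrative name):
using Reactant, Lux

Reactant.set_default_backend("gpu")  # or "tpu"
const gdev = reactant_device()       # arrays moved with `gdev` now target the selected backend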
Using the TrainState API
If you are using the Training.TrainState API, skip to the bottom of this page to see how to train the model without any of this boilerplate.
We start by defining a simple MLP model:
model = Chain(
Dense(2 => 32, gelu),
Dense(32 => 32, gelu),
Dense(32 => 2)
)
ps, st = Lux.setup(Random.default_rng(), model)
((layer_1 = (weight = Float32[-1.2228831 -0.87702435; 0.5031421 -0.15133555; … ; -0.31550723 -0.7672513; 0.111552626 0.6064619], bias = Float32[-0.63795453, 0.62450767, -0.014877922, 0.25385493, -0.20188306, 0.21950458, 0.109203495, 0.23021114, -0.26657984, 0.16187939 … -0.6409691, 0.4391564, 0.14488737, 0.49998975, -0.04566476, -0.56069607, -0.33442986, -0.1549292, -0.42669478, 0.636308]), layer_2 = (weight = Float32[0.293211 0.19084926 … 0.2464001 0.2913357; -0.116796836 0.09926938 … -0.26311737 -0.15802455; … ; -0.2042089 -0.22406094 … 0.13504265 0.09289699; 0.25389904 0.28355134 … 0.28725442 0.13343152], bias = Float32[0.12992674, 0.14568081, -0.10754459, -0.15686738, -0.14118214, 0.088205874, -0.06301335, 0.06027697, 0.14445141, 0.08791955 … 0.053627778, -0.06618893, 0.1124609, 0.037500158, 0.12827216, -0.13913931, -0.17048413, -0.1032465, -0.15493166, -0.0069942693]), layer_3 = (weight = Float32[-0.031503614 -0.23162955 … 0.097182155 -0.099906564; 0.05729505 0.28042415 … 0.1293236 -0.18089005], bias = Float32[-0.16409892, 0.042256515])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))
We then create a random input and output data:
x = randn(Float32, 2, 32)
y = x .^ 2
2×32 Matrix{Float32}:
0.203036 0.362593 0.354464 0.0320963 … 0.0954186 0.713316 0.438519
0.0155126 1.13864 0.0187668 0.142251 2.24169 4.16407 0.415858
We will use reactant_device, similar to gpu_device, to move the arrays to Reactant.
const xdev = reactant_device()
x_ra = x |> xdev
y_ra = y |> xdev
ps_ra = ps |> xdev
st_ra = st |> xdev
nothing
First let's run the model as we would normally:
pred_lux, _ = model(x, ps, Lux.testmode(st))
(Float32[-0.20053944 -0.8147778 … -2.3903124 -0.15544322; 0.1585735 0.4981351 … 1.2586653 0.27545732], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))
To run it using XLA we need to compile the model. We can do this using the Reactant.@compile macro. Note that the inputs need to be moved to the device using reactant_device first.
model_compiled = @compile model(x_ra, ps_ra, Lux.testmode(st_ra))
Reactant.Compiler.Thunk{Symbol("##Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing)_reactant#622138")}()
Now we can test the difference between the results:
pred_compiled, _ = model_compiled(x_ra, ps_ra, Lux.testmode(st_ra))
pred_lux .- Array(pred_compiled)
2×32 Matrix{Float32}:
5.25415f-5 0.000289321 -2.48551f-5 … 0.000828981 3.42727f-6
5.14239f-5 -0.000406206 -1.05649f-5 -0.000248909 0.000130564
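If you prefer a programmatic check over eyeballing the matrix, a quick sanity check could look like this (the atol value is an arbitrary illustrative tolerance for Float32 arithmetic plus XLA rewrites):
maximum(abs, pred_lux .- Array(pred_compiled))  # largest entrywise deviation
@assert isapprox(pred_lux, Array(pred_compiled); atol=1f-3)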
The difference is very small, as we would expect. Now, let's try to differentiate the output of the model. We need to use Enzyme.jl to do this.
function loss_function(model, ps, st, x, y)
pred, _ = model(x, ps, st)
return MSELoss()(pred, y)
end
loss_function (generic function with 1 method)
We will use Zygote.jl to compute the gradient of the loss function for the vanilla model.
loss_function(model, ps, st, x, y)
∂ps_zyg = only(Zygote.gradient(ps -> loss_function(model, ps, st, x, y), ps))
(layer_1 = (weight = Float32[-0.011611392 -0.12556516; -0.09724939 0.11515345; … ; 0.08667634 -0.2689521; -0.09643307 0.030881835], bias = Float32[0.048133414, -0.106884085, 0.097701035, 0.105524555, -0.039647065, -0.018338889, -0.019115759, -0.15107606, 0.013992601, -0.014150472 … 0.0041674753, 0.032615878, 0.031403527, 0.13760866, -0.04225484, 0.049417753, -0.00059220614, -0.03242131, 0.18807876, -0.07640441]), layer_2 = (weight = Float32[-0.004287243 0.028275706 … -0.0073489705 0.0028297475; 0.016479947 0.030926052 … -0.0036810301 0.019791333; … ; 0.010637202 -0.002057937 … 0.010218928 -0.047897488; 0.13518015 0.25378025 … 0.0903271 0.048811335], bias = Float32[0.018884761, 0.053747915, -0.17435724, -0.059518166, -0.10950818, 0.13725635, -0.048533253, -0.11365668, -0.3891182, 0.26477236 … 0.2236399, 0.1377298, -0.027226413, -0.09919551, -0.12902719, 0.0072498624, -0.012183794, 0.066751055, -0.017432783, 0.26700422]), layer_3 = (weight = Float32[-2.5994074 0.07425845 … 0.08953094 -0.9130077; -1.1187928 0.0062888456 … -0.032405674 -0.4112945], bias = Float32[-1.6541586, -0.61384505]))
Now we will compile the gradient function using Reactant.@compile.
function enzyme_gradient(model, ps, st, x, y)
return Enzyme.gradient(Enzyme.Reverse, Const(loss_function), Const(model),
ps, Const(st), Const(x), Const(y))[2]
end
enzyme_gradient_compiled = @compile enzyme_gradient(model, ps_ra, st_ra, x_ra, y_ra)
∂ps_enzyme = enzyme_gradient_compiled(model, ps_ra, st_ra, x_ra, y_ra)
(layer_1 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[-0.011580504 -0.12544116; -0.09727344 0.11514305; … ; 0.086648285 -0.26888952; -0.09638782 0.030892808]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.04807379, -0.10686806, 0.0976663, 0.10547561, -0.039582886, -0.01833376, -0.019115932, -0.15103516, 0.013972283, -0.014150261 … 0.0041622836, 0.032610748, 0.031390227, 0.13753742, -0.04220736, 0.049418904, -0.00059982715, -0.032401938, 0.18802017, -0.07637966])), layer_2 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[-0.004282729 0.028260766 … -0.0073411153 0.002827262; 0.016510727 0.030915348 … -0.0036556367 0.01978655; … ; 0.010627578 -0.0020405184 … 0.01021189 -0.04787042; 0.13508528 0.25350177 … 0.09025641 0.048762415]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.01888247, 0.053769253, -0.17423475, -0.059477396, -0.10942678, 0.13717395, -0.048503045, -0.113529116, -0.38885257, 0.264592 … 0.2235274, 0.13767931, -0.027197167, -0.09917994, -0.12894896, 0.0072445334, -0.012183359, 0.066698216, -0.017422104, 0.26675284])), layer_3 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[-2.597894 0.074095085 … 0.089424826 -0.9124471; -1.1178916 0.006206141 … -0.032391064 -0.41097128]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[-1.6543105, -0.6136933])))
Now we check the difference:
fmap(Broadcast.BroadcastFunction(-), ∂ps_zyg, ∂ps_enzyme |> cpu_device())
(layer_1 = (weight = Float32[-3.0888245f-5 -0.00012399256; 2.4050474f-5 1.039356f-5; … ; 2.8051436f-5 -6.258488f-5; -4.5254827f-5 -1.0972843f-5], bias = Float32[5.962327f-5, -1.6026199f-5, 3.4734607f-5, 4.8942864f-5, -6.41793f-5, -5.1297247f-6, 1.73226f-7, -4.0903687f-5, 2.0317733f-5, -2.1141022f-7 … 5.1916577f-6, 5.1297247f-6, 1.3299286f-5, 7.124245f-5, -4.7478825f-5, -1.1511147f-6, 7.6210126f-6, -1.937151f-5, 5.8591366f-5, -2.4750829f-5]), layer_2 = (weight = Float32[-4.5141205f-6 1.4940277f-5 … -7.85524f-6 2.4854671f-6; -3.078021f-5 1.0704622f-5 … -2.5393441f-5 4.7832727f-6; … ; 9.6242875f-6 -1.7418526f-5 … 7.0380047f-6 -2.706796f-5; 9.486079f-5 0.0002784729 … 7.069111f-5 4.8920512f-5], bias = Float32[2.2910535f-6, -2.1338463f-5, -0.00012248755, -4.0769577f-5, -8.139759f-5, 8.240342f-5, -3.0208379f-5, -0.00012756139, -0.0002656281, 0.00018036366 … 0.00011250377, 5.0485134f-5, -2.9245391f-5, -1.5571713f-5, -7.8231096f-5, 5.3290278f-6, -4.3492764f-7, 5.2839518f-5, -1.0678545f-5, 0.0002513826]), layer_3 = (weight = Float32[-0.0015134811 0.00016336143 … 0.00010611117 -0.0005605817; -0.0009012222 8.2704704f-5 … -1.4610589f-5 -0.0003232062], bias = Float32[0.00015187263, -0.00015175343]))
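To collapse this tree of differences into a single number, you can fold over its leaves with fmap. A small sketch (max_abs and the 1f-3 tolerance are illustrative, not part of the tutorial output):
diff_tree = fmap(Broadcast.BroadcastFunction(-), ∂ps_zyg, ∂ps_enzyme |> cpu_device())

# accumulate the largest absolute entry across all leaves of the gradient tree
max_abs = Ref(0f0)
fmap(diff_tree) do leaf
    max_abs[] = max(max_abs[], maximum(abs, leaf))
    return leaf
end
@assert max_abs[] < 1f-3
max_abs[]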
Using the TrainState API
Now that we saw the low-level API let's see how to train the model without any of this boilerplate. Simply follow the following steps:
1. Create a device using reactant_device. Remember to load Reactant.jl before doing this.
2. Similar to other device functions, move the model, parameters, states and data to the device. Note that you might want to use DeviceIterator to move the data loader to the device with an iterator.
3. Construct a TrainState using Training.TrainState.
4. Most importantly, use AutoEnzyme while calling Training.single_train_step! or Training.single_train_step.
model = Chain(
Dense(2 => 4, gelu),
Dense(4 => 4, gelu),
Dense(4 => 2)
)
ps, st = Lux.setup(Random.default_rng(), model)
x_ra = [randn(Float32, 2, 32) for _ in 1:32]
y_ra = [xᵢ .^ 2 for xᵢ in x_ra]
ps_ra = ps |> xdev
st_ra = st |> xdev
dataloader = DeviceIterator(xdev, zip(x_ra, y_ra))
function train_model(model, ps, st, dataloader)
train_state = Training.TrainState(model, ps, st, Adam(0.001f0))
for iteration in 1:1000
for (i, (xᵢ, yᵢ)) in enumerate(dataloader)
_, loss, _, train_state = Training.single_train_step!(
AutoEnzyme(), MSELoss(), (xᵢ, yᵢ), train_state)
if (iteration % 100 == 0 || iteration == 1) && i == 1
@printf("Iter: [%4d/%4d]\tLoss: %.8f\n", iteration, 1000, loss)
end
end
end
return train_state
end
train_model(model, ps_ra, st_ra, dataloader)
Iter: [ 1/1000] Loss: 2.78516054
Iter: [ 100/1000] Loss: 0.80673385
Iter: [ 200/1000] Loss: 0.22301091
Iter: [ 300/1000] Loss: 0.09956019
Iter: [ 400/1000] Loss: 0.05548754
Iter: [ 500/1000] Loss: 0.03868378
Iter: [ 600/1000] Loss: 0.03093609
Iter: [ 700/1000] Loss: 0.02368433
Iter: [ 800/1000] Loss: 0.01904443
Iter: [ 900/1000] Loss: 0.01662067
Iter: [1000/1000] Loss: 0.01448759
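The TrainState returned by train_model carries the learned parameters and states, so you can reuse them for inference with a freshly compiled forward pass. A minimal sketch (trained_state and x_test are illustrative names, not part of the tutorial output):
trained_state = train_model(model, ps_ra, st_ra, dataloader)

x_test = randn(Float32, 2, 32) |> xdev
# compile the forward pass once with the learned parameters, then reuse it
inference_fn = @compile model(x_test, trained_state.parameters, Lux.testmode(trained_state.states))
y_pred, _ = inference_fn(x_test, trained_state.parameters, Lux.testmode(trained_state.states))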