# Utilities

## Index

- `Lux.cpu`
- `Lux.disable_stacktrace_truncation!`
- `Lux.foldl_init`
- `Lux.gpu`
- `Lux.istraining`
- `Lux.multigate`
- `Lux.replicate`
## Device Management / Data Transfer

### `Lux.cpu` — Function

```julia
cpu(x)
```

Transfer `x` to the CPU.

!!! warning
    This function has been deprecated. Use `cpu_device` instead.
### `Lux.gpu` — Function

```julia
gpu(x)
```

Transfer `x` to the GPU determined by the backend set using `Lux.gpu_backend!`.

!!! warning
    This function has been deprecated. Use `gpu_device` instead. Using this function inside performance-critical code will cause massive slowdowns due to type-inference failure.

!!! note
    For detailed API documentation on data transfer, check out LuxDeviceUtils.jl.
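The recommended replacement for the deprecated `cpu`/`gpu` functions can be sketched as follows. This is a minimal example assuming Lux.jl (which re-exports `cpu_device` and `gpu_device` from LuxDeviceUtils.jl) is installed; `gpu_device()` falls back to the CPU device when no functional GPU backend is loaded.

```julia
using Lux, Random

x = rand(Float32, 3, 4)

# gpu_device() selects a functional GPU backend if one is available
# and falls back to the CPU device otherwise.
dev = gpu_device()
x_dev = x |> dev        # devices are callable, so `|>` transfers the data

# cpu_device() moves data back to the CPU.
cdev = cpu_device()
x_cpu = x_dev |> cdev
```

Unlike the deprecated functions, the device objects returned here are fully inferable, so they are safe to use inside performance-critical code.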
## Initialization

!!! note
    For API documentation on initialization, check out WeightInitializers.jl.
## Miscellaneous Utilities

### `Lux.foldl_init` — Function

```julia
foldl_init(op, x)
foldl_init(op, x, init)
```

Exactly the same as `foldl(op, x; init)` in the forward pass, but gives gradients with respect to `init` in the backward pass.
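A minimal sketch of the behavior described above, assuming Lux.jl and Zygote.jl are installed. The forward pass matches `Base.foldl`, while the backward pass additionally provides a gradient for `init`:

```julia
using Lux, Zygote

x = (1.0, 2.0, 3.0)

# Forward pass agrees with Base.foldl
y = Lux.foldl_init(+, x, 10.0)
@assert y == foldl(+, x; init=10.0)

# Gradient with respect to the init value; since the accumulated sum is
# linear in init, the gradient is 1.0.
g = only(Zygote.gradient(init -> Lux.foldl_init(+, x, init), 10.0))
```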
### `Lux.istraining` — Function

```julia
istraining(::Val{training})
istraining(st::NamedTuple)
```

Returns `true` if `training` is `true`, or if `st` contains a `training` field with value `true`; otherwise returns `false`.

The method is undefined if `st.training` is not of type `Val`.
### `Lux.multigate` — Function

```julia
multigate(x::AbstractArray, ::Val{N})
```

Split up `x` into `N` equally sized chunks (along dimension `1`).
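For example, splitting the first dimension of a `6×1` matrix into three `2×1` chunks (a minimal sketch, assuming Lux.jl is installed):

```julia
using Lux

x = reshape(collect(1:6), 6, 1)     # 6×1 matrix
a, b, c = Lux.multigate(x, Val(3))  # three 2×1 chunks along dimension 1

# Each chunk covers a contiguous block of rows
@assert a == x[1:2, :]
@assert b == x[3:4, :]
@assert c == x[5:6, :]
```

This pattern is commonly used to split the fused gate pre-activations of recurrent cells (e.g. the stacked input/forget/cell/output gates of an LSTM) into their individual gates.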
### `Lux.replicate` — Function

```julia
replicate(rng::AbstractRNG)
replicate(rng::CUDA.RNG)
```

Creates a copy of the `rng` state, with the copying strategy depending on its type.
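A minimal sketch of why this is useful: the copy starts from the same state as the original, so drawing numbers from it does not advance the original generator. This example assumes Lux.jl is installed and uses a `Xoshiro` RNG (the `CUDA.RNG` method is only available when CUDA is loaded):

```julia
using Lux, Random

rng = Xoshiro(0)
rng2 = Lux.replicate(rng)   # independent copy of the current state

# Same state ⇒ same random stream
@assert rand(rng2, 3) == rand(rng, 3)
```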
## Truncated Stacktraces

### `Lux.disable_stacktrace_truncation!` — Function

```julia
disable_stacktrace_truncation!(; disable::Bool=true)
```

An easy way to update `TruncatedStacktraces.VERBOSE` without having to load TruncatedStacktraces.jl manually. Effectively does `TruncatedStacktraces.VERBOSE[] = disable`.
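A minimal usage sketch, assuming Lux.jl (which depends on TruncatedStacktraces.jl) is installed:

```julia
using Lux

# Show full, untruncated stacktraces (useful when debugging deeply
# nested layer types)
Lux.disable_stacktrace_truncation!()

# Re-enable truncation
Lux.disable_stacktrace_truncation!(; disable=false)
```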