LuxDeviceUtils
LuxDeviceUtils.jl is a lightweight package defining rules for transferring data across devices. Most users should directly use Lux.jl instead.
Transition to MLDataDevices.jl
Currently this package is in maintenance mode and won't receive any new features; however, we will backport bug fixes until Lux v1.0 is released. After that, this package should be considered deprecated and users should switch to MLDataDevices.jl.
For more information on MLDataDevices.jl, check out the MLDataDevices.jl Documentation.
Index
LuxDeviceUtils.cpu_device
LuxDeviceUtils.default_device_rng
LuxDeviceUtils.functional
LuxDeviceUtils.get_device
LuxDeviceUtils.get_device_type
LuxDeviceUtils.gpu_backend!
LuxDeviceUtils.gpu_device
LuxDeviceUtils.loaded
LuxDeviceUtils.reset_gpu_device!
LuxDeviceUtils.set_device!
LuxDeviceUtils.supported_gpu_backends
Preferences
gpu_backend!() = gpu_backend!("")
gpu_backend!(backend) = gpu_backend!(string(backend))
gpu_backend!(backend::AbstractLuxGPUDevice)
gpu_backend!(backend::String)
Creates a LocalPreferences.toml file with the desired GPU backend.
If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.
If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.
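For instance, a minimal sketch of switching the preference (the "CUDA" backend name here is just an example of one of the names returned by supported_gpu_backends()):

using LuxDeviceUtils

gpu_backend!("CUDA")   # writes the gpu_backend preference to LocalPreferences.toml
# gpu_backend!("")     # deletes the preference and restores automatic selection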
Data Transfer
cpu_device() -> LuxCPUDevice()
Return a LuxCPUDevice object which can be used to transfer data to the CPU.
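As a brief sketch, the returned device object is callable on data (the parameter container ps below is hypothetical and may live on a GPU):

using LuxDeviceUtils

cdev = cpu_device()
ps_cpu = cdev(ps)   # moves all arrays inside ps back to the CPU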
gpu_device(device_id::Union{Nothing, Integer}=nothing;
force_gpu_usage::Bool=false) -> AbstractLuxDevice()
Selects GPU device based on the following criteria:
1. If the gpu_backend preference is set and the backend is functional on the system, then that device is selected.
2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.
3. If no GPU device is functional and force_gpu_usage is false, then cpu_device() is invoked.
4. If nothing works, an error is thrown.
Arguments
device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device, or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).
Warning
device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.
Keyword Arguments
force_gpu_usage::Bool: If true, then an error is thrown if no functional GPU device is found.
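Putting this together, a hedged sketch of typical usage (assumes a functional CUDA setup with the LuxCUDA trigger package loaded; names are illustrative):

using LuxDeviceUtils, LuxCUDA          # LuxCUDA is the CUDA trigger package

gdev = gpu_device()                    # automatic backend selection
x_gpu = gdev(randn(Float32, 3, 3))     # transfer data to the selected device
gdev2 = gpu_device(2)                  # explicitly select the second GPU (1-indexed)
gpu_device(; force_gpu_usage=true)     # error instead of silently falling back to the CPU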
Miscellaneous
reset_gpu_device!()
Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.
supported_gpu_backends() -> Tuple{String, ...}
Return a tuple of supported GPU backends.
Warning
This is not the list of functional backends on the system, but rather backends which Lux.jl supports.
Danger
Metal.jl and oneAPI.jl support is extremely experimental and most things are not expected to work.
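A quick sketch of inspecting the supported backends (the exact contents and order may differ between versions):

using LuxDeviceUtils

backends = supported_gpu_backends()   # e.g. ("CUDA", "AMDGPU", "Metal", "oneAPI")
"CUDA" in backends                    # check whether a given backend name is recognised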
default_device_rng(::AbstractLuxDevice)
Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.
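For example, a sketch of generating device-resident values (assumes a functional GPU and that its trigger package, e.g. LuxCUDA, is loaded):

using LuxDeviceUtils, LuxCUDA

gdev = gpu_device()
rng = default_device_rng(gdev)    # device-specific RNG, e.g. a CUDA RNG for LuxCUDADevice
w = randn(rng, Float32, 8, 8)     # sampled directly on the device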
get_device(x) -> dev::AbstractLuxDevice | Exception | nothing
If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.
Note
Trigger Packages must be loaded for this to return the correct device.
Warning
RNG types currently don't participate in device determination. We will remove this restriction in the future.
See also get_device_type for a faster alternative that can be used for dispatch based on device type.
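A small sketch of the behaviour on plain CPU data (names are illustrative):

using LuxDeviceUtils

x = rand(Float32, 4)
get_device(x)                      # LuxCPUDevice()
get_device((; weight=x, bias=x))   # traverses the structure; still LuxCPUDevice()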
get_device_type(x) -> Type{<:AbstractLuxDevice} | Exception | Type{Nothing}
Similar to get_device but returns the type of the device instead of the device itself. This value is often a compile time constant and is recommended to be used instead of get_device wherever you are defining dispatches based on the device type.
Note
Trigger Packages must be loaded for this to return the correct device.
Warning
RNG types currently don't participate in device determination. We will remove this restriction in the future.
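A hedged sketch of dispatching on the device type, which is the intended use here (the function name is illustrative):

using LuxDeviceUtils

sumabs2(x) = sumabs2(get_device_type(x), x)
sumabs2(::Type{LuxCPUDevice}, x) = sum(abs2, x)                            # CPU-specific method
sumabs2(::Type{<:LuxDeviceUtils.AbstractLuxGPUDevice}, x) = sum(abs2, x)   # generic GPU method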
loaded(x::AbstractLuxDevice) -> Bool
loaded(::Type{<:AbstractLuxDevice}) -> Bool
Checks if the trigger package for the device is loaded. Trigger packages are as follows:
LuxCUDA.jl for NVIDIA CUDA Support.
AMDGPU.jl for AMD GPU ROCM Support.
Metal.jl for Apple Metal GPU Support.
oneAPI.jl for Intel oneAPI GPU Support.
functional(x::AbstractLuxDevice) -> Bool
functional(::Type{<:AbstractLuxDevice}) -> Bool
Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via LuxDeviceUtils.loaded), the device may not be functional.
Note that while this function is not exported, it is considered part of the public API.
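A short sketch combining the two checks (both can be called before the trigger package is loaded):

using LuxDeviceUtils

LuxDeviceUtils.loaded(LuxCUDADevice)      # false until the trigger package (LuxCUDA) is loaded
LuxDeviceUtils.functional(LuxCUDADevice)  # true only if CUDA is loaded and actually usable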
Multi-GPU Support
set_device!(T::Type{<:AbstractLuxDevice}, dev_or_id)
Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.
Currently, LuxMetalDevice and LuxoneAPIDevice don't support setting the device.
Arguments
T::Type{<:AbstractLuxDevice}: The device type to set.
dev_or_id: Can be the device from the corresponding package. For example, for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.
Danger
This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of using this function.
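Purely as an illustration (and keeping the warning above in mind: prefer Lux.DistributedUtils), a sketch assuming LuxCUDA is loaded and more than one GPU is visible:

using LuxDeviceUtils, LuxCUDA

LuxDeviceUtils.set_device!(LuxCUDADevice, 2)   # select the second CUDA device (1-indexed)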
set_device!(T::Type{<:AbstractLuxDevice}, ::Nothing, rank::Integer)
Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.
Currently, LuxMetalDevice and LuxoneAPIDevice don't support setting the device.
Arguments
T::Type{<:AbstractLuxDevice}: The device type to set.
rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.
Danger
This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of using this function.
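As an illustrative sketch for a distributed launch (local_rank is hypothetical and would normally come from the launcher or communication backend; again, prefer Lux.DistributedUtils):

using LuxDeviceUtils, LuxCUDA

local_rank = 0   # hypothetical: supplied by the distributed launcher
LuxDeviceUtils.set_device!(LuxCUDADevice, nothing, local_rank)   # rank is 0-indexed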