
LuxDeviceUtils

LuxDeviceUtils.jl is a lightweight package defining rules for transferring data across devices. Most users should use Lux.jl directly instead.

Transition to MLDataDevices.jl

This package is currently in maintenance mode and won't receive any new features; however, bug fixes will be backported until Lux v1.0 is released. After that, this package should be considered deprecated, and users should switch to MLDataDevices.jl.

For more information on MLDataDevices.jl, check out the MLDataDevices.jl Documentation.


Preferences

# LuxDeviceUtils.gpu_backend!

```julia
gpu_backend!() = gpu_backend!("")
gpu_backend!(backend) = gpu_backend!(string(backend))
gpu_backend!(backend::AbstractLuxGPUDevice)
gpu_backend!(backend::String)
```

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.
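For example, pinning the backend to CUDA and later reverting to automatic selection might look like this (a sketch; any of the supported backend names can be used):

```julia
using LuxDeviceUtils

gpu_backend!("CUDA")  # writes the gpu_backend preference to LocalPreferences.toml
# ... restart the Julia session for the preference to take effect ...

gpu_backend!("")      # deletes the preference, restoring automatic backend selection
```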

source


Data Transfer

# LuxDeviceUtils.cpu_device

```julia
cpu_device() -> LuxCPUDevice()
```

Return a LuxCPUDevice object which can be used to transfer data to CPU.
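The returned device object is callable, so data is moved by applying it to an array or a nested structure of arrays (a minimal sketch):

```julia
using LuxDeviceUtils

cdev = cpu_device()                 # LuxCPUDevice()
x_cpu = cdev(rand(Float32, 3, 4))   # a no-op for data already on the CPU
```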

source


# LuxDeviceUtils.gpu_device

```julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
    force_gpu_usage::Bool=false) -> AbstractLuxDevice()
```

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force_gpu_usage is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Keyword Arguments

  • force_gpu_usage::Bool: If true, then an error is thrown if no functional GPU device is found.
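A typical usage sketch, assuming a trigger package such as LuxCUDA has been loaded first:

```julia
using LuxDeviceUtils
# using LuxCUDA  # load the trigger package for the backend you want

gdev = gpu_device()               # falls back to cpu_device() if no GPU is functional
x = gdev(rand(Float32, 3, 4))     # move data to the selected device

gdev2 = gpu_device(2)             # select device id 2 (1-indexed; CUDA/AMDGPU only)
# gpu_device(; force_gpu_usage=true)  # throws if no functional GPU is found
```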

source


Miscellaneous

# LuxDeviceUtils.reset_gpu_device!

```julia
reset_gpu_device!()
```

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.

source


# LuxDeviceUtils.supported_gpu_backends

```julia
supported_gpu_backends() -> Tuple{String, ...}
```

Return a tuple of supported GPU backends.
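A quick sketch of inspecting the supported backends (the exact tuple may differ across package versions):

```julia
using LuxDeviceUtils

supported_gpu_backends()  # e.g. ("CUDA", "AMDGPU", "Metal", "oneAPI")
```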

Warning

This is not the list of functional backends on the system, but rather backends which Lux.jl supports.

Danger

Metal.jl and oneAPI.jl support is extremely experimental and most things are not expected to work.

source


# LuxDeviceUtils.default_device_rng

```julia
default_device_rng(::AbstractLuxDevice)
```

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.
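For instance, generating parameters directly on the selected device with WeightInitializers.jl might look like this (a sketch, assuming a functional GPU backend is loaded):

```julia
using LuxDeviceUtils, WeightInitializers

gdev = gpu_device()
rng = default_device_rng(gdev)   # e.g. a CUDA RNG when the CUDA backend is active
W = glorot_uniform(rng, 32, 16)  # parameters generated directly on the device
```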

source


# LuxDeviceUtils.get_device

```julia
get_device(x) -> dev::AbstractLuxDevice | Exception | nothing
```

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device-agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Warning

RNG types currently don't participate in device determination. We will remove this restriction in the future.

See also get_device_type for a faster alternative that can be used for dispatch based on device type.
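A minimal sketch with CPU arrays and a nested parameter structure:

```julia
using LuxDeviceUtils

x = rand(Float32, 4)
get_device(x)                          # LuxCPUDevice()

ps = (weight=rand(2, 2), bias=rand(2))
get_device(ps)                         # LuxCPUDevice(), since all leaves share a device
```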

source


# LuxDeviceUtils.get_device_type

```julia
get_device_type(x) -> Type{<:AbstractLuxDevice} | Exception | Type{Nothing}
```

Similar to get_device, but returns the type of the device rather than the device itself. This value is often a compile-time constant and should be preferred over get_device when defining dispatches based on the device type.
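A sketch of such a dispatch (the helper names here are illustrative, not part of the API):

```julia
using LuxDeviceUtils

# Hypothetical helper that picks an implementation based on the device type
fast_sum(x) = _fast_sum(get_device_type(x), x)
_fast_sum(::Type{LuxCPUDevice}, x) = sum(x)          # CPU-specific path
_fast_sum(::Type{<:AbstractLuxDevice}, x) = sum(x)   # generic fallback for other devices
```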

Note

Trigger Packages must be loaded for this to return the correct device.

Warning

RNG types currently don't participate in device determination. We will remove this restriction in the future.

source


# LuxDeviceUtils.loaded

```julia
loaded(x::AbstractLuxDevice) -> Bool
loaded(::Type{<:AbstractLuxDevice}) -> Bool
```

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

  • LuxCUDA.jl for NVIDIA CUDA Support.

  • AMDGPU.jl for AMD ROCm GPU Support.

  • Metal.jl for Apple Metal GPU Support.

  • oneAPI.jl for Intel oneAPI GPU Support.
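For example (a sketch; like functional, this function is not exported and is called with its qualified name):

```julia
using LuxDeviceUtils

LuxDeviceUtils.loaded(LuxCUDADevice)  # false until `using LuxCUDA` has been run
```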

source


# LuxDeviceUtils.functional

```julia
functional(x::AbstractLuxDevice) -> Bool
functional(::Type{<:AbstractLuxDevice}) -> Bool
```

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via LuxDeviceUtils.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.
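A sketch of checking for a usable CUDA device, assuming LuxCUDA is available:

```julia
using LuxDeviceUtils, LuxCUDA  # the trigger package must be loaded first

LuxDeviceUtils.functional(LuxCUDADevice)  # true only if a usable CUDA GPU is present
```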

source


Multi-GPU Support

# LuxDeviceUtils.set_device!

```julia
set_device!(T::Type{<:AbstractLuxDevice}, dev_or_id)
```

Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, LuxMetalDevice and LuxoneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractLuxDevice}: The device type to set.

  • dev_or_id: Can be the device from the corresponding package. For example for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.

Danger

This function should be considered experimental at this point; it is currently provided to support distributed training in Lux. Please use Lux.DistributedUtils instead of calling this function directly.

source

```julia
set_device!(T::Type{<:AbstractLuxDevice}, ::Nothing, rank::Integer)
```

Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, LuxMetalDevice and LuxoneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractLuxDevice}: The device type to set.

  • rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This function should be considered experimental at this point; it is currently provided to support distributed training in Lux. Please use Lux.DistributedUtils instead of calling this function directly.

source