
LuxDeviceUtils

LuxDeviceUtils.jl is a lightweight package defining rules for transferring data across devices. Most users should use Lux.jl directly instead.

Preferences

# LuxDeviceUtils.gpu_backend! (Function)

```julia
gpu_backend!() = gpu_backend!("")
gpu_backend!(backend) = gpu_backend!(string(backend))
gpu_backend!(backend::AbstractLuxGPUDevice)
gpu_backend!(backend::String)
```

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.
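
For example, a minimal sketch ("CUDA" stands in for any valid backend name; see supported_gpu_backends()):

```julia
using LuxDeviceUtils

# Persist the preferred backend in LocalPreferences.toml.
gpu_backend!("CUDA")

# Passing "" deletes the stored preference again.
gpu_backend!("")
```

Remember that a change only takes effect after the Julia session is restarted.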

source


Data Transfer

# LuxDeviceUtils.cpu_device (Function)

```julia
cpu_device() -> LuxCPUDevice()
```

Return a LuxCPUDevice object which can be used to transfer data to the CPU.
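
Device objects are callable, so transferring data is a plain function application. A minimal sketch:

```julia
using LuxDeviceUtils

cdev = cpu_device()

# Applying the device moves data onto it; for an Array that is
# already on the CPU this is effectively a no-op.
x = cdev(rand(Float32, 3))
```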

source


# LuxDeviceUtils.gpu_device (Function)

```julia
gpu_device(device_id::Union{Nothing, Int}=nothing;
    force_gpu_usage::Bool=false) -> AbstractLuxDevice()
```

Selects a GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force_gpu_usage is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Int}: The device id to select. If nothing, we return the last selected device; if none has been selected yet, automatic selection runs and the current device is chosen via CUDA.device(), AMDGPU.device(), or similar. If an Int, the device with that id is selected. Note that ids are 1-indexed, in contrast to the 0-indexed CUDA.jl: for example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal and CPU backends, device_id is ignored and a warning is printed.

Keyword Arguments

  • force_gpu_usage::Bool: If true, then an error is thrown if no functional GPU device is found.
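
A minimal usage sketch, assuming a CUDA setup with the LuxCUDA trigger package installed:

```julia
using LuxDeviceUtils, LuxCUDA  # LuxCUDA is the CUDA trigger package

gdev = gpu_device()                 # automatic selection (criteria above)
x_gpu = gdev(rand(Float32, 3, 4))   # transfer an array to the selected GPU

gdev2 = gpu_device(2)               # explicitly select device 2 (1-indexed)
gpu_device(; force_gpu_usage=true)  # throw instead of falling back to CPU
```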

source


Miscellaneous

# LuxDeviceUtils.reset_gpu_device! (Function)

```julia
reset_gpu_device!()
```

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.
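
A short sketch of when this matters: gpu_device() caches its selection, so a reset forces selection to run again (for example after loading a trigger package):

```julia
using LuxDeviceUtils

gpu_device()         # runs (and caches) device selection
reset_gpu_device!()  # clear the cached selection
gpu_device()         # selection runs again from scratch
```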

source


# LuxDeviceUtils.supported_gpu_backends (Function)

```julia
supported_gpu_backends() -> Tuple{String, ...}
```

Return a tuple of supported GPU backends.

Warning

This is not the list of backends that are functional on the current system, but rather the backends that Lux.jl supports.

Danger

Metal.jl support is extremely experimental and most things are not expected to work.
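
For illustration (the exact contents of the tuple depend on the LuxDeviceUtils version):

```julia
using LuxDeviceUtils

# Lists backend names Lux.jl knows about, functional on this system or not.
supported_gpu_backends()  # e.g. ("CUDA", "AMDGPU", "Metal")
```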

source


# LuxDeviceUtils.default_device_rng (Function)

```julia
default_device_rng(::AbstractLuxDevice)
```

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.
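
A minimal sketch, assuming WeightInitializers.jl is available and a functional GPU backend is loaded (glorot_uniform is one of its standard initializers):

```julia
using LuxDeviceUtils, WeightInitializers

dev = gpu_device()
rng = default_device_rng(dev)

# Parameters are generated directly on the device,
# avoiding a separate CPU -> GPU transfer.
W = glorot_uniform(rng, 32, 16)
```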

source


# LuxDeviceUtils.get_device (Function)

```julia
get_device(x::AbstractArray) -> AbstractLuxDevice
```

Returns the device of the array x. The trigger package for the corresponding backend must be loaded for this to return the correct device.
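
A minimal sketch; the commented lines assume the LuxCUDA trigger package and a functional CUDA GPU:

```julia
using LuxDeviceUtils

get_device(rand(Float32, 3)) isa LuxCPUDevice  # true

# With the trigger package loaded, GPU arrays are identified correctly:
# using LuxCUDA
# get_device(cu(rand(Float32, 3))) isa LuxCUDADevice  # true
```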

source


# LuxDeviceUtils.set_device! (Function)

```julia
set_device!(T::Type{<:AbstractLuxDevice}, dev_or_id)
```

Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, LuxMetalDevice doesn't support setting the device.

Arguments

  • T::Type{<:AbstractLuxDevice}: The device type to set.

  • dev_or_id: Either a device object from the corresponding package (for example, a CuDevice for CUDA) or an integer device id to set. Integer ids are 1-indexed.

Danger

This function should be considered experimental and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of calling it directly.
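
For completeness, a minimal sketch of the direct call, assuming the LuxCUDA trigger package is loaded:

```julia
using LuxDeviceUtils, LuxCUDA

# Select the second CUDA device; ids are 1-indexed,
# so this corresponds to CUDA.device!(1).
set_device!(LuxCUDADevice, 2)
```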

source

```julia
set_device!(T::Type{<:AbstractLuxDevice}, ::Nothing, rank::Int)
```

Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, LuxMetalDevice doesn't support setting the device.

Arguments

  • T::Type{<:AbstractLuxDevice}: The device type to set.

  • rank::Int: Local rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This function should be considered experimental and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of calling it directly.
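
A minimal sketch of the distributed form, assuming the LuxCUDA trigger package is loaded and a hypothetical local_rank supplied by the launcher:

```julia
using LuxDeviceUtils, LuxCUDA

local_rank = 0  # 0-indexed local rank, e.g. provided by MPI or the launcher
set_device!(LuxCUDADevice, nothing, local_rank)
```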

source