LuxDeviceUtils
LuxDeviceUtils.jl is a lightweight package defining rules for transferring data across devices. Most users should directly use Lux.jl instead.
Index
LuxDeviceUtils.cpu_device
LuxDeviceUtils.default_device_rng
LuxDeviceUtils.get_device
LuxDeviceUtils.gpu_backend!
LuxDeviceUtils.gpu_device
LuxDeviceUtils.reset_gpu_device!
LuxDeviceUtils.set_device!
LuxDeviceUtils.supported_gpu_backends
Preferences
gpu_backend!() = gpu_backend!("")
gpu_backend!(backend) = gpu_backend!(string(backend))
gpu_backend!(backend::AbstractLuxGPUDevice)
gpu_backend!(backend::String)
Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.
If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.
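For instance, a minimal sketch of pinning and then clearing the preference (assuming "CUDA" is among the strings returned by supported_gpu_backends()):

```julia
using LuxDeviceUtils

# Persist "CUDA" as the preferred backend in LocalPreferences.toml.
gpu_backend!("CUDA")

# Passing an empty string deletes the `gpu_backend` preference,
# restoring automatic backend selection.
gpu_backend!("")
```

Remember that a newly set preference only takes effect after the Julia session is restarted.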
Data Transfer
cpu_device() -> LuxCPUDevice()
Return a LuxCPUDevice object which can be used to transfer data to the CPU.
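A short usage sketch:

```julia
using LuxDeviceUtils

cdev = cpu_device()           # LuxCPUDevice()
x = rand(Float32, 3) |> cdev  # data stays on (or moves to) the CPU
```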
gpu_device(device_id::Union{Nothing, Int}=nothing;
force_gpu_usage::Bool=false) -> AbstractLuxDevice()
Selects GPU device based on the following criteria:
1. If the gpu_backend preference is set and the backend is functional on the system, then that device is selected.
2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.
3. If no GPU device is functional and force_gpu_usage is false, then cpu_device() is invoked.
4. If nothing works, an error is thrown.
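Putting the selection logic together, a hedged sketch (assuming a trigger package such as LuxCUDA has been loaded so that a GPU backend can be functional):

```julia
using LuxDeviceUtils
# using LuxCUDA  # trigger package; required for the CUDA backend to be functional

dev = gpu_device()               # automatic selection; may fall back to cpu_device()
x = rand(Float32, 3, 4) |> dev   # transfer the array to the selected device

# Explicitly request the first device and error out if no GPU is functional:
# dev1 = gpu_device(1; force_gpu_usage=true)
```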
Arguments
device_id::Union{Nothing, Int}: The device id to select. If nothing, then we return the last selected device, or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Int, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).
Warning
device_id is only applicable for CUDA and AMDGPU backends. For Metal and CPU backends, device_id is ignored and a warning is printed.
Keyword Arguments
force_gpu_usage::Bool: If true, then an error is thrown if no functional GPU device is found.
Miscellaneous
reset_gpu_device!()
Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.
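A sketch of when this matters (the device chosen by gpu_device() is cached between calls):

```julia
using LuxDeviceUtils

dev = gpu_device()    # runs the selection and caches the result
reset_gpu_device!()   # clears the cached selection
dev = gpu_device()    # re-runs the automatic selection from scratch
```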
supported_gpu_backends() -> Tuple{String, ...}
Return a tuple of supported GPU backends.
Warning
This is not the list of functional backends on the system, but rather the backends which Lux.jl supports.
Danger
Metal.jl support is extremely experimental, and most things are not expected to work.
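For example:

```julia
using LuxDeviceUtils

backends = supported_gpu_backends()  # e.g. a tuple such as ("CUDA", "AMDGPU", ...)
println(backends)
```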
default_device_rng(::AbstractLuxDevice)
Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.
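A hedged sketch of generating data directly on a device (the concrete RNG type depends on which trigger package is loaded):

```julia
using LuxDeviceUtils, Random

dev = gpu_device()              # or cpu_device()
rng = default_device_rng(dev)   # device-appropriate RNG
x = rand(rng, Float32, 4, 4)    # array generated on that device
```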
get_device(x::AbstractArray) -> AbstractLuxDevice
Returns the device of the array x. Trigger packages must be loaded for this to return the correct device.
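A quick illustration:

```julia
using LuxDeviceUtils

x = rand(Float32, 4)
get_device(x)   # LuxCPUDevice()

# With a trigger package loaded (e.g. LuxCUDA), a CUDA array would
# instead return a LuxCUDADevice.
```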
set_device!(T::Type{<:AbstractLuxDevice}, dev_or_id)
Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, LuxMetalDevice doesn't support setting the device.
Arguments
T::Type{<:AbstractLuxDevice}: The device type to set.
dev_or_id: Can be the device from the corresponding package. For example, for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.
Danger
This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of using this function.
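A hedged sketch of the intended (experimental) usage, assuming LuxCUDA is loaded; in real code prefer Lux.DistributedUtils:

```julia
using LuxDeviceUtils
# using LuxCUDA  # trigger package; required for the call below to take effect

# Select the second CUDA device for this process (1-indexed):
# set_device!(LuxCUDADevice, 2)
```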
set_device!(T::Type{<:AbstractLuxDevice}, ::Nothing, rank::Int)
Set the device for the given type. This is a no-op for LuxCPUDevice. For LuxCUDADevice and LuxAMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, LuxMetalDevice doesn't support setting the device.
Arguments
T::Type{<:AbstractLuxDevice}
: The device type to set.rank::Int
: Local Rank of the process. This is applicable for distributed training and must be0
-indexed.
Danger
This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of using this function.