
MLDataDevices

MLDataDevices.jl is a lightweight package defining rules for transferring data across devices.

Preferences

MLDataDevices.gpu_backend! Function
julia
gpu_backend!() = gpu_backend!("")
gpu_backend!(backend) = gpu_backend!(string(backend))
gpu_backend!(backend::AbstractGPUDevice)
gpu_backend!(backend::String)

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.
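
A minimal sketch of pinning and then clearing the preference (assuming you want the CUDA backend; any name from supported_gpu_backends() works):

julia
using MLDataDevices

gpu_backend!("CUDA")  # writes the `gpu_backend` preference to LocalPreferences.toml
# restart the Julia session for the change to take effect
gpu_backend!("")      # deletes the preference again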

source

Data Transfer

MLDataDevices.cpu_device Function
julia
cpu_device() -> CPUDevice()

Return a CPUDevice object which can be used to transfer data to CPU.
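
For example, the returned device object can be called directly on arrays (or nested structures of arrays):

julia
using MLDataDevices

dev = cpu_device()
x = dev(rand(Float32, 3, 4))  # data is already on the CPU, so this is effectively a no-op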

source
MLDataDevices.gpu_device Function
julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
    force::Bool=false) -> AbstractDevice

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device; if none was selected, we run the autoselection and choose the current device using CUDA.device(), AMDGPU.device(), or similar. If an Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl; for example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Warning

gpu_device won't select a CUDA device unless both CUDA.jl and cuDNN.jl are loaded. This is to ensure that deep learning operations work correctly. Nonetheless, if cuDNN is not loaded you can still manually create a CUDADevice object and use it (e.g. dev = CUDADevice()).

Keyword Arguments

  • force::Bool: If true, then an error is thrown if no functional GPU device is found.
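
A short usage sketch (assumes a functional GPU backend such as CUDA.jl + cuDNN.jl is loaded; otherwise the call falls back to cpu_device() unless force=true):

julia
using MLDataDevices
# using LuxCUDA                    # load a trigger package so a CUDA device can be selected

dev = gpu_device()                 # automatic backend selection
x_dev = dev(rand(Float32, 3, 4))   # transfer data to the selected device

# dev2 = gpu_device(2)             # explicitly select the second device (1-indexed; CUDA/AMDGPU only)
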
source
MLDataDevices.reactant_device Function
julia
reactant_device(;
    force::Bool=false, client=missing, device=missing, sharding=missing
) -> Union{ReactantDevice, CPUDevice}

Return a ReactantDevice object if functional. Otherwise, an error is thrown if force is true, and we fall back to CPUDevice if force is false.

client and device are used to specify the client and particular device to use. If not specified, then the default client and index are used.

sharding is used to specify the sharding strategy. If a Reactant.Sharding.AbstractSharding is specified, then we use it to shard all abstract arrays. Alternatively, pass in an IdDict to specify the sharding for specific leaves.
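
A minimal sketch (assumes Reactant.jl is loaded and functional; otherwise the fallback behavior described above applies):

julia
using MLDataDevices
# using Reactant                   # trigger package required for ReactantDevice to be functional

dev = reactant_device()            # returns CPUDevice() if Reactant is not functional
x_ra = dev(rand(Float32, 3, 4))    # move the data under the Reactant device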

source

Miscellaneous

MLDataDevices.reset_gpu_device! Function
julia
reset_gpu_device!()

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.
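
For example, to re-run the automatic selection after loading a different trigger package:

julia
using MLDataDevices

dev = gpu_device()    # the selected device is cached
reset_gpu_device!()   # clear the cached selection
dev = gpu_device()    # re-runs the automatic backend selection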

source
MLDataDevices.supported_gpu_backends Function
julia
supported_gpu_backends() -> Tuple{String, ...}

Return a tuple of supported GPU backends.

Warning

This is not the list of functional backends on the system, but rather backends which MLDataDevices.jl supports.
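
For example (the exact contents of the tuple may differ between package versions):

julia
using MLDataDevices

supported_gpu_backends()  # e.g. ("CUDA", "AMDGPU", "Metal", "oneAPI")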

source
MLDataDevices.default_device_rng Function
julia
default_device_rng(::AbstractDevice)

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.
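
A sketch of sampling directly on a device (assumes a functional GPU backend is loaded; with cpu_device() the default CPU RNG is returned):

julia
using MLDataDevices

dev = gpu_device()
rng = default_device_rng(dev)
w = rand(rng, Float32, 4, 4)  # parameters sampled directly on the device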

source
MLDataDevices.get_device Function
julia
get_device(x) -> dev::AbstractDevice | Exception | Nothing

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice() – denotes that the device type is unknown.

See also get_device_type for a faster alternative that can be used for dispatch based on device type.
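
For example:

julia
using MLDataDevices

get_device(rand(Float32, 3))          # CPUDevice()
get_device((; a=rand(3), b=rand(3)))  # CPUDevice(), since all leaves share a device
get_device(1:10)                      # nothing, ranges are device agnostic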

source
MLDataDevices.get_device_type Function
julia
get_device_type(x) -> Type{<:AbstractDevice} | Exception | Type{Nothing}

Similar to get_device but returns the type of the device instead of the device itself. This value is often a compile-time constant and is recommended over get_device when defining dispatches based on the device type.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • Nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice – denotes that the device type is unknown.
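
A sketch of dispatching on the device type (the helper name process is only illustrative):

julia
using MLDataDevices

process(x) = process(get_device_type(x), x)
process(::Type{CPUDevice}, x) = "cpu path"
process(::Type, x) = "other device path"  # fallback for any other device type

process(rand(Float32, 3))                 # "cpu path"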

source
MLDataDevices.loaded Function
julia
loaded(x::AbstractDevice) -> Bool
loaded(::Type{<:AbstractDevice}) -> Bool

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

  • CUDA.jl and cuDNN.jl (or just LuxCUDA.jl) for NVIDIA CUDA Support.

  • AMDGPU.jl for AMD GPU ROCM Support.

  • Metal.jl for Apple Metal GPU Support.

  • oneAPI.jl for Intel oneAPI GPU Support.
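
For example (the device type can be queried without constructing an instance):

julia
using MLDataDevices

MLDataDevices.loaded(CUDADevice)    # false until CUDA.jl and cuDNN.jl (or LuxCUDA.jl) are loaded

# using LuxCUDA
# MLDataDevices.loaded(CUDADevice)  # true once the trigger packages are loaded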

source
MLDataDevices.functional Function
julia
functional(x::AbstractDevice) -> Bool
functional(::Type{<:AbstractDevice}) -> Bool

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via MLDataDevices.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.
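
For example, selecting a device only when the backend is actually usable:

julia
using MLDataDevices

dev = MLDataDevices.functional(CUDADevice) ? gpu_device() : cpu_device()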

source
MLDataDevices.isleaf Function
julia
isleaf(x) -> Bool

Returns true if x is a leaf node in the data structure.

Defining MLDataDevices.isleaf(x::T) = true for custom types can be used to customize the data movement behavior when an object with nested structure containing the type is transferred to a device.

Adapt.adapt_structure(::AbstractDevice, x::T) will be called during data movement if isleaf(x::T) returns true.

If MLDataDevices.isleaf(x::T) is not defined, then it will fall back to Functors.isleaf(x).
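
A sketch with a hypothetical BoundingBox wrapper that should be moved to a device as a single unit:

julia
using MLDataDevices, Adapt

struct BoundingBox{T <: AbstractVector}
    mins::T
    maxs::T
end

# treat the whole struct as a leaf during device transfer
MLDataDevices.isleaf(::BoundingBox) = true

# called during data movement because `isleaf` returns true
Adapt.adapt_structure(dev::MLDataDevices.AbstractDevice, bb::BoundingBox) =
    BoundingBox(dev(bb.mins), dev(bb.maxs))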

source

Multi-GPU Support

MLDataDevices.set_device! Function
julia
set_device!(T::Type{<:AbstractDevice}, dev_or_id)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • dev_or_id: Can be the device from the corresponding package. For example for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.

Danger

This function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of calling this function directly.
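
A sketch of what a call looks like (experimental, as noted above; prefer Lux.DistributedUtils in practice):

julia
using MLDataDevices
# using LuxCUDA                            # the trigger package must be loaded for this to take effect

MLDataDevices.set_device!(CUDADevice, 1)   # select the first CUDA device (1-indexed)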

source
julia
set_device!(T::Type{<:AbstractDevice}, ::Nothing, rank::Integer)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such, please use Lux.DistributedUtils instead of calling this function directly.

source

Iteration

MLDataDevices.DeviceIterator Type
julia
DeviceIterator(dev::AbstractDevice, iterator)

Create a DeviceIterator that iterates through the provided iterator via iterate. Upon each iteration, the current batch is copied to the device dev, and the previous iteration is marked as freeable from GPU memory via unsafe_free! (a no-op for a CPU device).

The conversion follows the same semantics as dev(<item from iterator>).

Similarity to CUDA.CuIterator

The design inspiration was taken from CUDA.CuIterator and was generalized to work with other backends and more complex iterators (using Functors).

MLUtils.DataLoader

Calling dev(::MLUtils.DataLoader) will automatically convert the dataloader to use the same semantics as DeviceIterator. This is generally preferred over looping over the dataloader directly and transferring the data to the device.

Examples

The following was run on a computer with an NVIDIA GPU.

julia
julia> using MLDataDevices, MLUtils

julia> X = rand(Float64, 3, 33);

julia> dataloader = DataLoader(X; batchsize=13, shuffle=false);

julia> for (i, x) in enumerate(dataloader)
           @show i, summary(x)
       end
(i, summary(x)) = (1, "3×13 Matrix{Float64}")
(i, summary(x)) = (2, "3×13 Matrix{Float64}")
(i, summary(x)) = (3, "3×7 Matrix{Float64}")

julia> for (i, x) in enumerate(CUDADevice()(dataloader))
           @show i, summary(x)
       end
(i, summary(x)) = (1, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
(i, summary(x)) = (2, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
(i, summary(x)) = (3, "3×7 CuArray{Float32, 2, CUDA.DeviceMemory}")
source
