WeightInitializers
This package is a lightweight dependency providing common weight-initialization schemes for deep learning models.
API Reference
# WeightInitializers.zeros32 — Function
```julia
zeros32([::AbstractRNG=_default_rng()], size...) -> Array{Float32, length(size)}
```
Return an `Array{Float32}` of zeros of the given `size` (the `rng` argument is ignored).
# WeightInitializers.ones32 — Function
```julia
ones32([::AbstractRNG=_default_rng()], size...) -> Array{Float32, length(size)}
```
Return an `Array{Float32}` of ones of the given `size` (the `rng` argument is ignored).
# WeightInitializers.rand32 — Function
```julia
rand32([::AbstractRNG=_default_rng()], size...) -> Array{Float32, length(size)}
```
Return an `Array{Float32}` of the given `size`, with entries drawn from a uniform distribution on `[0, 1)`.
# WeightInitializers.randn32 — Function
```julia
randn32([::AbstractRNG=_default_rng()], size...) -> Array{Float32, length(size)}
```
Return an `Array{Float32}` of the given `size`, with entries drawn from a standard normal distribution.
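A quick usage sketch for these four basic constructors (assuming the package is loaded; `Random.default_rng()` stands in for the internal `_default_rng()`):

```julia
using WeightInitializers, Random

rng = Random.default_rng()

b = zeros32(rng, 4)     # 4-element Array{Float32} of zeros; rng is ignored
o = ones32(3, 4)        # the rng argument is optional
U = rand32(rng, 3, 4)   # entries uniform on [0, 1)
N = randn32(rng, 3, 4)  # entries from a standard normal

size(U)    # (3, 4)
eltype(N)  # Float32
```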
# WeightInitializers.glorot_normal — Function
```julia
glorot_normal([::AbstractRNG=_default_rng()], [T=Float32], size...;
    gain = 1) -> Array{T, length(size)}
```
Return an `Array{T}` of the given `size` containing random numbers drawn from a normal distribution with standard deviation `gain * sqrt(2 / (fan_in + fan_out))`. This method is described in [1] and also known as Xavier initialization.
**References**
[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." *Proceedings of the thirteenth international conference on artificial intelligence and statistics*. 2010.
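As a quick sanity check (a sketch, assuming the package is loaded and a Dense-style `(out, in)` weight matrix, so `fan_in + fan_out = 50 + 100 = 150`):

```julia
using WeightInitializers, Statistics

W = glorot_normal(Float32, 100, 50)  # gain defaults to 1
# Expected std: gain * sqrt(2 / (fan_in + fan_out)) = sqrt(2 / 150) ≈ 0.1155
std(W)  # close to 0.1155, up to sampling noise
```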
# WeightInitializers.glorot_uniform — Function
```julia
glorot_uniform([::AbstractRNG=_default_rng()], [T=Float32], size...;
    gain = 1) -> Array{T, length(size)}
```
Return an `Array{T}` of the given `size` containing random numbers drawn from a uniform distribution on the interval $[-x, x]$, where `x = gain * sqrt(6 / (fan_in + fan_out))`. This method is described in [1] and also known as Xavier initialization.
**References**
[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." *Proceedings of the thirteenth international conference on artificial intelligence and statistics*. 2010.
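A sketch of the interval bound (assuming the package is loaded; for a `100×50` matrix, `x = sqrt(6/150)` is exactly `0.2`):

```julia
using WeightInitializers

W = glorot_uniform(Float32, 100, 50)  # gain defaults to 1
x = sqrt(6 / (100 + 50))              # half-width of the interval: 0.2
all(w -> abs(w) <= x + 1f-6, W)       # entries lie in [-x, x] (small slack for Float32 rounding)
```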
# WeightInitializers.kaiming_normal — Function
```julia
kaiming_normal([::AbstractRNG=_default_rng()], [T=Float32], size...;
    gain = √T(2)) -> Array{T, length(size)}
```
Return an `Array{T}` of the given `size` containing random numbers drawn from a normal distribution with standard deviation `gain / sqrt(fan_in)`. This method is described in [1].
**References**
[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." *Proceedings of the IEEE international conference on computer vision*. 2015.
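A sketch of the resulting scale (assuming the package is loaded; for a `(out, in) = (256, 128)` matrix, `fan_in = 128`):

```julia
using WeightInitializers, Statistics

W = kaiming_normal(Float32, 256, 128)  # gain defaults to √2
# Expected std: gain / sqrt(fan_in) = sqrt(2) / sqrt(128) = 0.125
std(W)  # close to 0.125, up to sampling noise
```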
# WeightInitializers.kaiming_uniform — Function
```julia
kaiming_uniform([::AbstractRNG=_default_rng()], [T=Float32], size...;
    gain = √T(2)) -> Array{T, length(size)}
```
Return an `Array{T}` of the given `size` containing random numbers drawn from a uniform distribution on the interval `[-x, x]`, where `x = gain * sqrt(3/fan_in)`. This method is described in [1].
**References**
[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." *Proceedings of the IEEE international conference on computer vision*. 2015.
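A sketch of the interval bound (assuming the package is loaded; `fan_in = 128` for a `(256, 128)` matrix):

```julia
using WeightInitializers

W = kaiming_uniform(Float32, 256, 128)  # gain defaults to √2
x = sqrt(2) * sqrt(3 / 128)             # gain * sqrt(3 / fan_in) ≈ 0.2165
all(w -> abs(w) <= x + 1f-6, W)         # entries stay within [-x, x]
```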
# WeightInitializers.truncated_normal — Function
```julia
truncated_normal([::AbstractRNG=_default_rng()], [T=Float32], size...; mean = 0, std = 1,
    lo = -2, hi = 2) -> Array{T, length(size)}
```
Return an `Array{T}` of the given `size` where each element is drawn from a truncated normal distribution: a normal distribution with the given `mean` and `std`, restricted to the interval `[lo, hi]`. The numbers are distributed like `filter(x -> lo ≤ x ≤ hi, mean .+ std .* randn(100))`.
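A sketch with tighter-than-default bounds (assuming the package is loaded; note that `lo` and `hi` are absolute bounds, not multiples of `std`):

```julia
using WeightInitializers

W = truncated_normal(Float32, 16, 16; mean = 0, std = 0.5, lo = -1, hi = 1)
all(w -> -1 <= w <= 1, W)  # no sample escapes the truncation bounds
extrema(W)                 # both extremes lie inside [-1, 1]
```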