echofilter.nn.modules package#
Submodules#
echofilter.nn.modules.activations module#
PyTorch activation functions.
Swish and Mish implementations taken from https://github.com/fastai/fastai2 under the Apache License Version 2.0.
- class echofilter.nn.modules.activations.HardMish(inplace=True)[source]#
Bases: torch.nn.modules.module.Module
A second-order approximation to the mish activation function.
Notes
https://forums.fast.ai/t/hard-mish-activation-function/59238
- extra_repr()[source]#
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
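For reference, the hard mish approximation discussed in the forum thread above is usually written as x/2 · min(2, max(0, x + 2)). A minimal functional sketch, assuming this class follows the same formulation:

```python
import torch

def hard_mish(x):
    # Piecewise approximation to mish:
    # hard_mish(x) = 0.5 * x * clamp(x + 2, 0, 2)
    return 0.5 * x * torch.clamp(x + 2, min=0.0, max=2.0)

x = torch.linspace(-4, 4, 9)
print(hard_mish(x))
```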
- class echofilter.nn.modules.activations.HardSwish(inplace=True)[source]#
Bases: torch.nn.modules.module.Module
A second-order approximation to the swish activation function.
See https://arxiv.org/abs/1905.02244
- extra_repr()[source]#
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
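The hard swish formulation from the MobileNetV3 paper cited above is x · relu6(x + 3) / 6. A minimal functional sketch, assuming this class uses the same approximation:

```python
import torch
import torch.nn.functional as F

def hard_swish(x):
    # hard_swish(x) = x * relu6(x + 3) / 6
    return x * F.relu6(x + 3) / 6

x = torch.linspace(-4, 4, 9)
print(hard_swish(x))
```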
- class echofilter.nn.modules.activations.Mish[source]#
Bases: torch.nn.modules.module.Module
Apply the mish function elementwise.
mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
See https://arxiv.org/abs/1908.08681
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class echofilter.nn.modules.activations.Swish[source]#
Bases: torch.nn.modules.module.Module
Apply the swish function elementwise.
swish(x) = x * sigmoid(x)
See https://arxiv.org/abs/1710.05941
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
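Swish is a one-line operation in PyTorch; a minimal functional sketch:

```python
import torch

def swish(x):
    # swish(x) = x * sigmoid(x)
    return x * torch.sigmoid(x)
```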
- echofilter.nn.modules.activations.mish(x)[source]#
Apply the mish function elementwise.
mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
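The formula above translates directly into PyTorch primitives; an equivalent functional sketch:

```python
import torch
import torch.nn.functional as F

def mish(x):
    # mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x)))
    return x * torch.tanh(F.softplus(x))
```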
- echofilter.nn.modules.activations.str2actfnfactory(actfn_name)[source]#
Map an activation function name to a factory which generates that activation function.
- Parameters
actfn_name (str) – Name of the activation function.
- Returns
A generator which yields a subclass of torch.nn.Module.
- Return type
callable
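For illustration, a typical call pattern (a sketch; it assumes the name "HardSwish", one of the classes in this module, is a recognised name):

```python
from echofilter.nn.modules.activations import str2actfnfactory

# Map the name to a factory, then instantiate the activation module
factory = str2actfnfactory("HardSwish")
actfn = factory()
```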
echofilter.nn.modules.blocks module#
Blocks of modules.
- class echofilter.nn.modules.blocks.MBConv(in_channels, out_channels=None, expansion=6, se_reduction=4, fused=False, residual=True, actfn='InplaceReLU', bias=False, **conv_args)[source]#
Bases: torch.nn.modules.module.Module
MobileNet style inverted residual block.
See https://arxiv.org/abs/1905.11946 and https://arxiv.org/abs/1905.02244.
- Parameters
in_channels (int) – Number of input channels.
out_channels (int, optional) – Number of output channels. Default is to match in_channels.
expansion (int or float, optional) – Expansion factor for the inverted-residual bottleneck. Default is 6.
se_reduction (int, optional) – Reduction factor for squeeze-and-excite block. Default is 4. Set to None or 0 to disable squeeze-and-excitation.
fused (bool, optional) – If True, the pointwise and depthwise convolution are fused together into a single regular convolution. Default is False (a depthwise separable convolution).
residual (bool, optional) – If True, the block is residual with a skip-through connection. Default is True.
actfn (str or callable, optional) – An activation class or similar generator. Default is an inplace ReLU activation. If this is a string, it is mapped to a generator with activations.str2actfnfactory.
bias (bool, optional) – If True, the main convolution has a bias term. Default is False. Note that the pointwise convolutions never have bias terms.
**conv_args – Additional arguments, such as kernel_size, stride, and padding, which will be passed to the convolution module.
- extra_repr()[source]#
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
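A usage sketch, assuming kernel_size is forwarded to the main convolution via **conv_args as described in the parameter list above:

```python
import torch
from echofilter.nn.modules.blocks import MBConv

# Residual inverted-bottleneck block; output channels default to in_channels
block = MBConv(in_channels=32, expansion=6, se_reduction=4, kernel_size=3)
x = torch.randn(1, 32, 64, 64)
y = block(x)  # same shape as x when residual=True and stride is 1
```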
- class echofilter.nn.modules.blocks.SqueezeExcite(in_channels, reduction=4, actfn='InplaceReLU')[source]#
Bases: torch.nn.modules.module.Module
Squeeze and excitation block.
See https://arxiv.org/abs/1709.01507
- Parameters
in_channels (int) – Number of input (and output) channels.
reduction (int or float, optional) – Compression factor for the number of channels in the squeeze and excitation attention module. Default is 4.
actfn (str or callable, optional) – An activation class or similar generator. Default is an inplace ReLU activation. If this is a string, it is mapped to a generator with activations.str2actfnfactory.
- forward(input)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
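To illustrate the mechanism from the paper cited above, here is a minimal, self-contained sketch of squeeze-and-excitation. This is not the echofilter implementation, which may differ in details such as the choice of activation:

```python
import torch
from torch import nn

class TinySqueezeExcite(nn.Module):
    def __init__(self, in_channels, reduction=4):
        super().__init__()
        hidden = max(1, in_channels // reduction)
        self.fc1 = nn.Linear(in_channels, hidden)
        self.fc2 = nn.Linear(hidden, in_channels)

    def forward(self, x):
        # Squeeze: global average pool over the spatial dimensions
        s = x.mean(dim=(2, 3))
        # Excite: bottleneck MLP producing per-channel gates in (0, 1)
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
        # Rescale the input channel-wise
        return x * s[:, :, None, None]
```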
echofilter.nn.modules.conv module#
Convolutional layers.
- class echofilter.nn.modules.conv.Conv2dSame(in_channels, out_channels, kernel_size, stride=1, padding='same', dilation=1, **kwargs)[source]#
Bases: torch.nn.modules.conv.Conv2d
2D Convolutions with same padding option.
Same padding will only produce an output size which matches the input size if the kernel size is odd and the stride is 1.
- bias: Optional[torch.Tensor]#
- weight: torch.Tensor#
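A usage sketch demonstrating the size-preserving behaviour (odd kernel size and stride 1, as noted above):

```python
import torch
from echofilter.nn.modules.conv import Conv2dSame

conv = Conv2dSame(in_channels=1, out_channels=8, kernel_size=5)
x = torch.randn(1, 1, 32, 32)
y = conv(x)
print(y.shape)  # torch.Size([1, 8, 32, 32]): spatial size unchanged
```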
- class echofilter.nn.modules.conv.DepthwiseConv2d(in_channels, kernel_size=3, stride=1, padding='same', dilation=1, **kwargs)[source]#
Bases: torch.nn.modules.conv.Conv2d
2D Depthwise Convolution.
- bias: Optional[torch.Tensor]#
- weight: torch.Tensor#
- class echofilter.nn.modules.conv.GaussianSmoothing(channels, kernel_size, sigma, padding='same', pad_mode='replicate', ndim=2)[source]#
Bases: torch.nn.modules.module.Module
Apply gaussian smoothing on a 1d, 2d or 3d tensor.
Filtering is performed separately for each channel in the input using a depthwise convolution.
- Parameters
channels (int or sequence) – Number of channels of the input tensors. Output will have this number of channels as well.
kernel_size (int or sequence) – Size of the gaussian kernel.
sigma (float or sequence) – Standard deviation of the gaussian kernel.
padding (int or sequence or "same", optional) – Amount of padding to use, for each side of each dimension. If this is "same" (default), the amount of padding will be set automatically to ensure the size of the tensor is unchanged.
pad_mode (str, optional) – Padding mode. See torch.nn.functional.pad() for options. Default is "replicate".
ndim (int, optional) – The number of dimensions of the data. Default value is 2 (spatial).
- forward(input)[source]#
Apply gaussian filter to input.
- Parameters
input (torch.Tensor) – Input to apply gaussian filter on.
- Returns
filtered – Filtered output, the same size as the input.
- Return type
torch.Tensor
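A usage sketch based on the signature above:

```python
import torch
from echofilter.nn.modules.conv import GaussianSmoothing

# Smooth each of the 2 channels independently with a 5x5 gaussian kernel
smoother = GaussianSmoothing(channels=2, kernel_size=5, sigma=1.0)
x = torch.randn(1, 2, 40, 40)
y = smoother(x)  # same spatial size as x, since padding="same"
```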
- class echofilter.nn.modules.conv.PointwiseConv2d(in_channels, out_channels, **kwargs)[source]#
Bases: torch.nn.modules.conv.Conv2d
2D Pointwise Convolution.
- bias: Optional[torch.Tensor]#
- weight: torch.Tensor#
echofilter.nn.modules.pathing module#
Connectors and pathing modules.
- class echofilter.nn.modules.pathing.FlexibleConcat2d[source]#
Bases: torch.nn.modules.module.Module
Concatenate two inputs of nearly the same shape.
- forward(x1, x2)[source]#
Forward step.
- Parameters
x1 (torch.Tensor) – Tensor, possibly smaller than x2.
x2 (torch.Tensor) – Tensor, at least as large as x1.
- Returns
Concatenated x1 (padded if necessary) and x2, along dimension 1.
- Return type
torch.Tensor
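A usage sketch based on the behaviour described above:

```python
import torch
from echofilter.nn.modules.pathing import FlexibleConcat2d

cat = FlexibleConcat2d()
x1 = torch.randn(1, 8, 31, 31)  # slightly smaller than x2
x2 = torch.randn(1, 8, 32, 32)
y = cat(x1, x2)  # x1 is padded to match, then concatenated along dim 1
print(y.shape)   # torch.Size([1, 16, 32, 32])
```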
- class echofilter.nn.modules.pathing.ResidualConnect(in_channels, out_channels)[source]#
Bases: torch.nn.modules.module.Module
Join up a residual connection, correcting for changes in the number of channels.
- forward(residual, passed_thru)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
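A usage sketch; it assumes in_channels refers to the residual input and out_channels to the passed-through path:

```python
import torch
from echofilter.nn.modules.pathing import ResidualConnect

# Join a 16-channel residual onto a 32-channel main path
joiner = ResidualConnect(in_channels=16, out_channels=32)
residual = torch.randn(1, 16, 24, 24)
passed_thru = torch.randn(1, 32, 24, 24)
y = joiner(residual, passed_thru)
```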
echofilter.nn.modules.utils module#
nn.modules utility functions.
- echofilter.nn.modules.utils.init_cnn(m)[source]#
Initialize biases and weights for a CNN layer.
Uses a Kaiming normal distribution for the weight and 0 for biases.
Function is applied recursively within the module.
- Parameters
m (torch.nn.Module) – Module to initialize.
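A usage sketch; since the function is applied recursively within the module, it can be called directly on a whole network:

```python
from torch import nn
from echofilter.nn.modules.utils import init_cnn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),
)
init_cnn(model)  # Kaiming-normal weights, zeroed biases, applied recursively
```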
- echofilter.nn.modules.utils.same_to_padding(kernel_size, stride=1, dilation=1, ndim=None)[source]#
Determine the amount of padding to use for a convolutional layer.
- Parameters
kernel_size (int or sequence) – Size of kernel for each dimension.
stride (int or sequence, optional) – Amount of stride to apply in each dimension of the kernel. If stride is an int, the same value is applied for each dimension. Default is 1.
dilation (int or sequence, optional) – Amount of dilation to apply in each dimension of the kernel. If dilation is an int, the same value is applied for each dimension. Default is 1.
ndim (int or None, optional) – Number of dimensions of kernel to pad. If None (default), the number of dimensions is inferred from the number of dimensions of kernel_size.
- Returns
padding – Amount of padding to apply to each dimension before convolving with the kernel in order to preserve the size of input.
- Return type
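A usage sketch: for stride 1 and no dilation, "same" padding for a kernel of size k is (k - 1) // 2 per side (the exact container type of the return value may vary):

```python
from echofilter.nn.modules.utils import same_to_padding

# Padding so a 5x3 kernel with stride 1 preserves the input size
padding = same_to_padding(kernel_size=(5, 3))
print(padding)  # expected to correspond to (2, 1)
```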