glasses.models.segmentation.unet package

Module contents

glasses.models.segmentation.unet.DownBlock

alias of glasses.models.segmentation.unet.UNetBasicBlock

class glasses.models.segmentation.unet.DownLayer(in_features: int, out_features: int, donwsample: bool = True, block: torch.nn.modules.module.Module = <class 'glasses.models.segmentation.unet.UNetBasicBlock'>, *args, **kwargs)[source]

Bases: torch.nn.modules.module.Module

UNet down layer (left side).

Parameters
  • in_features (int) – Number of input features

  • out_features – Number of output features

  • donwsample (bool, optional) – If True, max pooling is used to reduce the resolution of the input. Defaults to True.

  • block (nn.Module, optional) – Block used. Defaults to DownBlock.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
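
A minimal usage sketch; the tensor sizes below are illustrative assumptions, and with the default donwsample=True the layer should roughly halve the spatial resolution:

>>> import torch
>>> from glasses.models.segmentation.unet import DownLayer
>>> layer = DownLayer(in_features=64, out_features=128)
>>> out = layer(torch.randn(1, 64, 64, 64))
>>> # expected (assumption): out.shape == torch.Size([1, 128, 32, 32]), spatial size halved by max pooling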

forward(x: torch.Tensor) torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class glasses.models.segmentation.unet.UNet(in_channels: int = 1, n_classes: int = 2, encoder: glasses.models.base.Encoder = <class 'glasses.models.segmentation.unet.UNetEncoder'>, decoder: torch.nn.modules.module.Module = <class 'glasses.models.segmentation.unet.UNetDecoder'>, **kwargs)[source]

Bases: glasses.models.segmentation.base.SegmentationModule

Implementation of UNet as proposed in U-Net: Convolutional Networks for Biomedical Image Segmentation

Architecture diagram: https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/UNet.png?raw=true

Examples

Default models

>>> UNet()

You can easily customize your model

>>> # change activation
>>> UNet(activation=nn.SELU)
>>> # change number of classes (default is 2)
>>> UNet(n_classes=2)
>>> # change encoder
>>> unet = UNet(encoder=lambda *args, **kwargs: ResNet.resnet26(*args, **kwargs).encoder,)
>>> unet = UNet(encoder=lambda *args, **kwargs: EfficientNet.efficientnet_b2(*args, **kwargs).encoder,)
>>> # change decoder
>>> UNet(decoder=partial(UNetDecoder, widths=[256, 128, 64, 32, 16]))
>>> # pass a different block to decoder
>>> UNet(encoder=partial(UNetEncoder, block=SENetBasicBlock))
>>> # any *Encoder class can be used directly
>>> unet = UNet(encoder=partial(ResNetEncoder, block=ResNetBottleneckBlock, depths=[2,2,2,2]))
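
A forward-pass sketch on a dummy batch; the input size is an illustrative assumption, and because the convolutions are padded the output mask should keep the input's spatial size:

>>> import torch
>>> unet = UNet(in_channels=1, n_classes=2)
>>> x = torch.randn(1, 1, 384, 384)
>>> out = unet(x)
>>> # expected (assumption): out.shape == torch.Size([1, 2, 384, 384])
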
Parameters
  • in_channels (int, optional) – Number of channels in the input image. Defaults to 1.

  • n_classes (int, optional) – Number of output classes. Defaults to 2.

  • encoder (Encoder, optional) – Encoder used to extract the features. Defaults to UNetEncoder.

  • decoder (nn.Module, optional) – Decoder used to upsample the features back to the input resolution. Defaults to UNetDecoder.

Initializes internal Module state, shared by both nn.Module and ScriptModule.

training: bool
class glasses.models.segmentation.unet.UNetBasicBlock(in_features: int, out_features: int, activation: torch.nn.modules.module.Module = functools.partial(<class 'torch.nn.modules.activation.ReLU'>, inplace=True), *args, **kwargs)[source]

Bases: torch.nn.modules.container.Sequential

Basic block for UNet, composed of two consecutive 3x3 convolutions.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
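
A minimal sketch; the shapes are illustrative, and assuming the two 3x3 convolutions are padded the spatial size is preserved:

>>> import torch
>>> block = UNetBasicBlock(in_features=64, out_features=128)
>>> out = block(torch.randn(1, 64, 32, 32))
>>> # expected (assumption): out.shape == torch.Size([1, 128, 32, 32])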

class glasses.models.segmentation.unet.UNetDecoder(start_features: int = 512, widths: List[int] = [256, 128, 64, 32], lateral_widths: Optional[List[int]] = None, *args, **kwargs)[source]

Bases: torch.nn.modules.module.Module

UNet decoder composed of several upsampling layers that decrease the feature space and increase the spatial resolution.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
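
A minimal instantiation sketch; start_features and widths below are illustrative (mirroring the classic UNet configuration), and at forward time the decoder consumes the bottleneck tensor together with the encoder's residual feature maps:

>>> decoder = UNetDecoder(start_features=1024, widths=[512, 256, 128, 64])
>>> # at forward time the decoder takes the bottleneck tensor plus the encoder's
>>> # skip connections: out = decoder(x, residuals)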

forward(x: torch.Tensor, residuals: List[torch.Tensor]) torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
class glasses.models.segmentation.unet.UNetEncoder(in_channels: int, widths: List[int] = [64, 128, 256, 512, 1024], *args, **kwargs)[source]

Bases: glasses.models.base.Encoder

UNet encoder composed of several convolutional layers that increase the feature space and decrease the spatial resolution.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
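
A minimal sketch; the input size is an illustrative assumption, and with the default widths the returned feature map should have 1024 channels at a strongly reduced resolution:

>>> import torch
>>> encoder = UNetEncoder(in_channels=3)
>>> features = encoder(torch.randn(1, 3, 256, 256))
>>> # features is the bottleneck tensor (1024 channels with the default widths); the
>>> # intermediate maps are what the decoder later consumes as skip connections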

forward(x: torch.Tensor) torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
glasses.models.segmentation.unet.UpBlock

alias of glasses.models.segmentation.unet.UNetBasicBlock

class glasses.models.segmentation.unet.UpLayer(in_features: int, out_features: int, lateral_features: Optional[int] = None, block: torch.nn.modules.module.Module = <class 'glasses.models.segmentation.unet.UNetBasicBlock'>, *args, **kwargs)[source]

Bases: torch.nn.modules.module.Module

UNet up layer (right side).

Parameters
  • in_features (int) – Number of input features

  • out_features – Number of output features

  • block (nn.Module, optional) – Block used. Defaults to UpBlock.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
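
A minimal sketch; the channel and spatial sizes are illustrative assumptions, taking x as the coarse features from the layer below and res as the lateral skip tensor, with the layer assumed to upsample x by a factor of 2 before merging it with res:

>>> import torch
>>> up = UpLayer(in_features=256, out_features=128)
>>> x = torch.randn(1, 256, 16, 16)    # coarse features from the deeper layer
>>> res = torch.randn(1, 128, 32, 32)  # lateral skip connection from the encoder
>>> out = up(x, res)
>>> # expected (assumption): out.shape == torch.Size([1, 128, 32, 32])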

forward(x: torch.Tensor, res: torch.Tensor) torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool