glasses.nn.regularization package

Module contents

class glasses.nn.regularization.DropBlock(block_size: int = 7, p: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

Implementation of DropBlock proposed in DropBlock: A regularization method for convolutional networks.

Similar to dropout, but it masks contiguous clusters of pixels rather than independent units. The following image shows the approach (from the paper):

https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DropBlock.png?raw=true

The following picture shows the effect of DropBlock on an input image

https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DropBlockGrogu.png?raw=true

Note

[From the paper] We found that DropBlock with a fixed keep_prob during training does not work well. Applying small value of keep_prob hurts learning at the beginning. Instead, gradually decreasing keep_prob over time from 1 to the target value is more robust and adds improvement for the most values of keep_prob. In our experiments, we use a linear scheme of decreasing the value of keep_prob, which tends to work well across many hyperparameter settings. This linear scheme is similar to ScheduledDropPath.

keep_prob is p in our implementation.
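Such a schedule only needs to update the module's p attribute between epochs. A minimal sketch, assuming p decays linearly from 1 to a hypothetical target value over a hypothetical number of epochs (target_p and num_epochs are illustrative, not part of the API):

    from glasses.nn.regularization import DropBlock

    drop_block = DropBlock(block_size=7, p=1.0)  # start by keeping everything

    target_p, num_epochs = 0.9, 100  # hypothetical schedule settings
    for epoch in range(num_epochs):
        # linearly decrease p (keep_prob) from 1 to the target value
        drop_block.p = 1.0 - (1.0 - target_p) * epoch / (num_epochs - 1)
        # ... run one training epoch with drop_block active ...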

Parameters
  • block_size (int, optional) – Dimension of the pixel cluster. Defaults to 7.

  • p (float, optional) – keep probability (keep_prob in the paper); the smaller p, the more clusters are dropped. Defaults to 0.5.
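A minimal usage example (the shapes are illustrative; like dropout, blocks are only dropped while the module is in training mode):

    import torch
    from glasses.nn.regularization import DropBlock

    block = DropBlock(block_size=7, p=0.9)
    x = torch.rand(1, 64, 28, 28)  # (N, C, H, W) feature maps
    block.train()                  # dropping is only active in training mode
    out = block(x)                 # same shape as x, contiguous regions zeroed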

calculate_gamma(x: torch.Tensor) → float[source]

Compute gamma, eq (1) in the paper

Parameters

x (Tensor) – Input tensor

Returns

gamma

Return type

float
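For reference, eq. (1) of the paper computes gamma, the Bernoulli rate used to sample block centers, as (writing keep_prob as p and feat_size for the spatial size of x):

    gamma = ((1 - p) / block_size^2) * (feat_size^2 / (feat_size - block_size + 1)^2)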

forward(x: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
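The overall procedure follows the paper: sample block centers with probability gamma, expand each center into a block_size × block_size square, zero those squares, and rescale the surviving activations. A minimal sketch of that algorithm using a hypothetical drop_block helper (not necessarily line-for-line identical to this module's source):

    import torch
    import torch.nn.functional as F

    def drop_block(x: torch.Tensor, gamma: float, block_size: int = 7) -> torch.Tensor:
        # 1 marks the center of a block that will be dropped
        centers = torch.bernoulli(torch.ones_like(x) * gamma)
        # grow each center into a block_size x block_size square, then invert
        mask = 1 - F.max_pool2d(centers, kernel_size=block_size,
                                stride=1, padding=block_size // 2)
        # zero the blocks and rescale to keep the expected activation magnitude
        return x * mask * (mask.numel() / mask.sum())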

training: bool
class glasses.nn.regularization.StochasticDepth(p: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

Implementation of Stochastic Depth proposed in Deep Networks with Stochastic Depth.

The main idea is to randomly skip an entire layer during training, so the network is effectively trained at a stochastically reduced depth.

Initializes internal Module state, shared by both nn.Module and ScriptModule.
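During training the wrapped computation is kept with probability p and skipped (its output zeroed) otherwise; at inference it always runs. A minimal sketch of the technique, using the common variant that rescales surviving activations by 1/p (the class name is hypothetical and the exact scaling in this module's source may differ):

    import torch
    import torch.nn as nn

    class StochasticDepthSketch(nn.Module):
        def __init__(self, p: float = 0.5):
            super().__init__()
            self.p = p  # probability of keeping the layer

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            if self.training:
                if torch.rand(1).item() > self.p:
                    return torch.zeros_like(x)  # skip the layer this pass
                return x / self.p  # rescale so eval needs no correction
            return x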

forward(x: torch.Tensor) → torch.Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
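In practice the module typically wraps a residual branch, so that when the branch is dropped only the identity shortcut remains. A hypothetical usage sketch (layer sizes are illustrative):

    import torch
    import torch.nn as nn
    from glasses.nn.regularization import StochasticDepth

    branch = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64),
        nn.ReLU(),
        StochasticDepth(p=0.5),  # randomly zeroes the branch output in training
    )

    x = torch.rand(1, 64, 28, 28)
    out = x + branch(x)  # when the branch is dropped, only the identity survives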