glasses.interpretability package

Submodules

glasses.interpretability.GradCam module

class glasses.interpretability.GradCam.GradCam[source]

Bases: glasses.interpretability.Interpretability.Interpretability

Implementation of GradCam proposed in Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

__call__(x: torch.Tensor, module: torch.nn.modules.module.Module, layer: Optional[torch.nn.modules.module.Module] = None, target: Optional[int] = None, ctx: Optional[torch.Tensor] = None, postprocessing: Optional[Callable[[torch.Tensor], torch.Tensor]] = None) glasses.interpretability.GradCam.GradCamResult[source]

Run GradCam on the input given a model

Parameters
  • x (torch.Tensor) – Input tensor, e.g. an image

  • module (nn.Module) – Model

  • layer (nn.Module, optional) – The layer we wish to interpret; if None, the last convolutional layer will be used. Defaults to None.

  • target (int, optional) – The target class index; if None, the model's prediction (after softmax and argmax) will be used. Defaults to None.

  • ctx (torch.Tensor, optional) – The tensor with respect to which we differentiate; if None, the one-hot encoding of the target will be used. Defaults to None.

  • postprocessing (Callable[[torch.Tensor], torch.Tensor], optional) – A function used to post-process the output, e.g. to de-normalize it. Defaults to None.

Returns

The result of the GradCam run; call .show() to visualize it.

Return type

GradCamResult

class glasses.interpretability.GradCam.GradCamResult(img: torch.Tensor, cam: torch.Tensor, postpreocessing: Callable[[torch.Tensor], torch.Tensor])[source]

Bases: object

show(*args, **kwargs) matplotlib.pyplot.figure[source]
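
Example (a minimal end-to-end sketch of GradCam together with GradCamResult.show; the glasses.models.ResNet import path is assumed from the package layout, and the random tensor stands in for a real image):

>>> import torch
>>> from glasses.models import ResNet  # assumed import path for the bundled models
>>> from glasses.interpretability import GradCam
>>> x = torch.rand((1, 3, 224, 224))   # dummy input image batch
>>> model = ResNet.resnet18()
>>> result = GradCam()(x, model)       # layer/target default to the last conv layer and the argmax class
>>> result.show()                      # overlays the class activation map on the input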

glasses.interpretability.Interpretability module

class glasses.interpretability.Interpretability.Interpretability[source]

Bases: object

Base class for all interpretability techniques

glasses.interpretability.SaliencyMap module

class glasses.interpretability.SaliencyMap.SaliencyMap[source]

Bases: glasses.interpretability.Interpretability.Interpretability

Implementation of Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

__call__(x: torch.Tensor, module: torch.nn.modules.module.Module, layer: Optional[torch.nn.modules.module.Module] = None, ctx: Optional[torch.Tensor] = None, target: Optional[int] = None, guide: bool = True) glasses.interpretability.SaliencyMap.SaliencyMapResult[source]

Run SaliencyMap on the input given a model

Parameters
  • x (torch.Tensor) – Input tensor, e.g. an image

  • module (nn.Module) – Model

  • layer (nn.Module, optional) – The layer we wish to interpret; if None, the last convolutional layer will be used. Defaults to None.

  • target (int, optional) – The target class index; if None, the model's prediction (after softmax and argmax) will be used. Defaults to None.

  • ctx (torch.Tensor, optional) – The tensor with respect to which we differentiate; if None, the one-hot encoding of the target will be used. Defaults to None.

Returns

The result of the saliency map; call .show() to visualize it.

Return type

SaliencyMapResult

guide(module)[source]

class glasses.interpretability.SaliencyMap.SaliencyMapResult(saliency_map: torch.Tensor)[source]

Bases: object

show(*args, **kwargs) matplotlib.pyplot.figure[source]
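
Example (a minimal sketch with the same assumed glasses.models.ResNet import as above; the guided behaviour of guide=True is inferred from the guide() hook documented above):

>>> import torch
>>> from glasses.models import ResNet  # assumed import path for the bundled models
>>> from glasses.interpretability import SaliencyMap
>>> x = torch.rand((1, 3, 224, 224))   # dummy input image batch
>>> model = ResNet.resnet18()
>>> result = SaliencyMap()(x, model)   # guide=True registers the guide() hooks on the model
>>> result.show()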

glasses.interpretability.ScoreCam module

class glasses.interpretability.ScoreCam.ScoreCam[source]

Bases: glasses.interpretability.Interpretability.Interpretability

Implementation of ScoreCam proposed in Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks

__call__(x: torch.Tensor, module: torch.nn.modules.module.Module, layer: Optional[torch.nn.modules.module.Module] = None, target: Optional[int] = None, postprocessing: Optional[Callable[[torch.Tensor], torch.Tensor]] = None) glasses.interpretability.GradCam.GradCamResult[source]

Run ScoreCam on the input given a model

Parameters
  • x (torch.Tensor) – Input tensor, e.g. an image

  • module (nn.Module) – Model

  • layer (nn.Module, optional) – The layer we wish to interpret; if None, the last convolutional layer will be used. Defaults to None.

  • target (int, optional) – The target class index; if None, the model's prediction (after softmax and argmax) will be used. Defaults to None.

  • postprocessing (Callable[[torch.Tensor], torch.Tensor], optional) – A function used to post-process the output, e.g. to de-normalize it. Defaults to None.

Returns

The result of the ScoreCam run; call .show() to visualize it.

Return type

GradCamResult
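
Example (a minimal sketch with the same assumptions as the GradCam example above; ScoreCam returns a GradCamResult, so .show() is used in the same way):

>>> import torch
>>> from glasses.models import ResNet  # assumed import path for the bundled models
>>> from glasses.interpretability import ScoreCam
>>> x = torch.rand((1, 3, 224, 224))   # dummy input image batch
>>> model = ResNet.resnet18()
>>> result = ScoreCam()(x, model)
>>> result.show()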

glasses.interpretability.utils module

glasses.interpretability.utils.convert_to_grayscale(cv2im)[source]

Converts a 3D (channel-first) RGB image to grayscale

Parameters

cv2im (numpy array) – RGB image with shape (D, W, H)

Returns

Grayscale image with shape (1, W, H)

Return type

grayscale_im (numpy array)

Credits to https://github.com/utkuozbulak/pytorch-cnn-visualizations
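
Example (a minimal sketch; the random array stands in for a real channel-first RGB image, and the output shape follows the description above):

>>> import numpy as np
>>> from glasses.interpretability.utils import convert_to_grayscale
>>> rgb = np.random.rand(3, 224, 224)  # channel-first RGB array with shape (D, W, H)
>>> gray = convert_to_grayscale(rgb)   # expected shape: (1, 224, 224)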

glasses.interpretability.utils.find_first_layer(x: torch.Tensor, module: torch.nn.modules.module.Module, of_type: Type) torch.nn.modules.module.Module[source]

Utility function that returns the first layer of a given type

Example

>>> x = torch.rand((1,3,224,224))
>>> model = ResNet.resnet18()
>>> find_first_layer(x, model, nn.Conv2d)

Parameters
  • x (torch.Tensor) – Input tensor used to run a forward pass through the model

  • module (nn.Module) – Model to search

  • of_type (Type) – The layer type to look for, e.g. nn.Conv2d

Returns

The first layer of the given type

Return type

nn.Module

glasses.interpretability.utils.find_last_layer(x: torch.Tensor, module: torch.nn.modules.module.Module, of_type: Type) torch.nn.modules.module.Module[source]

Utility function that returns the last layer of a given type

Example

>>> x = torch.rand((1,3,224,224))
>>> model = ResNet.resnet18()
>>> find_last_layer(x, model, nn.Conv2d)

Parameters
  • x (torch.Tensor) – Input tensor used to run a forward pass through the model

  • module (nn.Module) – Model to search

  • of_type (Type) – The layer type to look for, e.g. nn.Conv2d

Returns

The last layer of the given type

Return type

nn.Module

glasses.interpretability.utils.image2cam(image, cam)[source]
glasses.interpretability.utils.tensor2cam(image, cam)[source]

Module contents

class glasses.interpretability.GradCam[source]

Bases: glasses.interpretability.Interpretability.Interpretability

Implementation of GradCam proposed in Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

__call__(x: torch.Tensor, module: torch.nn.modules.module.Module, layer: Optional[torch.nn.modules.module.Module] = None, target: Optional[int] = None, ctx: Optional[torch.Tensor] = None, postprocessing: Optional[Callable[[torch.Tensor], torch.Tensor]] = None) glasses.interpretability.GradCam.GradCamResult[source]

Run GradCam on the input given a model

Parameters
  • x (torch.Tensor) – Input tensor, e.g. an image

  • module (nn.Module) – Model

  • layer (nn.Module, optional) – The layer we wish to interpret; if None, the last convolutional layer will be used. Defaults to None.

  • target (int, optional) – The target class index; if None, the model's prediction (after softmax and argmax) will be used. Defaults to None.

  • ctx (torch.Tensor, optional) – The tensor with respect to which we differentiate; if None, the one-hot encoding of the target will be used. Defaults to None.

  • postprocessing (Callable[[torch.Tensor], torch.Tensor], optional) – A function used to post-process the output, e.g. to de-normalize it. Defaults to None.

Returns

The result of the GradCam run; call .show() to visualize it.

Return type

GradCamResult

class glasses.interpretability.Interpretability[source]

Bases: object

Base class for all interpretability techniques

class glasses.interpretability.SaliencyMap[source]

Bases: glasses.interpretability.Interpretability.Interpretability

Implementation of Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

__call__(x: torch.Tensor, module: torch.nn.modules.module.Module, layer: Optional[torch.nn.modules.module.Module] = None, ctx: Optional[torch.Tensor] = None, target: Optional[int] = None, guide: bool = True) glasses.interpretability.SaliencyMap.SaliencyMapResult[source]

Run SaliencyMap on the input given a model

Parameters
  • x (torch.Tensor) – Input tensor, e.g. an image

  • module (nn.Module) – Model

  • layer (nn.Module, optional) – The layer we wish to interpret; if None, the last convolutional layer will be used. Defaults to None.

  • target (int, optional) – The target class index; if None, the model's prediction (after softmax and argmax) will be used. Defaults to None.

  • ctx (torch.Tensor, optional) – The tensor with respect to which we differentiate; if None, the one-hot encoding of the target will be used. Defaults to None.

Returns

The result of the saliency map; call .show() to visualize it.

Return type

SaliencyMapResult

guide(module)[source]

class glasses.interpretability.ScoreCam[source]

Bases: glasses.interpretability.Interpretability.Interpretability

Implementation of ScoreCam proposed in Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks

__call__(x: torch.Tensor, module: torch.nn.modules.module.Module, layer: Optional[torch.nn.modules.module.Module] = None, target: Optional[int] = None, postprocessing: Optional[Callable[[torch.Tensor], torch.Tensor]] = None) glasses.interpretability.GradCam.GradCamResult[source]

Run ScoreCam on the input given a model

Parameters
  • x (torch.Tensor) – Input tensor, e.g. an image

  • module (nn.Module) – Model

  • layer (nn.Module, optional) – The layer we wish to interpret; if None, the last convolutional layer will be used. Defaults to None.

  • target (int, optional) – The target class index; if None, the model's prediction (after softmax and argmax) will be used. Defaults to None.

  • postprocessing (Callable[[torch.Tensor], torch.Tensor], optional) – A function used to post-process the output, e.g. to de-normalize it. Defaults to None.

Returns

The result of the ScoreCam run; call .show() to visualize it.

Return type

GradCamResult