emonet.model module#

Emotion model module.

class emonet.model.ConvTimeBlock(in_channels: int, out_channels: int, kernel_size: int = 3, stride: int = 1, padding: int = 1, pool_kernel_size: int = 4, pool_stride: int = 4, dropout: float = 0.3)[source]#

Bases: torch.nn.modules.module.Module

Convolutional time block.

Create a time-distributed convolutional time block.

Parameters
  • in_channels (int) – Number of input channels for convolutional layer.

  • out_channels (int) – Number of output channels for convolutional layer.

  • kernel_size (int) – Kernel size for convolutional filter.

  • stride (int) – Stride size for convolutional filter.

  • padding (int) – Padding size for convolutional filter.

  • pool_kernel_size (int) – Kernel size for pooling layer.

  • pool_stride (int) – Stride size for pooling layer.

  • dropout (float) – Dropout probability.

__init__(in_channels: int, out_channels: int, kernel_size: int = 3, stride: int = 1, padding: int = 1, pool_kernel_size: int = 4, pool_stride: int = 4, dropout: float = 0.3)[source]#

Create a time-distributed convolutional time block.

Parameters
  • in_channels (int) – Number of input channels for convolutional layer.

  • out_channels (int) – Number of output channels for convolutional layer.

  • kernel_size (int) – Kernel size for convolutional filter.

  • stride (int) – Stride size for convolutional filter.

  • padding (int) – Padding size for convolutional filter.

  • pool_kernel_size (int) – Kernel size for pooling layer.

  • pool_stride (int) – Stride size for pooling layer.

  • dropout (float) – Dropout probability.

forward(x: torch.Tensor) torch.Tensor[source]#

Pass input through time-distributed convolutional block layer.
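
A usage sketch (the 5-D chunked input layout below is an assumption about the time-distributed format, not taken from the emonet source):

>>> import torch
>>> block = ConvTimeBlock(in_channels=1, out_channels=16)
>>> x = torch.randn(4, 10, 1, 128, 63)  # assumed (batch, chunks, channels, mels, frames)
>>> y = block(x)  # convolution, pooling and dropout applied per chunk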

T_destination#

alias of TypeVar(‘T_destination’, bound=Mapping[str, torch.Tensor])

add_module(name: str, module: Optional[torch.nn.modules.module.Module]) None#

Adds a child module to the current module.

The module can be accessed as an attribute using the given name.

Parameters
  • name (string) – name of the child module. The child module can be accessed from this module using the given name

  • module (Module) – child module to be added to the module.

apply(fn: Callable[[torch.nn.modules.module.Module], None]) torch.nn.modules.module.T#

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init).

Parameters

fn (Module -> None) – function to be applied to each submodule

Returns

Module – self

Example:

>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16() torch.nn.modules.module.T#

Casts all floating point parameters and buffers to bfloat16 datatype.

Note

This method modifies the module in-place.

Returns

Module – self

buffers(recurse: bool = True) Iterator[torch.Tensor]#

Returns an iterator over module buffers.

Parameters

recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

torch.Tensor – module buffer

Example:

>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children() Iterator[torch.nn.modules.module.Module]#

Returns an iterator over immediate children modules.

Yields

Module – a child module

cpu() torch.nn.modules.module.T#

Moves all model parameters and buffers to the CPU.

Note

This method modifies the module in-place.

Returns

Module – self

cuda(device: Optional[Union[int, torch.device]] = None) torch.nn.modules.module.T#

Moves all model parameters and buffers to the GPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

Module – self

double() torch.nn.modules.module.T#

Casts all floating point parameters and buffers to double datatype.

Note

This method modifies the module in-place.

Returns

Module – self

dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.

eval() torch.nn.modules.module.T#

Sets the module in evaluation mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent to self.train(False).

See the PyTorch notes on locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.

Returns

Module – self

extra_repr() str#

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

float() torch.nn.modules.module.T#

Casts all floating point parameters and buffers to float datatype.

Note

This method modifies the module in-place.

Returns

Module – self

get_buffer(target: str) torch.Tensor#

Returns the buffer given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

torch.Tensor – The buffer referenced by target

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not a buffer

get_extra_state() Any#

Returns any extra state to include in the module’s state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().

Note that extra state should be pickleable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.

Returns

object – Any extra state to store in the module’s state_dict

get_parameter(target: str) torch.nn.parameter.Parameter#

Returns the parameter given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

torch.nn.Parameter – The Parameter referenced by target

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Parameter

get_submodule(target: str) torch.nn.modules.module.Module#

Returns the submodule given by target if it exists, otherwise throws an error.

For example, let’s say you have an nn.Module A that looks like this:

(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
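
A minimal sketch of that structure (layer sizes are arbitrary):

>>> import torch.nn as nn
>>> class A(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.net_b = nn.Module()
...         self.net_b.net_c = nn.Module()
...         self.net_b.net_c.conv = nn.Conv2d(16, 33, 3)
...         self.net_b.linear = nn.Linear(100, 200)
>>> a = A()
>>> a.get_submodule("net_b.net_c.conv")
Conv2d(16, 33, kernel_size=(3, 3), stride=(1, 1))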

To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").

The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.

Parameters

target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)

Returns

torch.nn.Module – The submodule referenced by target

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module

half() torch.nn.modules.module.T#

Casts all floating point parameters and buffers to half datatype.

Note

This method modifies the module in-place.

Returns

Module – self

load_state_dict(state_dict: collections.OrderedDict[str, torch.Tensor], strict: bool = True)#

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.

Parameters
  • state_dict (dict) – a dict containing parameters and persistent buffers.

  • strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True

Returns

NamedTuple with missing_keys and unexpected_keys fields:
  • missing_keys is a list of str containing the missing keys

  • unexpected_keys is a list of str containing the unexpected keys

Note

If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
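
Example (the checkpoint path is hypothetical):

>>> state = torch.load('model_weights.pt')
>>> incompatible = model.load_state_dict(state, strict=False)
>>> incompatible.missing_keys, incompatible.unexpected_keys
([], [])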

modules() Iterator[torch.nn.modules.module.Module]#

Returns an iterator over all modules in the network.

Yields

Module – a module in the network

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix: str = '', recurse: bool = True) Iterator[Tuple[str, torch.Tensor]]#

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Parameters
  • prefix (str) – prefix to prepend to all buffer names.

  • recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

(string, torch.Tensor) – Tuple containing the name and buffer

Example:

>>> for name, buf in self.named_buffers():
>>>    if name in ['running_var']:
>>>        print(buf.size())
named_children() Iterator[Tuple[str, torch.nn.modules.module.Module]]#

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields

(string, Module) – Tuple containing a name and child module

Example:

>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
named_modules(memo: Optional[Set[torch.nn.modules.module.Module]] = None, prefix: str = '', remove_duplicate: bool = True)#

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Parameters
  • memo – a memo to store the set of modules already added to the result

  • prefix – a prefix that will be added to the name of the module

  • remove_duplicate – whether to remove the duplicated module instances in the result or not

Yields

(string, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
...     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix: str = '', recurse: bool = True) Iterator[Tuple[str, torch.nn.parameter.Parameter]]#

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

(string, Parameter) – Tuple containing the name and parameter

Example:

>>> for name, param in self.named_parameters():
>>>    if name in ['bias']:
>>>        print(param.size())
parameters(recurse: bool = True) Iterator[torch.nn.parameter.Parameter]#

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Parameters

recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

Parameter – module parameter

Example:

>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) torch.utils.hooks.RemovableHandle#

Registers a backward hook on the module.

This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()

register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True) None#

Adds a buffer to the module.

This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.

Buffers can be accessed as attributes using given names.

Parameters
  • name (string) – name of the buffer. The buffer can be accessed from this module using the given name

  • tensor (Tensor or None) – buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module’s state_dict.

  • persistent (bool) – whether the buffer is part of this module’s state_dict.

Example:

>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook: Callable[[...], None]) torch.utils.hooks.RemovableHandle#

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None or modified output

The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks; they are passed only to forward. The hook can modify the output. It can also modify the input in-place, but this will have no effect on forward, since the hook is called after forward() has run.

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()

register_forward_pre_hook(hook: Callable[[...], None]) torch.utils.hooks.RemovableHandle#

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None or modified input

The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks; they are passed only to forward. The hook can modify the input. The user can return either a tuple or a single modified value from the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()

register_full_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) torch.utils.hooks.RemovableHandle#

Registers a backward hook on the module.

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()

register_module(name: str, module: Optional[torch.nn.modules.module.Module]) None#

Alias for add_module().

register_parameter(name: str, param: Optional[torch.nn.parameter.Parameter]) None#

Adds a parameter to the module.

The parameter can be accessed as an attribute using given name.

Parameters
  • name (string) – name of the parameter. The parameter can be accessed from this module using the given name

  • param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module’s state_dict.

requires_grad_(requires_grad: bool = True) torch.nn.modules.module.T#

Change if autograd should record operations on parameters in this module.

This method sets the parameters’ requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).

See the PyTorch notes on locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.

Parameters

requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.

Returns

Module – self
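
Example (freezing every parameter in-place):

>>> model.requires_grad_(False)  # returns self; all parameters now frozen
>>> any(p.requires_grad for p in model.parameters())
False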

set_extra_state(state: Any)#

This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.

Parameters

state (dict) – Extra state from the state_dict

share_memory() torch.nn.modules.module.T#

See torch.Tensor.share_memory_()

state_dict(destination=None, prefix='', keep_vars=False)#

Returns a dictionary containing a whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.

Returns

dict – a dictionary containing a whole state of the module

Example:

>>> module.state_dict().keys()
['bias', 'weight']
to(*args, **kwargs)#

Moves and/or casts the parameters and buffers.

This can be called as

to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)

Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.

Note

This method modifies the module in-place.

Parameters
  • device (torch.device) – the desired device of the parameters and buffers in this module

  • dtype (torch.dtype) – the desired floating point or complex dtype of the parameters and buffers in this module

  • tensor (torch.Tensor) – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

  • memory_format (torch.memory_format) – the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns

Module – self

Examples:

>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
to_empty(*, device: Union[str, torch.device]) torch.nn.modules.module.T#

Moves the parameters and buffers to the specified device without copying storage.

Parameters

device (torch.device) – The desired device of the parameters and buffers in this module.

Returns

Module – self

train(mode: bool = True) torch.nn.modules.module.T#

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

Module – self

type(dst_type: Union[torch.dtype, str]) torch.nn.modules.module.T#

Casts all parameters and buffers to dst_type.

Note

This method modifies the module in-place.

Parameters

dst_type (type or string) – the desired type

Returns

Module – self

xpu(device: Optional[Union[int, torch.device]] = None) torch.nn.modules.module.T#

Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

Module – self

zero_grad(set_to_none: bool = False) None#

Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context.

Parameters

set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.

training: bool#
class emonet.model.EmotionRegressor(emotion: str, lr: float = 0.001, weight_decay: float = 0.01, snr_low: int = 15, snr_high: int = 30, freq_mask: int = 20, time_mask: int = 20, drop: float = 0.3, n_mels: int = 128, n_fft: int = 768, n_chunks: int = 10)[source]#

Bases: pytorch_lightning.core.lightning.LightningModule

Emotion regressor model.

Parameters
  • emotion (str) – Emotion to train/score.

  • lr (float) – Learning rate.

  • weight_decay (float) – Weight decay for optimizer.

  • snr_low (int) – Signal-to-noise ratio floor for additive white noise augmentation.

  • snr_high (int) – Signal-to-noise ratio ceiling for additive white noise augmentation.

  • freq_mask (int) – Size of frequency mask for data augmentation.

  • time_mask (int) – Size of time mask for data augmentation.

  • drop (float) – Dropout rate.

  • n_mels (int) – Number of mel filterbanks.

  • n_fft (int) – Size of FFT, creates n_fft // 2 + 1 bins.

  • n_chunks (int) – Number of time-chunks to create from each sample.
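
A construction sketch (the emotion label string below is a hypothetical value; valid labels depend on the training data):

>>> model = EmotionRegressor(emotion='activation', lr=1e-3, n_chunks=10)
>>> score = model.score_file('clip.wav')  # hypothetical path; see score_file() below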

__init__(emotion: str, lr: float = 0.001, weight_decay: float = 0.01, snr_low: int = 15, snr_high: int = 30, freq_mask: int = 20, time_mask: int = 20, drop: float = 0.3, n_mels: int = 128, n_fft: int = 768, n_chunks: int = 10)[source]#
Parameters
  • emotion (str) – Emotion to train/score.

  • lr (float) – Learning rate.

  • weight_decay (float) – Weight decay for optimizer.

  • snr_low (int) – Signal-to-noise ratio floor for additive white noise augmentation.

  • snr_high (int) – Signal-to-noise ratio ceiling for additive white noise augmentation.

  • freq_mask (int) – Size of frequency mask for data augmentation.

  • time_mask (int) – Size of time mask for data augmentation.

  • drop (float) – Dropout rate.

  • n_mels (int) – Number of mel filterbanks.

  • n_fft (int) – Size of FFT, creates n_fft // 2 + 1 bins.

  • n_chunks (int) – Number of time-chunks to create from each sample.

conv_block()[source]#

Time-distributed CNN block.

static lstm_block()[source]#

LSTM block.

linear_block()[source]#

Fully-connected block.

augment_pipe(choice, bs)[source]#

Data augmentation pipeline.

Parameters
  • choice (int) – Random choice for augmentation.

  • bs (int) – Batch size.

Returns

nn.Sequential – Augmentation pipeline.
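
A call sketch (the semantics of individual choice values are internal to the model):

>>> pipe = model.augment_pipe(choice=0, bs=32)  # nn.Sequential of augmentations
>>> spec_aug = pipe(spec)  # spec: a batched spectrogram tensor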

get_results(spec, actual)[source]#

Get model results.

Parameters
  • spec (torch.Tensor) – Spectrogram.

  • actual (float) – Actual emotion score.

Returns

Dict – Dictionary of model results.

training_step(batch, idx)[source]#

Training step.

validation_step(batch, idx)[source]#

Validation step.

test_step(batch, idx)[source]#

Test step.

predict_step(batch, batch_idx, dataloader_idx=0)[source]#

Predict step.

static preprocess(x)[source]#

Preprocess an audio tensor by applying a voice activity detection (VAD) model.

score_file(file, sample_rate=16000, vad=True)[source]#

Score a WAV file.

Splits the audio signal into separate 8-second chunks, scores each chunk individually, and returns the mean across chunks as the final score.

Parameters
  • file (Union[str, pathlib.Path]) – Filepath to audio file.

  • sample_rate (int) – Sample rate of the audio file; default 16,000 Hz.

  • vad (bool) – Whether to implement voice activity detection as preprocessing step; default True.

Returns

float – Emotion score.
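
Example (checkpoint and file paths are hypothetical):

>>> model = EmotionRegressor.load_from_checkpoint('emonet.ckpt')
>>> model.eval()
>>> score = model.score_file('speech.wav', sample_rate=16000, vad=True)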

score_signal(x, vad=True)[source]#

Score a tensor.

Splits the audio signal into separate 8-second chunks, scores each chunk individually, and returns the mean across chunks as the final score.

Parameters
  • x (torch.Tensor) – Audio signal to score.

  • vad (bool) – Whether to implement voice activity detection as preprocessing step; default True.

Returns

float – Emotion score.
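
Example (the mono (channels, samples) layout is an assumption):

>>> x = torch.randn(1, 16000 * 10)  # ten seconds of assumed 16 kHz audio
>>> score = model.score_signal(x, vad=True)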

forward(x)[source]#

Score a preprocessed tensor.

Splits the audio signal into separate 8-second chunks, scores each chunk individually, and returns the mean across chunks as the final score.

Parameters

x (torch.Tensor) – Preprocessed audio signal to score.

Returns

float – Emotion score.
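
Example (a sketch; forward() expects an already-preprocessed signal):

>>> x_clean = EmotionRegressor.preprocess(x)  # apply VAD first (static method above)
>>> score = model(x_clean)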

configure_optimizers()[source]#

Configure optimizer for training.
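
The emonet implementation is not reproduced here; a sketch consistent with the documented lr and weight_decay hyperparameters might look like:

def configure_optimizers(self):
    # Sketch only: assumes save_hyperparameters() stored lr/weight_decay;
    # the optimizer actually used by emonet may differ.
    return torch.optim.AdamW(
        self.parameters(),
        lr=self.hparams.lr,
        weight_decay=self.hparams.weight_decay,
    )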

CHECKPOINT_HYPER_PARAMS_KEY = 'hyper_parameters'#
CHECKPOINT_HYPER_PARAMS_NAME = 'hparams_name'#
CHECKPOINT_HYPER_PARAMS_TYPE = 'hparams_type'#
T_destination#

alias of TypeVar(‘T_destination’, bound=Mapping[str, torch.Tensor])

add_module(name: str, module: Optional[torch.nn.modules.module.Module]) None#

Adds a child module to the current module.

The module can be accessed as an attribute using the given name.

Parameters
  • name (string) – name of the child module. The child module can be accessed from this module using the given name

  • module (Module) – child module to be added to the module.

add_to_queue(queue: pytorch_lightning.strategies.launchers.spawn._FakeQueue) None#

Appends the trainer.callback_metrics dictionary to the given queue. To avoid issues with memory sharing, we cast the data to numpy.

Parameters

queue – the instance of the queue to append the data.

Deprecated since version v1.5: This method was deprecated in v1.5 and will be removed in v1.7.

all_gather(data: Union[torch.Tensor, Dict, List, Tuple], group: Optional[Any] = None, sync_grads: bool = False)#

Allows users to call self.all_gather() from the LightningModule, thus making the all_gather operation accelerator agnostic. all_gather is a function provided by accelerators to gather a tensor from several distributed processes.

Parameters
  • data – int, float, tensor of shape (batch, …), or a (possibly nested) collection thereof.

  • group – the process group to gather results from. Defaults to all processes (world)

  • sync_grads – flag that allows users to synchronize gradients for the all_gather operation

Returns

A tensor of shape (world_size, batch, …), or if the input was a collection the output will also be a collection with tensors of this shape.
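
Example (a sketch inside a LightningModule hook; compute_loss is a hypothetical helper):

def validation_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)
    gathered = self.all_gather(loss)  # shape: (world_size,) for a scalar loss
    if self.trainer.is_global_zero:
        self.log('val_loss_all', gathered.mean(), rank_zero_only=True)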

apply(fn: Callable[[torch.nn.modules.module.Module], None]) torch.nn.modules.module.T#

Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init).

Parameters

fn (Module -> None) – function to be applied to each submodule

Returns

Module – self

Example:

>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
property automatic_optimization: bool#

If set to False, you are responsible for calling .backward(), .step(), and .zero_grad().

backward(loss: torch.Tensor, optimizer: Optional[torch.optim.optimizer.Optimizer], optimizer_idx: Optional[int], *args, **kwargs) None#

Called to perform backward on the loss returned in training_step(). Override this hook with your own implementation if you need to.

Parameters
  • loss – The loss tensor returned by training_step(). If gradient accumulation is used, the loss here holds the normalized value (scaled by 1 / accumulation steps).

  • optimizer – Current optimizer being used. None if using manual optimization.

  • optimizer_idx – Index of the current optimizer being used. None if using manual optimization.

Example:

def backward(self, loss, optimizer, optimizer_idx):
    loss.backward()
bfloat16() torch.nn.modules.module.T#

Casts all floating point parameters and buffers to bfloat16 datatype.

Note

This method modifies the module in-place.

Returns

Module – self

buffers(recurse: bool = True) Iterator[torch.Tensor]#

Returns an iterator over module buffers.

Parameters

recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

torch.Tensor – module buffer

Example:

>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children() Iterator[torch.nn.modules.module.Module]#

Returns an iterator over immediate children modules.

Yields

Module – a child module

clip_gradients(optimizer: torch.optim.optimizer.Optimizer, gradient_clip_val: Optional[Union[int, float]] = None, gradient_clip_algorithm: Optional[str] = None)#

Handles gradient clipping internally.

Note

Do not override this method. If you want to customize gradient clipping, consider using configure_gradient_clipping() method.

Parameters
  • optimizer – Current optimizer being used.

  • gradient_clip_val – The value at which to clip gradients.

  • gradient_clip_algorithm – The gradient clipping algorithm to use. Pass gradient_clip_algorithm="value" to clip by value, and gradient_clip_algorithm="norm" to clip by norm.

configure_callbacks() Union[Sequence[pytorch_lightning.callbacks.base.Callback], pytorch_lightning.callbacks.base.Callback]#

Configure model-specific callbacks. When the model gets attached, e.g., when .fit() or .test() gets called, the list or a callback returned here will be merged with the list of callbacks passed to the Trainer’s callbacks argument. If a callback returned here has the same type as one or several callbacks already present in the Trainer’s callbacks list, it will take priority and replace them. In addition, Lightning will make sure ModelCheckpoint callbacks run last.

Returns

A callback or a list of callbacks which will extend the list of callbacks in the Trainer.

Example:

def configure_callbacks(self):
    early_stop = EarlyStopping(monitor="val_acc", mode="max")
    checkpoint = ModelCheckpoint(monitor="val_loss")
    return [early_stop, checkpoint]

Note

Certain callback methods like on_init_start() will never be invoked on the new callbacks returned here.

configure_gradient_clipping(optimizer: torch.optim.optimizer.Optimizer, optimizer_idx: int, gradient_clip_val: Optional[Union[int, float]] = None, gradient_clip_algorithm: Optional[str] = None)#

Perform gradient clipping for the optimizer parameters. Called before optimizer_step().

Parameters
  • optimizer – Current optimizer being used.

  • optimizer_idx – Index of the current optimizer being used.

  • gradient_clip_val – The value at which to clip gradients. By default value passed in Trainer will be available here.

  • gradient_clip_algorithm – The gradient clipping algorithm to use. By default value passed in Trainer will be available here.

Example:

# Perform gradient clipping on gradients associated with discriminator (optimizer_idx=1) in GAN
def configure_gradient_clipping(self, optimizer, optimizer_idx, gradient_clip_val, gradient_clip_algorithm):
    if optimizer_idx == 1:
        # Lightning will handle the gradient clipping
        self.clip_gradients(
            optimizer,
            gradient_clip_val=gradient_clip_val,
            gradient_clip_algorithm=gradient_clip_algorithm
        )
    else:
        # implement your own custom logic to clip gradients for generator (optimizer_idx=0)
configure_sharded_model() None#

Hook to create modules in a distributed-aware context. This is useful with sharded plugins, where we would like to shard the model instantly; for extremely large models this can save memory and initialization time.

This hook is called during each of fit/val/test/predict stages in the same process, so ensure that implementation of this hook is idempotent.

cpu() typing_extensions.Self#

Moves all model parameters and buffers to the CPU.

Returns

Module – self

cuda(device: Optional[Union[int, torch.device]] = None) typing_extensions.Self#

Moves all model parameters and buffers to the GPU. This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.

Parameters

device – if specified, all parameters will be copied to that device

Returns

Module – self

property current_epoch: int#

The current epoch in the Trainer, or 0 if not attached.

property device: Union[str, torch.device]#
double() typing_extensions.Self#

Casts all floating point parameters and buffers to double datatype.

Returns

Module – self

property dtype: Union[str, torch.dtype]#
dump_patches: bool = False#

This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.

If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module’s _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.

eval() torch.nn.modules.module.T#

Sets the module in evaluation mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

This is equivalent to self.train(False).

See the PyTorch notes on locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.

Returns

Module – self

property example_input_array: Any#

The example input array is a specification of what the module can consume in the forward() method. The return type is interpreted as follows:

  • Single tensor: It is assumed the model takes a single argument, i.e., model.forward(model.example_input_array)

  • Tuple: The input array should be interpreted as a sequence of positional arguments, i.e., model.forward(*model.example_input_array)

  • Dict: The input array represents named keyword arguments, i.e., model.forward(**model.example_input_array)
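
For this audio model one might set, as an assumption about the expected raw-signal shape:

>>> model.example_input_array = torch.randn(1, 16000 * 8)  # assumed 8 s at 16 kHz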

extra_repr() str#

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

float() typing_extensions.Self#

Casts all floating point parameters and buffers to float datatype.

Returns

Module – self

freeze() None#

Freeze all params for inference.

Example:

model = MyLightningModule(...)
model.freeze()
get_buffer(target: str) torch.Tensor#

Returns the buffer given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target – The fully-qualified string name of the buffer to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

torch.Tensor – The buffer referenced by target

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not a buffer

get_extra_state() Any#

Returns any extra state to include in the module’s state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module’s state_dict().

Note that extra state should be pickleable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.

Returns

object – Any extra state to store in the module’s state_dict

get_from_queue(queue: pytorch_lightning.strategies.launchers.spawn._FakeQueue) None#

Retrieve the trainer.callback_metrics dictionary from the given queue. To preserve consistency, we cast the data back to torch.Tensor.

Parameters

queue – the instance of the queue from where to get the data.

Deprecated since version v1.5: This method was deprecated in v1.5 and will be removed in v1.7.

get_parameter(target: str) torch.nn.parameter.Parameter#

Returns the parameter given by target if it exists, otherwise throws an error.

See the docstring for get_submodule for a more detailed explanation of this method’s functionality as well as how to correctly specify target.

Parameters

target – The fully-qualified string name of the Parameter to look for. (See get_submodule for how to specify a fully-qualified string.)

Returns

torch.nn.Parameter – The Parameter referenced by target

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Parameter

get_progress_bar_dict() Dict[str, Union[str, int]]#

Deprecated since version v1.5: This method was deprecated in v1.5 in favor of pytorch_lightning.callbacks.progress.base.get_metrics and will be removed in v1.7.

Implement this to override the default items displayed in the progress bar. By default it includes the average loss value, split index of BPTT (if used) and the version of the experiment when using a logger.

Epoch 1:   4%|▎         | 40/1095 [00:03<01:37, 10.84it/s, loss=4.501, v_num=10]

Here is an example how to override the defaults:

def get_progress_bar_dict(self):
    # don't show the version number
    items = super().get_progress_bar_dict()
    items.pop("v_num", None)
    return items
Returns

Dictionary with the items to be displayed in the progress bar.

get_submodule(target: str) torch.nn.modules.module.Module#

Returns the submodule given by target if it exists, otherwise throws an error.

For example, let’s say you have an nn.Module A that looks like this:

(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)

To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").

The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.

Parameters

target – The fully-qualified string name of the submodule to look for. (See above example for how to specify a fully-qualified string.)

Returns

torch.nn.Module – The submodule referenced by target

Raises

AttributeError – If the target string references an invalid path or resolves to something that is not an nn.Module

property global_rank: int#

The index of the current process across all nodes and devices.

property global_step: int#

Total training batches seen across all epochs.

If no Trainer is attached, this property is 0.

half() typing_extensions.Self#

Casts all floating point parameters and buffers to half datatype.

Returns

Module – self

property hparams: Union[pytorch_lightning.utilities.parsing.AttributeDict, MutableMapping]#

The collection of hyperparameters saved with save_hyperparameters(). It is mutable by the user. For the frozen set of initial hyperparameters, use hparams_initial.

Returns

Mutable hyperparameters dictionary

property hparams_initial: pytorch_lightning.utilities.parsing.AttributeDict#

The collection of hyperparameters saved with save_hyperparameters(). These contents are read-only. Manual updates to the saved hyperparameters can instead be performed through hparams.

Returns

AttributeDict – immutable initial hyperparameters

classmethod load_from_checkpoint(checkpoint_path: Union[str, IO], map_location: Optional[Union[Dict[str, str], str, torch.device, int, Callable]] = None, hparams_file: Optional[str] = None, strict: bool = True, **kwargs)#

Primary way of loading a model from a checkpoint. When Lightning saves a checkpoint it stores the arguments passed to __init__ in the checkpoint under "hyper_parameters".

Any arguments specified through **kwargs will override args stored in "hyper_parameters".

Parameters
  • checkpoint_path – Path to checkpoint. This can also be a URL, or file-like object

  • map_location – If your checkpoint saved a GPU model and you now load on CPUs or a different number of GPUs, use this to map to the new setup. The behaviour is the same as in torch.load().

  • hparams_file – Optional path to a .yaml file with hierarchical structure as in this example:

    drop_prob: 0.2
    dataloader:
        batch_size: 32
    

    You most likely won’t need this since Lightning will always save the hyperparameters to the checkpoint. However, if your checkpoint weights don’t have the hyperparameters saved, use this method to pass in a .yaml file with the hparams you’d like to use. These will be converted into a dict and passed into your LightningModule for use.

    If your model’s hparams argument is Namespace and .yaml file has hierarchical structure, you need to refactor your model to treat hparams as dict.

  • strict – Whether to strictly enforce that the keys in checkpoint_path match the keys returned by this module’s state dict.

  • kwargs – Any extra keyword args needed to init the model. Can also be used to override saved hyperparameter values.

Returns

LightningModule instance with loaded weights and hyperparameters (if available).

Note

load_from_checkpoint is a class method. You should use your LightningModule class to call it instead of the LightningModule instance.

Example:

# load weights without mapping ...
model = MyLightningModule.load_from_checkpoint('path/to/checkpoint.ckpt')

# or load weights mapping all weights from GPU 1 to GPU 0 ...
map_location = {'cuda:1':'cuda:0'}
model = MyLightningModule.load_from_checkpoint(
    'path/to/checkpoint.ckpt',
    map_location=map_location
)

# or load weights and hyperparameters from separate files.
model = MyLightningModule.load_from_checkpoint(
    'path/to/checkpoint.ckpt',
    hparams_file='/path/to/hparams_file.yaml'
)

# override some of the params with new values
model = MyLightningModule.load_from_checkpoint(
    PATH,
    num_layers=128,
    pretrained_ckpt_path=NEW_PATH,
)

# predict
pretrained_model.eval()
pretrained_model.freeze()
y_hat = pretrained_model(x)
load_state_dict(state_dict: collections.OrderedDict[str, torch.Tensor], strict: bool = True)#

Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module’s state_dict() function.

Parameters
  • state_dict (dict) – a dict containing parameters and persistent buffers.

  • strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module’s state_dict() function. Default: True

Returns

NamedTuple with missing_keys and unexpected_keys fields:
  • missing_keys is a list of str containing the missing keys

  • unexpected_keys is a list of str containing the unexpected keys

Note

If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.

property local_rank: int#

The index of the current process within a single node.

log(name: str, value: Union[torchmetrics.metric.Metric, torch.Tensor, int, float, Mapping[str, Union[torchmetrics.metric.Metric, torch.Tensor, int, float]]], prog_bar: bool = False, logger: bool = True, on_step: Optional[bool] = None, on_epoch: Optional[bool] = None, reduce_fx: Union[str, Callable] = 'mean', enable_graph: bool = False, sync_dist: bool = False, sync_dist_group: Optional[Any] = None, add_dataloader_idx: bool = True, batch_size: Optional[int] = None, metric_attribute: Optional[str] = None, rank_zero_only: bool = False) None#

Log a key, value pair.

Example:

self.log('train_loss', loss)

The default behavior per hook is documented in the Lightning documentation on automatic logging.

Parameters
  • name – key to log.

  • value – value to log. Can be a float, Tensor, Metric, or a dictionary of the former.

  • prog_bar – if True logs to the progress bar.

  • logger – if True logs to the logger.

  • on_step – if True logs at this step. The default value is determined by the hook. See the Lightning documentation on automatic logging for details.

  • on_epoch – if True logs epoch accumulated metrics. The default value is determined by the hook. See the Lightning documentation on automatic logging for details.

  • reduce_fx – reduction function over step values for end of epoch. torch.mean() by default.

  • enable_graph – if True, will not auto detach the graph.

  • sync_dist – if True, reduces the metric across devices. Use with care as this may lead to a significant communication overhead.

  • sync_dist_group – the DDP group to sync across.

  • add_dataloader_idx – if True, appends the index of the current dataloader to the name (when using multiple dataloaders). If False, user needs to give unique names for each dataloader to not mix the values.

  • batch_size – Current batch_size. This will be directly inferred from the loaded batch, but for some data structures you might need to explicitly provide it.

  • metric_attribute – To restore the metric state, Lightning requires the reference of the torchmetrics.Metric in your model. This is found automatically if it is a model attribute.

  • rank_zero_only – Whether the value will be logged only on rank 0. This will prevent synchronization which would produce a deadlock as not all processes would perform this log call.

log_dict(dictionary: Mapping[str, Union[torchmetrics.metric.Metric, torch.Tensor, int, float, Mapping[str, Union[torchmetrics.metric.Metric, torch.Tensor, int, float]]]], prog_bar: bool = False, logger: bool = True, on_step: Optional[bool] = None, on_epoch: Optional[bool] = None, reduce_fx: Union[str, Callable] = 'mean', enable_graph: bool = False, sync_dist: bool = False, sync_dist_group: Optional[Any] = None, add_dataloader_idx: bool = True, batch_size: Optional[int] = None, rank_zero_only: bool = False) None#

Log a dictionary of values at once.

Example:

values = {'loss': loss, 'acc': acc, ..., 'metric_n': metric_n}
self.log_dict(values)
Parameters
  • dictionary – key value pairs. The values can be a float, Tensor, Metric, or a dictionary of the former.

  • prog_bar – if True logs to the progress bar.

  • logger – if True logs to the logger.

  • on_step – if True logs at this step. None auto-logs for training_step but not validation/test_step. The default value is determined by the hook. See the Lightning documentation on automatic logging for details.

  • on_epoch – if True logs epoch accumulated metrics. None auto-logs for val/test step but not training_step. The default value is determined by the hook. See the Lightning documentation on automatic logging for details.

  • reduce_fx – reduction function over step values for end of epoch. torch.mean() by default.

  • enable_graph – if True, will not auto-detach the graph

  • sync_dist – if True, reduces the metric across GPUs/TPUs. Use with care as this may lead to a significant communication overhead.

  • sync_dist_group – the ddp group to sync across.

  • add_dataloader_idx – if True, appends the index of the current dataloader to the name (when using multiple). If False, user needs to give unique names for each dataloader to not mix values.

  • batch_size – Current batch size. This will be directly inferred from the loaded batch, but some data structures might need to explicitly provide it.

  • rank_zero_only – Whether the value will be logged only on rank 0. This will prevent synchronization which would produce a deadlock as not all processes would perform this log call.

log_grad_norm(grad_norm_dict: Dict[str, float]) None#

Override this method to change the default behaviour of log_grad_norm.

If clipping gradients, the gradients will not have been clipped yet.

Parameters

grad_norm_dict – Dictionary containing current grad norm metrics

Example:

# DEFAULT
def log_grad_norm(self, grad_norm_dict):
    self.log_dict(grad_norm_dict, on_step=True, on_epoch=True, prog_bar=False, logger=True)
property logger: Optional[pytorch_lightning.loggers.base.LightningLoggerBase]#

Reference to the logger object in the Trainer.

property loggers: List[pytorch_lightning.loggers.base.LightningLoggerBase]#

Reference to the list of loggers in the Trainer.

lr_scheduler_step(scheduler: Union[torch.optim.lr_scheduler._LRScheduler, torch.optim.lr_scheduler.ReduceLROnPlateau], optimizer_idx: int, metric: Optional[Any]) None#

Override this method to adjust the default way the Trainer calls each scheduler. By default, Lightning calls step() on each scheduler based on its interval, as shown in the example.

Parameters
  • scheduler – Learning rate scheduler.

  • optimizer_idx – Index of the optimizer associated with this scheduler.

  • metric – Value of the monitor used for schedulers like ReduceLROnPlateau.

Examples:

# DEFAULT
def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
    if metric is None:
        scheduler.step()
    else:
        scheduler.step(metric)

# Alternative way to update schedulers if it requires an epoch value
def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
    scheduler.step(epoch=self.current_epoch)
lr_schedulers() Optional[Union[torch.optim.lr_scheduler._LRScheduler, torch.optim.lr_scheduler.ReduceLROnPlateau, List[Union[torch.optim.lr_scheduler._LRScheduler, torch.optim.lr_scheduler.ReduceLROnPlateau]]]]#

Returns the learning rate scheduler(s) that are being used during training. Useful for manual optimization.

Returns

A single scheduler, or a list of schedulers in case multiple ones are present, or None if no schedulers were returned in configure_optimizers().
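
Example (manual optimization with a single scheduler):

def training_step(self, batch, batch_idx):
    ...
    sch = self.lr_schedulers()
    sch.step()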

manual_backward(loss: torch.Tensor, *args, **kwargs) None#

Call this directly from your training_step() when doing optimizations manually. By using this, Lightning can ensure that all the proper scaling gets applied when using mixed precision.

See manual optimization for more examples.

Example:

def training_step(...):
    opt = self.optimizers()
    loss = ...
    opt.zero_grad()
    # automatically applies scaling, etc...
    self.manual_backward(loss)
    opt.step()
Parameters
  • loss – The tensor on which to compute gradients. Must have a graph attached.

  • *args – Additional positional arguments to be forwarded to backward()

  • **kwargs – Additional keyword arguments to be forwarded to backward()

property model_size: float#

Returns the model size in megabytes (MB).

Note

This property will not return the correct value for DeepSpeed (stage 3) and fully-sharded training.

modules() Iterator[torch.nn.modules.module.Module]#

Returns an iterator over all modules in the network.

Yields

Module – a module in the network

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
...     print(idx, '->', m)

0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix: str = '', recurse: bool = True) Iterator[Tuple[str, torch.Tensor]]#

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

Parameters
  • prefix (str) – prefix to prepend to all buffer names.

  • recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

(string, torch.Tensor) – Tuple containing the name and buffer

Example:

>>> for name, buf in self.named_buffers():
>>>    if name in ['running_var']:
>>>        print(buf.size())
named_children() Iterator[Tuple[str, torch.nn.modules.module.Module]]#

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

Yields

(string, Module) – Tuple containing a name and child module

Example:

>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
named_modules(memo: Optional[Set[torch.nn.modules.module.Module]] = None, prefix: str = '', remove_duplicate: bool = True)#

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

Parameters
  • memo – a memo to store the set of modules already added to the result

  • prefix – a prefix that will be added to the name of the module

  • remove_duplicate – whether to remove the duplicated module instances in the result or not

Yields

(string, Module) – Tuple of name and module

Note

Duplicate modules are returned only once. In the following example, l will be returned only once.

Example:

>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
>>>     print(idx, '->', m)

0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix: str = '', recurse: bool = True) Iterator[Tuple[str, torch.nn.parameter.Parameter]]#

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

(string, Parameter) – Tuple containing the name and parameter

Example:

>>> for name, param in self.named_parameters():
>>>    if name in ['bias']:
>>>        print(param.size())
on_after_backward() None#

Called after loss.backward() and before optimizers are stepped.

Note

If using native AMP, the gradients will not be unscaled at this point. Use on_before_optimizer_step() if you need the unscaled gradients.

on_after_batch_transfer(batch: Any, dataloader_idx: int) Any#

Override to alter or apply batch augmentations to your batch after it is transferred to the device.

Note

To check the current state of execution of this hook, you can use self.trainer.training/testing/validating/predicting and branch your logic accordingly.

Note

This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.

Parameters
  • batch – A batch of data that needs to be altered or augmented.

  • dataloader_idx – The index of the dataloader to which the batch belongs.

Returns

A batch of data

Example:

def on_after_batch_transfer(self, batch, dataloader_idx):
    batch['x'] = gpu_transforms(batch['x'])
    return batch
Raises

MisconfigurationException – If using data-parallel, Trainer(strategy='dp').

on_before_backward(loss: torch.Tensor) None#

Called before loss.backward().

Parameters

loss – Loss divided by number of batches for gradient accumulation and scaled if using native AMP.
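
A minimal sketch (illustrative; logging the loss here is just one possible use):

def on_before_backward(self, loss):
    # record the (possibly scaled) loss right before the backward pass
    self.log("loss/before_backward", loss)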

on_before_batch_transfer(batch: Any, dataloader_idx: int) Any#

Override to alter or apply batch augmentations to your batch before it is transferred to the device.

Note

To check the current state of execution of this hook, you can use self.trainer.training/testing/validating/predicting and branch your logic accordingly.

Note

This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.

Parameters
  • batch – A batch of data that needs to be altered or augmented.

  • dataloader_idx – The index of the dataloader to which the batch belongs.

Returns

A batch of data

Example:

def on_before_batch_transfer(self, batch, dataloader_idx):
    batch['x'] = transforms(batch['x'])
    return batch
Raises

MisconfigurationException – If using data-parallel, Trainer(strategy='dp').

on_before_optimizer_step(optimizer: torch.optim.optimizer.Optimizer, optimizer_idx: int) None#

Called before optimizer.step().

If using gradient accumulation, the hook is called once the gradients have been accumulated. See: Trainer.accumulate_grad_batches.

If using native AMP, the loss will be unscaled before calling this hook. See these docs for more information on the scaling of gradients.

If clipping gradients, the gradients will not have been clipped yet.

Parameters
  • optimizer – Current optimizer being used.

  • optimizer_idx – Index of the current optimizer being used.

Example:

def on_before_optimizer_step(self, optimizer, optimizer_idx):
    # example to inspect gradient information in tensorboard
    if self.trainer.global_step % 25 == 0:  # don't make the tf file huge
        for k, v in self.named_parameters():
            self.logger.experiment.add_histogram(
                tag=k, values=v.grad, global_step=self.trainer.global_step
            )
on_before_zero_grad(optimizer: torch.optim.optimizer.Optimizer) None#

Called after training_step() and before optimizer.zero_grad().

Called in the training loop after taking an optimizer step and before zeroing grads. Good place to inspect weight information with weights updated.

This is where it is called:

for optimizer in optimizers:
    out = training_step(...)

    model.on_before_zero_grad(optimizer) # < ---- called here
    optimizer.zero_grad()

    backward()
Parameters

optimizer – The optimizer for which grads should be zeroed.

on_epoch_end() None#

Called when either of train/val/test epoch ends.

Deprecated since version v1.6: on_epoch_end() has been deprecated in v1.6 and will be removed in v1.8. Use on_<train/validation/test>_epoch_end instead.

on_epoch_start() None#

Called when either of train/val/test epoch begins.

Deprecated since version v1.6: on_epoch_start() has been deprecated in v1.6 and will be removed in v1.8. Use on_<train/validation/test>_epoch_start instead.

on_fit_end() None#

Called at the very end of fit.

If on DDP it is called on every process

on_fit_start() None#

Called at the very beginning of fit.

If on DDP it is called on every process

property on_gpu#

Returns True if this model is currently located on a GPU.

Useful to set flags around the LightningModule for different CPU vs GPU behavior.

on_hpc_load(checkpoint: Dict[str, Any]) None#

Hook to do whatever you need right before Slurm manager loads the model.

Parameters

checkpoint – A dictionary with variables from the checkpoint.

Deprecated since version v1.6: This method is deprecated in v1.6 and will be removed in v1.8. Please use LightningModule.on_load_checkpoint instead.

on_hpc_save(checkpoint: Dict[str, Any]) None#

Hook to do whatever you need right before Slurm manager saves the model.

Parameters

checkpoint – A dictionary in which you can save variables to save in a checkpoint. Contents need to be pickleable.

Deprecated since version v1.6: This method is deprecated in v1.6 and will be removed in v1.8. Please use LightningModule.on_save_checkpoint instead.

on_load_checkpoint(checkpoint: Dict[str, Any]) None#

Called by Lightning to restore your model. If you saved something with on_save_checkpoint() this is your chance to restore this.

Parameters

checkpoint – Loaded checkpoint

Example:

def on_load_checkpoint(self, checkpoint):
    # 99% of the time you don't need to implement this method
    self.something_cool_i_want_to_save = checkpoint['something_cool_i_want_to_save']

Note

Lightning auto-restores global step, epoch, and train state including amp scaling. There is no need for you to restore anything regarding training.

on_post_move_to_device() None#

Called in the parameter_validation decorator after to() is called. This is a good place to tie weights between modules after moving them to a device. Can be used when training models with weight sharing properties on TPU.

Addresses the handling of shared weights on TPU: https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#xla-tensor-quirks

Example:

def on_post_move_to_device(self):
    self.decoder.weight = self.encoder.weight
on_predict_batch_end(outputs: Optional[Any], batch: Any, batch_idx: int, dataloader_idx: int) None#

Called in the predict loop after the batch.

Parameters
  • outputs – The outputs of predict_step(x)

  • batch – The batched data as it is returned by the prediction DataLoader.

  • batch_idx – the index of the batch

  • dataloader_idx – the index of the dataloader

on_predict_batch_start(batch: Any, batch_idx: int, dataloader_idx: int) None#

Called in the predict loop before anything happens for that batch.

Parameters
  • batch – The batched data as it is returned by the prediction DataLoader.

  • batch_idx – the index of the batch

  • dataloader_idx – the index of the dataloader

on_predict_dataloader() None#

Called before requesting the predict dataloader.

Deprecated since version v1.5: on_predict_dataloader() is deprecated and will be removed in v1.7.0. Please use predict_dataloader() directly.

on_predict_end() None#

Called at the end of predicting.

on_predict_epoch_end(results: List[Any]) None#

Called at the end of predicting.

on_predict_epoch_start() None#

Called at the beginning of predicting.

on_predict_model_eval() None#

Sets the model to eval during the predict loop.

on_predict_start() None#

Called at the beginning of predicting.

on_pretrain_routine_end() None#

Called at the end of the pretrain routine (between fit and train start).

  • fit

  • pretrain_routine start

  • pretrain_routine end

  • training_start

Deprecated since version v1.6: on_pretrain_routine_end() has been deprecated in v1.6 and will be removed in v1.8. Use on_fit_start instead.

on_pretrain_routine_start() None#

Called at the beginning of the pretrain routine (between fit and train start).

  • fit

  • pretrain_routine start

  • pretrain_routine end

  • training_start

Deprecated since version v1.6: on_pretrain_routine_start() has been deprecated in v1.6 and will be removed in v1.8. Use on_fit_start instead.

on_save_checkpoint(checkpoint: Dict[str, Any]) None#

Called by Lightning when saving a checkpoint to give you a chance to store anything else you might want to save.

Parameters

checkpoint – The full checkpoint dictionary before it gets dumped to a file. Implementations of this hook can insert additional data into this dictionary.

Example:

def on_save_checkpoint(self, checkpoint):
    # 99% of use cases you don't need to implement this method
    checkpoint['something_cool_i_want_to_save'] = my_cool_pickable_object

Note

Lightning saves all aspects of training (epoch, global step, etc…) including amp scaling. There is no need for you to store anything about training.

on_test_batch_end(outputs: Optional[Union[torch.Tensor, Dict[str, Any]]], batch: Any, batch_idx: int, dataloader_idx: int) None#

Called in the test loop after the batch.

Parameters
  • outputs – The outputs of test_step_end(test_step(x))

  • batch – The batched data as it is returned by the test DataLoader.

  • batch_idx – the index of the batch

  • dataloader_idx – the index of the dataloader

on_test_batch_start(batch: Any, batch_idx: int, dataloader_idx: int) None#

Called in the test loop before anything happens for that batch.

Parameters
  • batch – The batched data as it is returned by the test DataLoader.

  • batch_idx – the index of the batch

  • dataloader_idx – the index of the dataloader

on_test_dataloader() None#

Called before requesting the test dataloader.

Deprecated since version v1.5: on_test_dataloader() is deprecated and will be removed in v1.7.0. Please use test_dataloader() directly.

on_test_end() None#

Called at the end of testing.

on_test_epoch_end() None#

Called in the test loop at the very end of the epoch.

on_test_epoch_start() None#

Called in the test loop at the very beginning of the epoch.

on_test_model_eval() None#

Sets the model to eval during the test loop.

on_test_model_train() None#

Sets the model to train during the test loop.

on_test_start() None#

Called at the beginning of testing.

on_train_batch_end(outputs: Union[torch.Tensor, Dict[str, Any]], batch: Any, batch_idx: int, unused: int = 0) None#

Called in the training loop after the batch.

Parameters
  • outputs – The outputs of training_step_end(training_step(x))

  • batch – The batched data as it is returned by the training DataLoader.

  • batch_idx – the index of the batch

  • unused – Deprecated argument. Will be removed in v1.7.

on_train_batch_start(batch: Any, batch_idx: int, unused: int = 0) Optional[int]#

Called in the training loop before anything happens for that batch.

If you return -1 here, you will skip training for the rest of the current epoch.

Parameters
  • batch – The batched data as it is returned by the training DataLoader.

  • batch_idx – the index of the batch

  • unused – Deprecated argument. Will be removed in v1.7.
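
An illustrative sketch of the early-exit behavior described above (max_batches_per_epoch is a hypothetical attribute set in __init__):

def on_train_batch_start(self, batch, batch_idx):
    # stop training for the rest of this epoch once the budget is spent
    if batch_idx >= self.max_batches_per_epoch:
        return -1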

on_train_dataloader() None#

Called before requesting the train dataloader.

Deprecated since version v1.5: on_train_dataloader() is deprecated and will be removed in v1.7.0. Please use train_dataloader() directly.

on_train_end() None#

Called at the end of training before logger experiment is closed.

on_train_epoch_end() None#

Called in the training loop at the very end of the epoch.

To access all batch outputs at the end of the epoch, either:

  1. Implement training_epoch_end in the LightningModule OR

  2. Cache data across steps on the attribute(s) of the LightningModule and access them in this hook

on_train_epoch_start() None#

Called in the training loop at the very beginning of the epoch.

on_train_start() None#

Called at the beginning of training after sanity check.

on_val_dataloader() None#

Called before requesting the val dataloader.

Deprecated since version v1.5: on_val_dataloader() is deprecated and will be removed in v1.7.0. Please use val_dataloader() directly.

on_validation_batch_end(outputs: Optional[Union[torch.Tensor, Dict[str, Any]]], batch: Any, batch_idx: int, dataloader_idx: int) None#

Called in the validation loop after the batch.

Parameters
  • outputs – The outputs of validation_step_end(validation_step(x))

  • batch – The batched data as it is returned by the validation DataLoader.

  • batch_idx – the index of the batch

  • dataloader_idx – the index of the dataloader

on_validation_batch_start(batch: Any, batch_idx: int, dataloader_idx: int) None#

Called in the validation loop before anything happens for that batch.

Parameters
  • batch – The batched data as it is returned by the validation DataLoader.

  • batch_idx – the index of the batch

  • dataloader_idx – the index of the dataloader

on_validation_end() None#

Called at the end of validation.

on_validation_epoch_end() None#

Called in the validation loop at the very end of the epoch.

on_validation_epoch_start() None#

Called in the validation loop at the very beginning of the epoch.

on_validation_model_eval() None#

Sets the model to eval during the val loop.

on_validation_model_train() None#

Sets the model to train during the val loop.

on_validation_start() None#

Called at the beginning of validation.

optimizer_step(epoch: int, batch_idx: int, optimizer: Union[torch.optim.optimizer.Optimizer, pytorch_lightning.core.optimizer.LightningOptimizer], optimizer_idx: int = 0, optimizer_closure: Optional[Callable[[], Any]] = None, on_tpu: bool = False, using_native_amp: bool = False, using_lbfgs: bool = False) None#

Override this method to adjust the default way the Trainer calls each optimizer.

By default, Lightning calls step() and zero_grad() as shown in the example once per optimizer. This method (and zero_grad()) won’t be called during the accumulation phase when Trainer(accumulate_grad_batches != 1). Overriding this hook has no benefit with manual optimization.

Parameters
  • epoch – Current epoch

  • batch_idx – Index of current batch

  • optimizer – A PyTorch optimizer

  • optimizer_idx – If you used multiple optimizers, this indexes into that list.

  • optimizer_closure – The optimizer closure. This closure must be executed as it includes the calls to training_step(), optimizer.zero_grad(), and backward().

  • on_tpu – True if TPU backward is required

  • using_native_amp – True if using native amp

  • using_lbfgs – True if the matching optimizer is torch.optim.LBFGS

Examples:

# DEFAULT
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    optimizer.step(closure=optimizer_closure)

# Alternating schedule for optimizer steps (i.e.: GANs)
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
    # update generator opt every step
    if optimizer_idx == 0:
        optimizer.step(closure=optimizer_closure)

    # update discriminator opt every 2 steps
    if optimizer_idx == 1:
        if (batch_idx + 1) % 2 == 0:
            optimizer.step(closure=optimizer_closure)
        else:
            # call the closure by itself to run `training_step` + `backward` without an optimizer step
            optimizer_closure()

    # ...
    # add as many optimizers as you want

Here’s another example showing how to use this for more advanced things such as learning rate warm-up:

# learning rate warm-up
def optimizer_step(
    self,
    epoch,
    batch_idx,
    optimizer,
    optimizer_idx,
    optimizer_closure,
    on_tpu,
    using_native_amp,
    using_lbfgs,
):
    # update params
    optimizer.step(closure=optimizer_closure)

    # manually warm up lr without a scheduler
    if self.trainer.global_step < 500:
        lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
        for pg in optimizer.param_groups:
            pg["lr"] = lr_scale * self.learning_rate
optimizer_zero_grad(epoch: int, batch_idx: int, optimizer: torch.optim.optimizer.Optimizer, optimizer_idx: int)#

Override this method to change the default behaviour of optimizer.zero_grad().

Parameters
  • epoch – Current epoch

  • batch_idx – Index of current batch

  • optimizer – A PyTorch optimizer

  • optimizer_idx – If you used multiple optimizers this indexes into that list.

Examples:

# DEFAULT
def optimizer_zero_grad(self, epoch, batch_idx, optimizer, optimizer_idx):
    optimizer.zero_grad()

# Set gradients to `None` instead of zero to improve performance.
def optimizer_zero_grad(self, epoch, batch_idx, optimizer, optimizer_idx):
    optimizer.zero_grad(set_to_none=True)

See torch.optim.Optimizer.zero_grad() for the explanation of the above example.

optimizers(use_pl_optimizer: bool = True) Union[torch.optim.optimizer.Optimizer, pytorch_lightning.core.optimizer.LightningOptimizer, List[torch.optim.optimizer.Optimizer], List[pytorch_lightning.core.optimizer.LightningOptimizer]]#

Returns the optimizer(s) that are being used during training. Useful for manual optimization.

Parameters

use_pl_optimizer – If True, will wrap the optimizer(s) in a LightningOptimizer for automatic handling of precision and profiling.

Returns

A single optimizer, or a list of optimizers in case multiple ones are present.
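
A minimal manual-optimization sketch (illustrative; assumes self.automatic_optimization = False and two optimizers returned from configure_optimizers(); loss_for_a is a hypothetical helper):

def training_step(self, batch, batch_idx):
    opt_a, opt_b = self.optimizers()

    loss_a = self.loss_for_a(batch)  # hypothetical loss helper
    opt_a.zero_grad()
    self.manual_backward(loss_a)
    opt_a.step()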

parameters(recurse: bool = True) Iterator[torch.nn.parameter.Parameter]#

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Parameters

recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields

Parameter – module parameter

Example:

>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20,)
<class 'torch.Tensor'> (20, 1, 5, 5)
predict_dataloader() Union[torch.utils.data.dataloader.DataLoader, Sequence[torch.utils.data.dataloader.DataLoader]]#

Implement one or multiple PyTorch DataLoaders for prediction.

It’s recommended that all data downloads and preparation happen in prepare_data().

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Returns

A torch.utils.data.DataLoader or a sequence of them specifying prediction samples.

Note

In the case where you return multiple prediction dataloaders, the predict_step() will have an argument dataloader_idx which matches the order here.
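
An illustrative sketch mirroring the test_dataloader() example below (the MNIST dataset, path, and batch_size attribute are assumptions):

def predict_dataloader(self):
    dataset = MNIST(root='/path/to/mnist/', train=False,
                    transform=transforms.ToTensor(), download=True)
    return torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )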

prepare_data() None#

Use this to download and prepare data. Downloading and saving data with multiple processes (distributed settings) will result in corrupted data. Lightning ensures this method is called only within a single process, so you can safely add your downloading logic within.

Warning

DO NOT set state on the model (use setup() instead) since this is NOT called on every device.

Example:

def prepare_data(self):
    # good
    download_data()
    tokenize()
    etc()

    # bad
    self.split = data_split
    self.some_state = some_other_state()

In DDP prepare_data can be called in two ways (using Trainer(prepare_data_per_node)):

  1. Once per node. This is the default and is only called on LOCAL_RANK=0.

  2. Once in total. Only called on GLOBAL_RANK=0.

See prepare_data_per_node.

Example:

# DEFAULT
# called once per node on LOCAL_RANK=0 of that node
Trainer(prepare_data_per_node=True)

# call on GLOBAL_RANK=0 (great for shared file systems)
Trainer(prepare_data_per_node=False)

This is called before requesting the dataloaders:

model.prepare_data()
initialize_distributed()
model.setup(stage)
model.train_dataloader()
model.val_dataloader()
model.test_dataloader()
print(*args, **kwargs) None#

Prints only from process 0. Use this in any distributed mode to log only once.

Parameters
  • *args – The thing to print. The same as for Python’s built-in print function.

  • **kwargs – The same as for Python’s built-in print function.

Example:

def forward(self, x):
    self.print(x, 'in forward')
register_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) torch.utils.hooks.RemovableHandle#

Registers a backward hook on the module.

This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()

register_buffer(name: str, tensor: Optional[torch.Tensor], persistent: bool = True) None#

Adds a buffer to the module.

This is typically used to register a buffer that should not to be considered a model parameter. For example, BatchNorm’s running_mean is not a parameter, but is part of the module’s state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module’s state_dict.

Buffers can be accessed as attributes using given names.

Parameters
  • name (string) – name of the buffer. The buffer can be accessed from this module using the given name

  • tensor (Tensor or None) – buffer to be registered. If None, operations that run on buffers, such as cuda, are ignored, and the buffer is not included in the module’s state_dict.

  • persistent (bool) – whether the buffer is part of this module’s state_dict.

Example:

>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook: Callable[[...], None]) torch.utils.hooks.RemovableHandle#

Registers a forward hook on the module.

The hook will be called every time after forward() has computed an output. It should have the following signature:

hook(module, input, output) -> None or modified output

The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input in-place, but this will have no effect on forward, since it is called after forward() has run.

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()
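
A self-contained sketch (illustrative; the printed shape follows from the random input below):

import torch
from torch import nn

net = nn.Sequential(nn.Linear(2, 2), nn.ReLU())

def record_shape(module, input, output):
    # called after every forward() of the hooked module
    print(type(module).__name__, '->', tuple(output.shape))

handle = net[0].register_forward_hook(record_shape)
net(torch.randn(4, 2))  # prints: Linear -> (4, 2)
handle.remove()  # detach the hook when it is no longer needed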

register_forward_pre_hook(hook: Callable[[...], None]) torch.utils.hooks.RemovableHandle#

Registers a forward pre-hook on the module.

The hook will be called every time before forward() is invoked. It should have the following signature:

hook(module, input) -> None or modified input

The input contains only the positional arguments given to the module. Keyword arguments won’t be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()

register_full_backward_hook(hook: Callable[[torch.nn.modules.module.Module, Union[Tuple[torch.Tensor, ...], torch.Tensor], Union[Tuple[torch.Tensor, ...], torch.Tensor]], Union[None, torch.Tensor]]) torch.utils.hooks.RemovableHandle#

Registers a backward hook on the module.

The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:

hook(module, grad_input, grad_output) -> tuple(Tensor) or None

The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.

For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module’s forward function.

Warning

Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.

Returns

torch.utils.hooks.RemovableHandle – a handle that can be used to remove the added hook by calling handle.remove()
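
A self-contained sketch (illustrative):

import torch
from torch import nn

net = nn.Linear(2, 2)

def report_grads(module, grad_input, grad_output):
    # entries may be None for non-Tensor or unused arguments
    print([None if g is None else tuple(g.shape) for g in grad_output])

handle = net.register_full_backward_hook(report_grads)
x = torch.randn(4, 2, requires_grad=True)
net(x).sum().backward()  # prints: [(4, 2)]
handle.remove()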

register_module(name: str, module: Optional[torch.nn.modules.module.Module]) None#

Alias for add_module().

register_parameter(name: str, param: Optional[torch.nn.parameter.Parameter]) None#

Adds a parameter to the module.

The parameter can be accessed as an attribute using given name.

Parameters
  • name (string) – name of the parameter. The parameter can be accessed from this module using the given name

  • param (Parameter or None) – parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module’s state_dict.
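
Example (illustrative, in the style of the register_buffer() example above; 'scale' is an arbitrary name):

>>> self.register_parameter('scale', nn.Parameter(torch.ones(1)))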

requires_grad_(requires_grad: bool = True) torch.nn.modules.module.T#

Change if autograd should record operations on parameters in this module.

This method sets the parameters’ requires_grad attributes in-place.

This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).

See locally-disable-grad-doc for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.

Parameters

requires_grad (bool) – whether autograd should record operations on parameters in this module. Default: True.

Returns

Module – self
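
An illustrative fine-tuning sketch (model.encoder and model.head are hypothetical submodules):

# freeze the pretrained encoder, train only the head
model.encoder.requires_grad_(False)
model.head.requires_grad_(True)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)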

save_hyperparameters(*args: Any, ignore: Optional[Union[Sequence[str], str]] = None, frame: Optional[frame] = None, logger: bool = True) None#

Save arguments to hparams attribute.

Parameters
  • args – a single object of type dict, Namespace, or OmegaConf, or string names of arguments from the class __init__

  • ignore – an argument name or a list of argument names from class __init__ to be ignored

  • frame – a frame object. Default is None

  • logger – Whether to send the hyperparameters to the logger. Default: True

Example:
>>> class ManuallyArgsModel(HyperparametersMixin):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # manually assign arguments
...         self.save_hyperparameters('arg1', 'arg3')
...     def forward(self, *args, **kwargs):
...         ...
>>> model = ManuallyArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg3": 3.14
>>> class AutomaticArgsModel(HyperparametersMixin):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # equivalent automatic
...         self.save_hyperparameters()
...     def forward(self, *args, **kwargs):
...         ...
>>> model = AutomaticArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg2": abc
"arg3": 3.14
>>> class SingleArgModel(HyperparametersMixin):
...     def __init__(self, params):
...         super().__init__()
...         # manually assign single argument
...         self.save_hyperparameters(params)
...     def forward(self, *args, **kwargs):
...         ...
>>> model = SingleArgModel(Namespace(p1=1, p2='abc', p3=3.14))
>>> model.hparams
"p1": 1
"p2": abc
"p3": 3.14
>>> class ManuallyArgsModel(HyperparametersMixin):
...     def __init__(self, arg1, arg2, arg3):
...         super().__init__()
...         # pass argument(s) to ignore as a string or in a list
...         self.save_hyperparameters(ignore='arg2')
...     def forward(self, *args, **kwargs):
...         ...
>>> model = ManuallyArgsModel(1, 'abc', 3.14)
>>> model.hparams
"arg1": 1
"arg3": 3.14
set_extra_state(state: Any)#

This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.

Parameters

state (dict) – Extra state from the state_dict
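
A minimal sketch pairing set_extra_state() with get_extra_state() (illustrative; the Counter module is hypothetical):

import torch

class Counter(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.calls = 0  # plain Python state, not a parameter or buffer

    def get_extra_state(self):
        # included in state_dict() alongside parameters and buffers
        return {"calls": self.calls}

    def set_extra_state(self, state):
        # restored automatically by load_state_dict()
        self.calls = state["calls"]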

setup(stage: Optional[str] = None) None#

Called at the beginning of fit (train + validate), validate, test, or predict. This is a good hook when you need to build models dynamically or adjust something about them. This hook is called on every process when using DDP.

Parameters

stage – either 'fit', 'validate', 'test', or 'predict'

Example:

class LitModel(...):
    def __init__(self):
        super().__init__()
        self.l1 = None

    def prepare_data(self):
        download_data()
        tokenize()

        # don't do this
        self.something = some_other_state()

    def setup(self, stage):
        data = load_data(...)
        self.l1 = nn.Linear(28, data.num_classes)
share_memory() torch.nn.modules.module.T#

See torch.Tensor.share_memory_()

state_dict(destination=None, prefix='', keep_vars=False)#

Returns a dictionary containing a whole state of the module.

Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.

Returns

dict – a dictionary containing a whole state of the module

Example:

>>> module.state_dict().keys()
['bias', 'weight']
summarize(max_depth: int = 1) pytorch_lightning.utilities.model_summary.ModelSummary#

Summarize this LightningModule.

Deprecated since version v1.5: This method was deprecated in v1.5 in favor of pytorch_lightning.utilities.model_summary.summarize and will be removed in v1.7.

Parameters

max_depth – The maximum depth of layer nesting that the summary will include. A value of 0 turns the layer summary off. Default: 1.

Returns

The model summary object

tbptt_split_batch(batch: Any, split_size: int) List[Any]#

When using truncated backpropagation through time, each batch must be split along the time dimension. Lightning handles this by default, but for custom behavior override this function.

Parameters
  • batch – Current batch

  • split_size – The size of the split

Returns

List of batch splits. Each split will be passed to training_step() to enable truncated back propagation through time. The default implementation splits root level Tensors and Sequences at dim=1 (i.e. time dim). It assumes that each time dim is the same length.

Examples:

def tbptt_split_batch(self, batch, split_size):
    # length of the time dimension, assumed identical for every item in the batch
    time_dims = [len(x[0]) for x in batch
                 if isinstance(x, (torch.Tensor, collections.abc.Sequence))]
    splits = []
    for t in range(0, time_dims[0], split_size):
        batch_split = []
        for i, x in enumerate(batch):
            if isinstance(x, torch.Tensor):
                split_x = x[:, t:t + split_size]
            elif isinstance(x, collections.abc.Sequence):
                split_x = [None] * len(x)
                for batch_idx in range(len(x)):
                    split_x[batch_idx] = x[batch_idx][t:t + split_size]
            batch_split.append(split_x)
        splits.append(batch_split)
    return splits

Note

Called in the training loop after on_train_batch_start() if truncated_bptt_steps > 0. Each returned batch split is passed separately to training_step().

teardown(stage: Optional[str] = None) None#

Called at the end of fit (train + validate), validate, test, or predict.

Parameters

stage – either 'fit', 'validate', 'test', or 'predict'

test_dataloader() Union[torch.utils.data.dataloader.DataLoader, Sequence[torch.utils.data.dataloader.DataLoader]]#

Implement one or multiple PyTorch DataLoaders for testing.

For data processing use the following pattern:

  • download in prepare_data()

  • process and split in setup()

However, the above are only necessary for distributed processing.

Warning

do not assign state in prepare_data

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Returns

A torch.utils.data.DataLoader or a sequence of them specifying testing samples.

Example:

def test_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )

    return loader

# can also return multiple dataloaders
def test_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]

Note

If you don’t need a test dataset and a test_step(), you don’t need to implement this method.

Note

In the case where you return multiple test dataloaders, the test_step() will have an argument dataloader_idx which matches the order here.

test_epoch_end(outputs: Union[List[Union[torch.Tensor, Dict[str, Any]]], List[List[Union[torch.Tensor, Dict[str, Any]]]]]) None#

Called at the end of a test epoch with the output of all test steps.

# the pseudocode for these calls
test_outs = []
for test_batch in test_data:
    out = test_step(test_batch)
    test_outs.append(out)
test_epoch_end(test_outs)
Parameters

outputs – List of outputs you defined in test_step_end(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader

Returns

None

Note

If you didn’t define a test_step(), this won’t be called.

Examples

With a single dataloader:

def test_epoch_end(self, outputs):
    # do something with the outputs of all test batches;
    # assumes each test_step returned a dict with a "preds" tensor
    all_test_preds = torch.cat([out["preds"] for out in outputs])

    some_result = calc_all_results(all_test_preds)
    self.log("some_result", some_result)

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each test step for that dataloader.

def test_epoch_end(self, outputs):
    final_value = 0
    for dataloader_outputs in outputs:
        for test_step_out in dataloader_outputs:
            # do something
            final_value += test_step_out

    self.log("final_metric", final_value)
test_step_end(*args, **kwargs) Optional[Union[torch.Tensor, Dict[str, Any]]]#

Use this when testing with dp or ddp2 because test_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note

If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
step_output = [test_step(sub_batch) for sub_batch in sub_batches]
test_step_end(step_output)
Parameters

step_output – What you return in test_step() for each batch part.

Returns

None or anything

# WITHOUT test_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    loss = self.softmax(out)
    self.log("test_loss", loss)


# --------------
# with test_step_end to do softmax over the full batch
def test_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    return out


def test_step_end(self, output_results):
    # this out is now the full size of the batch
    all_test_step_outs = output_results.out
    loss = nce_loss(all_test_step_outs)
    self.log("test_loss", loss)

See also

See the Multi GPU Training guide for more details.

to(*args: Any, **kwargs: Any) typing_extensions.Self#

Moves and/or casts the parameters and buffers.

This can be called as:

to(device=None, dtype=None, non_blocking=False)

to(dtype, non_blocking=False)

to(tensor, non_blocking=False)

Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices. See below for examples.

Note

This method modifies the module in-place.

Parameters
  • device – the desired device of the parameters and buffers in this module

  • dtype – the desired floating point type of the floating point parameters and buffers in this module

  • tensor – Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

Returns

Module – self

Example:
>>> class ExampleModule(DeviceDtypeModuleMixin):
...     def __init__(self, weight: torch.Tensor):
...         super().__init__()
...         self.register_buffer('weight', weight)
>>> _ = torch.manual_seed(0)
>>> module = ExampleModule(torch.rand(3, 4))
>>> module.weight 
tensor([[...]])
>>> module.to(torch.double)
ExampleModule()
>>> module.weight 
tensor([[...]], dtype=torch.float64)
>>> cpu = torch.device('cpu')
>>> module.to(cpu, dtype=torch.half, non_blocking=True)
ExampleModule()
>>> module.weight 
tensor([[...]], dtype=torch.float16)
>>> module.to(cpu)
ExampleModule()
>>> module.weight 
tensor([[...]], dtype=torch.float16)
>>> module.device
device(type='cpu')
>>> module.dtype
torch.float16
to_empty(*, device: Union[str, torch.device]) torch.nn.modules.module.T#

Moves the parameters and buffers to the specified device without copying storage.

Parameters

device (torch.device) – The desired device of the parameters and buffers in this module.

Returns

Module – self

to_onnx(file_path: Union[str, pathlib.Path], input_sample: Optional[Any] = None, **kwargs)#

Saves the model in ONNX format.

Parameters
  • file_path – The path of the file the onnx model should be saved to.

  • input_sample – An input for tracing. Default: None (Use self.example_input_array)

  • **kwargs – Will be passed to torch.onnx.export function.

Example

>>> class SimpleModel(LightningModule):
...     def __init__(self):
...         super().__init__()
...         self.l1 = torch.nn.Linear(in_features=64, out_features=4)
...
...     def forward(self, x):
...         return torch.relu(self.l1(x.view(x.size(0), -1)))
>>> with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
...     model = SimpleModel()
...     input_sample = torch.randn((1, 64))
...     model.to_onnx(tmpfile.name, input_sample, export_params=True)
...     os.path.isfile(tmpfile.name)
True
to_torchscript(file_path: Optional[Union[str, pathlib.Path]] = None, method: Optional[str] = 'script', example_inputs: Optional[Any] = None, **kwargs) Union[torch._C.ScriptModule, Dict[str, torch._C.ScriptModule]]#

By default compiles the whole model to a ScriptModule. If you want to use tracing, please provide the argument method='trace' and make sure that either the example_inputs argument is provided, or the model has example_input_array set. If you would like to customize the modules that are scripted, you should override this method. In case you want to return multiple modules, we recommend using a dictionary.

Parameters
  • file_path – Path where to save the torchscript. Default: None (no file saved).

  • method – Whether to use TorchScript’s script or trace method. Default: ‘script’

  • example_inputs – An input to be used to do tracing when method is set to ‘trace’. Default: None (uses example_input_array)

  • **kwargs – Additional arguments that will be passed to the torch.jit.script() or torch.jit.trace() function.

Note

  • Requires the implementation of the forward() method.

  • The exported script will be set to evaluation mode.

  • It is recommended that you install the latest supported version of PyTorch to use this feature without limitations. See also the torch.jit documentation for supported features.

Example

>>> class SimpleModel(LightningModule):
...     def __init__(self):
...         super().__init__()
...         self.l1 = torch.nn.Linear(in_features=64, out_features=4)
...
...     def forward(self, x):
...         return torch.relu(self.l1(x.view(x.size(0), -1)))
...
>>> model = SimpleModel()
>>> model.to_torchscript(file_path="model.pt")  
>>> os.path.isfile("model.pt")  
>>> torch.jit.save(model.to_torchscript(file_path="model_trace.pt", method='trace', 
...                                     example_inputs=torch.randn(1, 64)))  
>>> os.path.isfile("model_trace.pt")  
True
Returns

This LightningModule as a torchscript, regardless of whether file_path is defined or not.

toggle_optimizer(optimizer: Union[torch.optim.optimizer.Optimizer, pytorch_lightning.core.optimizer.LightningOptimizer], optimizer_idx: int) None#

Makes sure only the gradients of the current optimizer’s parameters are calculated in the training step to prevent dangling gradients in multiple-optimizer setup.

This is only called automatically when automatic optimization is enabled and multiple optimizers are used. It works with untoggle_optimizer() to make sure param_requires_grad_state is properly reset.

Parameters
  • optimizer – The optimizer to toggle.

  • optimizer_idx – The index of the optimizer to toggle.
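
An illustrative manual-optimization sketch (assumes self.automatic_optimization = False and two optimizers; generator_loss is a hypothetical helper):

def training_step(self, batch, batch_idx):
    opt_g, opt_d = self.optimizers()

    self.toggle_optimizer(opt_g, optimizer_idx=0)  # grads flow only to opt_g's params
    loss_g = self.generator_loss(batch)
    opt_g.zero_grad()
    self.manual_backward(loss_g)
    opt_g.step()
    self.untoggle_optimizer(optimizer_idx=0)  # restore the requires_grad state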

train(mode: bool = True) torch.nn.modules.module.T#

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.

Parameters

mode (bool) – whether to set training mode (True) or evaluation mode (False). Default: True.

Returns

Module – self

train_dataloader() Union[torch.utils.data.dataloader.DataLoader, Sequence[torch.utils.data.dataloader.DataLoader], Sequence[Sequence[torch.utils.data.dataloader.DataLoader]], Sequence[Dict[str, torch.utils.data.dataloader.DataLoader]], Dict[str, torch.utils.data.dataloader.DataLoader], Dict[str, Dict[str, torch.utils.data.dataloader.DataLoader]], Dict[str, Sequence[torch.utils.data.dataloader.DataLoader]]]#

Implement one or more PyTorch DataLoaders for training.

Returns

A collection of torch.utils.data.DataLoader specifying training samples. In the case of multiple dataloaders, please see this section.

The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.

For data processing use the following pattern:

  • download in prepare_data()

  • process and split in setup()

However, the above are only necessary for distributed processing.

Warning

do not assign state in prepare_data

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Example:

# single dataloader
def train_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=True, transform=transform,
                    download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=True
    )
    return loader

# multiple dataloaders, return as list
def train_dataloader(self):
    mnist = MNIST(...)
    cifar = CIFAR(...)
    mnist_loader = torch.utils.data.DataLoader(
        dataset=mnist, batch_size=self.batch_size, shuffle=True
    )
    cifar_loader = torch.utils.data.DataLoader(
        dataset=cifar, batch_size=self.batch_size, shuffle=True
    )
    # each batch will be a list of tensors: [batch_mnist, batch_cifar]
    return [mnist_loader, cifar_loader]

# multiple dataloader, return as dict
def train_dataloader(self):
    mnist = MNIST(...)
    cifar = CIFAR(...)
    mnist_loader = torch.utils.data.DataLoader(
        dataset=mnist, batch_size=self.batch_size, shuffle=True
    )
    cifar_loader = torch.utils.data.DataLoader(
        dataset=cifar, batch_size=self.batch_size, shuffle=True
    )
    # each batch will be a dict of tensors: {'mnist': batch_mnist, 'cifar': batch_cifar}
    return {'mnist': mnist_loader, 'cifar': cifar_loader}
training_epoch_end(outputs: List[Union[torch.Tensor, Dict[str, Any]]]) None#

Called at the end of the training epoch with the outputs of all training steps. Use this in case you need to do something with all the outputs returned by training_step().

# the pseudocode for these calls
train_outs = []
for train_batch in train_data:
    out = training_step(train_batch)
    train_outs.append(out)
training_epoch_end(train_outs)
Parameters

outputs – List of outputs you defined in training_step(). If there are multiple optimizers or when using truncated_bptt_steps > 0, the lists have the dimensions (n_batches, tbptt_steps, n_optimizers). Dimensions of length 1 are squeezed.

Returns

None

Note

If this method is not overridden, this won’t be called.

Example:

def training_epoch_end(self, training_step_outputs):
    # do something with all training_step outputs
    for out in training_step_outputs:
        ...
training_step_end(step_output: Union[torch.Tensor, Dict[str, Any]]) Union[torch.Tensor, Dict[str, Any]]#

Use this when training with dp or ddp2 because training_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note

If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code

# pseudocode
sub_batches = split_batches_for_dp(batch)
step_output = [training_step(sub_batch) for sub_batch in sub_batches]
training_step_end(step_output)
Parameters

step_output – What you return in training_step for each batch part.

Returns

Anything

When using dp/ddp2 distributed backends, only a portion of the batch is inside the training_step:

def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)

    # softmax uses only a portion of the batch in the denominator
    loss = self.softmax(out)
    loss = nce_loss(loss)
    return loss

If you wish to do something with all the parts of the batch, then use this method to do it:

def training_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    return {"pred": out}


def training_step_end(self, training_step_outputs):
    gpu_0_pred = training_step_outputs[0]["pred"]
    gpu_1_pred = training_step_outputs[1]["pred"]
    gpu_n_pred = training_step_outputs[n]["pred"]

    # this softmax now uses the full batch
    loss = nce_loss([gpu_0_pred, gpu_1_pred, gpu_n_pred])
    return loss

See also

See the Multi GPU Training guide for more details.

transfer_batch_to_device(batch: Any, device: torch.device, dataloader_idx: int) Any#

Override this hook if your DataLoader returns tensors wrapped in a custom data structure.

The data types listed below (and any arbitrary nesting of them) are supported out of the box:

  • torch.Tensor or anything that implements .to(…)

  • list

  • dict

  • tuple

  • torchtext.data.batch.Batch

For anything else, you need to define how the data is moved to the target device (CPU, GPU, TPU, …).

Note

This hook should only transfer the data and not modify it, nor should it move the data to any other device than the one passed in as argument (unless you know what you are doing). To check the current state of execution of this hook, you can use self.trainer.training/testing/validating/predicting and branch your logic accordingly.

Note

This hook only runs on single GPU training and DDP (no data-parallel). Data-Parallel support will come in the near future.

Parameters
  • batch – A batch of data that needs to be transferred to a new device.

  • device – The target device as defined in PyTorch.

  • dataloader_idx – The index of the dataloader to which the batch belongs.

Returns

A reference to the data on the new device.

Example:

def transfer_batch_to_device(self, batch, device, dataloader_idx):
    if isinstance(batch, CustomBatch):
        # move all tensors in your custom data structure to the device
        batch.samples = batch.samples.to(device)
        batch.targets = batch.targets.to(device)
    elif dataloader_idx == 0:
        # skip device transfer for the first dataloader or anything you wish
        pass
    else:
        batch = super().transfer_batch_to_device(batch, device, dataloader_idx)
    return batch
Raises

MisconfigurationException – If using data-parallel, Trainer(strategy='dp').

See also

  • move_data_to_device()

  • apply_to_collection()

property truncated_bptt_steps: int#

Enables Truncated Backpropagation Through Time in the Trainer when set to a positive integer.

It represents the number of times training_step() gets called before backpropagation. If this is > 0, the training_step() receives an additional argument hiddens and is expected to return a hidden state.
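
An illustrative sketch (the LSTM submodule and loss helper are assumptions):

class MyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.truncated_bptt_steps = 2  # backprop every 2 time steps

    # training_step gains an extra `hiddens` argument when truncated_bptt_steps > 0
    def training_step(self, batch, batch_idx, hiddens):
        out, hiddens = self.lstm(batch, hiddens)
        loss = self.loss_fn(out)  # hypothetical loss helper
        return {"loss": loss, "hiddens": hiddens}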

type(dst_type: Union[str, torch.dtype]) typing_extensions.Self#

Casts all parameters and buffers to dst_type.

Parameters

dst_type (type or string) – the desired type

Returns

Module – self

unfreeze() None#

Unfreeze all parameters for training.

Example:

model = MyLightningModule(...)
model.unfreeze()
untoggle_optimizer(optimizer_idx: int) None#

Resets the state of required gradients that were toggled with toggle_optimizer().

This is only called automatically when automatic optimization is enabled and multiple optimizers are used.

Parameters

optimizer_idx – The index of the optimizer to untoggle.

property use_amp: bool#

Deprecated since version v1.6: This property was deprecated in v1.6 and will be removed in v1.8.

val_dataloader() Union[torch.utils.data.dataloader.DataLoader, Sequence[torch.utils.data.dataloader.DataLoader]]#

Implement one or multiple PyTorch DataLoaders for validation.

The dataloader you return will not be reloaded unless you set Trainer.reload_dataloaders_every_n_epochs to a positive integer.

It’s recommended that all data downloads and preparation happen in prepare_data().

Note

Lightning adds the correct sampler for distributed and arbitrary hardware. There is no need to set it yourself.

Returns

A torch.utils.data.DataLoader or a sequence of them specifying validation samples.

Examples:

def val_dataloader(self):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.5,), (1.0,))])
    dataset = MNIST(root='/path/to/mnist/', train=False,
                    transform=transform, download=True)
    loader = torch.utils.data.DataLoader(
        dataset=dataset,
        batch_size=self.batch_size,
        shuffle=False
    )

    return loader

# can also return multiple dataloaders
def val_dataloader(self):
    return [loader_a, loader_b, ..., loader_n]

Note

If you don’t need a validation dataset and a validation_step(), you don’t need to implement this method.

Note

In the case where you return multiple validation dataloaders, the validation_step() will have an argument dataloader_idx which matches the order here.

validation_epoch_end(outputs: Union[List[Union[torch.Tensor, Dict[str, Any]]], List[List[Union[torch.Tensor, Dict[str, Any]]]]]) None#

Called at the end of the validation epoch with the outputs of all validation steps.

# the pseudocode for these calls
val_outs = []
for val_batch in val_data:
    out = validation_step(val_batch)
    val_outs.append(out)
validation_epoch_end(val_outs)
Parameters

outputs – List of outputs you defined in validation_step(), or if there are multiple dataloaders, a list containing a list of outputs for each dataloader.

Returns

None

Note

If you didn’t define a validation_step(), this won’t be called.

Examples

With a single dataloader:

def validation_epoch_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

With multiple dataloaders, outputs will be a list of lists. The outer list contains one entry per dataloader, while the inner list contains the individual outputs of each validation step for that dataloader.

def validation_epoch_end(self, outputs):
    final_value = 0
    for dataloader_outputs in outputs:
        for val_step_out in dataloader_outputs:
            # do something
            final_value += val_step_out

    self.log("final_metric", final_value)
validation_step_end(*args, **kwargs) Optional[Union[torch.Tensor, Dict[str, Any]]]#

Use this when validating with dp or ddp2 because validation_step() will operate on only part of the batch. However, this is still optional and only needed for things like softmax or NCE loss.

Note

If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code.

# pseudocode
sub_batches = split_batches_for_dp(batch)
step_output = [validation_step(sub_batch) for sub_batch in sub_batches]
validation_step_end(step_output)
Parameters

step_output – What you return in validation_step() for each batch part.

Returns

None or anything

# WITHOUT validation_step_end
# if used in DP or DDP2, this batch is 1/num_gpus large
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self.encoder(x)
    loss = self.softmax(out)
    loss = nce_loss(loss)
    self.log("val_loss", loss)


# --------------
# with validation_step_end to do softmax over the full batch
def validation_step(self, batch, batch_idx):
    # batch is 1/num_gpus big
    x, y = batch

    out = self(x)
    return out


def validation_step_end(self, val_step_outputs):
    for out in val_step_outputs:
        ...

See also

See the Multi GPU Training guide for more details.

xpu(device: Optional[Union[int, torch.device]] = None) torch.nn.modules.module.T#

Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on XPU while being optimized.

Note

This method modifies the module in-place.

Parameters

device (int, optional) – if specified, all parameters will be copied to that device

Returns

Module – self

zero_grad(set_to_none: bool = False) None#

Sets gradients of all model parameters to zero. See similar function under torch.optim.Optimizer for more context.

Parameters

set_to_none (bool) – instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.

training: bool#