nupic.torch.modules package
nupic.torch.modules.flatten
nupic.torch.modules.k_winners
- class KWinners(*args: Any, **kwargs: Any)
Bases: KWinnersBase
Applies the k-winners function to the input tensor.
See htmresearch.frameworks.pytorch.functions.k_winners.
- Parameters
n (int) – Number of units.
percent_on (float) – Fraction of units allowed to remain active: only the top k = percent_on * n activations are kept; the rest are set to zero.
k_inference_factor (float) – During inference (training=False), percent_on is multiplied by this factor. percent_on * k_inference_factor must be strictly less than 1.0, ideally much less.
boost_strength (float) – Boost strength (0.0 implies no boosting).
boost_strength_factor (float) – Boost strength factor to use, in [0..1].
duty_cycle_period (int) – The period used to calculate duty cycles.
break_ties (bool) – Whether to use strict k-winners. Using break_ties=False is faster but may occasionally result in more than k active units.
relu (bool) – Simulate the effect of applying a ReLU before the k-winners selection.
inplace (bool) – Modify the input in-place.
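A minimal usage sketch (illustrative values; assumes KWinners is importable from the package root, as in the nupic.torch examples):
import torch
from nupic.torch.modules import KWinners

# Keep the top 10% of 2048 units active (illustrative values).
kw = KWinners(n=2048, percent_on=0.1, boost_strength=1.0)

x = torch.randn(8, 2048)    # a batch of 8 dense activation vectors
y = kw(x)                   # roughly k = 0.1 * 2048 units stay non-zero per sample
print((y != 0).sum(dim=1))  # ties may occasionally add a few extra winners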
- class KWinners2d(*args: Any, **kwargs: Any)
Bases: KWinnersBase
Applies the k-winners function to the input tensor.
See htmresearch.frameworks.pytorch.functions.k_winners2d.
- Parameters
channels (int) – Number of channels (filters) in the convolutional layer.
percent_on (float) – Fraction of units allowed to remain active: only the top k = percent_on * (number of input units) activations are kept; the rest are set to zero.
k_inference_factor (float) – During inference (training=False), percent_on is multiplied by this factor. percent_on * k_inference_factor must be strictly less than 1.0, ideally much less.
boost_strength (float) – Boost strength (0.0 implies no boosting).
boost_strength_factor (float) – Boost strength factor to use, in [0..1].
duty_cycle_period (int) – The period used to calculate duty cycles.
local (bool) – Whether to choose the k-winners locally (across the channels at each spatial location) or globally (across the whole input, all channels at once).
break_ties (bool) – Whether to use strict k-winners. Using break_ties=False is faster but may occasionally result in more than k active units.
relu (bool) – Simulate the effect of applying a ReLU before the k-winners selection.
inplace (bool) – Modify the input in-place.
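A sketch for a convolutional feature map (illustrative values; local=True is chosen here only to show per-location selection):
import torch
from nupic.torch.modules import KWinners2d

# 64-channel feature map; keep the top 15% of units active (illustrative).
kw = KWinners2d(channels=64, percent_on=0.15, local=True)

x = torch.randn(8, 64, 28, 28)
y = kw(x)  # with local=True, winners are chosen across channels at each location;
           # with local=False, the whole feature map competes at once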
- class KWinnersBase(*args: Any, **kwargs: Any)
Bases: Module
Base KWinners class.
- Parameters
percent_on (float) – Fraction of units allowed to remain active: only the top k = percent_on * (number of input units) activations are kept; the rest are set to zero.
k_inference_factor (float) – During inference (training=False), percent_on is multiplied by this factor. percent_on * k_inference_factor must be strictly less than 1.0, ideally much less.
boost_strength (float) – Boost strength (0.0 implies no boosting). Must be >= 0.0.
boost_strength_factor (float) – Boost strength factor to use, in [0..1].
duty_cycle_period (int) – The period used to calculate duty cycles.
- update_boost_strength(m)
Function used to update the boost strength of KWinners modules. This is typically done during training at the beginning of each epoch.
Call using torch.nn.Module.apply() after each epoch if required. For example: m.apply(update_boost_strength)
- Parameters
m – KWinners module
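A short sketch of the epoch-boundary decay (illustrative values; assumes boost_strength is multiplied by boost_strength_factor on each update, per the parameter descriptions above):
from nupic.torch.modules import KWinners, update_boost_strength

kw = KWinners(n=128, percent_on=0.1,
              boost_strength=1.0, boost_strength_factor=0.9)
kw.apply(update_boost_strength)
print(kw.boost_strength)  # decayed once: 1.0 * 0.9 = 0.9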
nupic.torch.modules.prunable_sparse_weights
- class PrunableSparseWeightBase
Bases: object
Enable easy setting and getting of the off-mask that defines which weights are zero.
- property off_mask
Gets the value of zero_mask in boolean form, so one may call:
self.weight[~self.off_mask]  # returns weights that are currently on
- class PrunableSparseWeights(*args: Any, **kwargs: Any)
Bases: SparseWeights, PrunableSparseWeightBase
Enforce weight sparsity on a linear module. The off-weights may be changed dynamically through the off_mask property, as in the sketch below.
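A sketch of dynamic pruning (an illustration under assumptions: off_mask is taken to accept a boolean tensor shaped like the weights, with True marking weights forced to zero, and rezero_weights is applied afterwards in case setting the mask does not immediately zero the weights):
import torch
import torch.nn as nn
from nupic.torch.modules import PrunableSparseWeights, rezero_weights

sparse = PrunableSparseWeights(nn.Linear(784, 10), sparsity=0.5)

# Re-target the mask at the 50% of weights with the smallest magnitude.
magnitudes = sparse.weight.detach().abs()
threshold = magnitudes.flatten().kthvalue(magnitudes.numel() // 2).values
sparse.off_mask = magnitudes <= threshold

sparse.apply(rezero_weights)                  # zero out the newly masked weights
on_weights = sparse.weight[~sparse.off_mask]  # weights that are currently on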
- class PrunableSparseWeights2d(*args: Any, **kwargs: Any)
Bases: SparseWeights2d, PrunableSparseWeightBase
Enforce weight sparsity on CNN modules. The off-weights may be changed dynamically through the off_mask property.
nupic.torch.modules.sparse_weights
- class SparseWeights(*args: Any, **kwargs: Any)
Bases: SparseWeightsBase
Enforce weight sparsity on a linear module during training.
Sample usage:
model = nn.Linear(784, 10)
model = SparseWeights(model, sparsity=0.4)
- Parameters
module – The module whose weights are to be sparsified.
weight_sparsity – Percentage of weights that are NON-ZERO in the layer; equal to 1 - sparsity. Note that this is the first positional parameter, kept for backwards compatibility.
sparsity – Percentage of weights that are ZERO in the layer. Accepts either sparsity or weight_sparsity, but not both at a time.
allow_extremes – Allow the values sparsity=0 and sparsity=1. These values are often a sign of a bug in the configuration, because they lead to Identity and Zero layers, respectively, but they can make sense in scenarios where the mask is dynamic.
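A quick check of the effect (illustrative; assumes the wrapper zeroes the masked weights at construction, otherwise apply rezero_weights first):
import torch.nn as nn
from nupic.torch.modules import SparseWeights

sparse = SparseWeights(nn.Linear(784, 10), sparsity=0.4)

# A fixed random mask pins ~40% of the weight entries to zero.
zero_fraction = (sparse.weight == 0).float().mean().item()
print(zero_fraction)  # ≈ 0.4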
- class SparseWeights2d(*args: Any, **kwargs: Any)
Bases: SparseWeightsBase
Enforce weight sparsity on CNN modules.
Sample usage:
model = nn.Conv2d(in_channels, out_channels, kernel_size, …)
model = SparseWeights2d(model, sparsity=0.4)
- Parameters
module – The module whose weights are to be sparsified.
weight_sparsity – Percentage of weights that are NON-ZERO in the layer; equal to 1 - sparsity. Note that this is the first positional parameter, kept for backwards compatibility.
sparsity – Percentage of weights that are ZERO in the layer. Accepts either sparsity or weight_sparsity, but not both at a time.
allow_extremes – Allow the values sparsity=0 and sparsity=1. These values are often a sign of a bug in the configuration, because they lead to Identity and Zero layers, respectively, but they can make sense in scenarios where the mask is dynamic.
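A forward-pass sketch for the convolutional case (illustrative shapes):
import torch
import torch.nn as nn
from nupic.torch.modules import SparseWeights2d

conv = SparseWeights2d(nn.Conv2d(1, 32, kernel_size=5), sparsity=0.4)

x = torch.randn(8, 1, 28, 28)
y = conv(x)     # the wrapper forwards to the inner Conv2d
print(y.shape)  # torch.Size([8, 32, 24, 24])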
- class SparseWeightsBase(*args: Any, **kwargs: Any)
Bases: Module, HasRezeroWeights
Base class for all the Sparse Weights modules.
- Parameters
module – The module whose weights are to be sparsified.
weight_sparsity – Percentage of weights that are NON-ZERO in the layer; equal to 1 - sparsity. Note that this is the first positional parameter, kept for backwards compatibility.
sparsity – Percentage of weights that are ZERO in the layer. Accepts either sparsity or weight_sparsity, but not both at a time.
- property bias
- property weight
- property weight_sparsity
- normalize_sparse_weights(m)
Initialize the weights using kaiming_uniform initialization normalized to the number of non-zeros in the layer instead of the whole input size.
Similar to torch.nn.Linear.reset_parameters(), but applying weight sparsity to the input size.
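Typical use mirrors the apply() pattern of the other helpers (a sketch; assumes normalize_sparse_weights is exported at the package root like rezero_weights, and that model is an existing torch.nn.Module containing sparse layers):
from nupic.torch.modules import normalize_sparse_weights

# Re-initialize all sparse layers with fan-in scaled to the
# non-zero count rather than the full input size.
model.apply(normalize_sparse_weights)  # model: assumed to be defined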
- rezero_weights(m)
Function used to update the weights after each epoch.
Call using torch.nn.Module.apply() after each epoch if required. For example: m.apply(rezero_weights)
- Parameters
m – HasRezeroWeights module
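A training-loop sketch tying the helpers together (model, loader, optimizer, criterion, and num_epochs are assumed to be defined; the per-epoch placement follows the docstrings above):
from nupic.torch.modules import rezero_weights, update_boost_strength

# model, loader, optimizer, criterion, num_epochs: assumed to exist
for epoch in range(num_epochs):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    model.apply(rezero_weights)         # re-zero weights the optimizer moved off zero
    model.apply(update_boost_strength)  # decay KWinners boosting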