nupic.torch
version 0.0.1.dev0
upgrade_to_masked_sparseweights()
binary_entropy()
max_entropy()
kwinners()
kwinners2d()
GSCSparseCNN
GSCSuperSparseCNN
MNISTSparseCNN
gsc_sparse_cnn()
gsc_super_sparse_cnn()
Flatten
Flatten.forward()
KWinners
KWinners.extra_repr()
KWinners.forward()
KWinners.update_duty_cycle()
KWinners2d
KWinners2d.entropy()
KWinners2d.extra_repr()
KWinners2d.forward()
KWinners2d.update_duty_cycle()
KWinnersBase
KWinnersBase.entropy()
KWinnersBase.extra_repr()
KWinnersBase.max_entropy()
KWinnersBase.update_boost_strength()
KWinnersBase.update_duty_cycle()
update_boost_strength()
PrunableSparseWeightBase
PrunableSparseWeightBase.off_mask
PrunableSparseWeights
PrunableSparseWeights2d
HasRezeroWeights
HasRezeroWeights.rezero_weights()
SparseWeights
SparseWeights.rezero_weights()
SparseWeights2d
SparseWeights2d.rezero_weights()
SparseWeightsBase
SparseWeightsBase.extra_repr()
SparseWeightsBase.forward()
SparseWeightsBase.bias
SparseWeightsBase.weight
SparseWeightsBase.weight_sparsity
normalize_sparse_weights()
rezero_weights()
Returns a new state dict with any “zero_weights” tensors converted to “zero_mask” tensors. (The “zero_weights” tensor was a list of indices of zeros in the weight tensor.)
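A minimal sketch of the index-to-mask conversion described above, assuming the “zero_weights” entry holds flat indices into the weight tensor; `indices_to_mask` is a hypothetical helper, not part of the nupic.torch API:

```python
import torch

def indices_to_mask(zero_indices, weight_shape):
    # Hypothetical helper: turn a list of flat indices of zeroed weights
    # ("zero_weights") into a dense boolean mask ("zero_mask") with the
    # same shape as the weight tensor.
    mask = torch.zeros(weight_shape, dtype=torch.bool)
    mask.view(-1)[zero_indices] = True
    return mask
```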
Calculate the entropy for a list of binary random variables.
x – (torch tensor) the probability of each variable being 1.
Returns: (torch tensor) the per-variable entropy, and the total entropy sum(entropy).
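A minimal sketch of this computation (not the library implementation), assuming entropy is measured in bits and that 0·log(0) is defined as 0:

```python
import torch

def binary_entropy(x):
    # Entropy of Bernoulli variables with P(1) = x, in bits:
    # H(p) = -p*log2(p) - (1-p)*log2(1-p)
    entropy = -x * x.log2() - (1 - x) * (1 - x).log2()
    # Zero out the NaNs produced when p is exactly 0 or 1.
    entropy[x * (1 - x) == 0] = 0
    return entropy, entropy.sum()
```

A fair coin (p = 0.5) yields the maximum of 1 bit per variable; a deterministic variable (p = 0 or 1) contributes nothing.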
The maximum entropy we could get with n units and k winners.
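One way to sketch this quantity: with k winners among n units, total entropy is maximized when every unit is active with probability k/n, giving n times the binary entropy of k/n. This is an assumption about the intended formula, not the library's code:

```python
import math

def max_entropy(n, k):
    # Maximum total entropy (in bits) when each of n units is
    # active with probability p = k / n.
    p = k / n
    return n * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))
```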