nupic.torch.models package

nupic.torch.models.sparse_cnn

class GSCSparseCNN(*args: Any, **kwargs: Any)[source]

Bases: Sequential

Sparse CNN model used to classify the Google Speech Commands dataset, as described in the "How Can We Be So Dense?" paper.

Parameters
  • cnn_out_channels – Output channels for each CNN layer

  • cnn_percent_on – Percent of units allowed to remain on each convolution layer

  • linear_units – Number of units in the linear layer

  • linear_percent_on – Percent of units allowed to remain on the linear layer

  • k_inference_factor – During inference (training=False), percent_on is increased in all sparse layers by this factor

  • boost_strength – Boost strength (0.0 implies no boosting)

  • boost_strength_factor – Boost strength factor to use [0..1]

  • duty_cycle_period – The period used to calculate duty cycles

  • kwinner_local – Whether to choose the k-winners locally (across the channels at each location) or globally (across the whole input, over all channels)

  • cnn_sparsity – Percent of weights that are zero in each CNN layer

  • linear_sparsity – Percent of weights that are zero in the linear layer.

class GSCSuperSparseCNN(*args: Any, **kwargs: Any)[source]

Bases: GSCSparseCNN

Super Sparse CNN model used to classify the Google Speech Commands dataset, as described in the "How Can We Be So Dense?" paper. This model provides a sparser version of GSCSparseCNN.

class MNISTSparseCNN(*args: Any, **kwargs: Any)[source]

Bases: Sequential

Sparse CNN model used to classify the MNIST dataset, as described in the "How Can We Be So Dense?" paper.

Parameters
  • cnn_out_channels – Output channels for each CNN layer

  • cnn_percent_on – Percent of units allowed to remain on each convolution layer

  • linear_units – Number of units in the linear layer

  • linear_percent_on – Percent of units allowed to remain on the linear layer

  • k_inference_factor – During inference (training=False), percent_on is increased in all sparse layers by this factor

  • boost_strength – Boost strength (0.0 implies no boosting)

  • boost_strength_factor – Boost strength factor to use [0..1]

  • duty_cycle_period – The period used to calculate duty cycles

  • kwinner_local – Whether to choose the k-winners locally (across the channels at each location) or globally (across the whole input, over all channels)

  • cnn_sparsity – Percent of weights that are zero in each CNN layer

  • linear_sparsity – Percent of weights that are zero in the linear layer

gsc_sparse_cnn(pretrained=False, progress=True, **kwargs)[source]

Sparse CNN model used to classify the 'Google Speech Commands' dataset.

Parameters
  • pretrained – If True, returns a model pre-trained on Google Speech Commands

  • progress – If True, displays a progress bar of the download to stderr

  • kwargs – See GSCSparseCNN

gsc_super_sparse_cnn(pretrained=False, progress=True)[source]

Super Sparse CNN model used to classify the Google Speech Commands dataset, as described in the "How Can We Be So Dense?" paper. This model provides a sparser version of GSCSparseCNN.

Parameters
  • pretrained – If True, returns a model pre-trained on Google Speech Commands

  • progress – If True, displays a progress bar of the download to stderr