nupic.torch.functions package

nupic.torch.functions.k_winners

kwinners(x, duty_cycles, k: int, boost_strength: float, break_ties: bool = False, relu: bool = False, inplace: bool = False)

A simple K-winner take all function for creating layers with sparse output.

Use the boost strength to compute a boost factor for each unit represented in x. These factors are used to increase the impact of each unit to improve their chances of being chosen. This encourages participation of more columns in the learning process.

The boosting function is a curve defined as:

\[boostFactors = \exp(-boostStrength \times (dutyCycles - targetDensity))\]

Intuitively, units that have been active (i.e. in the top-k) at exactly the target activation level have a boost factor of 1, meaning their activity is not boosted. Units whose duty cycle drops below the target density are boosted depending on how infrequently they have been active. Units that have been active more than the target activation level have a boost factor below 1, meaning their activity is suppressed and they are less likely to be in the top-k.

Note that we do not transmit the boosted values. We only use boosting to determine the winning units.

The target activation density for each unit is k / number of units. The boostFactor depends on the duty_cycles via an exponential function:

(Figure: the boost factor decays exponentially as duty_cycles increases, passing through a value of 1 where duty_cycles equals target_density.)
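As a small illustration of the formula above (the unit counts and duty-cycle values here are hypothetical, chosen only to show the three regimes), the boost factors can be computed directly:

```python
import torch

# Hypothetical setup for illustration: 4 units, k = 1.
k, n = 1, 4
target_density = k / n            # 0.25
boost_strength = 1.5
duty_cycles = torch.tensor([0.25, 0.10, 0.40, 0.25])

# boostFactors = exp(-boostStrength * (dutyCycles - targetDensity))
boost_factors = torch.exp(-boost_strength * (duty_cycles - target_density))
# A unit at exactly the target density gets a factor of 1.0; under-active
# units get a factor > 1 (boosted); over-active units get < 1 (suppressed).
```

With boost_strength set to 0.0 the exponent vanishes and every factor is 1, which is why a zero boost strength has no effect on the winner selection.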
Parameters
  • x – Current activity of each unit, optionally batched along the 0th dimension.

  • duty_cycles – The averaged duty cycle of each unit.

  • k – The activity of the top k units will be allowed to remain, the rest are set to zero.

  • boost_strength – A boost strength of 0.0 has no effect on x.

  • break_ties – Whether to enforce a strict k winners. Using break_ties=False is faster but may occasionally result in more than k active units.

  • relu – Whether to simulate the effect of applying ReLU before KWinners.

  • inplace – Whether to modify x in place.

Returns

A tensor representing the activity of x after k-winner take all.
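The semantics described above can be sketched in plain PyTorch. This is an illustrative re-implementation, not the library function (`kwinners_sketch` is a hypothetical name, and it omits the break_ties, relu, and inplace options):

```python
import torch

def kwinners_sketch(x, duty_cycles, k, boost_strength):
    """Illustrative sketch of kwinners: boost, pick top-k, zero the rest."""
    target_density = k / x.shape[-1]
    boost = torch.exp(-boost_strength * (duty_cycles - target_density))
    boosted = x * boost
    # Boosted values are used only to choose the winners...
    _, idx = boosted.topk(k, dim=-1)
    mask = torch.zeros_like(x)
    mask.scatter_(-1, idx, 1.0)
    # ...while the original (unboosted) activations are transmitted.
    return x * mask

x = torch.tensor([[0.1, 0.9, 0.3, 0.7]])
duty = torch.full((4,), 2 / 4)
out = kwinners_sketch(x, duty, k=2, boost_strength=0.0)
# With boost_strength=0.0 the top-2 raw activations (0.9 and 0.7) survive.
```

Note how the mask is built from the boosted values but applied to the raw input, matching the statement that boosted values are never transmitted.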

kwinners2d(x, duty_cycles, k: int, boost_strength: float, local: bool = True, break_ties: bool = False, relu: bool = False, inplace: bool = False)

A K-winner take all function for creating Conv2d layers with sparse output.

If local=True, k-winners are chosen independently at each spatial location. For Conv2d inputs of shape (batch, channel, H, W), the top k channels are selected locally for each of the H × W locations. If there is a tie for the kth highest boosted value, there will be more than k winners unless break_ties=True.

The boost strength is used to compute a boost factor for each unit represented in x. These factors are used to increase the impact of each unit to improve their chances of being chosen. This encourages participation of more columns in the learning process. See kwinners() for more details.

Parameters
  • x – Current activity of each unit.

  • duty_cycles – The averaged duty cycle of each unit.

  • k – The activity of the top k units across the channels will be allowed to remain, the rest are set to zero.

  • boost_strength – A boost strength of 0.0 has no effect on x.

  • local – Whether to choose the k-winners locally (across the channels at each spatial location) or globally (across all channels and locations).

  • break_ties – Whether to enforce a strict k winners. Using break_ties=False is faster but may occasionally result in more than k active units.

  • relu – Whether to simulate the effect of applying ReLU before KWinners.

  • inplace – Whether to modify x in place.

Returns

A tensor representing the activity of x after k-winner take all.
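The local=True case can likewise be sketched in plain PyTorch: the top-k selection runs over the channel dimension independently at every (h, w) location. This is an illustrative sketch under that assumption (`kwinners2d_local_sketch` is a hypothetical name, and break_ties, relu, and inplace are again omitted):

```python
import torch

def kwinners2d_local_sketch(x, duty_cycles, k, boost_strength):
    """Illustrative local kwinners2d: at each (h, w) location, keep the
    top-k channels of the boosted activations and zero the rest."""
    n_channels = x.shape[1]
    target_density = k / n_channels
    boost = torch.exp(-boost_strength * (duty_cycles - target_density))
    # Broadcast the per-channel boost over batch and spatial dimensions.
    boosted = x * boost.view(1, -1, 1, 1)
    # Top-k over the channel dimension, independently per location.
    _, idx = boosted.topk(k, dim=1)
    mask = torch.zeros_like(x)
    mask.scatter_(1, idx, 1.0)
    return x * mask

x = torch.arange(16.0).view(1, 4, 2, 2)   # (batch, channel, H, W)
duty = torch.full((4,), 0.5)
out = kwinners2d_local_sketch(x, duty, k=2, boost_strength=0.0)
# Here channels 2 and 3 hold the largest values at every location,
# so channels 0 and 1 are zeroed everywhere.
```

A global variant (local=False) would instead flatten channels and locations together and take a single top-k over the whole input.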