| repo | path | func_name | original_string | language | code | code_tokens | docstring | docstring_tokens | sha | url | partition | idx |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
numenta/htmresearch | htmresearch/algorithms/apical_tiebreak_temporal_memory.py | ApicalTiebreakTemporalMemory._calculateBasalLearning | def _calculateBasalLearning(self,
activeColumns,
burstingColumns,
correctPredictedCells,
activeBasalSegments,
matchingBasalSegments,
basalPotentialOverlaps):
"""
Basic Temporal Memory learning. Correctly predicted cells always have
active basal segments, and we learn on these segments. In bursting
columns, we either learn on an existing basal segment, or we grow a new one.
The only influence apical dendrites have on basal learning is: the apical
dendrites influence which cells are considered "predicted". So an active
apical dendrite can prevent some basal segments in active columns from
learning.
@param activeColumns (numpy array)
@param correctPredictedCells (numpy array)
@param burstingColumns (numpy array)
@param activeBasalSegments (numpy array)
@param matchingBasalSegments (numpy array)
@param basalPotentialOverlaps (numpy array)
@return (tuple)
- learningActiveBasalSegments (numpy array)
Active basal segments on correct predicted cells
- learningMatchingBasalSegments (numpy array)
Matching basal segments selected for learning in bursting columns
- basalSegmentsToPunish (numpy array)
Basal segments that should be punished for predicting an inactive column
- newBasalSegmentCells (numpy array)
Cells in bursting columns that were selected to grow new basal segments
- learningCells (numpy array)
Cells that have learning basal segments or are selected to grow a basal
segment
"""
# Correctly predicted columns
learningActiveBasalSegments = self.basalConnections.filterSegmentsByCell(
activeBasalSegments, correctPredictedCells)
cellsForMatchingBasal = self.basalConnections.mapSegmentsToCells(
matchingBasalSegments)
matchingCells = np.unique(cellsForMatchingBasal)
(matchingCellsInBurstingColumns,
burstingColumnsWithNoMatch) = np2.setCompare(
matchingCells, burstingColumns, matchingCells // self.cellsPerColumn,
rightMinusLeft=True)
learningMatchingBasalSegments = self._chooseBestSegmentPerColumn(
self.basalConnections, matchingCellsInBurstingColumns,
matchingBasalSegments, basalPotentialOverlaps, self.cellsPerColumn)
newBasalSegmentCells = self._getCellsWithFewestSegments(
self.basalConnections, self.rng, burstingColumnsWithNoMatch,
self.cellsPerColumn)
learningCells = np.concatenate(
(correctPredictedCells,
self.basalConnections.mapSegmentsToCells(learningMatchingBasalSegments),
newBasalSegmentCells))
# Incorrectly predicted columns
correctMatchingBasalMask = np.in1d(
cellsForMatchingBasal // self.cellsPerColumn, activeColumns)
basalSegmentsToPunish = matchingBasalSegments[~correctMatchingBasalMask]
return (learningActiveBasalSegments,
learningMatchingBasalSegments,
basalSegmentsToPunish,
newBasalSegmentCells,
learningCells) | python | (duplicate of original_string) | (token list omitted) | (duplicate of docstring in original_string) | (token list omitted) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/apical_tiebreak_temporal_memory.py#L331-L407 | train | 198,700 |
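The "incorrectly predicted columns" step above can be reproduced in isolation. A minimal numpy sketch (the segment and cell ids, `cellsPerColumn`, and active columns below are invented for illustration; floor division maps a cell index to its column):

```python
import numpy as np

cellsPerColumn = 4
activeColumns = np.array([0, 2])                    # columns that actually became active
matchingBasalSegments = np.array([10, 11, 12, 13])  # hypothetical segment ids
cellsForMatchingBasal = np.array([1, 6, 9, 14])     # cell targeted by each segment

# A segment predicted correctly if its cell's column is active.
correctMask = np.in1d(cellsForMatchingBasal // cellsPerColumn, activeColumns)

# Segments whose column stayed inactive get punished.
segmentsToPunish = matchingBasalSegments[~correctMask]
```

Here cells 1 and 9 sit in columns 0 and 2, so their segments survive; segments 11 and 13 predicted inactive columns and are selected for punishment.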
numenta/htmresearch | htmresearch/algorithms/apical_tiebreak_temporal_memory.py | ApicalTiebreakTemporalMemory._calculateApicalLearning | def _calculateApicalLearning(self,
learningCells,
activeColumns,
activeApicalSegments,
matchingApicalSegments,
apicalPotentialOverlaps):
"""
Calculate apical learning for each learning cell.
The set of learning cells was determined completely from basal segments.
Do all apical learning on the same cells.
Learn on any active segments on learning cells. For cells without active
segments, learn on the best matching segment. For cells without a matching
segment, grow a new segment.
@param learningCells (numpy array)
@param activeColumns (numpy array)
@param activeApicalSegments (numpy array)
@param matchingApicalSegments (numpy array)
@param apicalPotentialOverlaps (numpy array)
@return (tuple)
- learningActiveApicalSegments (numpy array)
Active apical segments on correct predicted cells
- learningMatchingApicalSegments (numpy array)
Matching apical segments selected for learning in bursting columns
- apicalSegmentsToPunish (numpy array)
Apical segments that should be punished for predicting an inactive column
- newApicalSegmentCells (numpy array)
Cells in bursting columns that were selected to grow new apical segments
"""
# Cells with active apical segments
learningActiveApicalSegments = self.apicalConnections.filterSegmentsByCell(
activeApicalSegments, learningCells)
# Cells with matching apical segments
learningCellsWithoutActiveApical = np.setdiff1d(
learningCells,
self.apicalConnections.mapSegmentsToCells(learningActiveApicalSegments))
cellsForMatchingApical = self.apicalConnections.mapSegmentsToCells(
matchingApicalSegments)
learningCellsWithMatchingApical = np.intersect1d(
learningCellsWithoutActiveApical, cellsForMatchingApical)
learningMatchingApicalSegments = self._chooseBestSegmentPerCell(
self.apicalConnections, learningCellsWithMatchingApical,
matchingApicalSegments, apicalPotentialOverlaps)
# Cells that need to grow an apical segment
newApicalSegmentCells = np.setdiff1d(learningCellsWithoutActiveApical,
learningCellsWithMatchingApical)
# Incorrectly predicted columns
correctMatchingApicalMask = np.in1d(
cellsForMatchingApical // self.cellsPerColumn, activeColumns)
apicalSegmentsToPunish = matchingApicalSegments[~correctMatchingApicalMask]
return (learningActiveApicalSegments,
learningMatchingApicalSegments,
apicalSegmentsToPunish,
newApicalSegmentCells) | python | (duplicate of original_string) | (token list omitted) | (duplicate of docstring in original_string) | (token list omitted) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/apical_tiebreak_temporal_memory.py#L410-L475 | train | 198,701 |
numenta/htmresearch | htmresearch/algorithms/apical_tiebreak_temporal_memory.py | ApicalTiebreakTemporalMemory._calculateApicalSegmentActivity | def _calculateApicalSegmentActivity(connections, activeInput, connectedPermanence,
activationThreshold, minThreshold):
"""
Calculate the active and matching apical segments for this timestep.
@param connections (SparseMatrixConnections)
@param activeInput (numpy array)
@param connectedPermanence (float)
@param activationThreshold (int)
@param minThreshold (int)
@return (tuple)
- activeSegments (numpy array)
Dendrite segments with enough active connected synapses to cause a
dendritic spike
- matchingSegments (numpy array)
Dendrite segments with enough active potential synapses to be selected for
learning in a bursting column
- potentialOverlaps (numpy array)
The number of active potential synapses for each segment.
Includes counts for active, matching, and nonmatching segments.
"""
# Active
overlaps = connections.computeActivity(activeInput, connectedPermanence)
activeSegments = np.flatnonzero(overlaps >= activationThreshold)
# Matching
potentialOverlaps = connections.computeActivity(activeInput)
matchingSegments = np.flatnonzero(potentialOverlaps >= minThreshold)
return (activeSegments,
matchingSegments,
potentialOverlaps) | python | (duplicate of original_string) | (token list omitted) | (duplicate of docstring in original_string) | (token list omitted) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/apical_tiebreak_temporal_memory.py#L479-L511 | train | 198,702 |
numenta/htmresearch | htmresearch/algorithms/apical_tiebreak_temporal_memory.py | ApicalTiebreakTemporalMemory._calculatePredictedCells | def _calculatePredictedCells(self, activeBasalSegments, activeApicalSegments):
"""
Calculate the predicted cells, given the set of active segments.
An active basal segment is enough to predict a cell.
An active apical segment is *not* enough to predict a cell.
When a cell has both types of segments active, other cells in its minicolumn
must also have both types of segments to be considered predictive.
@param activeBasalSegments (numpy array)
@param activeApicalSegments (numpy array)
@return (numpy array)
"""
cellsForBasalSegments = self.basalConnections.mapSegmentsToCells(
activeBasalSegments)
cellsForApicalSegments = self.apicalConnections.mapSegmentsToCells(
activeApicalSegments)
fullyDepolarizedCells = np.intersect1d(cellsForBasalSegments,
cellsForApicalSegments)
partlyDepolarizedCells = np.setdiff1d(cellsForBasalSegments,
fullyDepolarizedCells)
inhibitedMask = np.in1d(partlyDepolarizedCells // self.cellsPerColumn,
fullyDepolarizedCells // self.cellsPerColumn)
predictedCells = np.append(fullyDepolarizedCells,
partlyDepolarizedCells[~inhibitedMask])
if not self.useApicalTiebreak:
predictedCells = cellsForBasalSegments
return predictedCells | python | (duplicate of original_string) | (token list omitted) | (duplicate of docstring in original_string) | (token list omitted) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/apical_tiebreak_temporal_memory.py#L567-L601 | train | 198,703 |
numenta/htmresearch | htmresearch/algorithms/apical_tiebreak_temporal_memory.py | ApicalTiebreakTemporalMemory._chooseBestSegmentPerCell | def _chooseBestSegmentPerCell(cls,
connections,
cells,
allMatchingSegments,
potentialOverlaps):
"""
For each specified cell, choose its matching segment with largest number
of active potential synapses. When there's a tie, the first segment wins.
@param connections (SparseMatrixConnections)
@param cells (numpy array)
@param allMatchingSegments (numpy array)
@param potentialOverlaps (numpy array)
@return (numpy array)
One segment per cell
"""
candidateSegments = connections.filterSegmentsByCell(allMatchingSegments,
cells)
# Narrow it down to one segment per cell.
onePerCellFilter = np2.argmaxMulti(potentialOverlaps[candidateSegments],
connections.mapSegmentsToCells(
candidateSegments))
learningSegments = candidateSegments[onePerCellFilter]
return learningSegments | python | (duplicate of original_string) | (token list omitted) | (duplicate of docstring in original_string) | (token list omitted) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/apical_tiebreak_temporal_memory.py#L660-L687 | train | 198,704 |
numenta/htmresearch | htmresearch/algorithms/apical_tiebreak_temporal_memory.py | ApicalTiebreakTemporalMemory._chooseBestSegmentPerColumn | def _chooseBestSegmentPerColumn(cls, connections, matchingCells,
allMatchingSegments, potentialOverlaps,
cellsPerColumn):
"""
For all the columns covered by 'matchingCells', choose the column's matching
segment with largest number of active potential synapses. When there's a
tie, the first segment wins.
@param connections (SparseMatrixConnections)
@param matchingCells (numpy array)
@param allMatchingSegments (numpy array)
@param potentialOverlaps (numpy array)
@param cellsPerColumn (int)
@return (numpy array)
One segment per column
"""
candidateSegments = connections.filterSegmentsByCell(allMatchingSegments,
matchingCells)
# Narrow it down to one segment per column.
cellScores = potentialOverlaps[candidateSegments]
columnsForCandidates = (connections.mapSegmentsToCells(candidateSegments) //
cellsPerColumn)
onePerColumnFilter = np2.argmaxMulti(cellScores, columnsForCandidates)
learningSegments = candidateSegments[onePerColumnFilter]
return learningSegments | python | (duplicate of original_string) | (token list omitted) | (duplicate of docstring in original_string) | (token list omitted) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/apical_tiebreak_temporal_memory.py#L691-L716 | train | 198,705 |
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.infer | def infer(self, sensationList, reset=True, objectName=None):
"""
Infer on given sensations.
The provided sensationList is a list of sensations, and each sensation is
a mapping from cortical column to a tuple of two SDR's respectively
corresponding to the location in object space and the feature.
For example, the input can look as follows, if we are inferring a simple
object with two sensations (with very few active bits for simplicity):
sensationList = [
{
0: (set([1, 5, 10]), set([6, 12, 52])), # location, feature for CC0
1: (set([6, 2, 15]), set([64, 1, 5])), # location, feature for CC1
},
{
0: (set([5, 46, 50]), set([8, 10, 11])), # location, feature for CC0
1: (set([1, 6, 45]), set([12, 17, 23])), # location, feature for CC1
},
]
In many use cases, this object can be created by implementations of
ObjectMachines (cf htm.research.object_machine_factory), through their
method providedObjectsToInfer.
If the object is known by the caller, an object name can be specified
as an optional argument, and must match the objects given while learning.
Parameters:
----------------------------
@param sensationList (list)
List of sensations, in the canonical format specified above
@param reset (bool)
If set to True (which is the default value), the network will
be reset after inference.
@param objectName (str)
Name of the objects (must match the names given during learning).
"""
self._unsetLearningMode()
statistics = collections.defaultdict(list)
for sensations in sensationList:
# feed all columns with sensations
for col in xrange(self.numColumns):
location, feature = sensations[col]
self.sensorInputs[col].addDataToQueue(list(feature), 0, 0)
self.externalInputs[col].addDataToQueue(list(location), 0, 0)
self.network.run(1)
self._updateInferenceStats(statistics, objectName)
if reset:
# send reset signal
self._sendReset()
# save statistics
statistics["numSteps"] = len(sensationList)
statistics["object"] = objectName if objectName is not None else "Unknown"
self.statistics.append(statistics) | python | def infer(self, sensationList, reset=True, objectName=None):
"""
Infer on given sensations.
The provided sensationList is a list of sensations, and each sensation is
a mapping from cortical column to a tuple of two SDR's respectively
corresponding to the location in object space and the feature.
For example, the input can look as follows, if we are inferring a simple
object with two sensations (with very few active bits for simplicity):
sensationList = [
{
0: (set([1, 5, 10]), set([6, 12, 52]), # location, feature for CC0
1: (set([6, 2, 15]), set([64, 1, 5]), # location, feature for CC1
},
{
0: (set([5, 46, 50]), set([8, 10, 11]), # location, feature for CC0
1: (set([1, 6, 45]), set([12, 17, 23]), # location, feature for CC1
},
]
In many uses cases, this object can be created by implementations of
ObjectMachines (cf htm.research.object_machine_factory), through their
method providedObjectsToInfer.
If the object is known by the caller, an object name can be specified
as an optional argument, and must match the objects given while learning.
Parameters:
----------------------------
@param sensationList (list)
List of sensations, in the canonical format specified above
@param reset (bool)
If set to True (which is the default value), the network will
be reset after learning.
@param objectName (str)
Name of the objects (must match the names given during learning).
"""
self._unsetLearningMode()
statistics = collections.defaultdict(list)
for sensations in sensationList:
# feed all columns with sensations
for col in xrange(self.numColumns):
location, feature = sensations[col]
self.sensorInputs[col].addDataToQueue(list(feature), 0, 0)
self.externalInputs[col].addDataToQueue(list(location), 0, 0)
self.network.run(1)
self._updateInferenceStats(statistics, objectName)
if reset:
# send reset signal
self._sendReset()
# save statistics
statistics["numSteps"] = len(sensationList)
statistics["object"] = objectName if objectName is not None else "Unknown"
self.statistics.append(statistics) | [
"def",
"infer",
"(",
"self",
",",
"sensationList",
",",
"reset",
"=",
"True",
",",
"objectName",
"=",
"None",
")",
":",
"self",
".",
"_unsetLearningMode",
"(",
")",
"statistics",
"=",
"collections",
".",
"defaultdict",
"(",
"list",
")",
"for",
"sensations"... | Infer on given sensations.
The provided sensationList is a list of sensations, and each sensation is
a mapping from cortical column to a tuple of two SDR's respectively
corresponding to the location in object space and the feature.
For example, the input can look as follows, if we are inferring a simple
object with two sensations (with very few active bits for simplicity):
sensationList = [
{
0: (set([1, 5, 10]), set([6, 12, 52]), # location, feature for CC0
1: (set([6, 2, 15]), set([64, 1, 5]), # location, feature for CC1
},
{
0: (set([5, 46, 50]), set([8, 10, 11]), # location, feature for CC0
1: (set([1, 6, 45]), set([12, 17, 23]), # location, feature for CC1
},
]
In many uses cases, this object can be created by implementations of
ObjectMachines (cf htm.research.object_machine_factory), through their
method providedObjectsToInfer.
If the object is known by the caller, an object name can be specified
as an optional argument, and must match the objects given while learning.
Parameters:
----------------------------
@param sensationList (list)
List of sensations, in the canonical format specified above
@param reset (bool)
If set to True (which is the default value), the network will
be reset after learning.
@param objectName (str)
Name of the objects (must match the names given during learning). | [
"Infer",
"on",
"given",
"sensations",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L401-L465 | train | 198,706 |
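The canonical sensation format and the inference loop can be illustrated without the Network API. Everything below — `infer`, `feed_column`, the `touches` statistic — is a hypothetical stand-in for the real sensor queues and `_updateInferenceStats`, kept only to show the shape of the data:

```python
import collections

def infer(sensation_list, feed_column, object_name=None):
    """Feed every sensation to each cortical column and tally statistics.

    feed_column(col, location, feature) stands in for pushing the SDRs
    into the real sensor/external-input queues and running the network.
    """
    statistics = collections.defaultdict(list)
    for sensations in sensation_list:
        for col, (location, feature) in sensations.items():
            feed_column(col, location, feature)
            # Record a toy per-column statistic for each sensation.
            statistics["touches C" + str(col)].append(len(feature))
    statistics["numSteps"] = len(sensation_list)
    statistics["object"] = object_name if object_name is not None else "Unknown"
    return statistics

# The canonical format: one dict per sensation, mapping cortical column
# to a (location SDR, feature SDR) tuple.
sensation_list = [
    {0: ({1, 5, 10}, {6, 12, 52}), 1: ({6, 2, 15}, {64, 1, 5})},
    {0: ({5, 46, 50}, {8, 10, 11}), 1: ({1, 6, 45}, {12, 17, 23})},
]
stats = infer(sensation_list, feed_column=lambda col, loc, feat: None,
              object_name="mug")
```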
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment._saveL2Representation | def _saveL2Representation(self, objectName):
"""
Record the current active L2 cells as the representation for 'objectName'.
"""
self.objectL2Representations[objectName] = self.getL2Representations()
try:
objectIndex = self.objectNameToIndex[objectName]
except KeyError:
# Grow the matrices as needed.
if self.objectNamesAreIndices:
objectIndex = objectName
if objectIndex >= self.objectL2RepresentationsMatrices[0].nRows():
for matrix in self.objectL2RepresentationsMatrices:
matrix.resize(objectIndex + 1, matrix.nCols())
else:
objectIndex = self.objectL2RepresentationsMatrices[0].nRows()
for matrix in self.objectL2RepresentationsMatrices:
matrix.resize(matrix.nRows() + 1, matrix.nCols())
self.objectNameToIndex[objectName] = objectIndex
for colIdx, matrix in enumerate(self.objectL2RepresentationsMatrices):
activeCells = self.L2Columns[colIdx]._pooler.getActiveCells()
matrix.setRowFromSparse(objectIndex, activeCells,
np.ones(len(activeCells), dtype="float32")) | python | def _saveL2Representation(self, objectName):
"""
Record the current active L2 cells as the representation for 'objectName'.
"""
self.objectL2Representations[objectName] = self.getL2Representations()
try:
objectIndex = self.objectNameToIndex[objectName]
except KeyError:
# Grow the matrices as needed.
if self.objectNamesAreIndices:
objectIndex = objectName
if objectIndex >= self.objectL2RepresentationsMatrices[0].nRows():
for matrix in self.objectL2RepresentationsMatrices:
matrix.resize(objectIndex + 1, matrix.nCols())
else:
objectIndex = self.objectL2RepresentationsMatrices[0].nRows()
for matrix in self.objectL2RepresentationsMatrices:
matrix.resize(matrix.nRows() + 1, matrix.nCols())
self.objectNameToIndex[objectName] = objectIndex
for colIdx, matrix in enumerate(self.objectL2RepresentationsMatrices):
activeCells = self.L2Columns[colIdx]._pooler.getActiveCells()
matrix.setRowFromSparse(objectIndex, activeCells,
np.ones(len(activeCells), dtype="float32")) | [
"def",
"_saveL2Representation",
"(",
"self",
",",
"objectName",
")",
":",
"self",
".",
"objectL2Representations",
"[",
"objectName",
"]",
"=",
"self",
".",
"getL2Representations",
"(",
")",
"try",
":",
"objectIndex",
"=",
"self",
".",
"objectNameToIndex",
"[",
... | Record the current active L2 cells as the representation for 'objectName'. | [
"Record",
"the",
"current",
"active",
"L2",
"cells",
"as",
"the",
"representation",
"for",
"objectName",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L468-L493 | train | 198,707 |
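The grow-on-demand bookkeeping in `_saveL2Representation` can be sketched with plain lists standing in for the sparse matrices; `save_representation` and its arguments are illustrative names, not the region API:

```python
def save_representation(matrices, name_to_index, object_name,
                        active_cells_per_column, num_cells):
    """Record each column's active cells as a binary row for object_name,
    growing the per-column matrices on demand."""
    if object_name in name_to_index:
        object_index = name_to_index[object_name]
    else:
        # New object: append one row to every column's matrix.
        object_index = len(matrices[0])
        for matrix in matrices:
            matrix.append([0] * num_cells)
        name_to_index[object_name] = object_index
    for col_idx, active_cells in enumerate(matrices and active_cells_per_column):
        row = [0] * num_cells
        for cell in active_cells:
            row[cell] = 1
        matrices[col_idx][object_index] = row
    return object_index

matrices = [[], []]          # one growable matrix per cortical column
name_to_index = {}
save_representation(matrices, name_to_index, "cup", [{0, 3}, {1, 4}], 6)
save_representation(matrices, name_to_index, "bowl", [{2}, {5}], 6)
```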
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.plotInferenceStats | def plotInferenceStats(self,
fields,
plotDir="plots",
experimentID=0,
onePlot=True):
"""
Plots and saves the desired inference statistics.
Parameters:
----------------------------
@param fields (list(str))
List of fields to include in the plots
@param experimentID (int)
ID of the experiment (usually 0 if only one was conducted)
@param onePlot (bool)
If true, all cortical columns will be merged in one plot.
"""
if not os.path.exists(plotDir):
os.makedirs(plotDir)
plt.figure()
stats = self.statistics[experimentID]
objectName = stats["object"]
for i in xrange(self.numColumns):
if not onePlot:
plt.figure()
# plot requested stats
for field in fields:
fieldKey = field + " C" + str(i)
plt.plot(stats[fieldKey], marker='+', label=fieldKey)
# format
plt.legend(loc="upper right")
plt.xlabel("Sensation #")
plt.xticks(range(stats["numSteps"]))
plt.ylabel("Number of active bits")
plt.ylim(plt.ylim()[0] - 5, plt.ylim()[1] + 5)
plt.title("Object inference for object {}".format(objectName))
# save
if not onePlot:
relPath = "{}_exp_{}_C{}.png".format(self.name, experimentID, i)
path = os.path.join(plotDir, relPath)
plt.savefig(path)
plt.close()
if onePlot:
relPath = "{}_exp_{}.png".format(self.name, experimentID)
path = os.path.join(plotDir, relPath)
plt.savefig(path)
plt.close() | python | def plotInferenceStats(self,
fields,
plotDir="plots",
experimentID=0,
onePlot=True):
"""
Plots and saves the desired inference statistics.
Parameters:
----------------------------
@param fields (list(str))
List of fields to include in the plots
@param experimentID (int)
ID of the experiment (usually 0 if only one was conducted)
@param onePlot (bool)
If true, all cortical columns will be merged in one plot.
"""
if not os.path.exists(plotDir):
os.makedirs(plotDir)
plt.figure()
stats = self.statistics[experimentID]
objectName = stats["object"]
for i in xrange(self.numColumns):
if not onePlot:
plt.figure()
# plot requested stats
for field in fields:
fieldKey = field + " C" + str(i)
plt.plot(stats[fieldKey], marker='+', label=fieldKey)
# format
plt.legend(loc="upper right")
plt.xlabel("Sensation #")
plt.xticks(range(stats["numSteps"]))
plt.ylabel("Number of active bits")
plt.ylim(plt.ylim()[0] - 5, plt.ylim()[1] + 5)
plt.title("Object inference for object {}".format(objectName))
# save
if not onePlot:
relPath = "{}_exp_{}_C{}.png".format(self.name, experimentID, i)
path = os.path.join(plotDir, relPath)
plt.savefig(path)
plt.close()
if onePlot:
relPath = "{}_exp_{}.png".format(self.name, experimentID)
path = os.path.join(plotDir, relPath)
plt.savefig(path)
plt.close() | [
"def",
"plotInferenceStats",
"(",
"self",
",",
"fields",
",",
"plotDir",
"=",
"\"plots\"",
",",
"experimentID",
"=",
"0",
",",
"onePlot",
"=",
"True",
")",
":",
"if",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"plotDir",
")",
":",
"os",
".",
"make... | Plots and saves the desired inference statistics.
Parameters:
----------------------------
@param fields (list(str))
List of fields to include in the plots
@param experimentID (int)
ID of the experiment (usually 0 if only one was conducted)
@param onePlot (bool)
If true, all cortical columns will be merged in one plot. | [
"Plots",
"and",
"saves",
"the",
"desired",
"inference",
"statistics",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L518-L573 | train | 198,708 |
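`plotInferenceStats` assembles its traces by the `"<field> C<column>"` key convention before plotting. A small sketch of that key-building step (the statistic values below are made up, and matplotlib itself is omitted):

```python
def trace_keys(fields, num_columns):
    """Build the per-column statistic keys that plotInferenceStats reads,
    following the "<field> C<column>" naming convention."""
    return [field + " C" + str(col)
            for col in range(num_columns)
            for field in fields]

stats = {
    "L2 Representation C0": [40, 40],
    "L2 Representation C1": [43, 40],
    "numSteps": 2,
    "object": "cup",
}
keys = trace_keys(["L2 Representation"], num_columns=2)
# Sanity-check that every expected trace exists before plotting it.
missing = [key for key in keys if key not in stats]
```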
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.averageConvergencePoint | def averageConvergencePoint(self, prefix, minOverlap, maxOverlap,
settlingTime=1, firstStat=0, lastStat=None):
"""
For each object, compute the convergence time - the first point when all
L2 columns have converged.
Return the average convergence time and accuracy across all objects.
Using inference statistics for a bunch of runs, locate all traces with the
given prefix. For each trace locate the iteration where it finally settles
within [minOverlap, maxOverlap]. Return the average settling iteration and accuracy across
all runs.
:param prefix: Use this prefix to filter relevant stats.
:param minOverlap: Min target overlap
:param maxOverlap: Max target overlap
:param settlingTime: Settling time between iterations. Default 1
:return: Average settling iteration and accuracy across all runs
"""
convergenceSum = 0.0
numCorrect = 0.0
inferenceLength = 1000000
# For each object
for stats in self.statistics[firstStat:lastStat]:
# For each L2 column locate convergence time
convergencePoint = 0.0
for key in stats.iterkeys():
if prefix in key:
inferenceLength = len(stats[key])
columnConvergence = L4L2Experiment._locateConvergencePoint(
stats[key], minOverlap, maxOverlap)
convergencePoint = max(convergencePoint, columnConvergence)
convergenceSum += ceil(float(convergencePoint) / settlingTime)
if ceil(float(convergencePoint) / settlingTime) <= inferenceLength:
numCorrect += 1
if len(self.statistics[firstStat:lastStat]) == 0:
return 10000.0, 0.0
return (convergenceSum / len(self.statistics[firstStat:lastStat]),
numCorrect / len(self.statistics[firstStat:lastStat]) ) | python | def averageConvergencePoint(self, prefix, minOverlap, maxOverlap,
settlingTime=1, firstStat=0, lastStat=None):
"""
For each object, compute the convergence time - the first point when all
L2 columns have converged.
Return the average convergence time and accuracy across all objects.
Using inference statistics for a bunch of runs, locate all traces with the
given prefix. For each trace locate the iteration where it finally settles
within [minOverlap, maxOverlap]. Return the average settling iteration and accuracy across
all runs.
:param prefix: Use this prefix to filter relevant stats.
:param minOverlap: Min target overlap
:param maxOverlap: Max target overlap
:param settlingTime: Settling time between iterations. Default 1
:return: Average settling iteration and accuracy across all runs
"""
convergenceSum = 0.0
numCorrect = 0.0
inferenceLength = 1000000
# For each object
for stats in self.statistics[firstStat:lastStat]:
# For each L2 column locate convergence time
convergencePoint = 0.0
for key in stats.iterkeys():
if prefix in key:
inferenceLength = len(stats[key])
columnConvergence = L4L2Experiment._locateConvergencePoint(
stats[key], minOverlap, maxOverlap)
convergencePoint = max(convergencePoint, columnConvergence)
convergenceSum += ceil(float(convergencePoint) / settlingTime)
if ceil(float(convergencePoint) / settlingTime) <= inferenceLength:
numCorrect += 1
if len(self.statistics[firstStat:lastStat]) == 0:
return 10000.0, 0.0
return (convergenceSum / len(self.statistics[firstStat:lastStat]),
numCorrect / len(self.statistics[firstStat:lastStat]) ) | [
"def",
"averageConvergencePoint",
"(",
"self",
",",
"prefix",
",",
"minOverlap",
",",
"maxOverlap",
",",
"settlingTime",
"=",
"1",
",",
"firstStat",
"=",
"0",
",",
"lastStat",
"=",
"None",
")",
":",
"convergenceSum",
"=",
"0.0",
"numCorrect",
"=",
"0.0",
"... | For each object, compute the convergence time - the first point when all
L2 columns have converged.
Return the average convergence time and accuracy across all objects.
Using inference statistics for a bunch of runs, locate all traces with the
given prefix. For each trace locate the iteration where it finally settles
on targetValue. Return the average settling iteration and accuracy across
all runs.
:param prefix: Use this prefix to filter relevant stats.
:param minOverlap: Min target overlap
:param maxOverlap: Max target overlap
:param settlingTime: Setting time between iteration. Default 1
:return: Average settling iteration and accuracy across all runs | [
"For",
"each",
"object",
"compute",
"the",
"convergence",
"time",
"-",
"the",
"first",
"point",
"when",
"all",
"L2",
"columns",
"have",
"converged",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L593-L639 | train | 198,709 |
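`_locateConvergencePoint` is referenced but not shown in this chunk. A plausible implementation scans the trace backwards for the last out-of-bounds value; this is a guess at its behavior, not the actual helper:

```python
from math import ceil

def locate_convergence_point(trace, min_overlap, max_overlap):
    """Return the first index from which every later value of `trace`
    stays within [min_overlap, max_overlap]; 0 if the whole trace does,
    len(trace) if even the final value is outside the bounds.
    (A guess at the unshown _locateConvergencePoint helper.)"""
    for i, value in reversed(list(enumerate(trace))):
        if not (min_overlap <= value <= max_overlap):
            # Converged only after this out-of-range point.
            return i + 1
    return 0

trace = [0, 10, 38, 40, 40]           # overlap with the target object
point = locate_convergence_point(trace, min_overlap=30, max_overlap=45)
steps = int(ceil(float(point) / 1))   # settlingTime = 1, as in the method
```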
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.getAlgorithmInstance | def getAlgorithmInstance(self, layer="L2", column=0):
"""
Returns an instance of the underlying algorithm. For example,
layer=L2 and column=1 could return the actual instance of ColumnPooler
that is responsible for column 1.
"""
assert ( (column>=0) and (column<self.numColumns)), ("Column number not "
"in valid range")
if layer == "L2":
return self.L2Columns[column].getAlgorithmInstance()
elif layer == "L4":
return self.L4Columns[column].getAlgorithmInstance()
else:
raise Exception("Invalid layer. Must be 'L4' or 'L2'") | python | def getAlgorithmInstance(self, layer="L2", column=0):
"""
Returns an instance of the underlying algorithm. For example,
layer=L2 and column=1 could return the actual instance of ColumnPooler
that is responsible for column 1.
"""
assert ( (column>=0) and (column<self.numColumns)), ("Column number not "
"in valid range")
if layer == "L2":
return self.L2Columns[column].getAlgorithmInstance()
elif layer == "L4":
return self.L4Columns[column].getAlgorithmInstance()
else:
raise Exception("Invalid layer. Must be 'L4' or 'L2'") | [
"def",
"getAlgorithmInstance",
"(",
"self",
",",
"layer",
"=",
"\"L2\"",
",",
"column",
"=",
"0",
")",
":",
"assert",
"(",
"(",
"column",
">=",
"0",
")",
"and",
"(",
"column",
"<",
"self",
".",
"numColumns",
")",
")",
",",
"(",
"\"Column number not \""... | Returns an instance of the underlying algorithm. For example,
layer=L2 and column=1 could return the actual instance of ColumnPooler
that is responsible for column 1. | [
"Returns",
"an",
"instance",
"of",
"the",
"underlying",
"algorithm",
".",
"For",
"example",
"layer",
"=",
"L2",
"and",
"column",
"=",
"1",
"could",
"return",
"the",
"actual",
"instance",
"of",
"ColumnPooler",
"that",
"is",
"responsible",
"for",
"column",
"1"... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L733-L747 | train | 198,710 |
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.getCurrentObjectOverlaps | def getCurrentObjectOverlaps(self):
"""
Get every L2's current overlap with each L2 object representation that has
been learned.
:return: 2D numpy array.
Each row represents a cortical column. Each column represents an object.
Each value represents the cortical column's current L2 overlap with the
specified object.
"""
overlaps = np.zeros((self.numColumns,
len(self.objectL2Representations)),
dtype="uint32")
for i, representations in enumerate(self.objectL2RepresentationsMatrices):
activeCells = self.L2Columns[i]._pooler.getActiveCells()
overlaps[i, :] = representations.rightVecSumAtNZSparse(activeCells)
return overlaps | python | def getCurrentObjectOverlaps(self):
"""
Get every L2's current overlap with each L2 object representation that has
been learned.
:return: 2D numpy array.
Each row represents a cortical column. Each column represents an object.
Each value represents the cortical column's current L2 overlap with the
specified object.
"""
overlaps = np.zeros((self.numColumns,
len(self.objectL2Representations)),
dtype="uint32")
for i, representations in enumerate(self.objectL2RepresentationsMatrices):
activeCells = self.L2Columns[i]._pooler.getActiveCells()
overlaps[i, :] = representations.rightVecSumAtNZSparse(activeCells)
return overlaps | [
"def",
"getCurrentObjectOverlaps",
"(",
"self",
")",
":",
"overlaps",
"=",
"np",
".",
"zeros",
"(",
"(",
"self",
".",
"numColumns",
",",
"len",
"(",
"self",
".",
"objectL2Representations",
")",
")",
",",
"dtype",
"=",
"\"uint32\"",
")",
"for",
"i",
",",
... | Get every L2's current overlap with each L2 object representation that has
been learned.
:return: 2D numpy array.
Each row represents a cortical column. Each column represents an object.
Each value represents the cortical column's current L2 overlap with the
specified object. | [
"Get",
"every",
"L2",
"s",
"current",
"overlap",
"with",
"each",
"L2",
"object",
"representation",
"that",
"has",
"been",
"learned",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L750-L768 | train | 198,711 |
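The overlap computation performed by `rightVecSumAtNZSparse` can be reproduced with plain sets; here `stored[col]` maps object index to that column's stored representation (names and data are illustrative):

```python
def current_object_overlaps(stored, active_cells_per_column):
    """For each cortical column, count the overlap between its currently
    active cells and every learned object representation."""
    return [[len(active & stored_col[obj]) for obj in range(len(stored_col))]
            for stored_col, active in zip(stored, active_cells_per_column)]

stored = [
    [{0, 1, 2}, {3, 4, 5}],   # column 0: representations of objects 0 and 1
    [{0, 9}, {8, 9}],         # column 1
]
# One row per column, one entry per object, as in the method above.
overlaps = current_object_overlaps(stored, [{1, 2, 4}, {9}])
```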
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.isObjectClassified | def isObjectClassified(self, objectName, minOverlap=None, maxL2Size=None):
"""
Return True if objectName is currently unambiguously classified by every L2
column. Classification is correct and unambiguous if the current L2 overlap
with the true object is at least minOverlap and if the size of the L2
representation is no more than maxL2Size
:param minOverlap: min overlap to consider the object as recognized.
Defaults to half of the SDR size
:param maxL2Size: max size for the L2 representation
Defaults to 1.5 * SDR size
:return: True/False
"""
L2Representation = self.getL2Representations()
objectRepresentation = self.objectL2Representations[objectName]
sdrSize = self.config["L2Params"]["sdrSize"]
if minOverlap is None:
minOverlap = sdrSize / 2
if maxL2Size is None:
maxL2Size = 1.5*sdrSize
numCorrectClassifications = 0
for col in xrange(self.numColumns):
overlapWithObject = len(objectRepresentation[col] & L2Representation[col])
if ( overlapWithObject >= minOverlap and
len(L2Representation[col]) <= maxL2Size ):
numCorrectClassifications += 1
return numCorrectClassifications == self.numColumns | python | def isObjectClassified(self, objectName, minOverlap=None, maxL2Size=None):
"""
Return True if objectName is currently unambiguously classified by every L2
column. Classification is correct and unambiguous if the current L2 overlap
with the true object is at least minOverlap and if the size of the L2
representation is no more than maxL2Size
:param minOverlap: min overlap to consider the object as recognized.
Defaults to half of the SDR size
:param maxL2Size: max size for the L2 representation
Defaults to 1.5 * SDR size
:return: True/False
"""
L2Representation = self.getL2Representations()
objectRepresentation = self.objectL2Representations[objectName]
sdrSize = self.config["L2Params"]["sdrSize"]
if minOverlap is None:
minOverlap = sdrSize / 2
if maxL2Size is None:
maxL2Size = 1.5*sdrSize
numCorrectClassifications = 0
for col in xrange(self.numColumns):
overlapWithObject = len(objectRepresentation[col] & L2Representation[col])
if ( overlapWithObject >= minOverlap and
len(L2Representation[col]) <= maxL2Size ):
numCorrectClassifications += 1
return numCorrectClassifications == self.numColumns | [
"def",
"isObjectClassified",
"(",
"self",
",",
"objectName",
",",
"minOverlap",
"=",
"None",
",",
"maxL2Size",
"=",
"None",
")",
":",
"L2Representation",
"=",
"self",
".",
"getL2Representations",
"(",
")",
"objectRepresentation",
"=",
"self",
".",
"objectL2Repre... | Return True if objectName is currently unambiguously classified by every L2
column. Classification is correct and unambiguous if the current L2 overlap
with the true object is greater than minOverlap and if the size of the L2
representation is no more than maxL2Size
:param minOverlap: min overlap to consider the object as recognized.
Defaults to half of the SDR size
:param maxL2Size: max size for the L2 representation
Defaults to 1.5 * SDR size
:return: True/False | [
"Return",
"True",
"if",
"objectName",
"is",
"currently",
"unambiguously",
"classified",
"by",
"every",
"L2",
"column",
".",
"Classification",
"is",
"correct",
"and",
"unambiguous",
"if",
"the",
"current",
"L2",
"overlap",
"with",
"the",
"true",
"object",
"is",
... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L814-L845 | train | 198,712 |
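A set-based sketch of the classification test, with the same defaults (half the SDR size for `minOverlap`, 1.5x for `maxL2Size`); the function and variable names are illustrative:

```python
def is_object_classified(current_reps, object_reps, sdr_size=40,
                         min_overlap=None, max_l2_size=None):
    """True iff, in every column, the current L2 activity overlaps the
    stored object representation by at least min_overlap while containing
    no more than max_l2_size cells."""
    if min_overlap is None:
        min_overlap = sdr_size // 2
    if max_l2_size is None:
        max_l2_size = 1.5 * sdr_size
    return all(len(stored & current) >= min_overlap
               and len(current) <= max_l2_size
               for stored, current in zip(object_reps, current_reps))

stored = [set(range(40)), set(range(40))]       # learned object, per column
unambiguous = [set(range(25)), set(range(30))]  # overlaps 25 and 30
ambiguous = [set(range(10)), set(range(30))]    # column 0 overlap only 10
```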
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.getDefaultL4Params | def getDefaultL4Params(self, inputSize, numInputBits):
"""
Returns a good default set of parameters to use in the L4 region.
"""
sampleSize = int(1.5 * numInputBits)
if numInputBits == 20:
activationThreshold = 13
minThreshold = 13
elif numInputBits == 10:
activationThreshold = 8
minThreshold = 8
else:
activationThreshold = int(numInputBits * .6)
minThreshold = activationThreshold
return {
"columnCount": inputSize,
"cellsPerColumn": 16,
"learn": True,
"initialPermanence": 0.51,
"connectedPermanence": 0.6,
"permanenceIncrement": 0.1,
"permanenceDecrement": 0.02,
"minThreshold": minThreshold,
"basalPredictedSegmentDecrement": 0.0,
"apicalPredictedSegmentDecrement": 0.0,
"activationThreshold": activationThreshold,
"reducedBasalThreshold": int(activationThreshold*0.6),
"sampleSize": sampleSize,
"implementation": "ApicalTiebreak",
"seed": self.seed
} | python | def getDefaultL4Params(self, inputSize, numInputBits):
"""
Returns a good default set of parameters to use in the L4 region.
"""
sampleSize = int(1.5 * numInputBits)
if numInputBits == 20:
activationThreshold = 13
minThreshold = 13
elif numInputBits == 10:
activationThreshold = 8
minThreshold = 8
else:
activationThreshold = int(numInputBits * .6)
minThreshold = activationThreshold
return {
"columnCount": inputSize,
"cellsPerColumn": 16,
"learn": True,
"initialPermanence": 0.51,
"connectedPermanence": 0.6,
"permanenceIncrement": 0.1,
"permanenceDecrement": 0.02,
"minThreshold": minThreshold,
"basalPredictedSegmentDecrement": 0.0,
"apicalPredictedSegmentDecrement": 0.0,
"activationThreshold": activationThreshold,
"reducedBasalThreshold": int(activationThreshold*0.6),
"sampleSize": sampleSize,
"implementation": "ApicalTiebreak",
"seed": self.seed
} | [
"def",
"getDefaultL4Params",
"(",
"self",
",",
"inputSize",
",",
"numInputBits",
")",
":",
"sampleSize",
"=",
"int",
"(",
"1.5",
"*",
"numInputBits",
")",
"if",
"numInputBits",
"==",
"20",
":",
"activationThreshold",
"=",
"13",
"minThreshold",
"=",
"13",
"el... | Returns a good default set of parameters to use in the L4 region. | [
"Returns",
"a",
"good",
"default",
"set",
"of",
"parameters",
"to",
"use",
"in",
"the",
"L4",
"region",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L848-L880 | train | 198,713 |
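The threshold heuristic can be isolated into a small helper for clarity; this mirrors the branches above and is only a sketch, not part of the experiment class:

```python
def l4_thresholds(num_input_bits):
    """Reproduce the heuristic: hand-tuned thresholds for the two common
    SDR sizes, otherwise 60% of the number of active input bits."""
    if num_input_bits == 20:
        activation_threshold = min_threshold = 13
    elif num_input_bits == 10:
        activation_threshold = min_threshold = 8
    else:
        activation_threshold = int(num_input_bits * 0.6)
        min_threshold = activation_threshold
    # sampleSize is always 1.5x the number of active input bits.
    sample_size = int(1.5 * num_input_bits)
    return activation_threshold, min_threshold, sample_size
```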
numenta/htmresearch | htmresearch/frameworks/layers/l2_l4_inference.py | L4L2Experiment.getDefaultL2Params | def getDefaultL2Params(self, inputSize, numInputBits):
"""
Returns a good default set of parameters to use in the L2 region.
"""
if numInputBits == 20:
sampleSizeProximal = 10
minThresholdProximal = 5
elif numInputBits == 10:
sampleSizeProximal = 6
minThresholdProximal = 3
else:
sampleSizeProximal = int(numInputBits * .6)
minThresholdProximal = int(sampleSizeProximal * .6)
return {
"inputWidth": inputSize * 16,
"cellCount": 4096,
"sdrSize": 40,
"synPermProximalInc": 0.1,
"synPermProximalDec": 0.001,
"initialProximalPermanence": 0.6,
"minThresholdProximal": minThresholdProximal,
"sampleSizeProximal": sampleSizeProximal,
"connectedPermanenceProximal": 0.5,
"synPermDistalInc": 0.1,
"synPermDistalDec": 0.001,
"initialDistalPermanence": 0.41,
"activationThresholdDistal": 13,
"sampleSizeDistal": 20,
"connectedPermanenceDistal": 0.5,
"seed": self.seed,
"learningMode": True,
} | python | def getDefaultL2Params(self, inputSize, numInputBits):
"""
Returns a good default set of parameters to use in the L2 region.
"""
if numInputBits == 20:
sampleSizeProximal = 10
minThresholdProximal = 5
elif numInputBits == 10:
sampleSizeProximal = 6
minThresholdProximal = 3
else:
sampleSizeProximal = int(numInputBits * .6)
minThresholdProximal = int(sampleSizeProximal * .6)
return {
"inputWidth": inputSize * 16,
"cellCount": 4096,
"sdrSize": 40,
"synPermProximalInc": 0.1,
"synPermProximalDec": 0.001,
"initialProximalPermanence": 0.6,
"minThresholdProximal": minThresholdProximal,
"sampleSizeProximal": sampleSizeProximal,
"connectedPermanenceProximal": 0.5,
"synPermDistalInc": 0.1,
"synPermDistalDec": 0.001,
"initialDistalPermanence": 0.41,
"activationThresholdDistal": 13,
"sampleSizeDistal": 20,
"connectedPermanenceDistal": 0.5,
"seed": self.seed,
"learningMode": True,
} | [
"def",
"getDefaultL2Params",
"(",
"self",
",",
"inputSize",
",",
"numInputBits",
")",
":",
"if",
"numInputBits",
"==",
"20",
":",
"sampleSizeProximal",
"=",
"10",
"minThresholdProximal",
"=",
"5",
"elif",
"numInputBits",
"==",
"10",
":",
"sampleSizeProximal",
"=... | Returns a good default set of parameters to use in the L2 region. | [
"Returns",
"a",
"good",
"default",
"set",
"of",
"parameters",
"to",
"use",
"in",
"the",
"L2",
"region",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/l2_l4_inference.py#L883-L915 | train | 198,714 |
numenta/htmresearch | projects/union_path_integration/noise_experiments/noise_simulation.py | generateFeatures | def generateFeatures(numFeatures):
"""Return string features.
If <=62 features are requested, output will be single character
alphanumeric strings. Otherwise, output will be ["F1", "F2", ...]
"""
# Capital letters, lowercase letters, numbers
candidates = ([chr(i+65) for i in xrange(26)] +
[chr(i+97) for i in xrange(26)] +
[chr(i+48) for i in xrange(10)])
if numFeatures > len(candidates):
candidates = ["F{}".format(i) for i in xrange(numFeatures)]
return candidates
return candidates[:numFeatures] | python | def generateFeatures(numFeatures):
"""Return string features.
If <=62 features are requested, output will be single character
alphanumeric strings. Otherwise, output will be ["F0", "F1", ...]
"""
# Capital letters, lowercase letters, numbers
candidates = ([chr(i+65) for i in xrange(26)] +
[chr(i+97) for i in xrange(26)] +
[chr(i+48) for i in xrange(10)])
if numFeatures > len(candidates):
candidates = ["F{}".format(i) for i in xrange(numFeatures)]
return candidates
return candidates[:numFeatures] | [
"def",
"generateFeatures",
"(",
"numFeatures",
")",
":",
"# Capital letters, lowercase letters, numbers",
"candidates",
"=",
"(",
"[",
"chr",
"(",
"i",
"+",
"65",
")",
"for",
"i",
"in",
"xrange",
"(",
"26",
")",
"]",
"+",
"[",
"chr",
"(",
"i",
"+",
"97",... | Return string features.
If <=62 features are requested, output will be single character
alphanumeric strings. Otherwise, output will be ["F0", "F1", ...]
"Return",
"string",
"features",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/union_path_integration/noise_experiments/noise_simulation.py#L56-L71 | train | 198,715 |
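A Python 3 port of `generateFeatures` using the `string` module constants, which match the `chr()` arithmetic above; note the fallback names actually start at "F0", as the list comprehension produces:

```python
import string

def generate_features(num_features):
    """Return string features: single alphanumeric characters while
    62 or fewer are requested, otherwise "F0", "F1", ... names."""
    # Capital letters, lowercase letters, numbers -- same order as the
    # chr(i+65) / chr(i+97) / chr(i+48) lists in the original.
    candidates = list(string.ascii_uppercase + string.ascii_lowercase
                      + string.digits)
    if num_features > len(candidates):
        return ["F{}".format(i) for i in range(num_features)]
    return candidates[:num_features]
```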
numenta/htmresearch | projects/location_layer/single_layer_2d_experiment/runner.py | SingleLayerLocation2DExperiment.addMonitor | def addMonitor(self, monitor):
"""
Subscribe to SingleLayer2DExperiment events.
@param monitor (SingleLayer2DExperimentMonitor)
An object that implements a set of monitor methods
@return (object)
An opaque object that can be used to refer to this monitor.
"""
token = self.nextMonitorToken
self.nextMonitorToken += 1
self.monitors[token] = monitor
return token | python | def addMonitor(self, monitor):
"""
Subscribe to SingleLayer2DExperiment events.
@param monitor (SingleLayer2DExperimentMonitor)
An object that implements a set of monitor methods
@return (object)
An opaque object that can be used to refer to this monitor.
"""
token = self.nextMonitorToken
self.nextMonitorToken += 1
self.monitors[token] = monitor
return token | [
"def",
"addMonitor",
"(",
"self",
",",
"monitor",
")",
":",
"token",
"=",
"self",
".",
"nextMonitorToken",
"self",
".",
"nextMonitorToken",
"+=",
"1",
"self",
".",
"monitors",
"[",
"token",
"]",
"=",
"monitor",
"return",
"token"
] | Subscribe to SingleLayer2DExperiment events.
@param monitor (SingleLayer2DExperimentMonitor)
An object that implements a set of monitor methods
@return (object)
An opaque object that can be used to refer to this monitor. | [
"Subscribe",
"to",
"SingleLayer2DExperiment",
"events",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/location_layer/single_layer_2d_experiment/runner.py#L91-L107 | train | 198,716 |
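The token-based subscription pattern used by `addMonitor` in a self-contained form; `MonitorRegistry` is an illustrative name, and the monitors here are plain callables rather than the full monitor interface:

```python
class MonitorRegistry:
    """Each monitor gets an opaque integer token that can later be used
    to unsubscribe, mirroring addMonitor."""

    def __init__(self):
        self.monitors = {}
        self.next_token = 0

    def add(self, monitor):
        token = self.next_token
        self.next_token += 1
        self.monitors[token] = monitor
        return token

    def remove(self, token):
        del self.monitors[token]

events = []
registry = MonitorRegistry()
t0 = registry.add(lambda msg: events.append(("a", msg)))
t1 = registry.add(lambda msg: events.append(("b", msg)))
registry.remove(t0)                       # unsubscribe the first monitor
for monitor in registry.monitors.values():
    monitor("step")
```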
numenta/htmresearch | projects/location_layer/single_layer_2d_experiment/runner.py | SingleLayerLocation2DExperiment.doTimestep | def doTimestep(self, locationSDR, transitionSDR, featureSDR,
egocentricLocation, learn):
"""
Run one timestep.
"""
for monitor in self.monitors.values():
monitor.beforeTimestep(locationSDR, transitionSDR, featureSDR,
egocentricLocation, learn)
params = {
"newLocation": locationSDR,
"deltaLocation": transitionSDR,
"featureLocationInput": self.inputLayer.getActiveCells(),
"featureLocationGrowthCandidates": self.inputLayer.getPredictedActiveCells(),
"learn": learn,
}
self.locationLayer.compute(**params)
for monitor in self.monitors.values():
monitor.afterLocationCompute(**params)
params = {
"activeColumns": featureSDR,
"basalInput": self.locationLayer.getActiveCells(),
"apicalInput": self.objectLayer.getActiveCells(),
}
self.inputLayer.compute(**params)
for monitor in self.monitors.values():
monitor.afterInputCompute(**params)
params = {
"feedforwardInput": self.inputLayer.getActiveCells(),
"feedforwardGrowthCandidates": self.inputLayer.getPredictedActiveCells(),
"learn": learn,
}
self.objectLayer.compute(**params)
for monitor in self.monitors.values():
monitor.afterObjectCompute(**params) | python | def doTimestep(self, locationSDR, transitionSDR, featureSDR,
egocentricLocation, learn):
"""
Run one timestep.
"""
for monitor in self.monitors.values():
monitor.beforeTimestep(locationSDR, transitionSDR, featureSDR,
egocentricLocation, learn)
params = {
"newLocation": locationSDR,
"deltaLocation": transitionSDR,
"featureLocationInput": self.inputLayer.getActiveCells(),
"featureLocationGrowthCandidates": self.inputLayer.getPredictedActiveCells(),
"learn": learn,
}
self.locationLayer.compute(**params)
for monitor in self.monitors.values():
monitor.afterLocationCompute(**params)
params = {
"activeColumns": featureSDR,
"basalInput": self.locationLayer.getActiveCells(),
"apicalInput": self.objectLayer.getActiveCells(),
}
self.inputLayer.compute(**params)
for monitor in self.monitors.values():
monitor.afterInputCompute(**params)
params = {
"feedforwardInput": self.inputLayer.getActiveCells(),
"feedforwardGrowthCandidates": self.inputLayer.getPredictedActiveCells(),
"learn": learn,
}
self.objectLayer.compute(**params)
for monitor in self.monitors.values():
monitor.afterObjectCompute(**params) | [
"def",
"doTimestep",
"(",
"self",
",",
"locationSDR",
",",
"transitionSDR",
",",
"featureSDR",
",",
"egocentricLocation",
",",
"learn",
")",
":",
"for",
"monitor",
"in",
"self",
".",
"monitors",
".",
"values",
"(",
")",
":",
"monitor",
".",
"beforeTimestep",... | Run one timestep. | [
"Run",
"one",
"timestep",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/location_layer/single_layer_2d_experiment/runner.py#L120-L157 | train | 198,717 |
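The three-stage compute chain in `doTimestep` — location, then input, then object, with monitor notifications after each stage — sketched with stub layers; `StubLayer` just unions its input SDRs and stands in for the real temporal-memory and pooler regions:

```python
class StubLayer:
    """Minimal stand-in for a region: compute() unions any SDR inputs."""

    def __init__(self):
        self.active = set()

    def compute(self, **inputs):
        self.active = set().union(*(v for v in inputs.values()
                                    if isinstance(v, set)))

def do_timestep(location_layer, input_layer, object_layer, monitors,
                location_sdr, feature_sdr):
    """Skeleton of the location -> input -> object compute chain,
    notifying every monitor after each stage."""
    location_layer.compute(new_location=location_sdr)
    for notify in monitors:
        notify("location")
    input_layer.compute(active_columns=feature_sdr,
                        basal_input=location_layer.active)
    for notify in monitors:
        notify("input")
    object_layer.compute(feedforward_input=input_layer.active)
    for notify in monitors:
        notify("object")

stages = []
loc, inp, obj = StubLayer(), StubLayer(), StubLayer()
do_timestep(loc, inp, obj, [stages.append], {1, 2}, {7})
```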
numenta/htmresearch | projects/location_layer/single_layer_2d_experiment/runner.py | SingleLayerLocation2DExperiment.learnTransitions | def learnTransitions(self):
"""
Train the location layer to do path integration. For every location, teach
it each previous-location + motor command pair.
"""
print "Learning transitions"
for (i, j), locationSDR in self.locations.iteritems():
print "i, j", (i, j)
for (di, dj), transitionSDR in self.transitions.iteritems():
i2 = i + di
j2 = j + dj
if (0 <= i2 < self.diameter and
0 <= j2 < self.diameter):
for _ in xrange(5):
self.locationLayer.reset()
self.locationLayer.compute(newLocation=self.locations[(i,j)])
self.locationLayer.compute(deltaLocation=transitionSDR,
newLocation=self.locations[(i2, j2)])
        self.locationLayer.reset() | python | Train the location layer to do path integration. For every location, teach it each previous-location + motor command pair. | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/location_layer/single_layer_2d_experiment/runner.py#L160-L180 | train | 198,718
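The bounds check inside `learnTransitions` determines which motor commands are trainable from a given location. A minimal standalone sketch (function and parameter names here are illustrative, not part of the htmresearch API):

```python
def apply_transition(i, j, di, dj, diameter):
    """Return the next grid location under motor command (di, dj), or None
    if it would leave the diameter x diameter environment."""
    i2, j2 = i + di, j + dj
    if 0 <= i2 < diameter and 0 <= j2 < diameter:
        return (i2, j2)
    return None
```

Transitions returning None are simply skipped during training, which is why edge locations learn fewer transition pairs than interior ones.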
numenta/htmresearch | projects/location_layer/single_layer_2d_experiment/runner.py | SingleLayerLocation2DExperiment.learnObjects | def learnObjects(self, objectPlacements):
"""
Learn each provided object in egocentric space. Touch every location on each
object.
  This method doesn't try to move the sensor along a path. Instead it just leaps
the sensor to each object location, resetting the location layer with each
leap.
This method simultaneously learns 4 sets of synapses:
- location -> input
- input -> location
- input -> object
- object -> input
"""
for monitor in self.monitors.values():
monitor.afterPlaceObjects(objectPlacements)
for objectName, objectDict in self.objects.iteritems():
self.reset()
objectPlacement = objectPlacements[objectName]
for locationName, featureName in objectDict.iteritems():
egocentricLocation = (locationName[0] + objectPlacement[0],
locationName[1] + objectPlacement[1])
locationSDR = self.locations[egocentricLocation]
featureSDR = self.features[featureName]
transitionSDR = np.empty(0)
self.locationLayer.reset()
self.inputLayer.reset()
for _ in xrange(10):
self.doTimestep(locationSDR, transitionSDR, featureSDR,
egocentricLocation, learn=True)
self.inputRepresentations[(featureName, egocentricLocation)] = (
self.inputLayer.getActiveCells())
self.objectRepresentations[objectName] = self.objectLayer.getActiveCells()
    self.learnedObjectPlacements[objectName] = objectPlacement | python | Learn each provided object in egocentric space. Touch every location on each object. This method doesn't try to move the sensor along a path. Instead it just leaps the sensor to each object location, resetting the location layer with each leap. This method simultaneously learns 4 sets of synapses: location -> input, input -> location, input -> object, object -> input. | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/location_layer/single_layer_2d_experiment/runner.py#L183-L225 | train | 198,719
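`learnObjects` converts each object-relative location into egocentric space by adding the object's placement offset. As a minimal sketch (the helper name is illustrative; the computation matches the inline expression in the method):

```python
def egocentric_location(object_location, object_placement):
    """Shift an object-relative location into egocentric space by the
    object's placement offset."""
    return (object_location[0] + object_placement[0],
            object_location[1] + object_placement[1])
```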
numenta/htmresearch | projects/location_layer/single_layer_2d_experiment/runner.py | SingleLayerLocation2DExperiment._selectTransition | def _selectTransition(self, allocentricLocation, objectDict, visitCounts):
"""
Choose the transition that lands us in the location we've touched the least
often. Break ties randomly, i.e. choose the first candidate in a shuffled
list.
"""
candidates = list(transition
for transition in self.transitions.keys()
if (allocentricLocation[0] + transition[0],
allocentricLocation[1] + transition[1]) in objectDict)
random.shuffle(candidates)
selectedVisitCount = None
selectedTransition = None
selectedAllocentricLocation = None
for transition in candidates:
candidateLocation = (allocentricLocation[0] + transition[0],
allocentricLocation[1] + transition[1])
if (selectedVisitCount is None or
visitCounts[candidateLocation] < selectedVisitCount):
selectedVisitCount = visitCounts[candidateLocation]
selectedTransition = transition
selectedAllocentricLocation = candidateLocation
    return selectedAllocentricLocation, selectedTransition | python | Choose the transition that lands us in the location we've touched the least often. Break ties randomly, i.e. choose the first candidate in a shuffled list. | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/location_layer/single_layer_2d_experiment/runner.py#L228-L255 | train | 198,720
numenta/htmresearch | htmresearch/algorithms/temporal_pooler.py | TemporalPooler.reset | def reset(self):
"""
Reset the state of the temporal pooler
"""
self._poolingActivation = numpy.zeros((self._numColumns), dtype="int32")
self._poolingColumns = []
self._overlapDutyCycles = numpy.zeros(self._numColumns, dtype=realDType)
self._activeDutyCycles = numpy.zeros(self._numColumns, dtype=realDType)
self._minOverlapDutyCycles = numpy.zeros(self._numColumns,
dtype=realDType)
self._minActiveDutyCycles = numpy.zeros(self._numColumns,
dtype=realDType)
    self._boostFactors = numpy.ones(self._numColumns, dtype=realDType) | python | Reset the state of the temporal pooler | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/temporal_pooler.py#L365-L377 | train | 198,721
numenta/htmresearch | htmresearch/algorithms/temporal_pooler.py | TemporalPooler.compute | def compute(self, inputVector, learn, activeArray, burstingColumns,
predictedCells):
"""
This is the primary public method of the class. This function takes an input
vector and outputs the indices of the active columns.
New parameters defined here:
----------------------------
@param inputVector: The active cells from a Temporal Memory
@param learn: A Boolean specifying whether learning will be
performed
@param activeArray: An array representing the active columns
produced by this method
@param burstingColumns: A numpy array with numColumns elements having
binary values with 1 representing a
currently bursting column in Temporal Memory.
@param predictedCells: A numpy array with numInputs elements. A 1
                             indicates that this cell switched from the
predicted state in the previous time step to
active state in the current timestep
"""
assert (numpy.size(inputVector) == self._numInputs)
assert (numpy.size(predictedCells) == self._numInputs)
self._updateBookeepingVars(learn)
inputVector = numpy.array(inputVector, dtype=realDType)
predictedCells = numpy.array(predictedCells, dtype=realDType)
inputVector.reshape(-1)
if self._spVerbosity > 3:
print " Input bits: ", inputVector.nonzero()[0]
print " predictedCells: ", predictedCells.nonzero()[0]
# Phase 1: Calculate overlap scores
# The overlap score has 4 components:
# (1) Overlap between correctly predicted input cells and pooling TP cells
# (2) Overlap between active input cells and all TP cells
# (like standard SP calculation)
# (3) Overlap between correctly predicted input cells and all TP cells
# (4) Overlap from bursting columns in TM and all TP cells
# 1) Calculate pooling overlap
if self.usePoolingRule:
overlapsPooling = self._calculatePoolingActivity(predictedCells, learn)
if self._spVerbosity > 4:
print "usePoolingRule: Overlaps after step 1:"
print " ", overlapsPooling
else:
overlapsPooling = 0
# 2) Calculate overlap between active input cells and connected synapses
overlapsAllInput = self._calculateOverlap(inputVector)
# 3) overlap with predicted inputs
# NEW: Isn't this redundant with 1 and 2)? This looks at connected synapses
# only.
# If 1) is called with learning=False connected synapses are used and
# it is somewhat redundant although there is a boosting factor in 1) which
# makes 1's effect stronger. If 1) is called with learning=True it's less
# redundant
overlapsPredicted = self._calculateOverlap(predictedCells)
if self._spVerbosity > 4:
print "Overlaps with all inputs:"
print " Number of On Bits: ", inputVector.sum()
print " ", overlapsAllInput
print "Overlaps with predicted inputs:"
print " ", overlapsPredicted
# 4) consider bursting columns
if self.useBurstingRule:
overlapsBursting = self._calculateBurstingColumns(burstingColumns)
if self._spVerbosity > 4:
print "Overlaps with bursting inputs:"
print " ", overlapsBursting
else:
overlapsBursting = 0
overlaps = (overlapsPooling + overlapsPredicted + overlapsAllInput +
overlapsBursting)
# Apply boosting when learning is on
if learn:
boostedOverlaps = self._boostFactors * overlaps
if self._spVerbosity > 4:
print "Overlaps after boosting:"
print " ", boostedOverlaps
else:
boostedOverlaps = overlaps
# Apply inhibition to determine the winning columns
activeColumns = self._inhibitColumns(boostedOverlaps)
if learn:
self._adaptSynapses(inputVector, activeColumns, predictedCells)
self._updateDutyCycles(overlaps, activeColumns)
self._bumpUpWeakColumns()
self._updateBoostFactors()
if self._isUpdateRound():
self._updateInhibitionRadius()
self._updateMinDutyCycles()
activeArray.fill(0)
if activeColumns.size > 0:
activeArray[activeColumns] = 1
# update pooling state of cells
activeColumnIndices = numpy.where(overlapsPredicted[activeColumns] > 0)[0]
activeColWithPredictedInput = activeColumns[activeColumnIndices]
numUnPredictedInput = float(len(burstingColumns.nonzero()[0]))
numPredictedInput = float(len(predictedCells))
fracUnPredicted = numUnPredictedInput / (numUnPredictedInput +
numPredictedInput)
self._updatePoolingState(activeColWithPredictedInput, fracUnPredicted)
if self._spVerbosity > 2:
activeColumns.sort()
print "The following columns are finally active:"
print " ", activeColumns
print "The following columns are in pooling state:"
print " ", self._poolingActivation.nonzero()[0]
# print "Inputs to pooling columns"
# print " ",overlapsPredicted[self._poolingColumns]
    return activeColumns | python | This is the primary public method of the class. This function takes an input
vector and outputs the indices of the active columns.
New parameters defined here:
----------------------------
@param inputVector:     The active cells from a Temporal Memory
@param learn:           A Boolean specifying whether learning will be performed
@param activeArray:     An array representing the active columns produced by this method
@param burstingColumns: A numpy array with numColumns elements having binary values, with 1 representing a currently bursting column in Temporal Memory
@param predictedCells:  A numpy array with numInputs elements; a 1 indicates that this cell switched from the predicted state in the previous time step to the active state in the current timestep | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/temporal_pooler.py#L380-L513 | train | 198,722
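The boost-then-inhibit step at the heart of `compute` can be illustrated with a toy global-inhibition sketch: boost factors are applied only when learning, then the columns with the highest boosted overlap win. This is a simplified illustration with a hypothetical function name, not the full TemporalPooler inhibition logic:

```python
import numpy as np

def inhibit_columns(overlaps, boost_factors, num_active, learn=True):
    """Return the indices of the num_active columns with the highest
    (boosted) overlap, sorted ascending."""
    boosted = boost_factors * overlaps if learn else overlaps
    # argsort is ascending, so the last num_active indices are the winners
    winners = np.argsort(boosted)[-num_active:]
    return np.sort(winners)
```

Note how boosting can flip the outcome: a column with a modest raw overlap but a large boost factor can out-compete a column with a higher raw overlap.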
numenta/htmresearch | htmresearch/algorithms/temporal_pooler.py | TemporalPooler.printParameters | def printParameters(self):
"""
Useful for debugging.
"""
print "------------PY TemporalPooler Parameters ------------------"
print "numInputs = ", self.getNumInputs()
print "numColumns = ", self.getNumColumns()
print "columnDimensions = ", self._columnDimensions
print "numActiveColumnsPerInhArea = ", self.getNumActiveColumnsPerInhArea()
print "potentialPct = ", self.getPotentialPct()
print "globalInhibition = ", self.getGlobalInhibition()
print "localAreaDensity = ", self.getLocalAreaDensity()
print "stimulusThreshold = ", self.getStimulusThreshold()
print "synPermActiveInc = ", self.getSynPermActiveInc()
print "synPermInactiveDec = ", self.getSynPermInactiveDec()
print "synPermConnected = ", self.getSynPermConnected()
print "minPctOverlapDutyCycle = ", self.getMinPctOverlapDutyCycles()
print "dutyCyclePeriod = ", self.getDutyCyclePeriod()
print "boostStrength = ", self.getBoostStrength()
print "spVerbosity = ", self.getSpVerbosity()
print "version = ", self._version | python | def printParameters(self):
"""
Useful for debugging.
"""
print "------------PY TemporalPooler Parameters ------------------"
print "numInputs = ", self.getNumInputs()
print "numColumns = ", self.getNumColumns()
print "columnDimensions = ", self._columnDimensions
print "numActiveColumnsPerInhArea = ", self.getNumActiveColumnsPerInhArea()
print "potentialPct = ", self.getPotentialPct()
print "globalInhibition = ", self.getGlobalInhibition()
print "localAreaDensity = ", self.getLocalAreaDensity()
print "stimulusThreshold = ", self.getStimulusThreshold()
print "synPermActiveInc = ", self.getSynPermActiveInc()
print "synPermInactiveDec = ", self.getSynPermInactiveDec()
print "synPermConnected = ", self.getSynPermConnected()
print "minPctOverlapDutyCycle = ", self.getMinPctOverlapDutyCycles()
print "dutyCyclePeriod = ", self.getDutyCyclePeriod()
print "boostStrength = ", self.getBoostStrength()
print "spVerbosity = ", self.getSpVerbosity()
print "version = ", self._version | [
"def",
"printParameters",
"(",
"self",
")",
":",
"print",
"\"------------PY TemporalPooler Parameters ------------------\"",
"print",
"\"numInputs = \"",
",",
"self",
".",
"getNumInputs",
"(",
")",
"print",
"\"numColumns = \"",
",",
"self",
"... | Useful for debugging. | [
"Useful",
"for",
"debugging",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/temporal_pooler.py#L677-L697 | train | 198,723 |
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet.train | def train(self, inputData, numIterations, reset=False):
"""
Trains the SparseNet, with the provided data.
The reset parameter can be set to False if the network should not be
  reset before training (for example for continuing a previously started
training).
:param inputData: (array) Input data, of dimension (inputDim, numPoints)
:param numIterations: (int) Number of training iterations
:param reset: (bool) If set to True, reset basis and history
"""
if not isinstance(inputData, np.ndarray):
inputData = np.array(inputData)
if reset:
self._reset()
for _ in xrange(numIterations):
self._iteration += 1
batch = self._getDataBatch(inputData)
      # check that the batch dimension matches the filter dimension
      if batch.shape[0] != self.filterDim:
        raise ValueError("Batches and filter dimensions don't match!")
activations = self.encode(batch)
self._learn(batch, activations)
if self._iteration % self.decayCycle == 0:
self.learningRate *= self.learningRateDecay
if self.verbosity >= 1:
self.plotLoss()
      self.plotBasis() | python | Trains the SparseNet, with the provided data.
The reset parameter can be set to False if the network should not be
reset before training (for example for continuing a previously started
training).
:param inputData:     (array) Input data, of dimension (inputDim, numPoints)
:param numIterations: (int)   Number of training iterations
:param reset:         (bool)  If set to True, reset basis and history | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L119-L153 | train | 198,724
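The decay schedule in the `train` loop (the learning rate is multiplied by `learningRateDecay` once every `decayCycle` iterations) admits a closed form. A sketch with illustrative names:

```python
def decayed_learning_rate(initial_rate, decay, decay_cycle, iteration):
    """Learning rate after `iteration` steps, decayed multiplicatively
    every `decay_cycle` iterations."""
    return initial_rate * decay ** (iteration // decay_cycle)
```

This matches the loop above because the counter starts at 1, so exactly `iteration // decay_cycle` decay events have fired after `iteration` steps.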
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet.encode | def encode(self, data, flatten=False):
"""
Encodes the provided input data, returning a sparse vector of activations.
It solves a dynamic system to find optimal activations, as proposed by
Rozell et al. (2008).
:param data: (array) Data to be encoded (single point or multiple)
  :param flatten: (bool) Whether or not the data needs to be flattened,
in the case of images for example. Does not
need to be enabled during training.
:return: (array) Array of sparse activations (dimOutput,
numPoints)
"""
if not isinstance(data, np.ndarray):
data = np.array(data)
# flatten if necessary
if flatten:
try:
data = np.reshape(data, (self.filterDim, data.shape[-1]))
except ValueError:
# only one data point
data = np.reshape(data, (self.filterDim, 1))
if data.shape[0] != self.filterDim:
raise ValueError("Data does not have the correct dimension!")
# if single data point, convert to 2-dimensional array for consistency
if len(data.shape) == 1:
data = data[:, np.newaxis]
projection = self.basis.T.dot(data)
representation = self.basis.T.dot(self.basis) - np.eye(self.outputDim)
states = np.zeros((self.outputDim, data.shape[1]))
threshold = 0.5 * np.max(np.abs(projection), axis=0)
activations = self._thresholdNonLinearity(states, threshold)
for _ in xrange(self.numLcaIterations):
# update dynamic system
states *= (1 - self.lcaLearningRate)
states += self.lcaLearningRate * (projection - representation.dot(activations))
activations = self._thresholdNonLinearity(states, threshold)
# decay threshold
threshold *= self.thresholdDecay
threshold[threshold < self.minThreshold] = self.minThreshold
    return activations | python | Encodes the provided input data, returning a sparse vector of activations.
It solves a dynamic system to find optimal activations, as proposed by
Rozell et al. (2008).
:param data:    (array) Data to be encoded (single point or multiple)
:param flatten: (bool)  Whether or not the data needs to be flattened, in the case of images for example. Does not need to be enabled during training.
:return:        (array) Array of sparse activations (dimOutput, numPoints) | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L156-L205 | train | 198,725
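The dynamic system in `encode` follows the LCA scheme of Rozell et al. (2008): a leaky state update driven by the input projection, lateral inhibition through the basis Gram matrix, and a decaying threshold. A minimal self-contained sketch, assuming a soft-threshold nonlinearity (the actual `_thresholdNonLinearity` may differ) and simplified parameter names:

```python
import numpy as np

def lca_encode(basis, data, n_iter=100, rate=0.1,
               threshold_decay=0.95, min_threshold=0.01):
    """Sparse activations for `data` (filterDim x nPoints) under `basis`
    (filterDim x outputDim), via leaky LCA dynamics."""
    projection = basis.T.dot(data)                            # driving input
    inhibition = basis.T.dot(basis) - np.eye(basis.shape[1])  # lateral term
    states = np.zeros_like(projection)
    threshold = 0.5 * np.max(np.abs(projection), axis=0)

    def soft(u, t):
        # soft-threshold: shrink toward zero by t, clip the rest to zero
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    activations = soft(states, threshold)
    for _ in range(n_iter):
        states = (1 - rate) * states + rate * (projection - inhibition.dot(activations))
        activations = soft(states, threshold)
        threshold = np.maximum(threshold * threshold_decay, min_threshold)
    return activations
```

With an orthonormal basis the inhibition term vanishes and each state simply relaxes toward its projection, so the recovered activation approaches the projection minus the floored threshold.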
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet.plotLoss | def plotLoss(self, filename=None):
"""
Plots the loss history.
  :param filename: (string) Can be provided to save the figure
"""
plt.figure()
plt.plot(self.losses.keys(), self.losses.values())
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.title("Learning curve for {}".format(self))
if filename is not None:
      plt.savefig(filename) | python | Plots the loss history.
:param filename: (string) Can be provided to save the figure | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L208-L221 | train | 198,726
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet.plotBasis | def plotBasis(self, filename=None):
"""
Plots the basis functions, reshaped in 2-dimensional arrays.
This representation makes the most sense for visual input.
  :param filename: (string) Can be provided to save the figure
"""
if np.floor(np.sqrt(self.filterDim)) ** 2 != self.filterDim:
print "Basis visualization is not available if filterDim is not a square."
return
dim = int(np.sqrt(self.filterDim))
if np.floor(np.sqrt(self.outputDim)) ** 2 != self.outputDim:
outDimJ = np.sqrt(np.floor(self.outputDim / 2))
outDimI = np.floor(self.outputDim / outDimJ)
if outDimI > outDimJ:
outDimI, outDimJ = outDimJ, outDimI
else:
outDimI = np.floor(np.sqrt(self.outputDim))
outDimJ = outDimI
outDimI, outDimJ = int(outDimI), int(outDimJ)
basis = - np.ones((1 + outDimI * (dim + 1), 1 + outDimJ * (dim + 1)))
# populate array with basis values
k = 0
for i in xrange(outDimI):
for j in xrange(outDimJ):
colorLimit = np.max(np.abs(self.basis[:, k]))
mat = np.reshape(self.basis[:, k], (dim, dim)) / colorLimit
basis[1 + i * (dim + 1) : 1 + i * (dim + 1) + dim, \
1 + j * (dim + 1) : 1 + j * (dim + 1) + dim] = mat
k += 1
plt.figure()
plt.subplot(aspect="equal")
plt.pcolormesh(basis)
plt.axis([0, 1 + outDimJ * (dim + 1), 0, 1 + outDimI * (dim + 1)])
# remove ticks
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
plt.title("Basis functions for {0}".format(self))
if filename is not None:
plt.savefig(filename) | python | def plotBasis(self, filename=None):
"""
Plots the basis functions, reshaped in 2-dimensional arrays.
This representation makes the most sense for visual input.
:param: filename (string) Can be provided to save the figure
"""
if np.floor(np.sqrt(self.filterDim)) ** 2 != self.filterDim:
print "Basis visualization is not available if filterDim is not a square."
return
dim = int(np.sqrt(self.filterDim))
if np.floor(np.sqrt(self.outputDim)) ** 2 != self.outputDim:
outDimJ = np.sqrt(np.floor(self.outputDim / 2))
outDimI = np.floor(self.outputDim / outDimJ)
if outDimI > outDimJ:
outDimI, outDimJ = outDimJ, outDimI
else:
outDimI = np.floor(np.sqrt(self.outputDim))
outDimJ = outDimI
outDimI, outDimJ = int(outDimI), int(outDimJ)
basis = - np.ones((1 + outDimI * (dim + 1), 1 + outDimJ * (dim + 1)))
# populate array with basis values
k = 0
for i in xrange(outDimI):
for j in xrange(outDimJ):
colorLimit = np.max(np.abs(self.basis[:, k]))
mat = np.reshape(self.basis[:, k], (dim, dim)) / colorLimit
basis[1 + i * (dim + 1) : 1 + i * (dim + 1) + dim, \
1 + j * (dim + 1) : 1 + j * (dim + 1) + dim] = mat
k += 1
plt.figure()
plt.subplot(aspect="equal")
plt.pcolormesh(basis)
plt.axis([0, 1 + outDimJ * (dim + 1), 0, 1 + outDimI * (dim + 1)])
# remove ticks
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
plt.title("Basis functions for {0}".format(self))
if filename is not None:
plt.savefig(filename) | [
"def",
"plotBasis",
"(",
"self",
",",
"filename",
"=",
"None",
")",
":",
"if",
"np",
".",
"floor",
"(",
"np",
".",
"sqrt",
"(",
"self",
".",
"filterDim",
")",
")",
"**",
"2",
"!=",
"self",
".",
"filterDim",
":",
"print",
"\"Basis visualization is not a... | Plots the basis functions, reshaped in 2-dimensional arrays.
This representation makes the most sense for visual input.
:param: filename (string) Can be provided to save the figure | [
"Plots",
"the",
"basis",
"functions",
"reshaped",
"in",
"2",
"-",
"dimensional",
"arrays",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L224-L270 | train | 198,727 |
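plotBasis lays each dim x dim filter into a mosaic with a one-cell separator border on every side, so tile (i, j) starts at offset 1 + i*(dim+1). The index arithmetic is easy to get wrong, so here it is isolated in a helper (the helper name is ours):

```python
def tile_origin(i, j, dim):
    # Top-left corner of tile (i, j): one border cell, then (dim + 1) cells per tile.
    return (1 + i * (dim + 1), 1 + j * (dim + 1))

# An 8x8 filter at grid slot (2, 3) starts at row 19, column 28.
origin = tile_origin(2, 3, 8)
```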
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet._reset

def _reset(self):
  """
  Reinitializes basis functions, iteration number and loss history.
  """
  self.basis = np.random.randn(self.filterDim, self.outputDim)
  self.basis /= np.sqrt(np.sum(self.basis ** 2, axis=0))
  self._iteration = 0
  self.losses = {}

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L273-L280
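_reset draws a Gaussian random basis and rescales every column to unit L2 norm. A dependency-free Python 3 sketch of the same normalization, with columns stored as plain lists for clarity (function name is ours):

```python
import math
import random

def normalized_columns(filter_dim, output_dim, seed=0):
    # Gaussian random basis, one list per column, each rescaled to unit L2 norm.
    rng = random.Random(seed)
    cols = [[rng.gauss(0.0, 1.0) for _ in range(filter_dim)]
            for _ in range(output_dim)]
    return [[v / math.sqrt(sum(x * x for x in col)) for v in col] for col in cols]

basis = normalized_columns(4, 3)
```

After normalization every column has squared norm 1, which keeps the learned dictionary elements comparable in magnitude.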
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet.read

def read(cls, proto):
  """
  Reads deserialized data from proto object
  :param proto: (DynamicStructBuilder) Proto object
  :return (SparseNet) SparseNet instance
  """
  sparsenet = object.__new__(cls)

  sparsenet.filterDim = proto.filterDim
  sparsenet.outputDim = proto.outputDim
  sparsenet.batchSize = proto.batchSize

  lossHistoryProto = proto.losses
  sparsenet.losses = {}
  for i in xrange(len(lossHistoryProto)):
    sparsenet.losses[lossHistoryProto[i].iteration] = lossHistoryProto[i].loss

  sparsenet._iteration = proto.iteration
  sparsenet.basis = np.reshape(proto.basis, newshape=(sparsenet.filterDim,
                                                      sparsenet.outputDim))

  # training parameters
  sparsenet.learningRate = proto.learningRate
  sparsenet.decayCycle = proto.decayCycle
  sparsenet.learningRateDecay = proto.learningRateDecay

  # LCA parameters
  sparsenet.numLcaIterations = proto.numLcaIterations
  sparsenet.lcaLearningRate = proto.lcaLearningRate
  sparsenet.thresholdDecay = proto.thresholdDecay
  sparsenet.minThreshold = proto.minThreshold
  sparsenet.thresholdType = proto.thresholdType

  # debugging
  sparsenet.verbosity = proto.verbosity
  sparsenet.showEvery = proto.showEvery
  sparsenet.seed = int(proto.seed)
  if sparsenet.seed is not None:
    np.random.seed(sparsenet.seed)
    random.seed(sparsenet.seed)

  return sparsenet

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L346-L388
numenta/htmresearch | htmresearch/algorithms/sparse_net.py | SparseNet.write

def write(self, proto):
  """
  Writes serialized data to proto object
  :param proto: (DynamicStructBuilder) Proto object
  """
  proto.filterDim = self.filterDim
  proto.outputDim = self.outputDim
  proto.batchSize = self.batchSize

  lossHistoryProto = proto.init("losses", len(self.losses))
  i = 0
  for iteration, loss in self.losses.iteritems():
    iterationLossHistoryProto = lossHistoryProto[i]
    iterationLossHistoryProto.iteration = iteration
    iterationLossHistoryProto.loss = float(loss)
    i += 1

  proto.iteration = self._iteration
  proto.basis = list(
    self.basis.flatten().astype(type('float', (float,), {}))
  )

  # training parameters
  proto.learningRate = self.learningRate
  proto.decayCycle = self.decayCycle
  proto.learningRateDecay = self.learningRateDecay

  # LCA parameters
  proto.numLcaIterations = self.numLcaIterations
  proto.lcaLearningRate = self.lcaLearningRate
  proto.thresholdDecay = self.thresholdDecay
  proto.minThreshold = self.minThreshold
  proto.thresholdType = self.thresholdType

  # debugging
  proto.verbosity = self.verbosity
  proto.showEvery = self.showEvery
  proto.seed = self.seed

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/sparse_net.py#L391-L430
numenta/htmresearch | htmresearch/algorithms/union_temporal_pooler.py | UnionTemporalPooler.reset

def reset(self):
  """
  Reset the state of the Union Temporal Pooler.
  """
  # Reset Union Temporal Pooler fields
  self._poolingActivation = numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE)
  self._unionSDR = numpy.array([], dtype=UINT_DTYPE)
  self._poolingTimer = numpy.ones(self.getNumColumns(), dtype=REAL_DTYPE) * 1000
  self._poolingActivationInitLevel = numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE)
  self._preActiveInput = numpy.zeros(self.getNumInputs(), dtype=REAL_DTYPE)
  self._prePredictedActiveInput = numpy.zeros((self.getNumInputs(), self._historyLength), dtype=REAL_DTYPE)

  # Reset Spatial Pooler fields
  self.setOverlapDutyCycles(numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE))
  self.setActiveDutyCycles(numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE))
  self.setMinOverlapDutyCycles(numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE))
  self.setBoostFactors(numpy.ones(self.getNumColumns(), dtype=REAL_DTYPE))

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/union_temporal_pooler.py#L160-L177
numenta/htmresearch | htmresearch/algorithms/union_temporal_pooler.py | UnionTemporalPooler.compute

def compute(self, activeInput, predictedActiveInput, learn):
  """
  Computes one cycle of the Union Temporal Pooler algorithm.
  @param activeInput (numpy array) A numpy array of 0's and 1's that comprises the input to the union pooler
  @param predictedActiveInput (numpy array) A numpy array of 0's and 1's that comprises the correctly predicted input to the union pooler
  @param learn (boolean) A boolean value indicating whether learning should be performed
  """
  assert numpy.size(activeInput) == self.getNumInputs()
  assert numpy.size(predictedActiveInput) == self.getNumInputs()

  self._updateBookeepingVars(learn)

  # Compute proximal dendrite overlaps with active and active-predicted inputs
  overlapsActive = self._calculateOverlap(activeInput)
  overlapsPredictedActive = self._calculateOverlap(predictedActiveInput)
  totalOverlap = (overlapsActive * self._activeOverlapWeight +
                  overlapsPredictedActive *
                  self._predictedActiveOverlapWeight).astype(REAL_DTYPE)

  if learn:
    boostFactors = numpy.zeros(self.getNumColumns(), dtype=REAL_DTYPE)
    self.getBoostFactors(boostFactors)
    boostedOverlaps = boostFactors * totalOverlap
  else:
    boostedOverlaps = totalOverlap

  activeCells = self._inhibitColumns(boostedOverlaps)
  self._activeCells = activeCells

  # Decrement pooling activation of all cells
  self._decayPoolingActivation()

  # Update the poolingActivation of current active Union Temporal Pooler cells
  self._addToPoolingActivation(activeCells, overlapsPredictedActive)

  # update union SDR
  self._getMostActiveCells()

  if learn:
    # Adapt permanence of connections from predicted active inputs to newly active cells.
    # This step is the spatial pooler learning rule, applied only to the predictedActiveInput.
    # Todo: should we also include unpredicted active input in this step?
    self._adaptSynapses(predictedActiveInput, activeCells,
                        self.getSynPermActiveInc(), self.getSynPermInactiveDec())

    # Increase permanence of connections from predicted active inputs to cells in the union SDR.
    # This is Hebbian learning applied to the current time step.
    self._adaptSynapses(predictedActiveInput, self._unionSDR,
                        self._synPermPredActiveInc, 0.0)

    # Adapt permanence of connections from previously predicted inputs to newly active cells.
    # This is a reinforcement learning rule that considers previous input to the current cell.
    for i in xrange(self._historyLength):
      self._adaptSynapses(self._prePredictedActiveInput[:, i], activeCells,
                          self._synPermPreviousPredActiveInc, 0.0)

    # Homeostasis learning inherited from the spatial pooler
    self._updateDutyCycles(totalOverlap.astype(UINT_DTYPE), activeCells)
    self._bumpUpWeakColumns()
    self._updateBoostFactors()
    if self._isUpdateRound():
      self._updateInhibitionRadius()
      self._updateMinDutyCycles()

  # save inputs from the previous time step
  self._preActiveInput = copy.copy(activeInput)
  self._prePredictedActiveInput = numpy.roll(self._prePredictedActiveInput, 1, 1)
  if self._historyLength > 0:
    self._prePredictedActiveInput[:, 0] = predictedActiveInput

  return self._unionSDR

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/union_temporal_pooler.py#L180-L246
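The overlap combination at the top of compute is a weighted sum of the two overlap vectors, with boost factors applied only during learning. A minimal list-based Python 3 sketch of just that step (function name is ours):

```python
def boosted_overlaps(active, predicted, w_active, w_pred, boost, learn):
    # totalOverlap = active * w_active + predicted * w_pred; boost applies only when learning.
    total = [a * w_active + p * w_pred for a, p in zip(active, predicted)]
    return [b * t for b, t in zip(boost, total)] if learn else total

# With w_active=1, w_pred=10 and a uniform boost of 2, overlaps [1, 2] / [3, 0] -> [62.0, 4.0].
result = boosted_overlaps([1, 2], [3, 0], 1.0, 10.0, [2.0, 2.0], True)
```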
numenta/htmresearch | htmresearch/algorithms/union_temporal_pooler.py | UnionTemporalPooler._decayPoolingActivation

def _decayPoolingActivation(self):
  """
  Decrements pooling activation of all cells
  """
  if self._decayFunctionType == 'NoDecay':
    self._poolingActivation = self._decayFunction.decay(self._poolingActivation)
  elif self._decayFunctionType == 'Exponential':
    self._poolingActivation = self._decayFunction.decay(
      self._poolingActivationInitLevel, self._poolingTimer)

  return self._poolingActivation

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/union_temporal_pooler.py#L249-L259
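In the 'Exponential' branch, each cell's activation is recomputed from the level it had when last active (_poolingActivationInitLevel) and the elapsed time since then (_poolingTimer). The decay function's internals are not shown in this record, so the sketch below assumes a plain exponential with a hypothetical time constant:

```python
import math

def exponential_decay(init_level, elapsed, time_constant=10.0):
    # Activation `elapsed` steps after the cell was last active.
    # The time_constant value is an assumption, not the repo's.
    return init_level * math.exp(-elapsed / time_constant)
```

At elapsed = 0 the cell keeps its full initial level, and the activation falls monotonically afterward.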
numenta/htmresearch | htmresearch/algorithms/union_temporal_pooler.py | UnionTemporalPooler._addToPoolingActivation

def _addToPoolingActivation(self, activeCells, overlaps):
  """
  Adds overlaps from specified active cells to cells' pooling
  activation.
  @param activeCells: Indices of those cells winning the inhibition step
  @param overlaps: A current set of overlap values for each cell
  @return current pooling activation
  """
  self._poolingActivation[activeCells] = self._exciteFunction.excite(
    self._poolingActivation[activeCells], overlaps[activeCells])

  # increase pooling timers for all cells
  self._poolingTimer[self._poolingTimer >= 0] += 1

  # reset pooling timer for active cells
  self._poolingTimer[activeCells] = 0
  self._poolingActivationInitLevel[activeCells] = self._poolingActivation[activeCells]

  return self._poolingActivation

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/union_temporal_pooler.py#L262-L280
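The bookkeeping in _addToPoolingActivation amounts to: excite the winning cells by their overlap, age every other cell's timer, and zero the winners' timers. A pure-Python sketch with a simple additive excite (the repo's excite function may compute something more elaborate):

```python
def excite_and_reset(activation, timers, active, overlaps):
    # Add each active cell's overlap to its activation; age inactive timers; reset active timers.
    active = set(active)
    activation = [a + (overlaps[i] if i in active else 0.0)
                  for i, a in enumerate(activation)]
    timers = [0 if i in active else t + 1 for i, t in enumerate(timers)]
    return activation, timers

act, tim = excite_and_reset([0.0, 1.0, 2.0], [5, 5, 5], [1], [9.0, 4.0, 9.0])
# act -> [0.0, 5.0, 2.0], tim -> [6, 0, 6]
```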
numenta/htmresearch | htmresearch/algorithms/union_temporal_pooler.py | UnionTemporalPooler._getMostActiveCells

def _getMostActiveCells(self):
  """
  Gets the most active cells in the Union SDR having at least non-zero
  activation in sorted order.
  @return: a list of cell indices
  """
  poolingActivation = self._poolingActivation
  nonZeroCells = numpy.argwhere(poolingActivation > 0)[:, 0]

  # include a tie-breaker before sorting
  poolingActivationSubset = poolingActivation[nonZeroCells] + \
                            self._poolingActivation_tieBreaker[nonZeroCells]
  potentialUnionSDR = nonZeroCells[numpy.argsort(poolingActivationSubset)[::-1]]

  topCells = potentialUnionSDR[0: self._maxUnionCells]

  if max(self._poolingTimer) > self._minHistory:
    self._unionSDR = numpy.sort(topCells).astype(UINT_DTYPE)
  else:
    self._unionSDR = []

  return self._unionSDR

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/union_temporal_pooler.py#L283-L304
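The union SDR is a top-k selection over nonzero activations: a small tie-breaker is added before ranking, the strongest max_cells survive, and the surviving indices are returned in ascending order. A list-based sketch of the same selection (function name is ours):

```python
def most_active(activation, tie_breaker, max_cells):
    # Rank nonzero cells by activation + tie-breaker, keep the top max_cells, sort indices.
    nonzero = [i for i, a in enumerate(activation) if a > 0]
    ranked = sorted(nonzero, key=lambda i: activation[i] + tie_breaker[i], reverse=True)
    return sorted(ranked[:max_cells])

# Cells 1 and 3 tie at 3.0; the tie-breaker ranks cell 3 first, and both make the top 2.
winners = most_active([0.0, 3.0, 1.0, 3.0], [0.0, 0.01, 0.0, 0.02], 2)
```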
numenta/htmresearch | htmresearch/frameworks/layers/laminar_network.py | createNetwork

def createNetwork(networkConfig):
  """
  Create and initialize the specified network instance.
  @param networkConfig: (dict) the configuration of this network.
  @return network: (Network) The actual network
  """
  registerAllResearchRegions()

  network = Network()

  if networkConfig["networkType"] == "L4L2Column":
    return createL4L2Column(network, networkConfig, "_0")
  elif networkConfig["networkType"] == "MultipleL4L2Columns":
    return createMultipleL4L2Columns(network, networkConfig)
  elif networkConfig["networkType"] == "MultipleL4L2ColumnsWithTopology":
    return createMultipleL4L2ColumnsWithTopology(network, networkConfig)
  elif networkConfig["networkType"] == "L2456Columns":
    return createL2456Columns(network, networkConfig)
  elif networkConfig["networkType"] == "L4L2TMColumn":
    return createL4L2TMColumn(network, networkConfig, "_0")
  elif networkConfig["networkType"] == "CombinedSequenceColumn":
    return createCombinedSequenceColumn(network, networkConfig, "_0")

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/laminar_network.py#L50-L73
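The if/elif chain in createNetwork is a dispatch on the networkType string; a table-driven variant makes the unknown-type case explicit (the original silently returns None for a type it does not recognize). A sketch with hypothetical builder callables standing in for the create* functions:

```python
def create_network(config, builders):
    # builders maps a networkType string to a factory callable taking the config.
    network_type = config["networkType"]
    if network_type not in builders:
        raise ValueError("Unknown networkType: %r" % network_type)
    return builders[network_type](config)

made = create_network({"networkType": "L4L2Column"},
                      {"L4L2Column": lambda cfg: "L4L2 network"})
```

Registering a new network type then becomes a one-line dict entry rather than another elif branch.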
numenta/htmresearch | htmresearch/frameworks/layers/laminar_network.py | printNetwork

def printNetwork(network):
  """
  Given a network, print out regions sorted by phase
  """
  print "The network has", len(network.regions.values()), "regions"
  for p in range(network.getMaxPhase()):
    print "=== Phase", p
    for region in network.regions.values():
      if network.getPhases(region.name)[0] == p:
        print "   ", region.name

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/laminar_network.py#L77-L86
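printNetwork rescans all regions once per phase; grouping regions by phase first visits each region exactly once. A dict-based Python 3 sketch over a plain {region_name: phase} mapping, used here as a stand-in for the Network API:

```python
def regions_by_phase(phases):
    # Invert {region_name: phase} into {phase: sorted region names}.
    grouped = {}
    for name, phase in phases.items():
        grouped.setdefault(phase, []).append(name)
    return {p: sorted(names) for p, names in grouped.items()}

grouped = regions_by_phase({"sensor": 0, "L4": 1, "L2": 2, "motor": 0})
```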
numenta/htmresearch | projects/kdimgrid/prediction.py | fit_params_to_1d_data

def fit_params_to_1d_data(logX):
  """
  Fit skewed normal distributions to 1-D capacity data,
  and return the distribution parameters.

  Args
  ----
  logX:
      Logarithm of one-dimensional capacity data,
      indexed by module and phase resolution index
  """
  m_max = logX.shape[0]
  p_max = logX.shape[1]
  params = np.zeros((m_max, p_max, 3))
  for m_ in range(m_max):
    for p_ in range(p_max):
      params[m_, p_] = skewnorm.fit(logX[m_, p_])

  return params

https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/kdimgrid/prediction.py#L4-L23
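The fitting loop fills a (modules, resolutions) grid with per-cell distribution parameters. The same shape of loop is shown below with plain mean and population standard deviation in place of skewnorm.fit, so the sketch needs no SciPy (function name is ours):

```python
import statistics

def fit_grid(log_x):
    # One (mean, population stdev) pair per (module, resolution) cell of samples.
    return [[(statistics.mean(cell), statistics.pstdev(cell)) for cell in row]
            for row in log_x]

# One module, two resolution settings, two samples each.
params = fit_grid([[[1.0, 3.0], [2.0, 2.0]]])
```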
numenta/htmresearch | projects/kdimgrid/prediction.py | get_interpolated_params | def get_interpolated_params(m_frac, ph, params):
"""
Get parameters describing a 1-D capactity distribution
for fractional number of modules.
"""
slope, offset = np.polyfit(np.arange(1,4), params[:3,ph,0], deg=1)
a = slope*m_frac + offset
slope, offset = np.polyfit(np.arange(1,4), params[:3,ph,1], deg=1)
loc = slope*m_frac + offset
slope, offset = np.polyfit(np.arange(1,4), params[:3,ph,2], deg=1)
scale = slope*m_frac + offset
return (a, loc, scale) | python | def get_interpolated_params(m_frac, ph, params):
"""
Get parameters describing a 1-D capacity distribution
for fractional number of modules.
"""
slope, offset = np.polyfit(np.arange(1,4), params[:3,ph,0], deg=1)
a = slope*m_frac + offset
slope, offset = np.polyfit(np.arange(1,4), params[:3,ph,1], deg=1)
loc = slope*m_frac + offset
slope, offset = np.polyfit(np.arange(1,4), params[:3,ph,2], deg=1)
scale = slope*m_frac + offset
return (a, loc, scale) | [
"def",
"get_interpolated_params",
"(",
"m_frac",
",",
"ph",
",",
"params",
")",
":",
"slope",
",",
"offset",
"=",
"np",
".",
"polyfit",
"(",
"np",
".",
"arange",
"(",
"1",
",",
"4",
")",
",",
"params",
"[",
":",
"3",
",",
"ph",
",",
"0",
"]",
"... | Get parameters describing a 1-D capacity distribution
for fractional number of modules. | [
"Get",
"parameters",
"describing",
"a",
"1",
"-",
"D",
"capacity",
"distribution",
"for",
"fractional",
"number",
"of",
"modules",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/kdimgrid/prediction.py#L26-L40 | train | 198,739 |
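`get_interpolated_params` fits a straight line through each parameter's values at integer module counts (via `np.polyfit(..., deg=1)`) and evaluates it at a fractional count. The same idea with a closed-form least-squares line fit instead of NumPy; the function names and toy values here are illustrative assumptions.

```python
def linfit(xs, ys):
    # Closed-form least-squares line fit; equivalent to
    # numpy.polyfit(xs, ys, deg=1), returning (slope, offset).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def interpolate_param(m_frac, known_ms, known_values):
    # Fit the line through the measured (module count, parameter value)
    # pairs, then evaluate it at a fractional module count.
    slope, offset = linfit(known_ms, known_values)
    return slope * m_frac + offset
```

For example, with parameter values 10, 20, 30 measured at module counts 1, 2, 3, a fractional count of 2.5 interpolates to 25.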
numenta/htmresearch | htmresearch/frameworks/location/location_network_creation.py | L246aNetwork.rerunExperimentFromLogfile | def rerunExperimentFromLogfile(logFilename):
"""
Create an experiment class according to the sequence of operations in
logFile and return resulting experiment instance. The log file is created
by setting the 'logCalls' constructor parameter to True
"""
callLog = LoggingDecorator.load(logFilename)
# Assume first one is call to constructor
exp = L246aNetwork(*callLog[0][1]["args"], **callLog[0][1]["kwargs"])
# Call subsequent methods, using stored parameters
for call in callLog[1:]:
method = getattr(exp, call[0])
method(*call[1]["args"], **call[1]["kwargs"])
return exp | python | def rerunExperimentFromLogfile(logFilename):
"""
Create an experiment class according to the sequence of operations in
logFile and return resulting experiment instance. The log file is created
by setting the 'logCalls' constructor parameter to True
"""
callLog = LoggingDecorator.load(logFilename)
# Assume first one is call to constructor
exp = L246aNetwork(*callLog[0][1]["args"], **callLog[0][1]["kwargs"])
# Call subsequent methods, using stored parameters
for call in callLog[1:]:
method = getattr(exp, call[0])
method(*call[1]["args"], **call[1]["kwargs"])
return exp | [
"def",
"rerunExperimentFromLogfile",
"(",
"logFilename",
")",
":",
"callLog",
"=",
"LoggingDecorator",
".",
"load",
"(",
"logFilename",
")",
"# Assume first one is call to constructor",
"exp",
"=",
"L246aNetwork",
"(",
"*",
"callLog",
"[",
"0",
"]",
"[",
"1",
"]",... | Create an experiment class according to the sequence of operations in
logFile and return resulting experiment instance. The log file is created
by setting the 'logCalls' constructor parameter to True | [
"Create",
"an",
"experiment",
"class",
"according",
"to",
"the",
"sequence",
"of",
"operations",
"in",
"logFile",
"and",
"return",
"resulting",
"experiment",
"instance",
".",
"The",
"log",
"file",
"is",
"created",
"by",
"setting",
"the",
"logCalls",
"constructor... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/location/location_network_creation.py#L415-L431 | train | 198,740 |
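`rerunExperimentFromLogfile` treats the first logged call as the constructor and replays the remaining calls in order. Here is that replay logic in isolation, exercised against a hypothetical `Counter` class; the class and the literal call log are made up for illustration.

```python
class Counter(object):
    def __init__(self, start=0):
        self.value = start

    def add(self, amount=1):
        self.value += amount

def replay(cls, call_log):
    # First entry is assumed to be the constructor call; subsequent
    # entries name a method plus its stored args/kwargs, as in the
    # LoggingDecorator call log above.
    obj = cls(*call_log[0][1]["args"], **call_log[0][1]["kwargs"])
    for name, params in call_log[1:]:
        getattr(obj, name)(*params["args"], **params["kwargs"])
    return obj

log = [("__init__", {"args": (5,), "kwargs": {}}),
       ("add", {"args": (3,), "kwargs": {}}),
       ("add", {"args": (), "kwargs": {"amount": 2}})]
counter = replay(Counter, log)
```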
numenta/htmresearch | htmresearch/frameworks/location/location_network_creation.py | L246aNetwork.learn | def learn(self, objects):
"""
Learns all provided objects
:param objects: dict mapping object name to array of sensations, where each
sensation is composed of location and feature SDR for each
column. For example:
{'obj1' : [[[1,1,1],[101,205,523, ..., 1021]],...], ...}
Note: Each column must have the same number of sensations as
the other columns.
:type objects: dict[str, array]
"""
self.setLearning(True)
for objectName, sensationList in objects.iteritems():
self.sendReset()
print "Learning :", objectName
prevLoc = [None] * self.numColumns
numFeatures = len(sensationList[0])
displacement = [0] * self.dimensions
for sensation in xrange(numFeatures):
for col in xrange(self.numColumns):
location = np.array(sensationList[col][sensation][0])
feature = sensationList[col][sensation][1]
# Compute displacement from previous location
if prevLoc[col] is not None:
displacement = location - prevLoc[col]
prevLoc[col] = location
# learn each pattern multiple times
for _ in xrange(self.repeat):
# Sense feature at location
self.motorInput[col].addDataToQueue(displacement)
self.sensorInput[col].addDataToQueue(feature, False, 0)
# Only move to the location on the first sensation.
displacement = [0] * self.dimensions
self.network.run(self.repeat * numFeatures)
# update L2 representations for the object
self.learnedObjects[objectName] = self.getL2Representations() | python | def learn(self, objects):
"""
Learns all provided objects
:param objects: dict mapping object name to array of sensations, where each
sensation is composed of location and feature SDR for each
column. For example:
{'obj1' : [[[1,1,1],[101,205,523, ..., 1021]],...], ...}
Note: Each column must have the same number of sensations as
the other columns.
:type objects: dict[str, array]
"""
self.setLearning(True)
for objectName, sensationList in objects.iteritems():
self.sendReset()
print "Learning :", objectName
prevLoc = [None] * self.numColumns
numFeatures = len(sensationList[0])
displacement = [0] * self.dimensions
for sensation in xrange(numFeatures):
for col in xrange(self.numColumns):
location = np.array(sensationList[col][sensation][0])
feature = sensationList[col][sensation][1]
# Compute displacement from previous location
if prevLoc[col] is not None:
displacement = location - prevLoc[col]
prevLoc[col] = location
# learn each pattern multiple times
for _ in xrange(self.repeat):
# Sense feature at location
self.motorInput[col].addDataToQueue(displacement)
self.sensorInput[col].addDataToQueue(feature, False, 0)
# Only move to the location on the first sensation.
displacement = [0] * self.dimensions
self.network.run(self.repeat * numFeatures)
# update L2 representations for the object
self.learnedObjects[objectName] = self.getL2Representations() | [
"def",
"learn",
"(",
"self",
",",
"objects",
")",
":",
"self",
".",
"setLearning",
"(",
"True",
")",
"for",
"objectName",
",",
"sensationList",
"in",
"objects",
".",
"iteritems",
"(",
")",
":",
"self",
".",
"sendReset",
"(",
")",
"print",
"\"Learning :\"... | Learns all provided objects
:param objects: dict mapping object name to array of sensations, where each
sensation is composed of location and feature SDR for each
column. For example:
{'obj1' : [[[1,1,1],[101,205,523, ..., 1021]],...], ...}
Note: Each column must have the same number of sensations as
the other columns.
:type objects: dict[str, array] | [
"Learns",
"all",
"provided",
"objects"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/location/location_network_creation.py#L512-L555 | train | 198,741 |
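The learning loop above feeds each column the displacement from its previous location, with a zero displacement for the first sensation. That bookkeeping on its own, for a single column; the dimension count and sample locations are assumptions.

```python
def displacements(locations, dims=3):
    # Movement vector between successive sensed locations; the first
    # sensation carries a zero displacement, as in the learn() loop.
    prev = None
    out = []
    for loc in locations:
        if prev is None:
            out.append([0] * dims)
        else:
            out.append([a - b for a, b in zip(loc, prev)])
        prev = loc
    return out

moves = displacements([[1, 1, 1], [4, 1, 0], [4, 2, 0]])
```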
numenta/htmresearch | htmresearch/frameworks/location/location_network_creation.py | L246aNetwork.getL2Representations | def getL2Representations(self):
"""
Returns the active representation in L2.
"""
return [set(L2.getSelf()._pooler.getActiveCells()) for L2 in self.L2Regions] | python | def getL2Representations(self):
"""
Returns the active representation in L2.
"""
return [set(L2.getSelf()._pooler.getActiveCells()) for L2 in self.L2Regions] | [
"def",
"getL2Representations",
"(",
"self",
")",
":",
"return",
"[",
"set",
"(",
"L2",
".",
"getSelf",
"(",
")",
".",
"_pooler",
".",
"getActiveCells",
"(",
")",
")",
"for",
"L2",
"in",
"self",
".",
"L2Regions",
"]"
] | Returns the active representation in L2. | [
"Returns",
"the",
"active",
"representation",
"in",
"L2",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/location/location_network_creation.py#L645-L649 | train | 198,742 |
numenta/htmresearch | htmresearch/frameworks/union_temporal_pooling/activation/excite_functions/excite_functions_all.py | LogisticExciteFunction.excite | def excite(self, currentActivation, inputs):
"""
Increases current activation by amount.
@param currentActivation (numpy array) Current activation levels for each cell
@param inputs (numpy array) inputs for each cell
"""
currentActivation += self._minValue + (self._maxValue - self._minValue) / (
1 + numpy.exp(-self._steepness * (inputs - self._xMidpoint)))
return currentActivation | python | def excite(self, currentActivation, inputs):
"""
Increases current activation by amount.
@param currentActivation (numpy array) Current activation levels for each cell
@param inputs (numpy array) inputs for each cell
"""
currentActivation += self._minValue + (self._maxValue - self._minValue) / (
1 + numpy.exp(-self._steepness * (inputs - self._xMidpoint)))
return currentActivation | [
"def",
"excite",
"(",
"self",
",",
"currentActivation",
",",
"inputs",
")",
":",
"currentActivation",
"+=",
"self",
".",
"_minValue",
"+",
"(",
"self",
".",
"_maxValue",
"-",
"self",
".",
"_minValue",
")",
"/",
"(",
"1",
"+",
"numpy",
".",
"exp",
"(",
... | Increases current activation by amount.
@param currentActivation (numpy array) Current activation levels for each cell
@param inputs (numpy array) inputs for each cell | [
"Increases",
"current",
"activation",
"by",
"amount",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/union_temporal_pooling/activation/excite_functions/excite_functions_all.py#L58-L68 | train | 198,743 |
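The excite formula is a scaled logistic of the input added onto the running activation. A scalar version with `math.exp` in place of `numpy.exp`; the default parameter values are assumptions standing in for the instance attributes `_minValue`, `_maxValue`, `_steepness`, and `_xMidpoint`.

```python
import math

def logistic_excite(activation, inp, min_value=0.0, max_value=1.0,
                    steepness=1.0, x_midpoint=5.0):
    # Same formula as the row above, for one scalar cell: a sigmoid of
    # the input, scaled into [min_value, max_value], added on top of
    # the current activation.
    return activation + min_value + (max_value - min_value) / (
        1.0 + math.exp(-steepness * (inp - x_midpoint)))
```

At the midpoint input, the added excitation is exactly halfway between the minimum and maximum values; far above the midpoint it saturates toward the maximum.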
numenta/htmresearch | htmresearch/frameworks/union_temporal_pooling/activation/excite_functions/excite_functions_all.py | LogisticExciteFunction.plot | def plot(self):
"""
plot the activation function
"""
plt.ion()
plt.show()
x = numpy.linspace(0, 15, 100)
y = numpy.zeros(x.shape)
y = self.excite(y, x)
plt.plot(x, y)
plt.xlabel('Input')
plt.ylabel('Persistence')
plt.title('Sigmoid Activation Function') | python | def plot(self):
"""
plot the activation function
"""
plt.ion()
plt.show()
x = numpy.linspace(0, 15, 100)
y = numpy.zeros(x.shape)
y = self.excite(y, x)
plt.plot(x, y)
plt.xlabel('Input')
plt.ylabel('Persistence')
plt.title('Sigmoid Activation Function') | [
"def",
"plot",
"(",
"self",
")",
":",
"plt",
".",
"ion",
"(",
")",
"plt",
".",
"show",
"(",
")",
"x",
"=",
"numpy",
".",
"linspace",
"(",
"0",
",",
"15",
",",
"100",
")",
"y",
"=",
"numpy",
".",
"zeros",
"(",
"x",
".",
"shape",
")",
"y",
... | plot the activation function | [
"plot",
"the",
"activation",
"function"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/union_temporal_pooling/activation/excite_functions/excite_functions_all.py#L70-L82 | train | 198,744 |
numenta/htmresearch | projects/sequence_prediction/discrete_sequences/plotSequenceLengthExperiment.py | computeAccuracyEnding | def computeAccuracyEnding(predictions, truths, iterations,
resets=None, randoms=None, num=None,
sequenceCounter=None):
"""
Compute accuracy on the sequence ending
"""
accuracy = []
numIteration = []
numSequences = []
for i in xrange(len(predictions) - 1):
if num is not None and i > num:
continue
if truths[i] is None:
continue
# identify the end of sequence
if resets is not None or randoms is not None:
if not (resets[i+1] or randoms[i+1]):
continue
correct = truths[i] is None or truths[i] in predictions[i]
accuracy.append(correct)
numSequences.append(sequenceCounter[i])
numIteration.append(iterations[i])
return (accuracy, numIteration, numSequences) | python | def computeAccuracyEnding(predictions, truths, iterations,
resets=None, randoms=None, num=None,
sequenceCounter=None):
"""
Compute accuracy on the sequence ending
"""
accuracy = []
numIteration = []
numSequences = []
for i in xrange(len(predictions) - 1):
if num is not None and i > num:
continue
if truths[i] is None:
continue
# identify the end of sequence
if resets is not None or randoms is not None:
if not (resets[i+1] or randoms[i+1]):
continue
correct = truths[i] is None or truths[i] in predictions[i]
accuracy.append(correct)
numSequences.append(sequenceCounter[i])
numIteration.append(iterations[i])
return (accuracy, numIteration, numSequences) | [
"def",
"computeAccuracyEnding",
"(",
"predictions",
",",
"truths",
",",
"iterations",
",",
"resets",
"=",
"None",
",",
"randoms",
"=",
"None",
",",
"num",
"=",
"None",
",",
"sequenceCounter",
"=",
"None",
")",
":",
"accuracy",
"=",
"[",
"]",
"numIteration"... | Compute accuracy on the sequence ending | [
"Compute",
"accuracy",
"on",
"the",
"sequence",
"ending"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_prediction/discrete_sequences/plotSequenceLengthExperiment.py#L41-L68 | train | 198,745 |
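`computeAccuracyEnding` only scores the steps that are immediately followed by a sequence boundary (a reset or a random element). The core filter, stripped of the iteration and sequence-counter bookkeeping; the toy predictions and truths below are assumptions.

```python
def ending_accuracy(predictions, truths, resets):
    # Score prediction i only when step i+1 starts a new sequence,
    # i.e. only sequence endings are evaluated.
    hits = []
    for i in range(len(predictions) - 1):
        if truths[i] is None or not resets[i + 1]:
            continue
        hits.append(truths[i] in predictions[i])
    return hits

predictions = [{"a"}, {"b", "c"}, {"d"}, {"e"}]
truths = ["a", "x", "d", "e"]
resets = [True, False, False, True]
hits = ending_accuracy(predictions, truths, resets)
```

Only step 2 precedes a reset here, so a single ending is scored.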
numenta/htmresearch | htmresearch/frameworks/pytorch/dataset_utils.py | createValidationDataSampler | def createValidationDataSampler(dataset, ratio):
"""
Create `torch.utils.data.Sampler`s used to split the dataset into 2 randomly
sampled subsets. The first should be used for training and the second for
validation.
:param dataset: A valid torch.utils.data.Dataset (i.e. torchvision.datasets.MNIST)
:param ratio: The percentage of the dataset to be used for training. The
remaining (1-ratio)% will be used for validation
:return: tuple with 2 torch.utils.data.Sampler. (train, validate)
"""
indices = np.random.permutation(len(dataset))
training_count = int(len(indices) * ratio)
train = torch.utils.data.SubsetRandomSampler(indices=indices[:training_count])
validate = torch.utils.data.SubsetRandomSampler(indices=indices[training_count:])
return (train, validate) | python | def createValidationDataSampler(dataset, ratio):
"""
Create `torch.utils.data.Sampler`s used to split the dataset into 2 randomly
sampled subsets. The first should be used for training and the second for
validation.
:param dataset: A valid torch.utils.data.Dataset (i.e. torchvision.datasets.MNIST)
:param ratio: The percentage of the dataset to be used for training. The
remaining (1-ratio)% will be used for validation
:return: tuple with 2 torch.utils.data.Sampler. (train, validate)
"""
indices = np.random.permutation(len(dataset))
training_count = int(len(indices) * ratio)
train = torch.utils.data.SubsetRandomSampler(indices=indices[:training_count])
validate = torch.utils.data.SubsetRandomSampler(indices=indices[training_count:])
return (train, validate) | [
"def",
"createValidationDataSampler",
"(",
"dataset",
",",
"ratio",
")",
":",
"indices",
"=",
"np",
".",
"random",
".",
"permutation",
"(",
"len",
"(",
"dataset",
")",
")",
"training_count",
"=",
"int",
"(",
"len",
"(",
"indices",
")",
"*",
"ratio",
")",... | Create `torch.utils.data.Sampler`s used to split the dataset into 2 randomly
sampled subsets. The first should be used for training and the second for
validation.
:param dataset: A valid torch.utils.data.Dataset (i.e. torchvision.datasets.MNIST)
:param ratio: The percentage of the dataset to be used for training. The
remaining (1-ratio)% will be used for validation
:return: tuple with 2 torch.utils.data.Sampler. (train, validate) | [
"Create",
"torch",
".",
"utils",
".",
"data",
".",
"Sampler",
"s",
"used",
"to",
"split",
"the",
"dataset",
"into",
"2",
"randomly",
"sampled",
"subsets",
".",
"The",
"first",
"should",
"be",
"used",
"for",
"training",
"and",
"the",
"second",
"for",
"validation",
... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/dataset_utils.py#L30-L45 | train | 198,746 |
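The sampler split above boils down to: permute the indices, cut at `int(len * ratio)`, and hand each half to a `SubsetRandomSampler`. The index arithmetic alone, torch-free; the seed and sizes are arbitrary assumptions.

```python
import random

def split_indices(n, ratio, seed=0):
    # Shuffle 0..n-1 and cut: the first ratio-fraction of indices goes
    # to training, the remainder to validation, matching the split the
    # two SubsetRandomSamplers receive in the row above.
    rng = random.Random(seed)
    indices = list(range(n))
    rng.shuffle(indices)
    cut = int(n * ratio)
    return indices[:cut], indices[cut:]

train, validate = split_indices(10, 0.8)
```

Every index lands in exactly one of the two subsets, so together they cover the whole dataset.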
numenta/htmresearch | htmresearch/frameworks/pytorch/benchmark_utils.py | register_nonzero_counter | def register_nonzero_counter(network, stats):
"""
Register forward hooks to count the number of nonzero floating points
values from all the tensors used by the given network during inference.
:param network: The network to attach the counter
:param stats: Dictionary holding the counter.
"""
if hasattr(network, "__counter_nonzero__"):
raise ValueError("nonzero counter was already registered for this network")
if not isinstance(stats, dict):
raise ValueError("stats must be a dictionary")
network.__counter_nonzero__ = stats
handles = []
for name, module in network.named_modules():
handles.append(module.register_forward_hook(_nonzero_counter_hook))
if network != module:
if hasattr(module, "__counter_nonzero__"):
raise ValueError("nonzero counter was already registered for this module")
child_data = dict()
network.__counter_nonzero__[name] = child_data
module.__counter_nonzero__ = child_data
network.__counter_nonzero_handles__ = handles | python | def register_nonzero_counter(network, stats):
"""
Register forward hooks to count the number of nonzero floating points
values from all the tensors used by the given network during inference.
:param network: The network to attach the counter
:param stats: Dictionary holding the counter.
"""
if hasattr(network, "__counter_nonzero__"):
raise ValueError("nonzero counter was already registered for this network")
if not isinstance(stats, dict):
raise ValueError("stats must be a dictionary")
network.__counter_nonzero__ = stats
handles = []
for name, module in network.named_modules():
handles.append(module.register_forward_hook(_nonzero_counter_hook))
if network != module:
if hasattr(module, "__counter_nonzero__"):
raise ValueError("nonzero counter was already registered for this module")
child_data = dict()
network.__counter_nonzero__[name] = child_data
module.__counter_nonzero__ = child_data
network.__counter_nonzero_handles__ = handles | [
"def",
"register_nonzero_counter",
"(",
"network",
",",
"stats",
")",
":",
"if",
"hasattr",
"(",
"network",
",",
"\"__counter_nonzero__\"",
")",
":",
"raise",
"ValueError",
"(",
"\"nonzero counter was already registered for this network\"",
")",
"if",
"not",
"isinstance... | Register forward hooks to count the number of nonzero floating points
values from all the tensors used by the given network during inference.
:param network: The network to attach the counter
:param stats: Dictionary holding the counter. | [
"Register",
"forward",
"hooks",
"to",
"count",
"the",
"number",
"of",
"nonzero",
"floating",
"points",
"values",
"from",
"all",
"the",
"tensors",
"used",
"by",
"the",
"given",
"network",
"during",
"inference",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/benchmark_utils.py#L68-L94 | train | 198,747 |
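`register_nonzero_counter` works by attaching a forward hook to every module so that each inference call updates a shared stats dictionary. The hook pattern itself, without torch, using a hypothetical `Layer` class as a stand-in for `nn.Module`; all names here are illustrative.

```python
class Layer(object):
    # Minimal stand-in for a module supporting forward hooks: every
    # registered hook sees (module, inputs, output) after each call.
    def __init__(self, fn):
        self.fn = fn
        self.hooks = []

    def register_forward_hook(self, hook):
        self.hooks.append(hook)

    def __call__(self, inputs):
        output = self.fn(inputs)
        for hook in self.hooks:
            hook(self, inputs, output)
        return output

stats = {}

def nonzero_counter_hook(module, inputs, output):
    # Accumulate the number of nonzero output values per forward call.
    stats["output"] = stats.get("output", 0) + sum(1 for v in output if v != 0)

relu = Layer(lambda xs: [max(0, x) for x in xs])
relu.register_forward_hook(nonzero_counter_hook)
relu([-1, 0, 2, 3])
```

After one call through the ReLU stand-in, only the two positive values survive, so the counter reads 2.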
numenta/htmresearch | htmresearch/frameworks/pytorch/tiny_cifar_experiment.py | TinyCIFARExperiment.initialize | def initialize(self, params, repetition):
"""
Initialize experiment parameters and default values from configuration file.
Called at the beginning of each experiment and each repetition.
"""
super(TinyCIFARExperiment, self).initialize(params, repetition)
self.network_type = params.get("network_type", "sparse") | python | def initialize(self, params, repetition):
"""
Initialize experiment parameters and default values from configuration file.
Called at the beginning of each experiment and each repetition.
"""
super(TinyCIFARExperiment, self).initialize(params, repetition)
self.network_type = params.get("network_type", "sparse") | [
"def",
"initialize",
"(",
"self",
",",
"params",
",",
"repetition",
")",
":",
"super",
"(",
"TinyCIFARExperiment",
",",
"self",
")",
".",
"initialize",
"(",
"params",
",",
"repetition",
")",
"self",
".",
"network_type",
"=",
"params",
".",
"get",
"(",
"\... | Initialize experiment parameters and default values from configuration file.
Called at the beginning of each experiment and each repetition. | [
"Initialize",
"experiment",
"parameters",
"and",
"default",
"values",
"from",
"configuration",
"file",
".",
"Called",
"at",
"the",
"beginning",
"of",
"each",
"experiment",
"and",
"each",
"repetition",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/tiny_cifar_experiment.py#L45-L51 | train | 198,748 |
numenta/htmresearch | htmresearch/frameworks/pytorch/tiny_cifar_experiment.py | TinyCIFARExperiment.logger | def logger(self, iteration, ret):
"""Print out relevant information at each epoch"""
print("Learning rate: {:f}".format(self.lr_scheduler.get_lr()[0]))
entropies = getEntropies(self.model)
print("Entropy and max entropy: ", float(entropies[0]), entropies[1])
print("Training time for epoch=", self.epoch_train_time)
for noise in self.noise_values:
print("Noise= {:3.2f}, loss = {:5.4f}, Accuracy = {:5.3f}%".format(
noise, ret[noise]["loss"], 100.0*ret[noise]["accuracy"]))
print("Full epoch time =", self.epoch_time)
if ret[0.0]["accuracy"] > 0.7:
self.best_noise_score = max(ret[0.1]["accuracy"], self.best_noise_score)
self.best_epoch = iteration | python | def logger(self, iteration, ret):
"""Print out relevant information at each epoch"""
print("Learning rate: {:f}".format(self.lr_scheduler.get_lr()[0]))
entropies = getEntropies(self.model)
print("Entropy and max entropy: ", float(entropies[0]), entropies[1])
print("Training time for epoch=", self.epoch_train_time)
for noise in self.noise_values:
print("Noise= {:3.2f}, loss = {:5.4f}, Accuracy = {:5.3f}%".format(
noise, ret[noise]["loss"], 100.0*ret[noise]["accuracy"]))
print("Full epoch time =", self.epoch_time)
if ret[0.0]["accuracy"] > 0.7:
self.best_noise_score = max(ret[0.1]["accuracy"], self.best_noise_score)
self.best_epoch = iteration | [
"def",
"logger",
"(",
"self",
",",
"iteration",
",",
"ret",
")",
":",
"print",
"(",
"\"Learning rate: {:f}\"",
".",
"format",
"(",
"self",
".",
"lr_scheduler",
".",
"get_lr",
"(",
")",
"[",
"0",
"]",
")",
")",
"entropies",
"=",
"getEntropies",
"(",
"se... | Print out relevant information at each epoch | [
"Print",
"out",
"relevant",
"information",
"at",
"each",
"epoch"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/tiny_cifar_experiment.py#L54-L66 | train | 198,749 |
numenta/htmresearch | projects/combined_sequences/generate_plots.py | plotAccuracyAndMCsDuringDecrementChange | def plotAccuracyAndMCsDuringDecrementChange(results, title="", yaxis=""):
"""
Plot accuracy vs decrement value
"""
decrementRange = []
mcRange = []
for r in results:
if r["basalPredictedSegmentDecrement"] not in decrementRange:
decrementRange.append(r["basalPredictedSegmentDecrement"])
if r["inputSize"] not in mcRange:
mcRange.append(r["inputSize"])
decrementRange.sort()
mcRange.sort()
print decrementRange
print mcRange
########################################################################
#
# Accumulate all the results per column in a convergence array.
#
# accuracy[o,f] = accuracy with o objects in training
# and f unique features.
accuracy = numpy.zeros((len(mcRange), len(decrementRange)))
TMAccuracy = numpy.zeros((len(mcRange), len(decrementRange)))
totals = numpy.zeros((len(mcRange), len(decrementRange)))
for r in results:
dec = r["basalPredictedSegmentDecrement"]
nf = r["inputSize"]
accuracy[mcRange.index(nf), decrementRange.index(dec)] += r["objectAccuracyL2"]
TMAccuracy[mcRange.index(nf), decrementRange.index(dec)] += r["sequenceCorrectClassificationsTM"]
totals[mcRange.index(nf), decrementRange.index(dec)] += 1
for i,f in enumerate(mcRange):
print i, f, accuracy[i] / totals[i]
print i, f, TMAccuracy[i] / totals[i]
print i, f, totals[i]
print | python | def plotAccuracyAndMCsDuringDecrementChange(results, title="", yaxis=""):
"""
Plot accuracy vs decrement value
"""
decrementRange = []
mcRange = []
for r in results:
if r["basalPredictedSegmentDecrement"] not in decrementRange:
decrementRange.append(r["basalPredictedSegmentDecrement"])
if r["inputSize"] not in mcRange:
mcRange.append(r["inputSize"])
decrementRange.sort()
mcRange.sort()
print decrementRange
print mcRange
########################################################################
#
# Accumulate all the results per column in a convergence array.
#
# accuracy[o,f] = accuracy with o objects in training
# and f unique features.
accuracy = numpy.zeros((len(mcRange), len(decrementRange)))
TMAccuracy = numpy.zeros((len(mcRange), len(decrementRange)))
totals = numpy.zeros((len(mcRange), len(decrementRange)))
for r in results:
dec = r["basalPredictedSegmentDecrement"]
nf = r["inputSize"]
accuracy[mcRange.index(nf), decrementRange.index(dec)] += r["objectAccuracyL2"]
TMAccuracy[mcRange.index(nf), decrementRange.index(dec)] += r["sequenceCorrectClassificationsTM"]
totals[mcRange.index(nf), decrementRange.index(dec)] += 1
for i,f in enumerate(mcRange):
print i, f, accuracy[i] / totals[i]
print i, f, TMAccuracy[i] / totals[i]
print i, f, totals[i]
print | [
"def",
"plotAccuracyAndMCsDuringDecrementChange",
"(",
"results",
",",
"title",
"=",
"\"\"",
",",
"yaxis",
"=",
"\"\"",
")",
":",
"decrementRange",
"=",
"[",
"]",
"mcRange",
"=",
"[",
"]",
"for",
"r",
"in",
"results",
":",
"if",
"r",
"[",
"\"basalPredicted... | Plot accuracy vs decrement value | [
"Plot",
"accuracy",
"vs",
"decrement",
"value"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/combined_sequences/generate_plots.py#L263-L300 | train | 198,750 |
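The plotting helper above accumulates per-run accuracies into a grid indexed by (minicolumn count, decrement value) and divides by per-cell totals. The same averaging with plain dicts instead of index lookups into numpy arrays; the key names and toy results are assumptions.

```python
def mean_by_cell(results, row_key, col_key, value_key):
    # Average value_key over all results that share the same
    # (row_key, col_key) parameter pair.
    sums, counts = {}, {}
    for r in results:
        cell = (r[row_key], r[col_key])
        sums[cell] = sums.get(cell, 0.0) + r[value_key]
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

results = [{"mc": 150, "dec": 0.01, "acc": 0.8},
           {"mc": 150, "dec": 0.01, "acc": 0.6},
           {"mc": 300, "dec": 0.02, "acc": 1.0}]
means = mean_by_cell(results, "mc", "dec", "acc")
```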
numenta/htmresearch | projects/combined_sequences/generate_plots.py | gen4 | def gen4(dirName):
"""Plots 4A, 4B, and 4C"""
# Generate images similar to those used in the first plot for the section
# "Simulations with Pure Temporal Sequences"
try:
resultsFig4A = os.path.join(dirName, "pure_sequences_example.pkl")
with open(resultsFig4A, "rb") as f:
results = cPickle.load(f)
for trialNum, stat in enumerate(results["statistics"]):
plotOneInferenceRun(
stat,
itemType="a single sequence",
fields=[
("L4 PredictedActive", "Predicted active cells in sensorimotor layer"),
("TM NextPredicted", "Predicted cells in temporal sequence layer"),
("TM PredictedActive",
"Predicted active cells in temporal sequence layer"),
],
basename="pure_sequences",
trialNumber=trialNum,
plotDir=os.path.join(os.path.dirname(os.path.realpath(__file__)),
"detailed_plots")
)
print "Plots for Fig 4A generated in 'detailed_plots'"
except Exception, e:
print "\nCould not generate plots for Fig 4A: "
traceback.print_exc()
print
# Generate the second plot for the section "Simulations with Pure
# Temporal Sequences"
try:
plotAccuracyDuringSequenceInference(
dirName,
title="Relative performance of layers while inferring temporal sequences",
yaxis="Accuracy (%)")
print "Plots for Fig 4B generated in 'plots'"
except Exception, e:
print "\nCould not generate plots for Fig 4B: "
traceback.print_exc()
print
# Generate the accuracy vs number of sequences
try:
plotAccuracyVsSequencesDuringSequenceInference(
dirName,
title="Relative performance of layers while inferring temporal sequences",
yaxis="Accuracy (%)")
print "Plots for Fig 4C generated in 'plots'"
except Exception, e:
print "\nCould not generate plots for Fig 4C: "
traceback.print_exc()
print | python | def gen4(dirName):
"""Plots 4A, 4B, and 4C"""
# Generate images similar to those used in the first plot for the section
# "Simulations with Pure Temporal Sequences"
try:
resultsFig4A = os.path.join(dirName, "pure_sequences_example.pkl")
with open(resultsFig4A, "rb") as f:
results = cPickle.load(f)
for trialNum, stat in enumerate(results["statistics"]):
plotOneInferenceRun(
stat,
itemType="a single sequence",
fields=[
("L4 PredictedActive", "Predicted active cells in sensorimotor layer"),
("TM NextPredicted", "Predicted cells in temporal sequence layer"),
("TM PredictedActive",
"Predicted active cells in temporal sequence layer"),
],
basename="pure_sequences",
trialNumber=trialNum,
plotDir=os.path.join(os.path.dirname(os.path.realpath(__file__)),
"detailed_plots")
)
print "Plots for Fig 4A generated in 'detailed_plots'"
except Exception, e:
print "\nCould not generate plots for Fig 4A: "
traceback.print_exc()
print
# Generate the second plot for the section "Simulations with Pure
# Temporal Sequences"
try:
plotAccuracyDuringSequenceInference(
dirName,
title="Relative performance of layers while inferring temporal sequences",
yaxis="Accuracy (%)")
print "Plots for Fig 4B generated in 'plots'"
except Exception, e:
print "\nCould not generate plots for Fig 4B: "
traceback.print_exc()
print
# Generate the accuracy vs number of sequences
try:
plotAccuracyVsSequencesDuringSequenceInference(
dirName,
title="Relative performance of layers while inferring temporal sequences",
yaxis="Accuracy (%)")
print "Plots for Fig 4C generated in 'plots'"
except Exception, e:
print "\nCould not generate plots for Fig 4C: "
traceback.print_exc()
print | [
"def",
"gen4",
"(",
"dirName",
")",
":",
"# Generate images similar to those used in the first plot for the section",
"# \"Simulations with Pure Temporal Sequences\"",
"try",
":",
"resultsFig4A",
"=",
"os",
".",
"path",
".",
"join",
"(",
"dirName",
",",
"\"pure_sequences_exam... | Plots 4A, 4B, and 4C | [
"Plots",
"4A",
"4B",
"and",
"4C"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/combined_sequences/generate_plots.py#L502-L557 | train | 198,751 |
numenta/htmresearch | htmresearch/regions/RawSensor.py | RawSensor.compute | def compute(self, inputs, outputs):
"""
Get the next record from the queue and encode it. The fields for inputs and
outputs are as defined in the spec above.
"""
if len(self.queue) > 0:
# Take the top element of the data queue
data = self.queue.pop()
else:
raise Exception("RawSensor: No data to encode: queue is empty ")
# Copy data into output vectors
outputs["resetOut"][0] = data["reset"]
outputs["sequenceIdOut"][0] = data["sequenceId"]
outputs["dataOut"][:] = 0
outputs["dataOut"][data["nonZeros"]] = 1
if self.verbosity > 1:
print "RawSensor outputs:"
print "sequenceIdOut: ", outputs["sequenceIdOut"]
print "resetOut: ", outputs["resetOut"]
print "dataOut: ", outputs["dataOut"].nonzero()[0] | python | def compute(self, inputs, outputs):
"""
Get the next record from the queue and encode it. The fields for inputs and
outputs are as defined in the spec above.
"""
if len(self.queue) > 0:
# Take the top element of the data queue
data = self.queue.pop()
else:
raise Exception("RawSensor: No data to encode: queue is empty ")
# Copy data into output vectors
outputs["resetOut"][0] = data["reset"]
outputs["sequenceIdOut"][0] = data["sequenceId"]
outputs["dataOut"][:] = 0
outputs["dataOut"][data["nonZeros"]] = 1
if self.verbosity > 1:
print "RawSensor outputs:"
print "sequenceIdOut: ", outputs["sequenceIdOut"]
print "resetOut: ", outputs["resetOut"]
print "dataOut: ", outputs["dataOut"].nonzero()[0] | [
"def",
"compute",
"(",
"self",
",",
"inputs",
",",
"outputs",
")",
":",
"if",
"len",
"(",
"self",
".",
"queue",
")",
">",
"0",
":",
"# Take the top element of the data queue",
"data",
"=",
"self",
".",
"queue",
".",
"pop",
"(",
")",
"else",
":",
"raise... | Get the next record from the queue and encode it. The fields for inputs and
outputs are as defined in the spec above. | [
"Get",
"the",
"next",
"record",
"from",
"the",
"queue",
"and",
"encode",
"it",
".",
"The",
"fields",
"for",
"inputs",
"and",
"outputs",
"are",
"as",
"defined",
"in",
"the",
"spec",
"above",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/regions/RawSensor.py#L104-L126 | train | 198,752 |
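`RawSensor.compute` pops the top queue record and expands its sparse `nonZeros` indices into a dense binary output vector. That encode step in isolation; the function name and the sample record are illustrative.

```python
def encode_top_of_queue(queue, width):
    # Pop the most recently queued record and expand its sparse indices
    # into a dense binary vector of the given width, as compute() does.
    if not queue:
        raise RuntimeError("No data to encode: queue is empty")
    data = queue.pop()
    dense = [0] * width
    for i in data["nonZeros"]:
        dense[i] = 1
    return data["reset"], data["sequenceId"], dense

queue = [{"reset": 0, "sequenceId": 7, "nonZeros": [1, 3]}]
reset, seq_id, dense = encode_top_of_queue(queue, 5)
```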
numenta/htmresearch | projects/feedback/feedback_sequences.py | convertSequenceMachineSequence | def convertSequenceMachineSequence(generatedSequences):
"""
Convert a sequence from the SequenceMachine into a list of sequences, such
that each sequence is a list of set of SDRs.
"""
sequenceList = []
currentSequence = []
for s in generatedSequences:
if s is None:
sequenceList.append(currentSequence)
currentSequence = []
else:
currentSequence.append(s)
  return sequenceList
"""
Convert a sequence from the SequenceMachine into a list of sequences, such
that each sequence is a list of set of SDRs.
"""
sequenceList = []
currentSequence = []
for s in generatedSequences:
if s is None:
sequenceList.append(currentSequence)
currentSequence = []
else:
currentSequence.append(s)
return sequenceList | [
"def",
"convertSequenceMachineSequence",
"(",
"generatedSequences",
")",
":",
"sequenceList",
"=",
"[",
"]",
"currentSequence",
"=",
"[",
"]",
"for",
"s",
"in",
"generatedSequences",
":",
"if",
"s",
"is",
"None",
":",
"sequenceList",
".",
"append",
"(",
"curre... | Convert a sequence from the SequenceMachine into a list of sequences, such
that each sequence is a list of set of SDRs. | [
"Convert",
"a",
"sequence",
"from",
"the",
"SequenceMachine",
"into",
"a",
"list",
"of",
"sequences",
"such",
"that",
"each",
"sequence",
"is",
"a",
"list",
"of",
"set",
"of",
"SDRs",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/feedback/feedback_sequences.py#L41-L55 | train | 198,753 |
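The splitting loop above treats `None` as a sequence terminator. A standalone sketch of the same logic (with toy SDRs) also shows one consequence of the design: a trailing sequence that is not followed by `None` is silently dropped.

```python
def convert(generatedSequences):
    # Same logic as convertSequenceMachineSequence, outside the module.
    sequenceList = []
    currentSequence = []
    for s in generatedSequences:
        if s is None:
            sequenceList.append(currentSequence)
            currentSequence = []
        else:
            currentSequence.append(s)
    return sequenceList

split = convert([{1, 2}, {3}, None, {4}, None])
dropped = convert([{5}])  # no terminating None, so the sequence is lost
```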
numenta/htmresearch | projects/feedback/feedback_sequences.py | generateSequences

def generateSequences(n=2048, w=40, sequenceLength=5, sequenceCount=2,
sharedRange=None, seed=42):
"""
Generate high order sequences using SequenceMachine
"""
# Lots of room for noise sdrs
patternAlphabetSize = 10*(sequenceLength * sequenceCount)
patternMachine = PatternMachine(n, w, patternAlphabetSize, seed)
sequenceMachine = SequenceMachine(patternMachine, seed)
numbers = sequenceMachine.generateNumbers(sequenceCount, sequenceLength,
sharedRange=sharedRange )
generatedSequences = sequenceMachine.generateFromNumbers(numbers)
  return sequenceMachine, generatedSequences, numbers
sharedRange=None, seed=42):
"""
Generate high order sequences using SequenceMachine
"""
# Lots of room for noise sdrs
patternAlphabetSize = 10*(sequenceLength * sequenceCount)
patternMachine = PatternMachine(n, w, patternAlphabetSize, seed)
sequenceMachine = SequenceMachine(patternMachine, seed)
numbers = sequenceMachine.generateNumbers(sequenceCount, sequenceLength,
sharedRange=sharedRange )
generatedSequences = sequenceMachine.generateFromNumbers(numbers)
return sequenceMachine, generatedSequences, numbers | [
"def",
"generateSequences",
"(",
"n",
"=",
"2048",
",",
"w",
"=",
"40",
",",
"sequenceLength",
"=",
"5",
",",
"sequenceCount",
"=",
"2",
",",
"sharedRange",
"=",
"None",
",",
"seed",
"=",
"42",
")",
":",
"# Lots of room for noise sdrs",
"patternAlphabetSize"... | Generate high order sequences using SequenceMachine | [
"Generate",
"high",
"order",
"sequences",
"using",
"SequenceMachine"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/feedback/feedback_sequences.py#L58-L71 | train | 198,754 |
numenta/htmresearch | projects/feedback/feedback_sequences.py | runInference

def runInference(exp, sequences, enableFeedback=True):
"""
Run inference on this set of sequences and compute error
"""
if enableFeedback:
print "Feedback enabled: "
else:
print "Feedback disabled: "
error = 0
activityTraces = []
responses = []
for i,sequence in enumerate(sequences):
(avgActiveCells, avgPredictedActiveCells, activityTrace, responsesThisSeq) = exp.infer(
sequence, sequenceNumber=i, enableFeedback=enableFeedback)
error += avgActiveCells
activityTraces.append(activityTrace)
responses.append(responsesThisSeq)
print " "
error /= len(sequences)
print "Average error = ",error
  return error, activityTraces, responses
"""
Run inference on this set of sequences and compute error
"""
if enableFeedback:
print "Feedback enabled: "
else:
print "Feedback disabled: "
error = 0
activityTraces = []
responses = []
for i,sequence in enumerate(sequences):
(avgActiveCells, avgPredictedActiveCells, activityTrace, responsesThisSeq) = exp.infer(
sequence, sequenceNumber=i, enableFeedback=enableFeedback)
error += avgActiveCells
activityTraces.append(activityTrace)
responses.append(responsesThisSeq)
print " "
error /= len(sequences)
print "Average error = ",error
return error, activityTraces, responses | [
"def",
"runInference",
"(",
"exp",
",",
"sequences",
",",
"enableFeedback",
"=",
"True",
")",
":",
"if",
"enableFeedback",
":",
"print",
"\"Feedback enabled: \"",
"else",
":",
"print",
"\"Feedback disabled: \"",
"error",
"=",
"0",
"activityTraces",
"=",
"[",
"]"... | Run inference on this set of sequences and compute error | [
"Run",
"inference",
"on",
"this",
"set",
"of",
"sequences",
"and",
"compute",
"error"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/feedback/feedback_sequences.py#L161-L182 | train | 198,755 |
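`runInference` accumulates the first value returned by `exp.infer` and averages it over all sequences. A sketch of that accumulation with a stub experiment (the stub and its return values are hypothetical; only the 4-tuple shape matches the code above):

```python
class StubExperiment(object):
    def infer(self, sequence, sequenceNumber, enableFeedback):
        # (avgActiveCells, avgPredictedActiveCells, activityTrace, responses)
        return (len(sequence), 0, [], {})

exp = StubExperiment()
sequences = [[1, 2, 3], [1, 2, 3, 4, 5]]

error = 0.0
activityTraces = []
for i, sequence in enumerate(sequences):
    avgActiveCells, _, activityTrace, _ = exp.infer(
        sequence, sequenceNumber=i, enableFeedback=True)
    error += avgActiveCells
    activityTraces.append(activityTrace)
error /= len(sequences)
```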
numenta/htmresearch | projects/l2_pooling/multi_column.py | runStretch

def runStretch(noiseLevel=None, profile=False):
"""
Stretch test that learns a lot of objects.
Parameters:
----------------------------
@param noiseLevel (float)
Noise level to add to the locations and features during inference
@param profile (bool)
If True, the network will be profiled after learning and inference
"""
exp = L4L2Experiment(
"stretch_L10_F10_C2",
numCorticalColumns=2,
)
objects = createObjectMachine(
machineType="simple",
numInputBits=20,
sensorInputSize=1024,
externalInputSize=1024,
numCorticalColumns=2,
)
objects.createRandomObjects(10, 10, numLocations=10, numFeatures=10)
print "Objects are:"
for object, pairs in objects.objects.iteritems():
print str(object) + ": " + str(pairs)
exp.learnObjects(objects.provideObjectsToLearn())
if profile:
exp.printProfile(reset=True)
# For inference, we will check and plot convergence for object 0. We create a
# sequence of random sensations for each column. We will present each
# sensation for 4 time steps to let it settle and ensure it converges.
objectCopy1 = [pair for pair in objects[0]]
objectCopy2 = [pair for pair in objects[0]]
objectCopy3 = [pair for pair in objects[0]]
random.shuffle(objectCopy1)
random.shuffle(objectCopy2)
random.shuffle(objectCopy3)
# stay multiple steps on each sensation
objectSensations1 = []
for pair in objectCopy1:
for _ in xrange(4):
objectSensations1.append(pair)
# stay multiple steps on each sensation
objectSensations2 = []
for pair in objectCopy2:
for _ in xrange(4):
objectSensations2.append(pair)
# stay multiple steps on each sensation
objectSensations3 = []
for pair in objectCopy3:
for _ in xrange(4):
objectSensations3.append(pair)
inferConfig = {
"numSteps": len(objectSensations1),
"noiseLevel": noiseLevel,
"pairs": {
0: objectSensations1,
1: objectSensations2,
# 2: objectSensations3, # Uncomment for 3 columns
}
}
exp.infer(objects.provideObjectToInfer(inferConfig), objectName=0)
if profile:
exp.printProfile()
exp.plotInferenceStats(
fields=["L2 Representation",
"Overlap L2 with object",
"L4 Representation"],
onePlot=False,
  )
"""
Stretch test that learns a lot of objects.
Parameters:
----------------------------
@param noiseLevel (float)
Noise level to add to the locations and features during inference
@param profile (bool)
If True, the network will be profiled after learning and inference
"""
exp = L4L2Experiment(
"stretch_L10_F10_C2",
numCorticalColumns=2,
)
objects = createObjectMachine(
machineType="simple",
numInputBits=20,
sensorInputSize=1024,
externalInputSize=1024,
numCorticalColumns=2,
)
objects.createRandomObjects(10, 10, numLocations=10, numFeatures=10)
print "Objects are:"
for object, pairs in objects.objects.iteritems():
print str(object) + ": " + str(pairs)
exp.learnObjects(objects.provideObjectsToLearn())
if profile:
exp.printProfile(reset=True)
# For inference, we will check and plot convergence for object 0. We create a
# sequence of random sensations for each column. We will present each
# sensation for 4 time steps to let it settle and ensure it converges.
objectCopy1 = [pair for pair in objects[0]]
objectCopy2 = [pair for pair in objects[0]]
objectCopy3 = [pair for pair in objects[0]]
random.shuffle(objectCopy1)
random.shuffle(objectCopy2)
random.shuffle(objectCopy3)
# stay multiple steps on each sensation
objectSensations1 = []
for pair in objectCopy1:
for _ in xrange(4):
objectSensations1.append(pair)
# stay multiple steps on each sensation
objectSensations2 = []
for pair in objectCopy2:
for _ in xrange(4):
objectSensations2.append(pair)
# stay multiple steps on each sensation
objectSensations3 = []
for pair in objectCopy3:
for _ in xrange(4):
objectSensations3.append(pair)
inferConfig = {
"numSteps": len(objectSensations1),
"noiseLevel": noiseLevel,
"pairs": {
0: objectSensations1,
1: objectSensations2,
# 2: objectSensations3, # Uncomment for 3 columns
}
}
exp.infer(objects.provideObjectToInfer(inferConfig), objectName=0)
if profile:
exp.printProfile()
exp.plotInferenceStats(
fields=["L2 Representation",
"Overlap L2 with object",
"L4 Representation"],
onePlot=False,
) | [
"def",
"runStretch",
"(",
"noiseLevel",
"=",
"None",
",",
"profile",
"=",
"False",
")",
":",
"exp",
"=",
"L4L2Experiment",
"(",
"\"stretch_L10_F10_C2\"",
",",
"numCorticalColumns",
"=",
"2",
",",
")",
"objects",
"=",
"createObjectMachine",
"(",
"machineType",
... | Stretch test that learns a lot of objects.
Parameters:
----------------------------
@param noiseLevel (float)
Noise level to add to the locations and features during inference
@param profile (bool)
If True, the network will be profiled after learning and inference | [
"Stretch",
"test",
"that",
"learns",
"a",
"lot",
"of",
"objects",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/l2_pooling/multi_column.py#L152-L233 | train | 198,756 |
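Each sensation in `runStretch` is repeated four times so the network can settle on it. The repetition loop is equivalent to a flat comprehension; a small sketch with placeholder (location, feature) pairs:

```python
objectCopy = [("loc0", "feat0"), ("loc1", "feat1")]  # placeholder pairs

# Loop form used in runStretch: stay multiple steps on each sensation.
objectSensations = []
for pair in objectCopy:
    for _ in range(4):
        objectSensations.append(pair)

# Equivalent one-liner.
flat = [pair for pair in objectCopy for _ in range(4)]
```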
numenta/htmresearch | htmresearch/frameworks/pytorch/model_utils.py | trainModel

def trainModel(model, loader, optimizer, device, criterion=F.nll_loss,
batches_in_epoch=sys.maxsize, batch_callback=None,
progress_bar=None):
"""
Train the given model by iterating through mini batches. An epoch
ends after one pass through the training set, or if the number of mini
batches exceeds the parameter "batches_in_epoch".
:param model: pytorch model to be trained
:type model: torch.nn.Module
:param loader: train dataset loader
:type loader: :class:`torch.utils.data.DataLoader`
:param optimizer: Optimizer object used to train the model.
This function will train the model on every batch using this optimizer
and the :func:`torch.nn.functional.nll_loss` function
:param batches_in_epoch: Max number of mini batches to train.
:param device: device to use ('cpu' or 'cuda')
  :type device: :class:`torch.device`
:param criterion: loss function to use
:type criterion: function
:param batch_callback: Callback function to be called on every batch with the
following parameters: model, batch_idx
:type batch_callback: function
:param progress_bar: Optional :class:`tqdm` progress bar args.
None for no progress bar
:type progress_bar: dict or None
"""
model.train()
if progress_bar is not None:
loader = tqdm(loader, **progress_bar)
# update progress bar total based on batches_in_epoch
if batches_in_epoch < len(loader):
loader.total = batches_in_epoch
for batch_idx, (data, target) in enumerate(loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
if batch_callback is not None:
batch_callback(model=model, batch_idx=batch_idx)
if batch_idx >= batches_in_epoch:
break
if progress_bar is not None:
loader.n = loader.total
    loader.close()
https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/model_utils.py#L35-L84
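Because `trainModel` checks `batch_idx >= batches_in_epoch` only after training on the batch, it actually processes up to `batches_in_epoch + 1` mini batches. A torch-free sketch of just that loop shape:

```python
def count_processed(loader, batches_in_epoch):
    # Mirrors the control flow of trainModel's batch loop.
    processed = 0
    for batch_idx, _ in enumerate(loader):
        processed += 1  # stands in for the forward/backward/step work
        if batch_idx >= batches_in_epoch:
            break
    return processed

full = count_processed(range(2), batches_in_epoch=100)   # loader exhausted first
capped = count_processed(range(10), batches_in_epoch=3)  # early stop after index 3
```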
numenta/htmresearch | htmresearch/frameworks/pytorch/model_utils.py | evaluateModel

def evaluateModel(model, loader, device,
batches_in_epoch=sys.maxsize,
criterion=F.nll_loss, progress=None):
"""
Evaluate pre-trained model using given test dataset loader.
:param model: Pretrained pytorch model
:type model: torch.nn.Module
:param loader: test dataset loader
:type loader: :class:`torch.utils.data.DataLoader`
:param device: device to use ('cpu' or 'cuda')
  :type device: :class:`torch.device`
:param batches_in_epoch: Max number of mini batches to test on.
:param criterion: loss function to use
:type criterion: function
:param progress: Optional :class:`tqdm` progress bar args. None for no progress bar
:type progress: dict or None
:return: dictionary with computed "accuracy", "loss", "total_correct". The
loss value is computed using :func:`torch.nn.functional.nll_loss`
:rtype: dict
"""
model.eval()
loss = 0
correct = 0
dataset_len = len(loader.sampler)
if progress is not None:
loader = tqdm(loader, **progress)
with torch.no_grad():
for batch_idx, (data, target) in enumerate(loader):
data, target = data.to(device), target.to(device)
output = model(data)
loss += criterion(output, target, reduction='sum').item()
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
if batch_idx >= batches_in_epoch:
break
if progress is not None:
loader.close()
loss /= dataset_len
accuracy = correct / dataset_len
return {"total_correct": correct,
"loss": loss,
"accuracy": accuracy} | python | def evaluateModel(model, loader, device,
https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/model_utils.py#L88-L137
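`evaluateModel` accumulates losses with `reduction='sum'` and divides by the dataset length at the end. With uneven batch sizes this gives the true per-sample mean, which a naive mean of per-batch means does not. A small numeric sketch (toy numbers):

```python
# (batch_size, per-sample mean loss) for two hypothetical batches
batches = [(4, 2.0), (3, 0.5)]

# Sum-reduced accumulation, as in evaluateModel: add up per-sample losses...
total = sum(n * mean for n, mean in batches)   # 8.0 + 1.5
dataset_len = sum(n for n, _ in batches)       # 7
loss = total / dataset_len                     # true per-sample mean

# ...versus a naive mean of batch means, which over-weights the small batch.
naive = sum(mean for _, mean in batches) / len(batches)
```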
numenta/htmresearch | projects/sdr_paper/poirazi_neuron_model/run_dim_classification_experiment.py | run_false_positive_experiment_dim

def run_false_positive_experiment_dim(
numActive = 128,
dim = 500,
numSamples = 1000,
numDendrites = 500,
synapses = 24,
numTrials = 10000,
seed = 42,
nonlinearity = sigmoid_nonlinearity(11.5, 5)):
"""
Run an experiment to test the false positive rate based on number of synapses
per dendrite, dimension and sparsity. Uses two competing neurons, along the
P&M model.
Based on figure 5B in the original SDR paper.
"""
numpy.random.seed(seed)
fps = []
fns = []
totalUnclassified = 0
for trial in range(numTrials):
# data = generate_evenly_distributed_data_sparse(dim = dim,
# num_active = numActive,
# num_samples = numSamples)
# labels = numpy.asarray([1 for i in range(numSamples / 2)] +
# [-1 for i in range(numSamples / 2)])
# flipped_labels = labels * -1
negData = generate_evenly_distributed_data_sparse(dim = dim,
num_active = numActive,
num_samples = numSamples/2)
posData = generate_evenly_distributed_data_sparse(dim = dim,
num_active = numActive,
num_samples = numSamples/2)
halfLabels = numpy.asarray([1 for _ in range(numSamples / 2)])
flippedHalfLabels = halfLabels * -1
neuron = Neuron(size =synapses * numDendrites,
num_dendrites = numDendrites,
dendrite_length = synapses,
dim = dim, nonlinearity = nonlinearity)
neg_neuron = Neuron(size =synapses * numDendrites,
num_dendrites = numDendrites,
dendrite_length = synapses,
dim = dim, nonlinearity = nonlinearity)
neuron.HTM_style_initialize_on_positive_data(posData)
neg_neuron.HTM_style_initialize_on_positive_data(negData)
# Get error for positively labeled data
fp, fn, uc = get_error(posData, halfLabels, [neuron], [neg_neuron])
totalUnclassified += uc
fps.append(fp)
fns.append(fn)
# Get error for negatively labeled data
fp, fn, uc = get_error(negData, flippedHalfLabels, [neuron], [neg_neuron])
totalUnclassified += uc
fps.append(fp)
fns.append(fn)
print "Error with n = {} : {} FP, {} FN, {} unclassified".format(
dim, sum(fps), sum(fns), totalUnclassified)
result = {
"dim": dim,
"totalFP": sum(fps),
"totalFN": sum(fns),
"total mistakes": sum(fns + fps) + totalUnclassified,
"error": float(sum(fns + fps) + totalUnclassified) / (numTrials * numSamples),
"totalSamples": numTrials * numSamples,
"a": numActive,
"num_dendrites": numDendrites,
"totalUnclassified": totalUnclassified,
"synapses": 24,
"seed": seed,
}
  return result
https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sdr_paper/poirazi_neuron_model/run_dim_classification_experiment.py#L32-L113
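The `error` field above combines false positives, false negatives, and unclassified samples into one mistake rate over all presented samples. A quick sketch of that arithmetic with hypothetical tallies:

```python
totalFP, totalFN, totalUnclassified = 12, 7, 1   # hypothetical tallies
numTrials, numSamples = 100, 1000

totalMistakes = totalFP + totalFN + totalUnclassified
# float() matters under Python 2, where / on ints would truncate to 0.
error = float(totalMistakes) / (numTrials * numSamples)
```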
numenta/htmresearch | htmresearch/frameworks/layers/continuous_location_object_machine.py | _getRadius

def _getRadius(self, location):
"""
Returns the radius associated with the given location.
    This is a bit of an awkward argument to the CoordinateEncoder, which
    uses it to specify the resolution (it was used to encode differently
    depending on speed in the GPS encoder). Since the coordinates are
    object-centric, for now we use the "point radius" as a heuristic, but
    this should be experimented with and improved.
"""
# TODO: find better heuristic
    return int(math.sqrt(sum([coord ** 2 for coord in location])))
"""
Returns the radius associated with the given location.
This is a bit of an awkward argument to the CoordinateEncoder, which
specifies the resolution (in was used to encode differently depending on
speed in the GPS encoder). Since the coordinates are object-centric,
for now we use the "point radius" as an heuristic, but this should be
experimented and improved.
"""
# TODO: find better heuristic
return int(math.sqrt(sum([coord ** 2 for coord in location]))) | [
"def",
"_getRadius",
"(",
"self",
",",
"location",
")",
":",
"# TODO: find better heuristic",
"return",
"int",
"(",
"math",
".",
"sqrt",
"(",
"sum",
"(",
"[",
"coord",
"**",
"2",
"for",
"coord",
"in",
"location",
"]",
")",
")",
")"
] | Returns the radius associated with the given location.
This is a bit of an awkward argument to the CoordinateEncoder, which
specifies the resolution (in was used to encode differently depending on
speed in the GPS encoder). Since the coordinates are object-centric,
for now we use the "point radius" as an heuristic, but this should be
experimented and improved. | [
"Returns",
"the",
"radius",
"associated",
"with",
"the",
"given",
"location",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/continuous_location_object_machine.py#L329-L340 | train | 198,760 |
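The heuristic above is just the Euclidean norm of the location, truncated to an integer. A standalone sketch of the same computation:

```python
import math

def point_radius(location):
    # Same heuristic as _getRadius: Euclidean norm, truncated toward zero.
    return int(math.sqrt(sum(coord ** 2 for coord in location)))
```

For example, `point_radius((3, 4))` is 5, while `point_radius((1, 1, 1))` truncates sqrt(3) down to 1.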
numenta/htmresearch | htmresearch/frameworks/layers/continuous_location_object_machine.py | _addNoise

def _addNoise(self, pattern, noiseLevel):
"""
    Adds noise to the given pattern and returns a noisy copy as a set.
"""
if pattern is None:
return None
newBits = []
for bit in pattern:
if random.random() < noiseLevel:
newBits.append(random.randint(0, max(pattern)))
else:
newBits.append(bit)
    return set(newBits)
"""
Adds noise the given list of patterns and returns a list of noisy copies.
"""
if pattern is None:
return None
newBits = []
for bit in pattern:
if random.random() < noiseLevel:
newBits.append(random.randint(0, max(pattern)))
else:
newBits.append(bit)
return set(newBits) | [
"def",
"_addNoise",
"(",
"self",
",",
"pattern",
",",
"noiseLevel",
")",
":",
"if",
"pattern",
"is",
"None",
":",
"return",
"None",
"newBits",
"=",
"[",
"]",
"for",
"bit",
"in",
"pattern",
":",
"if",
"random",
".",
"random",
"(",
")",
"<",
"noiseLeve... | Adds noise the given list of patterns and returns a list of noisy copies. | [
"Adds",
"noise",
"the",
"given",
"list",
"of",
"patterns",
"and",
"returns",
"a",
"list",
"of",
"noisy",
"copies",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/layers/continuous_location_object_machine.py#L343-L357 | train | 198,761 |
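A standalone sketch of the same noise procedure. Two properties of the original worth noting: replacement bits are drawn from `0..max(pattern)` inclusive, so noisy bits never exceed the pattern's current maximum, and the result is a set, so duplicate draws collapse.

```python
import random

def add_noise(pattern, noiseLevel):
    # Same logic as _addNoise, outside the class.
    if pattern is None:
        return None
    newBits = []
    for bit in pattern:
        if random.random() < noiseLevel:
            newBits.append(random.randint(0, max(pattern)))
        else:
            newBits.append(bit)
    return set(newBits)

random.seed(42)
unchanged = add_noise([3, 8, 21], 0.0)   # no bit is replaced
scrambled = add_noise([3, 8, 21], 1.0)   # every bit resampled in [0, 21]
```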
numenta/htmresearch | htmresearch/frameworks/specific_timing/apical_dependent_sequence_timing_memory.py | apicalCheck

def apicalCheck(self, apicalInput):
"""
    Return 'recent' apically predicted cells for each tick of the apical timer
    by finding active apical segments that correspond to predicted basal
    segments.
@param apicalInput (numpy array)
List of active input bits for the apical dendrite segments
"""
# Calculate predictions for this timestep
(activeApicalSegments, matchingApicalSegments,
apicalPotentialOverlaps) = self._calculateSegmentActivity(
self.apicalConnections, apicalInput, self.connectedPermanence,
self.activationThreshold, self.minThreshold, self.reducedBasalThreshold)
apicallySupportedCells = self.apicalConnections.mapSegmentsToCells(
activeApicalSegments)
predictedCells = np.intersect1d(
self.basalConnections.mapSegmentsToCells(self.activeBasalSegments),
apicallySupportedCells)
    return predictedCells
"""
Return 'recent' apically predicted cells for each tick of apical timer
- finds active apical segments corresponding to predicted basal segment,
@param apicalInput (numpy array)
List of active input bits for the apical dendrite segments
"""
# Calculate predictions for this timestep
(activeApicalSegments, matchingApicalSegments,
apicalPotentialOverlaps) = self._calculateSegmentActivity(
self.apicalConnections, apicalInput, self.connectedPermanence,
self.activationThreshold, self.minThreshold, self.reducedBasalThreshold)
apicallySupportedCells = self.apicalConnections.mapSegmentsToCells(
activeApicalSegments)
predictedCells = np.intersect1d(
self.basalConnections.mapSegmentsToCells(self.activeBasalSegments),
apicallySupportedCells)
return predictedCells | [
"def",
"apicalCheck",
"(",
"self",
",",
"apicalInput",
")",
":",
"# Calculate predictions for this timestep",
"(",
"activeApicalSegments",
",",
"matchingApicalSegments",
",",
"apicalPotentialOverlaps",
")",
"=",
"self",
".",
"_calculateSegmentActivity",
"(",
"self",
".",
... | Return 'recent' apically predicted cells for each tick of apical timer
- finds active apical segments corresponding to predicted basal segment,
@param apicalInput (numpy array)
List of active input bits for the apical dendrite segments | [
"Return",
"recent",
"apically",
"predicted",
"cells",
"for",
"each",
"tick",
"of",
"apical",
"timer",
"-",
"finds",
"active",
"apical",
"segments",
"corresponding",
"to",
"predicted",
"basal",
"segment"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/specific_timing/apical_dependent_sequence_timing_memory.py#L156-L177 | train | 198,762 |
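The final step above intersects basally predicted cells with apically supported cells; `np.intersect1d` returns the sorted unique common elements. A numpy-free sketch with hypothetical cell indices:

```python
basallyPredicted = [42, 7, 11, 7]    # cells mapped from active basal segments
apicallySupported = [99, 7, 42]      # cells mapped from active apical segments

# Sorted unique intersection, matching np.intersect1d's behavior.
predictedCells = sorted(set(basallyPredicted) & set(apicallySupported))
```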
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | setupData

def setupData(self, dataPath, numLabels=0, ordered=False, stripCats=False, seed=42, **kwargs):
"""
Main method of this class. Use for setting up a network data file.
@param dataPath (str) Path to CSV file.
@param numLabels (int) Number of columns of category labels.
@param textPreprocess (bool) True will preprocess text while tokenizing.
@param ordered (bool) Keep data samples (sequences) in order,
otherwise randomize.
@param seed (int) Random seed.
@return dataFileName (str) Network data file name; same directory as
input data file.
"""
self.split(dataPath, numLabels, **kwargs)
if not ordered:
self.randomizeData(seed)
filename, ext = os.path.splitext(dataPath)
classificationFileName = "{}_category.json".format(filename)
dataFileName = "{}_network{}".format(filename, ext)
if stripCats:
self.stripCategories()
self.saveData(dataFileName, classificationFileName)
return dataFileName | python | def setupData(self, dataPath, numLabels=0, ordered=False, stripCats=False, seed=42, **kwargs):
"""
Main method of this class. Use for setting up a network data file.
@param dataPath (str) Path to CSV file.
@param numLabels (int) Number of columns of category labels.
@param textPreprocess (bool) True will preprocess text while tokenizing.
@param ordered (bool) Keep data samples (sequences) in order,
otherwise randomize.
@param seed (int) Random seed.
@return dataFileName (str) Network data file name; same directory as
input data file.
"""
self.split(dataPath, numLabels, **kwargs)
if not ordered:
self.randomizeData(seed)
filename, ext = os.path.splitext(dataPath)
classificationFileName = "{}_category.json".format(filename)
dataFileName = "{}_network{}".format(filename, ext)
if stripCats:
self.stripCategories()
self.saveData(dataFileName, classificationFileName)
return dataFileName | [
"def",
"setupData",
"(",
"self",
",",
"dataPath",
",",
"numLabels",
"=",
"0",
",",
"ordered",
"=",
"False",
",",
"stripCats",
"=",
"False",
",",
"seed",
"=",
"42",
",",
"*",
"*",
"kwargs",
")",
":",
"self",
".",
"split",
"(",
"dataPath",
",",
"numL... | Main method of this class. Use for setting up a network data file.
@param dataPath (str) Path to CSV file.
@param numLabels (int) Number of columns of category labels.
@param textPreprocess (bool) True will preprocess text while tokenizing.
@param ordered (bool) Keep data samples (sequences) in order,
otherwise randomize.
@param seed (int) Random seed.
@return dataFileName (str) Network data file name; same directory as
input data file. | [
"Main",
"method",
"of",
"this",
"class",
".",
"Use",
"for",
"setting",
"up",
"a",
"network",
"data",
"file",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L76-L104 | train | 198,763 |
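`setupData` derives both output file names from the input path with `os.path.splitext`, writing them alongside the input CSV. A small self-contained sketch of just that naming step (the example path is hypothetical):

```python
import os

def derive_output_names(data_path):
    """Mirror setupData's naming scheme: the category map and network data
    files are written next to the input CSV."""
    filename, ext = os.path.splitext(data_path)
    classification_file = "{}_category.json".format(filename)
    data_file = "{}_network{}".format(filename, ext)
    return classification_file, data_file

cat_file, net_file = derive_output_names("data/reviews.csv")
```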
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator._formatSequence | def _formatSequence(tokens, categories, seqID, uniqueID):
"""Write the sequence of data records for this sample."""
record = {"_category":categories,
"_sequenceId":seqID}
data = []
reset = 1
for t in tokens:
tokenRecord = record.copy()
tokenRecord["_token"] = t
tokenRecord["_reset"] = reset
tokenRecord["ID"] = uniqueID
reset = 0
data.append(tokenRecord)
return data | python | def _formatSequence(tokens, categories, seqID, uniqueID):
"""Write the sequence of data records for this sample."""
record = {"_category":categories,
"_sequenceId":seqID}
data = []
reset = 1
for t in tokens:
tokenRecord = record.copy()
tokenRecord["_token"] = t
tokenRecord["_reset"] = reset
tokenRecord["ID"] = uniqueID
reset = 0
data.append(tokenRecord)
return data | [
"def",
"_formatSequence",
"(",
"tokens",
",",
"categories",
",",
"seqID",
",",
"uniqueID",
")",
":",
"record",
"=",
"{",
"\"_category\"",
":",
"categories",
",",
"\"_sequenceId\"",
":",
"seqID",
"}",
"data",
"=",
"[",
"]",
"reset",
"=",
"1",
"for",
"t",
... | Write the sequence of data records for this sample. | [
"Write",
"the",
"sequence",
"of",
"data",
"records",
"for",
"this",
"sample",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L163-L177 | train | 198,764 |
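`_formatSequence` emits one record per token, setting the `_reset` flag to 1 only on the first token of the sequence. A standalone re-implementation of that loop, runnable outside the class (token and category values are made up for illustration):

```python
def format_sequence(tokens, categories, seq_id, unique_id):
    """Standalone version of _formatSequence: one record per token, with
    _reset set to 1 on the first token of the sequence and 0 afterwards."""
    base = {"_category": categories, "_sequenceId": seq_id}
    data = []
    reset = 1
    for t in tokens:
        rec = dict(base)          # copy the shared fields per record
        rec["_token"] = t
        rec["_reset"] = reset
        rec["ID"] = unique_id
        reset = 0                 # only the first token marks a reset
        data.append(rec)
    return data

records = format_sequence(["the", "quick", "fox"], [2], 0, "s0")
```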
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator.saveData | def saveData(self, dataOutputFile, categoriesOutputFile):
"""
Save the processed data and the associated category mapping.
@param dataOutputFile (str) Location to save data
@param categoriesOutputFile (str) Location to save category map
@return (str) Path to the saved data file iff
saveData() is successful.
"""
if self.records is None:
return False
if not dataOutputFile.endswith("csv"):
raise TypeError("data output file must be csv.")
if not categoriesOutputFile.endswith("json"):
raise TypeError("category output file must be json")
# Ensure directory exists
dataOutputDirectory = os.path.dirname(dataOutputFile)
if not os.path.exists(dataOutputDirectory):
os.makedirs(dataOutputDirectory)
categoriesOutputDirectory = os.path.dirname(categoriesOutputFile)
if not os.path.exists(categoriesOutputDirectory):
os.makedirs(categoriesOutputDirectory)
with open(dataOutputFile, "w") as f:
# Header
writer = csv.DictWriter(f, fieldnames=self.fieldNames)
writer.writeheader()
# Types
writer.writerow(self.types)
# Special characters
writer.writerow(self.specials)
for data in self.records:
for record in data:
writer.writerow(record)
with open(categoriesOutputFile, "w") as f:
f.write(json.dumps(self.categoryToId,
sort_keys=True,
indent=4,
separators=(",", ": ")))
return dataOutputFile | python | def saveData(self, dataOutputFile, categoriesOutputFile):
"""
Save the processed data and the associated category mapping.
@param dataOutputFile (str) Location to save data
@param categoriesOutputFile (str) Location to save category map
@return (str) Path to the saved data file iff
saveData() is successful.
"""
if self.records is None:
return False
if not dataOutputFile.endswith("csv"):
raise TypeError("data output file must be csv.")
if not categoriesOutputFile.endswith("json"):
raise TypeError("category output file must be json")
# Ensure directory exists
dataOutputDirectory = os.path.dirname(dataOutputFile)
if not os.path.exists(dataOutputDirectory):
os.makedirs(dataOutputDirectory)
categoriesOutputDirectory = os.path.dirname(categoriesOutputFile)
if not os.path.exists(categoriesOutputDirectory):
os.makedirs(categoriesOutputDirectory)
with open(dataOutputFile, "w") as f:
# Header
writer = csv.DictWriter(f, fieldnames=self.fieldNames)
writer.writeheader()
# Types
writer.writerow(self.types)
# Special characters
writer.writerow(self.specials)
for data in self.records:
for record in data:
writer.writerow(record)
with open(categoriesOutputFile, "w") as f:
f.write(json.dumps(self.categoryToId,
sort_keys=True,
indent=4,
separators=(",", ": ")))
return dataOutputFile | [
"def",
"saveData",
"(",
"self",
",",
"dataOutputFile",
",",
"categoriesOutputFile",
")",
":",
"if",
"self",
".",
"records",
"is",
"None",
":",
"return",
"False",
"if",
"not",
"dataOutputFile",
".",
"endswith",
"(",
"\"csv\"",
")",
":",
"raise",
"TypeError",
... | Save the processed data and the associated category mapping.
@param dataOutputFile (str) Location to save data
@param categoriesOutputFile (str) Location to save category map
@return (str) Path to the saved data file iff
saveData() is successful. | [
"Save",
"the",
"processed",
"data",
"and",
"the",
"associated",
"category",
"mapping",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L185-L231 | train | 198,765 |
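`saveData` writes a FileRecordStream-style CSV whose first three rows are field names, field types, and special-column flags (the code reads them back with `.index("R")` and `.index("C")`). A hedged sketch of that three-row header using only `csv` and an in-memory buffer; the exact type strings and the `S` (sequence) flag here are assumptions for illustration, only `R` and `C` are confirmed by the parsing code in this file:

```python
import csv
import io

# Sketch of the three-row FileRecordStream-style header that saveData emits:
# row 1 = field names, row 2 = field types, row 3 = special-column flags.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["_token", "_category", "_sequenceId", "_reset", "ID"])
writer.writerow(["string", "list", "int", "int", "string"])
writer.writerow(["", "C", "S", "R", ""])          # C=category, R=reset (S assumed)
writer.writerow(["hello", "0", "0", "1", "q0"])   # first data record
header_rows = buf.getvalue().splitlines()
```

Readers like `getSamples` locate columns by these flags rather than by position, so the header rows must stay in sync with the data rows.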
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator.generateSequence | def generateSequence(self, text, preprocess=False):
"""
Return a list of lists representing the text sequence in network data
format. Does not preprocess the text.
"""
# TODO: enable text preprocessing; abstract out the logic in split() into a common method.
tokens = TextPreprocess().tokenize(text)
cat = [-1]
self.sequenceCount += 1
uniqueID = "q"
data = self._formatSequence(tokens, cat, self.sequenceCount-1, uniqueID)
return data | python | def generateSequence(self, text, preprocess=False):
"""
Return a list of lists representing the text sequence in network data
format. Does not preprocess the text.
"""
# TODO: enable text preprocessing; abstract out the logic in split() into a common method.
tokens = TextPreprocess().tokenize(text)
cat = [-1]
self.sequenceCount += 1
uniqueID = "q"
data = self._formatSequence(tokens, cat, self.sequenceCount-1, uniqueID)
return data | [
"def",
"generateSequence",
"(",
"self",
",",
"text",
",",
"preprocess",
"=",
"False",
")",
":",
"# TODO: enable text preprocessing; abstract out the logic in split() into a common method.",
"tokens",
"=",
"TextPreprocess",
"(",
")",
".",
"tokenize",
"(",
"text",
")",
"c... | Return a list of lists representing the text sequence in network data
format. Does not preprocess the text. | [
"Return",
"a",
"list",
"of",
"lists",
"representing",
"the",
"text",
"sequence",
"in",
"network",
"data",
"format",
".",
"Does",
"not",
"preprocess",
"the",
"text",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L234-L246 | train | 198,766 |
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator.getSamples | def getSamples(netDataFile):
"""
Returns samples joined at reset points.
@param netDataFile (str) Path to file (in the FileRecordStream
format).
@return samples (OrderedDict) Keys are sample number (in order they are
read in). Values are two-tuples of sample
text and category ints.
"""
try:
with open(netDataFile) as f:
reader = csv.reader(f)
header = next(reader, None)
next(reader, None)
resetIdx = next(reader).index("R")
tokenIdx = header.index("_token")
catIdx = header.index("_category")
idIdx = header.index("ID")
currentSample = []
samples = OrderedDict()
for line in reader:
if int(line[resetIdx]) == 1:
if len(currentSample) != 0:
samples[line[idIdx]] = ([" ".join(currentSample)],
[int(c) for c in line[catIdx].split(" ")])
currentSample = [line[tokenIdx]]
else:
currentSample.append(line[tokenIdx])
samples[line[idIdx]] = ([" ".join(currentSample)],
[int(c) for c in line[catIdx].split(" ")])
return samples
except IOError as e:
print "Could not open the file {}.".format(netDataFile)
raise e | python | def getSamples(netDataFile):
"""
Returns samples joined at reset points.
@param netDataFile (str) Path to file (in the FileRecordStream
format).
@return samples (OrderedDict) Keys are sample number (in order they are
read in). Values are two-tuples of sample
text and category ints.
"""
try:
with open(netDataFile) as f:
reader = csv.reader(f)
header = next(reader, None)
next(reader, None)
resetIdx = next(reader).index("R")
tokenIdx = header.index("_token")
catIdx = header.index("_category")
idIdx = header.index("ID")
currentSample = []
samples = OrderedDict()
for line in reader:
if int(line[resetIdx]) == 1:
if len(currentSample) != 0:
samples[line[idIdx]] = ([" ".join(currentSample)],
[int(c) for c in line[catIdx].split(" ")])
currentSample = [line[tokenIdx]]
else:
currentSample.append(line[tokenIdx])
samples[line[idIdx]] = ([" ".join(currentSample)],
[int(c) for c in line[catIdx].split(" ")])
return samples
except IOError as e:
print "Could not open the file {}.".format(netDataFile)
raise e | [
"def",
"getSamples",
"(",
"netDataFile",
")",
":",
"try",
":",
"with",
"open",
"(",
"netDataFile",
")",
"as",
"f",
":",
"reader",
"=",
"csv",
".",
"reader",
"(",
"f",
")",
"header",
"=",
"next",
"(",
"reader",
",",
"None",
")",
"next",
"(",
"reader... | Returns samples joined at reset points.
@param netDataFile (str) Path to file (in the FileRecordStream
format).
@return samples (OrderedDict) Keys are sample number (in order they are
read in). Values are two-tuples of sample
text and category ints. | [
"Returns",
"samples",
"joined",
"at",
"reset",
"points",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L250-L286 | train | 198,767 |
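`getSamples` re-joins token rows into whole samples, using the reset flag as the sequence delimiter. A self-contained, slightly simplified sketch of that parsing on a tiny in-memory file (the column layout and values are illustrative; unlike the original, this version flushes a sample using the last row of the sequence rather than the first row of the next one):

```python
import csv
import io
from collections import OrderedDict

raw = io.StringIO(
    "_token,_category,_reset,ID\n"   # field names
    "string,list,int,string\n"       # types row (skipped by the parser)
    ",C,R,\n"                        # specials row: locate reset column by "R"
    "the,0,1,q0\n"
    "fox,0,0,q0\n"
    "a,1,1,q1\n"
)

def get_samples(f):
    reader = csv.reader(f)
    header = next(reader)
    next(reader)                          # skip the types row
    reset_idx = next(reader).index("R")   # find the reset column from specials
    token_idx = header.index("_token")
    cat_idx = header.index("_category")
    id_idx = header.index("ID")
    samples, current, last = OrderedDict(), [], None
    for line in reader:
        if int(line[reset_idx]) == 1:
            if current:                   # flush the finished sequence
                samples[last[id_idx]] = (" ".join(current),
                                         [int(c) for c in last[cat_idx].split(" ")])
            current = [line[token_idx]]
        else:
            current.append(line[token_idx])
        last = line
    if current:                           # flush the final sequence
        samples[last[id_idx]] = (" ".join(current),
                                 [int(c) for c in last[cat_idx].split(" ")])
    return samples

samples = get_samples(raw)
```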
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator.getClassifications | def getClassifications(networkDataFile):
"""
Returns the classifications at the indices where the data sequences
reset.
@param networkDataFile (str) Path to file in the FileRecordStream
format
@return (list) list of string versions of the
classifications
Sample output: ["0 1", "1", "1 2 3"]
"""
try:
with open(networkDataFile) as f:
reader = csv.reader(f)
next(reader, None)
next(reader, None)
specials = next(reader)
resetIdx = specials.index("R")
classIdx = specials.index("C")
classifications = []
for line in reader:
if int(line[resetIdx]) == 1:
classifications.append(line[classIdx])
return classifications
except IOError as e:
print "Could not open the file {}.".format(networkDataFile)
raise e | python | def getClassifications(networkDataFile):
"""
Returns the classifications at the indices where the data sequences
reset.
@param networkDataFile (str) Path to file in the FileRecordStream
format
@return (list) list of string versions of the
classifications
Sample output: ["0 1", "1", "1 2 3"]
"""
try:
with open(networkDataFile) as f:
reader = csv.reader(f)
next(reader, None)
next(reader, None)
specials = next(reader)
resetIdx = specials.index("R")
classIdx = specials.index("C")
classifications = []
for line in reader:
if int(line[resetIdx]) == 1:
classifications.append(line[classIdx])
return classifications
except IOError as e:
print "Could not open the file {}.".format(networkDataFile)
raise e | [
"def",
"getClassifications",
"(",
"networkDataFile",
")",
":",
"try",
":",
"with",
"open",
"(",
"networkDataFile",
")",
"as",
"f",
":",
"reader",
"=",
"csv",
".",
"reader",
"(",
"f",
")",
"next",
"(",
"reader",
",",
"None",
")",
"next",
"(",
"reader",
... | Returns the classifications at the indices where the data sequences
reset.
@param networkDataFile (str) Path to file in the FileRecordStream
format
@return (list) list of string versions of the
classifications
Sample output: ["0 1", "1", "1 2 3"] | [
"Returns",
"the",
"classifications",
"at",
"the",
"indices",
"where",
"the",
"data",
"sequences",
"reset",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L290-L317 | train | 198,768 |
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator.getNumberOfTokens | def getNumberOfTokens(networkDataFile):
"""
Returns the number of tokens for each sequence
@param networkDataFile (str) Path to file in the FileRecordStream
format
@return (list) list of number of tokens
"""
try:
with open(networkDataFile) as f:
reader = csv.reader(f)
next(reader, None)
next(reader, None)
resetIdx = next(reader).index("R")
count = 0
numTokens = []
for line in reader:
if int(line[resetIdx]) == 1:
if count != 0:
numTokens.append(count)
count = 1
else:
count += 1
numTokens.append(count)
return numTokens
except IOError as e:
print "Could not open the file {}.".format(networkDataFile)
raise e | python | def getNumberOfTokens(networkDataFile):
"""
Returns the number of tokens for each sequence
@param networkDataFile (str) Path to file in the FileRecordStream
format
@return (list) list of number of tokens
"""
try:
with open(networkDataFile) as f:
reader = csv.reader(f)
next(reader, None)
next(reader, None)
resetIdx = next(reader).index("R")
count = 0
numTokens = []
for line in reader:
if int(line[resetIdx]) == 1:
if count != 0:
numTokens.append(count)
count = 1
else:
count += 1
numTokens.append(count)
return numTokens
except IOError as e:
print "Could not open the file {}.".format(networkDataFile)
raise e | [
"def",
"getNumberOfTokens",
"(",
"networkDataFile",
")",
":",
"try",
":",
"with",
"open",
"(",
"networkDataFile",
")",
"as",
"f",
":",
"reader",
"=",
"csv",
".",
"reader",
"(",
"f",
")",
"next",
"(",
"reader",
",",
"None",
")",
"next",
"(",
"reader",
... | Returns the number of tokens for each sequence
@param networkDataFile (str) Path to file in the FileRecordStream
format
@return (list) list of number of tokens | [
"Returns",
"the",
"number",
"of",
"tokens",
"for",
"each",
"sequence"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L321-L349 | train | 198,769 |
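The counting loop in `getNumberOfTokens` reduces to a fold over the reset flags: a 1 flushes the running count and starts a new sequence. A standalone sketch of just that loop, decoupled from the CSV plumbing (the flag list is made up):

```python
def tokens_per_sequence(reset_flags):
    """Mirror getNumberOfTokens' counting loop: a reset flag of 1 starts a
    new sequence, flushing the running count of the previous one."""
    counts, count = [], 0
    for r in reset_flags:
        if r == 1:
            if count != 0:
                counts.append(count)
            count = 1
        else:
            count += 1
    counts.append(count)      # flush the final sequence
    return counts

counts = tokens_per_sequence([1, 0, 0, 1, 0, 1])
```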
numenta/htmresearch | htmresearch/support/network_text_data_generator.py | NetworkDataGenerator.getResetsIndices | def getResetsIndices(networkDataFile):
"""Returns the indices at which the data sequences reset."""
try:
with open(networkDataFile) as f:
reader = csv.reader(f)
next(reader, None)
next(reader, None)
resetIdx = next(reader).index("R")
resets = []
for i, line in enumerate(reader):
if int(line[resetIdx]) == 1:
resets.append(i)
return resets
except IOError as e:
print "Could not open the file {}.".format(networkDataFile)
raise e | python | def getResetsIndices(networkDataFile):
"""Returns the indices at which the data sequences reset."""
try:
with open(networkDataFile) as f:
reader = csv.reader(f)
next(reader, None)
next(reader, None)
resetIdx = next(reader).index("R")
resets = []
for i, line in enumerate(reader):
if int(line[resetIdx]) == 1:
resets.append(i)
return resets
except IOError as e:
print "Could not open the file {}.".format(networkDataFile)
raise e | [
"def",
"getResetsIndices",
"(",
"networkDataFile",
")",
":",
"try",
":",
"with",
"open",
"(",
"networkDataFile",
")",
"as",
"f",
":",
"reader",
"=",
"csv",
".",
"reader",
"(",
"f",
")",
"next",
"(",
"reader",
",",
"None",
")",
"next",
"(",
"reader",
... | Returns the indices at which the data sequences reset. | [
"Returns",
"the",
"indices",
"at",
"which",
"the",
"data",
"sequences",
"reset",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/network_text_data_generator.py#L353-L370 | train | 198,770 |
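Stripped of the file handling, `getResetsIndices` is just "positions of the rows whose reset flag is 1". A minimal sketch (flag values are illustrative, and kept as strings to match the CSV rows the original iterates over):

```python
def reset_indices(reset_flags):
    """getResetsIndices reduces to: indices of rows whose reset flag is 1."""
    return [i for i, r in enumerate(reset_flags) if int(r) == 1]

idx = reset_indices(["1", "0", "0", "1", "0"])
```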
numenta/htmresearch | projects/speech_commands/analyze_experiment.py | lastNoiseCurve | def lastNoiseCurve(expPath, suite, iteration="last"):
"""
Print the noise errors from the last iteration of this experiment
"""
noiseValues = ["0.0", "0.05", "0.1", "0.15", "0.2", "0.25", "0.3",
"0.35", "0.4", "0.45", "0.5"]
print("\nNOISE CURVE =====",expPath,"====== ITERATION:",iteration,"=========")
try:
result = suite.get_value(expPath, 0, noiseValues, iteration)
info = []
for k in noiseValues:
info.append([k,result[k]["testerror"]])
print(tabulate(info, headers=["noise","Test Error"], tablefmt="grid"))
print("totalCorrect:", suite.get_value(expPath, 0, "totalCorrect", iteration))
except:
print("Couldn't load experiment",expPath) | python | def lastNoiseCurve(expPath, suite, iteration="last"):
"""
Print the noise errors from the last iteration of this experiment
"""
noiseValues = ["0.0", "0.05", "0.1", "0.15", "0.2", "0.25", "0.3",
"0.35", "0.4", "0.45", "0.5"]
print("\nNOISE CURVE =====",expPath,"====== ITERATION:",iteration,"=========")
try:
result = suite.get_value(expPath, 0, noiseValues, iteration)
info = []
for k in noiseValues:
info.append([k,result[k]["testerror"]])
print(tabulate(info, headers=["noise","Test Error"], tablefmt="grid"))
print("totalCorrect:", suite.get_value(expPath, 0, "totalCorrect", iteration))
except:
print("Couldn't load experiment",expPath) | [
"def",
"lastNoiseCurve",
"(",
"expPath",
",",
"suite",
",",
"iteration",
"=",
"\"last\"",
")",
":",
"noiseValues",
"=",
"[",
"\"0.0\"",
",",
"\"0.05\"",
",",
"\"0.1\"",
",",
"\"0.15\"",
",",
"\"0.2\"",
",",
"\"0.25\"",
",",
"\"0.3\"",
",",
"\"0.35\"",
",",... | Print the noise errors from the last iteration of this experiment | [
"Print",
"the",
"noise",
"errors",
"from",
"the",
"last",
"iteration",
"of",
"this",
"experiment"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/speech_commands/analyze_experiment.py#L105-L120 | train | 198,771 |
numenta/htmresearch | projects/speech_commands/analyze_experiment.py | learningCurve | def learningCurve(expPath, suite):
"""
Print the test, validation and other scores from each iteration of this
experiment. We select the test score that corresponds to the iteration with
maximum validation accuracy.
"""
print("\nLEARNING CURVE ================",expPath,"=====================")
try:
headers=["testResults","validation","bgResults","elapsedTime"]
result = suite.get_value(expPath, 0, headers, "all")
info = []
maxValidationAccuracy = -1.0
maxTestAccuracy = -1.0
maxBGAccuracy = -1.0
maxIter = -1
for i,v in enumerate(zip(result["testResults"],result["validation"],
result["bgResults"], result["elapsedTime"])):
info.append([i, v[0]["testerror"], v[1]["testerror"], v[2]["testerror"], int(v[3])])
if v[1]["testerror"] > maxValidationAccuracy:
maxValidationAccuracy = v[1]["testerror"]
maxTestAccuracy = v[0]["testerror"]
maxBGAccuracy = v[2]["testerror"]
maxIter = i
headers.insert(0,"iteration")
print(tabulate(info, headers=headers, tablefmt="grid"))
print("Max validation score =", maxValidationAccuracy, " at iteration", maxIter)
print("Test score at that iteration =", maxTestAccuracy)
print("BG score at that iteration =", maxBGAccuracy)
except:
print("Couldn't load experiment",expPath) | python | def learningCurve(expPath, suite):
"""
Print the test, validation and other scores from each iteration of this
experiment. We select the test score that corresponds to the iteration with
maximum validation accuracy.
"""
print("\nLEARNING CURVE ================",expPath,"=====================")
try:
headers=["testResults","validation","bgResults","elapsedTime"]
result = suite.get_value(expPath, 0, headers, "all")
info = []
maxValidationAccuracy = -1.0
maxTestAccuracy = -1.0
maxBGAccuracy = -1.0
maxIter = -1
for i,v in enumerate(zip(result["testResults"],result["validation"],
result["bgResults"], result["elapsedTime"])):
info.append([i, v[0]["testerror"], v[1]["testerror"], v[2]["testerror"], int(v[3])])
if v[1]["testerror"] > maxValidationAccuracy:
maxValidationAccuracy = v[1]["testerror"]
maxTestAccuracy = v[0]["testerror"]
maxBGAccuracy = v[2]["testerror"]
maxIter = i
headers.insert(0,"iteration")
print(tabulate(info, headers=headers, tablefmt="grid"))
print("Max validation score =", maxValidationAccuracy, " at iteration", maxIter)
print("Test score at that iteration =", maxTestAccuracy)
print("BG score at that iteration =", maxBGAccuracy)
except:
print("Couldn't load experiment",expPath) | [
"def",
"learningCurve",
"(",
"expPath",
",",
"suite",
")",
":",
"print",
"(",
"\"\\nLEARNING CURVE ================\"",
",",
"expPath",
",",
"\"=====================\"",
")",
"try",
":",
"headers",
"=",
"[",
"\"testResults\"",
",",
"\"validation\"",
",",
"\"bgResult... | Print the test, validation and other scores from each iteration of this
experiment. We select the test score that corresponds to the iteration with
maximum validation accuracy. | [
"Print",
"the",
"test",
"validation",
"and",
"other",
"scores",
"from",
"each",
"iteration",
"of",
"this",
"experiment",
".",
"We",
"select",
"the",
"test",
"score",
"that",
"corresponds",
"to",
"the",
"iteration",
"with",
"maximum",
"validation",
"accuracy",
... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/speech_commands/analyze_experiment.py#L123-L153 | train | 198,772 |
numenta/htmresearch | projects/speech_commands/analyze_experiment.py | bestScore | def bestScore(expPath, suite):
"""
Given a single experiment, return the test, validation and other scores from
the iteration with maximum validation accuracy.
"""
maxValidationAccuracy = -1.0
maxTestAccuracy = -1.0
maxTotalAccuracy = -1.0
maxBGAccuracy = -1.0
maxIter = -1
try:
headers=["testResults", "validation", "bgResults", "elapsedTime", "totalCorrect"]
result = suite.get_value(expPath, 0, headers, "all")
for i,v in enumerate(zip(result["testResults"], result["validation"],
result["bgResults"], result["elapsedTime"],
result["totalCorrect"])):
if v[1]["testerror"] > maxValidationAccuracy:
maxValidationAccuracy = v[1]["testerror"]
maxTestAccuracy = v[0]["testerror"]
maxBGAccuracy = v[2]["testerror"]
if v[4] is not None:
maxTotalAccuracy = v[4]
maxIter = i
# print("Max validation score =", maxValidationAccuracy, " at iteration", maxIter)
# print("Test score at that iteration =", maxTestAccuracy)
# print("BG score at that iteration =", maxBGAccuracy)
return maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy
except:
print("Couldn't load experiment",expPath)
return None, None, None, None, None | python | def bestScore(expPath, suite):
"""
Given a single experiment, return the test, validation and other scores from
the iteration with maximum validation accuracy.
"""
maxValidationAccuracy = -1.0
maxTestAccuracy = -1.0
maxTotalAccuracy = -1.0
maxBGAccuracy = -1.0
maxIter = -1
try:
headers=["testResults", "validation", "bgResults", "elapsedTime", "totalCorrect"]
result = suite.get_value(expPath, 0, headers, "all")
for i,v in enumerate(zip(result["testResults"], result["validation"],
result["bgResults"], result["elapsedTime"],
result["totalCorrect"])):
if v[1]["testerror"] > maxValidationAccuracy:
maxValidationAccuracy = v[1]["testerror"]
maxTestAccuracy = v[0]["testerror"]
maxBGAccuracy = v[2]["testerror"]
if v[4] is not None:
maxTotalAccuracy = v[4]
maxIter = i
# print("Max validation score =", maxValidationAccuracy, " at iteration", maxIter)
# print("Test score at that iteration =", maxTestAccuracy)
# print("BG score at that iteration =", maxBGAccuracy)
return maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy
except:
print("Couldn't load experiment",expPath)
return None, None, None, None, None | [
"def",
"bestScore",
"(",
"expPath",
",",
"suite",
")",
":",
"maxValidationAccuracy",
"=",
"-",
"1.0",
"maxTestAccuracy",
"=",
"-",
"1.0",
"maxTotalAccuracy",
"=",
"-",
"1.0",
"maxBGAccuracy",
"=",
"-",
"1.0",
"maxIter",
"=",
"-",
"1",
"try",
":",
"headers"... | Given a single experiment, return the test, validation and other scores from
the iteration with maximum validation accuracy. | [
"Given",
"a",
"single",
"experiment",
"return",
"the",
"test",
"validation",
"and",
"other",
"scores",
"from",
"the",
"iteration",
"with",
"maximum",
"validation",
"accuracy",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/speech_commands/analyze_experiment.py#L156-L186 | train | 198,773 |
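Both `learningCurve` and `bestScore` apply the same model-selection rule: report the test metrics from the iteration whose *validation* accuracy is highest. A standalone sketch of that scan (scores are made-up numbers; like the original's strict `>`, ties keep the earliest iteration):

```python
def best_by_validation(test_scores, validation_scores):
    """Pick the test score at the iteration with the highest validation
    score, as bestScore/learningCurve do. Ties keep the earliest iteration."""
    best_val, best_test, best_iter = -1.0, -1.0, -1
    for i, (t, v) in enumerate(zip(test_scores, validation_scores)):
        if v > best_val:
            best_val, best_test, best_iter = v, t, i
    return best_test, best_val, best_iter

test, val, it = best_by_validation([0.80, 0.91, 0.88], [0.75, 0.83, 0.82])
```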
numenta/htmresearch | projects/speech_commands/analyze_experiment.py | findOptimalResults | def findOptimalResults(expName, suite, outFile):
"""
Go through every experiment in the specified folder. For each experiment, find
the iteration with the best validation score, and return the metrics
associated with that iteration.
"""
writer = csv.writer(outFile)
headers = ["testAccuracy", "bgAccuracy", "maxTotalAccuracy", "experiment path"]
writer.writerow(headers)
info = []
print("\n================",expName,"=====================")
try:
# Retrieve the last totalCorrect from each experiment
# Print them sorted from best to worst
values, params = suite.get_values_fix_params(
expName, 0, "testerror", "last")
for p in params:
expPath = p["name"]
if not "results" in expPath:
expPath = os.path.join("results", expPath)
maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy = bestScore(expPath, suite)
row = [maxTestAccuracy, maxBGAccuracy, maxTotalAccuracy, expPath]
info.append(row)
writer.writerow(row)
print(tabulate(info, headers=headers, tablefmt="grid"))
except:
print("Couldn't analyze experiment",expName) | python | def findOptimalResults(expName, suite, outFile):
"""
Go through every experiment in the specified folder. For each experiment, find
the iteration with the best validation score, and return the metrics
associated with that iteration.
"""
writer = csv.writer(outFile)
headers = ["testAccuracy", "bgAccuracy", "maxTotalAccuracy", "experiment path"]
writer.writerow(headers)
info = []
print("\n================",expName,"=====================")
try:
# Retrieve the last totalCorrect from each experiment
# Print them sorted from best to worst
values, params = suite.get_values_fix_params(
expName, 0, "testerror", "last")
for p in params:
expPath = p["name"]
if not "results" in expPath:
expPath = os.path.join("results", expPath)
maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy = bestScore(expPath, suite)
row = [maxTestAccuracy, maxBGAccuracy, maxTotalAccuracy, expPath]
info.append(row)
writer.writerow(row)
print(tabulate(info, headers=headers, tablefmt="grid"))
except:
print("Couldn't analyze experiment",expName) | [
"def",
"findOptimalResults",
"(",
"expName",
",",
"suite",
",",
"outFile",
")",
":",
"writer",
"=",
"csv",
".",
"writer",
"(",
"outFile",
")",
"headers",
"=",
"[",
"\"testAccuracy\"",
",",
"\"bgAccuracy\"",
",",
"\"maxTotalAccuracy\"",
",",
"\"experiment path\""... | Go through every experiment in the specified folder. For each experiment, find
the iteration with the best validation score, and return the metrics
associated with that iteration. | [
"Go",
"through",
"every",
"experiment",
"in",
"the",
"specified",
"folder",
".",
"For",
"each",
"experiment",
"find",
"the",
"iteration",
"with",
"the",
"best",
"validation",
"score",
"and",
"return",
"the",
"metrics",
"associated",
"with",
"that",
"iteration",
... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/speech_commands/analyze_experiment.py#L189-L216 | train | 198,774 |
numenta/htmresearch | projects/speech_commands/analyze_experiment.py | getErrorBars | def getErrorBars(expPath, suite):
"""
Go through each experiment in the path. Get the best scores for each experiment
based on accuracy on validation set. Print out overall mean, and stdev for
test accuracy, BG accuracy, and noise accuracy.
"""
exps = suite.get_exps(expPath)
testScores = np.zeros(len(exps))
noiseScores = np.zeros(len(exps))
for i,e in enumerate(exps):
maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy = bestScore(
e, suite)
testScores[i] = maxTestAccuracy
noiseScores[i] = maxTotalAccuracy
print(e, maxTestAccuracy, maxTotalAccuracy)
print("")
print("Experiment:", expPath, "Number of sub-experiments", len(exps))
print("test score mean and standard deviation:", testScores.mean(), testScores.std())
print("noise score mean and standard deviation:", noiseScores.mean(), noiseScores.std()) | python | def getErrorBars(expPath, suite):
"""
Go through each experiment in the path. Get the best scores for each experiment
based on accuracy on validation set. Print out overall mean, and stdev for
test accuracy, BG accuracy, and noise accuracy.
"""
exps = suite.get_exps(expPath)
testScores = np.zeros(len(exps))
noiseScores = np.zeros(len(exps))
for i,e in enumerate(exps):
maxTestAccuracy, maxValidationAccuracy, maxBGAccuracy, maxIter, maxTotalAccuracy = bestScore(
e, suite)
testScores[i] = maxTestAccuracy
noiseScores[i] = maxTotalAccuracy
print(e, maxTestAccuracy, maxTotalAccuracy)
print("")
print("Experiment:", expPath, "Number of sub-experiments", len(exps))
print("test score mean and standard deviation:", testScores.mean(), testScores.std())
print("noise score mean and standard deviation:", noiseScores.mean(), noiseScores.std()) | [
"def",
"getErrorBars",
"(",
"expPath",
",",
"suite",
")",
":",
"exps",
"=",
"suite",
".",
"get_exps",
"(",
"expPath",
")",
"testScores",
"=",
"np",
".",
"zeros",
"(",
"len",
"(",
"exps",
")",
")",
"noiseScores",
"=",
"np",
".",
"zeros",
"(",
"len",
... | Go through each experiment in the path. Get the best scores for each experiment
based on accuracy on validation set. Print out overall mean, and stdev for
test accuracy, BG accuracy, and noise accuracy. | [
"Go",
"through",
"each",
"experiment",
"in",
"the",
"path",
".",
"Get",
"the",
"best",
"scores",
"for",
"each",
"experiment",
"based",
"on",
"accuracy",
"on",
"validation",
"set",
".",
"Print",
"out",
"overall",
"mean",
"and",
"stdev",
"for",
"test",
"accu... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/speech_commands/analyze_experiment.py#L219-L238 | train | 198,775 |
numenta/htmresearch | htmresearch/support/numpy_helpers.py | setCompare | def setCompare(a, b,
aKey=None, bKey=None,
leftMinusRight=False, rightMinusLeft=False):
"""
Compute the intersection and differences between two arrays, comparing
elements by their key.
@param a (numpy array)
The left set to compare.
@param b (numpy array)
The right set to compare.
@param aKey (numpy array or None)
If specified, elements in "a" are compared by their corresponding entry in
"aKey".
@param bKey (numpy array or None)
If specified, elements in "b" are compared by their corresponding entry in
"bKey".
@param leftMinusRight
If True, also calculate the set difference (a - b)
@param rightMinusLeft
If True, also calculate the set difference (b - a)
@return (numpy array or tuple)
Always returns the intersection of "a" and "b". The elements of this
intersection are values from "a" (which may be different from the values of
"b" or "aKey").
If leftMinusRight or rightMinusLeft are True, it returns a tuple:
- intersection (numpy array)
See above
- leftMinusRight (numpy array)
The elements in a that are not in b
- rightMinusLeft (numpy array)
The elements in b that are not in a
"""
aKey = aKey if aKey is not None else a
bKey = bKey if bKey is not None else b
aWithinBMask = np.in1d(aKey, bKey)
if rightMinusLeft:
bWithinAMask = np.in1d(bKey, aKey)
if leftMinusRight:
return (a[aWithinBMask],
a[~aWithinBMask],
            b[~bWithinAMask])
else:
return (a[aWithinBMask],
b[~bWithinAMask])
elif leftMinusRight:
return (a[aWithinBMask],
a[~aWithinBMask])
else:
return a[aWithinBMask] | python | def setCompare(a, b,
aKey=None, bKey=None,
leftMinusRight=False, rightMinusLeft=False):
"""
Compute the intersection and differences between two arrays, comparing
elements by their key.
@param a (numpy array)
The left set to compare.
@param b (numpy array)
The right set to compare.
@param aKey (numpy array or None)
If specified, elements in "a" are compared by their corresponding entry in
"aKey".
@param bKey (numpy array or None)
If specified, elements in "b" are compared by their corresponding entry in
"bKey".
@param leftMinusRight
If True, also calculate the set difference (a - b)
@param rightMinusLeft
If True, also calculate the set difference (b - a)
@return (numpy array or tuple)
Always returns the intersection of "a" and "b". The elements of this
intersection are values from "a" (which may be different from the values of
"b" or "aKey").
If leftMinusRight or rightMinusLeft are True, it returns a tuple:
- intersection (numpy array)
See above
- leftMinusRight (numpy array)
The elements in a that are not in b
- rightMinusLeft (numpy array)
The elements in b that are not in a
"""
aKey = aKey if aKey is not None else a
bKey = bKey if bKey is not None else b
aWithinBMask = np.in1d(aKey, bKey)
if rightMinusLeft:
bWithinAMask = np.in1d(bKey, aKey)
if leftMinusRight:
return (a[aWithinBMask],
a[~aWithinBMask],
            b[~bWithinAMask])
else:
return (a[aWithinBMask],
b[~bWithinAMask])
elif leftMinusRight:
return (a[aWithinBMask],
a[~aWithinBMask])
else:
return a[aWithinBMask] | [
"def",
"setCompare",
"(",
"a",
",",
"b",
",",
"aKey",
"=",
"None",
",",
"bKey",
"=",
"None",
",",
"leftMinusRight",
"=",
"False",
",",
"rightMinusLeft",
"=",
"False",
")",
":",
"aKey",
"=",
"aKey",
"if",
"aKey",
"is",
"not",
"None",
"else",
"a",
"b... | Compute the intersection and differences between two arrays, comparing
elements by their key.
@param a (numpy array)
The left set to compare.
@param b (numpy array)
The right set to compare.
@param aKey (numpy array or None)
If specified, elements in "a" are compared by their corresponding entry in
"aKey".
@param bKey (numpy array or None)
If specified, elements in "b" are compared by their corresponding entry in
"bKey".
@param leftMinusRight
If True, also calculate the set difference (a - b)
@param rightMinusLeft
If True, also calculate the set difference (b - a)
@return (numpy array or tuple)
Always returns the intersection of "a" and "b". The elements of this
intersection are values from "a" (which may be different from the values of
"b" or "aKey").
If leftMinusRight or rightMinusLeft are True, it returns a tuple:
- intersection (numpy array)
See above
- leftMinusRight (numpy array)
The elements in a that are not in b
- rightMinusLeft (numpy array)
The elements in b that are not in a | [
"Compute",
"the",
"intersection",
"and",
"differences",
"between",
"two",
"arrays",
"comparing",
"elements",
"by",
"their",
"key",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/numpy_helpers.py#L29-L88 | train | 198,776 |
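The `setCompare` record above is plain numpy, so its documented behavior can be checked standalone. This is a minimal sketch reproducing the function (with the right-minus-left mask negated so that the return value matches the docstring's "elements in b that are not in a"):

```python
import numpy as np

def setCompare(a, b, aKey=None, bKey=None,
               leftMinusRight=False, rightMinusLeft=False):
    # Compare elements of a and b by key; see the docstring in the record above.
    aKey = aKey if aKey is not None else a
    bKey = bKey if bKey is not None else b
    aWithinBMask = np.in1d(aKey, bKey)
    if rightMinusLeft:
        bWithinAMask = np.in1d(bKey, aKey)
        if leftMinusRight:
            return (a[aWithinBMask], a[~aWithinBMask], b[~bWithinAMask])
        else:
            return (a[aWithinBMask], b[~bWithinAMask])
    elif leftMinusRight:
        return (a[aWithinBMask], a[~aWithinBMask])
    else:
        return a[aWithinBMask]

a = np.array([10, 20, 30])
b = np.array([20, 30, 40])
both, aOnly, bOnly = setCompare(a, b, leftMinusRight=True, rightMinusLeft=True)
# the intersection keeps values from a; the two differences follow the docstring
print(both.tolist(), aOnly.tolist(), bOnly.tolist())  # [20, 30] [10] [40]
```

Note that when only one of the optional flags is set, the tuple shrinks accordingly, and with neither flag the bare intersection array is returned.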
numenta/htmresearch | htmresearch/support/numpy_helpers.py | argmaxMulti | def argmaxMulti(a, groupKeys, assumeSorted=False):
"""
This is like numpy's argmax, but it returns multiple maximums.
It gets the indices of the max values of each group in 'a', grouping the
elements by their corresponding value in 'groupKeys'.
@param a (numpy array)
An array of values that will be compared
@param groupKeys (numpy array)
An array with the same length of 'a'. Each entry identifies the group for
each 'a' value.
@param assumeSorted (bool)
If true, group keys must be organized together (e.g. sorted).
@return (numpy array)
The indices of one maximum value per group
@example
_argmaxMulti([5, 4, 7, 2, 9, 8],
[0, 0, 0, 1, 1, 1])
returns
[2, 4]
"""
if not assumeSorted:
# Use a stable sort algorithm
sorter = np.argsort(groupKeys, kind="mergesort")
a = a[sorter]
groupKeys = groupKeys[sorter]
_, indices, lengths = np.unique(groupKeys, return_index=True,
return_counts=True)
maxValues = np.maximum.reduceat(a, indices)
allMaxIndices = np.flatnonzero(np.repeat(maxValues, lengths) == a)
  # Break ties by finding the insertion points of the group start indices
# and using the values currently at those points. This approach will choose
# the first occurrence of each max value.
indices = allMaxIndices[np.searchsorted(allMaxIndices, indices)]
if assumeSorted:
return indices
else:
return sorter[indices] | python | def argmaxMulti(a, groupKeys, assumeSorted=False):
"""
This is like numpy's argmax, but it returns multiple maximums.
It gets the indices of the max values of each group in 'a', grouping the
elements by their corresponding value in 'groupKeys'.
@param a (numpy array)
An array of values that will be compared
@param groupKeys (numpy array)
An array with the same length of 'a'. Each entry identifies the group for
each 'a' value.
@param assumeSorted (bool)
If true, group keys must be organized together (e.g. sorted).
@return (numpy array)
The indices of one maximum value per group
@example
_argmaxMulti([5, 4, 7, 2, 9, 8],
[0, 0, 0, 1, 1, 1])
returns
[2, 4]
"""
if not assumeSorted:
# Use a stable sort algorithm
sorter = np.argsort(groupKeys, kind="mergesort")
a = a[sorter]
groupKeys = groupKeys[sorter]
_, indices, lengths = np.unique(groupKeys, return_index=True,
return_counts=True)
maxValues = np.maximum.reduceat(a, indices)
allMaxIndices = np.flatnonzero(np.repeat(maxValues, lengths) == a)
  # Break ties by finding the insertion points of the group start indices
# and using the values currently at those points. This approach will choose
# the first occurrence of each max value.
indices = allMaxIndices[np.searchsorted(allMaxIndices, indices)]
if assumeSorted:
return indices
else:
return sorter[indices] | [
"def",
"argmaxMulti",
"(",
"a",
",",
"groupKeys",
",",
"assumeSorted",
"=",
"False",
")",
":",
"if",
"not",
"assumeSorted",
":",
"# Use a stable sort algorithm",
"sorter",
"=",
"np",
".",
"argsort",
"(",
"groupKeys",
",",
"kind",
"=",
"\"mergesort\"",
")",
"... | This is like numpy's argmax, but it returns multiple maximums.
It gets the indices of the max values of each group in 'a', grouping the
elements by their corresponding value in 'groupKeys'.
@param a (numpy array)
An array of values that will be compared
@param groupKeys (numpy array)
An array with the same length of 'a'. Each entry identifies the group for
each 'a' value.
@param assumeSorted (bool)
If true, group keys must be organized together (e.g. sorted).
@return (numpy array)
The indices of one maximum value per group
@example
_argmaxMulti([5, 4, 7, 2, 9, 8],
[0, 0, 0, 1, 1, 1])
returns
[2, 4] | [
"This",
"is",
"like",
"numpy",
"s",
"argmax",
"but",
"it",
"returns",
"multiple",
"maximums",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/numpy_helpers.py#L91-L138 | train | 198,777 |
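The `argmaxMulti` record above is also self-contained numpy; a quick sketch exercising the docstring's own example, plus a case with unsorted group keys:

```python
import numpy as np

def argmaxMulti(a, groupKeys, assumeSorted=False):
    # One argmax per group, ties resolved to the first occurrence.
    if not assumeSorted:
        sorter = np.argsort(groupKeys, kind="mergesort")  # stable sort
        a = a[sorter]
        groupKeys = groupKeys[sorter]
    _, indices, lengths = np.unique(groupKeys, return_index=True,
                                    return_counts=True)
    maxValues = np.maximum.reduceat(a, indices)
    allMaxIndices = np.flatnonzero(np.repeat(maxValues, lengths) == a)
    indices = allMaxIndices[np.searchsorted(allMaxIndices, indices)]
    return indices if assumeSorted else sorter[indices]

# the example from the docstring
r1 = argmaxMulti(np.array([5, 4, 7, 2, 9, 8]),
                 np.array([0, 0, 0, 1, 1, 1]))
print(r1.tolist())  # [2, 4]

# group keys need not arrive sorted
r2 = argmaxMulti(np.array([2, 5, 9]), np.array([1, 0, 1]))
print(r2.tolist())  # [1, 2]
```

The stable mergesort is what makes the "first occurrence" tie-breaking deterministic even when the keys arrive unsorted.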
numenta/htmresearch | htmresearch/support/numpy_helpers.py | getAllCellsInColumns | def getAllCellsInColumns(columns, cellsPerColumn):
"""
Calculate all cell indices in the specified columns.
@param columns (numpy array)
@param cellsPerColumn (int)
@return (numpy array)
All cells within the specified columns. The cells are in the same order as the
provided columns, so they're sorted if the columns are sorted.
"""
# Add
# [[beginningOfColumn0],
# [beginningOfColumn1],
# ...]
# to
# [0, 1, 2, ..., cellsPerColumn - 1]
# to get
# [beginningOfColumn0 + 0, beginningOfColumn0 + 1, ...
# beginningOfColumn1 + 0, ...
# ...]
# then flatten it.
return ((columns * cellsPerColumn).reshape((-1, 1)) +
np.arange(cellsPerColumn, dtype="uint32")).flatten() | python | def getAllCellsInColumns(columns, cellsPerColumn):
"""
Calculate all cell indices in the specified columns.
@param columns (numpy array)
@param cellsPerColumn (int)
@return (numpy array)
All cells within the specified columns. The cells are in the same order as the
provided columns, so they're sorted if the columns are sorted.
"""
# Add
# [[beginningOfColumn0],
# [beginningOfColumn1],
# ...]
# to
# [0, 1, 2, ..., cellsPerColumn - 1]
# to get
# [beginningOfColumn0 + 0, beginningOfColumn0 + 1, ...
# beginningOfColumn1 + 0, ...
# ...]
# then flatten it.
return ((columns * cellsPerColumn).reshape((-1, 1)) +
np.arange(cellsPerColumn, dtype="uint32")).flatten() | [
"def",
"getAllCellsInColumns",
"(",
"columns",
",",
"cellsPerColumn",
")",
":",
"# Add",
"# [[beginningOfColumn0],",
"# [beginningOfColumn1],",
"# ...]",
"# to",
"# [0, 1, 2, ..., cellsPerColumn - 1]",
"# to get",
"# [beginningOfColumn0 + 0, beginningOfColumn0 + 1, ...",
... | Calculate all cell indices in the specified columns.
@param columns (numpy array)
@param cellsPerColumn (int)
@return (numpy array)
All cells within the specified columns. The cells are in the same order as the
provided columns, so they're sorted if the columns are sorted. | [
"Calculate",
"all",
"cell",
"indices",
"in",
"the",
"specified",
"columns",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/numpy_helpers.py#L141-L165 | train | 198,778 |
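The broadcasting trick described in `getAllCellsInColumns`'s comments can be verified in a couple of lines:

```python
import numpy as np

def getAllCellsInColumns(columns, cellsPerColumn):
    # Expand each column index into its contiguous block of cell indices.
    return ((columns * cellsPerColumn).reshape((-1, 1)) +
            np.arange(cellsPerColumn, dtype="uint32")).flatten()

# columns 1 and 3, with 4 cells per column
cells = getAllCellsInColumns(np.array([1, 3]), 4)
print(cells.tolist())  # [4, 5, 6, 7, 12, 13, 14, 15]
```

Because the column starts are expanded row-wise before flattening, the output preserves the order of the input columns, as the docstring states.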
numenta/htmresearch | projects/sequence_learning/sequence_simulations.py | letterSequence | def letterSequence(letters, w=40):
"""
Return a list of input vectors corresponding to sequence of letters.
The vector for each letter has w contiguous bits ON and represented as a
sequence of non-zero indices.
"""
sequence = []
for letter in letters:
i = ord(letter) - ord('A')
sequence.append(set(range(i*w,(i+1)*w)))
return sequence | python | def letterSequence(letters, w=40):
"""
Return a list of input vectors corresponding to sequence of letters.
The vector for each letter has w contiguous bits ON and represented as a
sequence of non-zero indices.
"""
sequence = []
for letter in letters:
i = ord(letter) - ord('A')
sequence.append(set(range(i*w,(i+1)*w)))
return sequence | [
"def",
"letterSequence",
"(",
"letters",
",",
"w",
"=",
"40",
")",
":",
"sequence",
"=",
"[",
"]",
"for",
"letter",
"in",
"letters",
":",
"i",
"=",
"ord",
"(",
"letter",
")",
"-",
"ord",
"(",
"'A'",
")",
"sequence",
".",
"append",
"(",
"set",
"("... | Return a list of input vectors corresponding to sequence of letters.
The vector for each letter has w contiguous bits ON and represented as a
sequence of non-zero indices. | [
"Return",
"a",
"list",
"of",
"input",
"vectors",
"corresponding",
"to",
"sequence",
"of",
"letters",
".",
"The",
"vector",
"for",
"each",
"letter",
"has",
"w",
"contiguous",
"bits",
"ON",
"and",
"represented",
"as",
"a",
"sequence",
"of",
"non",
"-",
"zero... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_learning/sequence_simulations.py#L41-L51 | train | 198,779 |
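The `letterSequence` record maps each letter to `w` contiguous ON bits; a tiny standalone check with a small `w`:

```python
def letterSequence(letters, w=40):
    # Each letter maps to w contiguous ON bit indices, returned as a set.
    sequence = []
    for letter in letters:
        i = ord(letter) - ord('A')
        sequence.append(set(range(i * w, (i + 1) * w)))
    return sequence

seq = letterSequence("AB", w=3)
print(seq)  # [{0, 1, 2}, {3, 4, 5}]
```

With the default `w=40`, letter 'A' occupies bits 0-39, 'B' bits 40-79, and so on, which is what the higher-order sequence generators below rely on.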
numenta/htmresearch | projects/sequence_learning/sequence_simulations.py | getHighOrderSequenceChunk | def getHighOrderSequenceChunk(it, switchover=1000, w=40, n=2048):
"""
Given an iteration index, returns a list of vectors to be appended to the
input stream, as well as a string label identifying the sequence. This
version generates a bunch of high order sequences. The first element always
provides sufficient context to predict the rest of the elements.
After switchover iterations, it will generate a different set of sequences.
"""
if it%10==3:
s = numpy.random.randint(5)
if it <= switchover:
if s==0:
label="XABCDE"
elif s==1:
label="YCBEAF"
elif s==2:
label="GHIJKL"
elif s==3:
label="WABCMN"
else:
label="ZDBCAE"
else:
if s==0:
label="XCBEAF"
elif s==1:
label="YABCDE"
elif s==2:
label="GABCMN"
elif s==3:
label="WHIJKL"
else:
label="ZDHICF"
vecs = letterSequence(label)
else:
vecs= [getRandomVector(w, n)]
label="."
return vecs,label | python | def getHighOrderSequenceChunk(it, switchover=1000, w=40, n=2048):
"""
Given an iteration index, returns a list of vectors to be appended to the
input stream, as well as a string label identifying the sequence. This
version generates a bunch of high order sequences. The first element always
provides sufficient context to predict the rest of the elements.
After switchover iterations, it will generate a different set of sequences.
"""
if it%10==3:
s = numpy.random.randint(5)
if it <= switchover:
if s==0:
label="XABCDE"
elif s==1:
label="YCBEAF"
elif s==2:
label="GHIJKL"
elif s==3:
label="WABCMN"
else:
label="ZDBCAE"
else:
if s==0:
label="XCBEAF"
elif s==1:
label="YABCDE"
elif s==2:
label="GABCMN"
elif s==3:
label="WHIJKL"
else:
label="ZDHICF"
vecs = letterSequence(label)
else:
vecs= [getRandomVector(w, n)]
label="."
return vecs,label | [
"def",
"getHighOrderSequenceChunk",
"(",
"it",
",",
"switchover",
"=",
"1000",
",",
"w",
"=",
"40",
",",
"n",
"=",
"2048",
")",
":",
"if",
"it",
"%",
"10",
"==",
"3",
":",
"s",
"=",
"numpy",
".",
"random",
".",
"randint",
"(",
"5",
")",
"if",
"... | Given an iteration index, returns a list of vectors to be appended to the
input stream, as well as a string label identifying the sequence. This
version generates a bunch of high order sequences. The first element always
provides sufficient context to predict the rest of the elements.
After switchover iterations, it will generate a different set of sequences. | [
"Given",
"an",
"iteration",
"index",
"returns",
"a",
"list",
"of",
"vectors",
"to",
"be",
"appended",
"to",
"the",
"input",
"stream",
"as",
"well",
"as",
"a",
"string",
"label",
"identifying",
"the",
"sequence",
".",
"This",
"version",
"generates",
"a",
"b... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_learning/sequence_simulations.py#L59-L98 | train | 198,780 |
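The chunk generator above can be sketched in a condensed but behavior-equivalent form. Note that `getRandomVector` below is a hypothetical stand-in for the module's helper of the same name (assumed here to return `w` random ON bits out of `n`), so only the shape of the outputs is asserted, not their exact contents:

```python
import numpy

def letterSequence(letters, w=40):
    return [set(range((ord(c) - ord('A')) * w, (ord(c) - ord('A') + 1) * w))
            for c in letters]

def getRandomVector(w=40, n=2048):
    # hypothetical stand-in: w random ON bits out of n
    return set(numpy.random.permutation(n)[:w].tolist())

def getHighOrderSequenceChunk(it, switchover=1000, w=40, n=2048):
    # Every iteration with it % 10 == 3 emits one of five high-order
    # sequences; all other iterations emit a single random vector.
    if it % 10 == 3:
        before = ["XABCDE", "YCBEAF", "GHIJKL", "WABCMN", "ZDBCAE"]
        after = ["XCBEAF", "YABCDE", "GABCMN", "WHIJKL", "ZDHICF"]
        s = numpy.random.randint(5)
        label = before[s] if it <= switchover else after[s]
        vecs = letterSequence(label)
    else:
        vecs = [getRandomVector(w, n)]
        label = "."
    return vecs, label

vecs, label = getHighOrderSequenceChunk(3)   # a 6-letter labeled sequence
vecs2, label2 = getHighOrderSequenceChunk(4)  # a single random vector, label "."
print(label, len(vecs), label2, len(vecs2))
```

The two label tables make the post-switchover swap explicit: each first letter keeps its identity while the continuation it predicts changes, which is exactly what forces the temporal memory to relearn high-order context.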
numenta/htmresearch | projects/sequence_learning/sequence_simulations.py | addNoise | def addNoise(vecs, percent=0.1, n=2048):
"""
Add noise to the given sequence of vectors and return the modified sequence.
A percentage of the on bits are shuffled to other locations.
"""
noisyVecs = []
for vec in vecs:
nv = vec.copy()
for idx in vec:
if numpy.random.random() <= percent:
nv.discard(idx)
nv.add(numpy.random.randint(n))
noisyVecs.append(nv)
return noisyVecs | python | def addNoise(vecs, percent=0.1, n=2048):
"""
Add noise to the given sequence of vectors and return the modified sequence.
A percentage of the on bits are shuffled to other locations.
"""
noisyVecs = []
for vec in vecs:
nv = vec.copy()
for idx in vec:
if numpy.random.random() <= percent:
nv.discard(idx)
nv.add(numpy.random.randint(n))
noisyVecs.append(nv)
return noisyVecs | [
"def",
"addNoise",
"(",
"vecs",
",",
"percent",
"=",
"0.1",
",",
"n",
"=",
"2048",
")",
":",
"noisyVecs",
"=",
"[",
"]",
"for",
"vec",
"in",
"vecs",
":",
"nv",
"=",
"vec",
".",
"copy",
"(",
")",
"for",
"idx",
"in",
"vec",
":",
"if",
"numpy",
... | Add noise to the given sequence of vectors and return the modified sequence.
A percentage of the on bits are shuffled to other locations. | [
"Add",
"noise",
"to",
"the",
"given",
"sequence",
"of",
"vectors",
"and",
"return",
"the",
"modified",
"sequence",
".",
"A",
"percentage",
"of",
"the",
"on",
"bits",
"are",
"shuffled",
"to",
"other",
"locations",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_learning/sequence_simulations.py#L101-L115 | train | 198,781 |
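Because `addNoise` draws from `numpy.random`, its exact output is not reproducible without seeding, but its invariants are easy to check: the output list has one set per input vector, each set can only shrink (moved bits may collide with existing ones), and every index stays inside `[0, n)`. A sketch (seeded only for reproducibility of this demo):

```python
import numpy
numpy.random.seed(42)

def addNoise(vecs, percent=0.1, n=2048):
    # Move roughly `percent` of each vector's ON bits to random positions.
    noisyVecs = []
    for vec in vecs:
        nv = vec.copy()
        for idx in vec:
            if numpy.random.random() <= percent:
                nv.discard(idx)
                nv.add(numpy.random.randint(n))
        noisyVecs.append(nv)
    return noisyVecs

clean = [set(range(0, 5)), set(range(10, 15))]
noisy = addNoise(clean, percent=1.0, n=50)
# with percent=1.0 every original bit is moved, so set sizes can only
# shrink (on collisions) and all indices lie in [0, 50)
print([len(v) for v in noisy], all(i < 50 for v in noisy for i in v))
```

With the default `percent=0.1`, on average one in ten bits is relocated, which is the noise model the sequence experiments in this file use during inference.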
numenta/htmresearch | projects/sequence_learning/sequence_simulations.py | killCells | def killCells(i, options, tm):
"""
Kill cells as appropriate
"""
# Kill cells if called for
if options.simulation == "killer":
if i == options.switchover:
print "i=",i,"Killing cells for the first time!"
tm.killCells(percent = options.noise)
if i == options.secondKill:
print "i=",i,"Killing cells again up to",options.secondNoise
tm.killCells(percent = options.secondNoise)
elif options.simulation == "killingMeSoftly" and (i%100 == 0):
steps = (options.secondKill - options.switchover)/100
nsteps = (options.secondNoise - options.noise)/steps
noise = options.noise + nsteps*(i-options.switchover)/100
if i in xrange(options.switchover, options.secondKill+1):
print "i=",i,"Killing cells!"
tm.killCells(percent = noise) | python | def killCells(i, options, tm):
"""
Kill cells as appropriate
"""
# Kill cells if called for
if options.simulation == "killer":
if i == options.switchover:
print "i=",i,"Killing cells for the first time!"
tm.killCells(percent = options.noise)
if i == options.secondKill:
print "i=",i,"Killing cells again up to",options.secondNoise
tm.killCells(percent = options.secondNoise)
elif options.simulation == "killingMeSoftly" and (i%100 == 0):
steps = (options.secondKill - options.switchover)/100
nsteps = (options.secondNoise - options.noise)/steps
noise = options.noise + nsteps*(i-options.switchover)/100
if i in xrange(options.switchover, options.secondKill+1):
print "i=",i,"Killing cells!"
tm.killCells(percent = noise) | [
"def",
"killCells",
"(",
"i",
",",
"options",
",",
"tm",
")",
":",
"# Kill cells if called for",
"if",
"options",
".",
"simulation",
"==",
"\"killer\"",
":",
"if",
"i",
"==",
"options",
".",
"switchover",
":",
"print",
"\"i=\"",
",",
"i",
",",
"\"Killing c... | Kill cells as appropriate | [
"Kill",
"cells",
"as",
"appropriate"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_learning/sequence_simulations.py#L176-L198 | train | 198,782 |
numenta/htmresearch | projects/sequence_learning/sequence_simulations.py | printTemporalMemory | def printTemporalMemory(tm, outFile):
"""
Given an instance of TemporalMemory, print out the relevant parameters
"""
table = PrettyTable(["Parameter name", "Value", ])
table.add_row(["columnDimensions", tm.getColumnDimensions()])
table.add_row(["cellsPerColumn", tm.getCellsPerColumn()])
table.add_row(["activationThreshold", tm.getActivationThreshold()])
table.add_row(["minThreshold", tm.getMinThreshold()])
table.add_row(["maxNewSynapseCount", tm.getMaxNewSynapseCount()])
table.add_row(["permanenceIncrement", tm.getPermanenceIncrement()])
table.add_row(["permanenceDecrement", tm.getPermanenceDecrement()])
table.add_row(["initialPermanence", tm.getInitialPermanence()])
table.add_row(["connectedPermanence", tm.getConnectedPermanence()])
table.add_row(["predictedSegmentDecrement", tm.getPredictedSegmentDecrement()])
print >>outFile, table.get_string().encode("utf-8") | python | def printTemporalMemory(tm, outFile):
"""
Given an instance of TemporalMemory, print out the relevant parameters
"""
table = PrettyTable(["Parameter name", "Value", ])
table.add_row(["columnDimensions", tm.getColumnDimensions()])
table.add_row(["cellsPerColumn", tm.getCellsPerColumn()])
table.add_row(["activationThreshold", tm.getActivationThreshold()])
table.add_row(["minThreshold", tm.getMinThreshold()])
table.add_row(["maxNewSynapseCount", tm.getMaxNewSynapseCount()])
table.add_row(["permanenceIncrement", tm.getPermanenceIncrement()])
table.add_row(["permanenceDecrement", tm.getPermanenceDecrement()])
table.add_row(["initialPermanence", tm.getInitialPermanence()])
table.add_row(["connectedPermanence", tm.getConnectedPermanence()])
table.add_row(["predictedSegmentDecrement", tm.getPredictedSegmentDecrement()])
print >>outFile, table.get_string().encode("utf-8") | [
"def",
"printTemporalMemory",
"(",
"tm",
",",
"outFile",
")",
":",
"table",
"=",
"PrettyTable",
"(",
"[",
"\"Parameter name\"",
",",
"\"Value\"",
",",
"]",
")",
"table",
".",
"add_row",
"(",
"[",
"\"columnDimensions\"",
",",
"tm",
".",
"getColumnDimensions",
... | Given an instance of TemporalMemory, print out the relevant parameters | [
"Given",
"an",
"instance",
"of",
"TemporalMemory",
"print",
"out",
"the",
"relevant",
"parameters"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_learning/sequence_simulations.py#L306-L323 | train | 198,783 |
numenta/htmresearch | projects/sequence_learning/sequence_simulations.py | printOptions | def printOptions(options, tm, outFile):
"""
Pretty print the set of options
"""
print >>outFile, "TM parameters:"
printTemporalMemory(tm, outFile)
print >>outFile, "Experiment parameters:"
for k,v in options.__dict__.iteritems():
print >>outFile, " %s : %s" % (k,str(v))
outFile.flush() | python | def printOptions(options, tm, outFile):
"""
Pretty print the set of options
"""
print >>outFile, "TM parameters:"
printTemporalMemory(tm, outFile)
print >>outFile, "Experiment parameters:"
for k,v in options.__dict__.iteritems():
print >>outFile, " %s : %s" % (k,str(v))
outFile.flush() | [
"def",
"printOptions",
"(",
"options",
",",
"tm",
",",
"outFile",
")",
":",
"print",
">>",
"outFile",
",",
"\"TM parameters:\"",
"printTemporalMemory",
"(",
"tm",
",",
"outFile",
")",
"print",
">>",
"outFile",
",",
"\"Experiment parameters:\"",
"for",
"k",
","... | Pretty print the set of options | [
"Pretty",
"print",
"the",
"set",
"of",
"options"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_learning/sequence_simulations.py#L326-L335 | train | 198,784 |
numenta/htmresearch | projects/l2_pooling/continuous_location.py | runBasic | def runBasic(noiseLevel=None, profile=False):
"""
Runs a basic experiment on continuous locations, learning a few locations on
four basic objects, and inferring one of them.
This experiment is mostly used for testing the pipeline, as the learned
locations are too random and sparse to actually perform inference.
Parameters:
----------------------------
@param noiseLevel (float)
Noise level to add to the locations and features during inference
@param profile (bool)
If True, the network will be profiled after learning and inference
"""
exp = L4L2Experiment(
"basic_continuous",
numCorticalColumns=2
)
objects = createObjectMachine(
machineType="continuous",
numInputBits=21,
sensorInputSize=1024,
externalInputSize=1024,
numCorticalColumns=2,
)
objects.addObject(Sphere(radius=20), name="sphere")
objects.addObject(Cylinder(height=50, radius=20), name="cylinder")
objects.addObject(Box(dimensions=[10, 20, 30,]), name="box")
objects.addObject(Cube(width=20), name="cube")
learnConfig = {
"sphere": [("surface", 10)],
# the two learning config below will be exactly the same
"box": [("face", 5), ("edge", 5), ("vertex", 5)],
"cube": [(feature, 5) for feature in objects["cube"].getFeatures()],
"cylinder": [(feature, 5) for feature in objects["cylinder"].getFeatures()]
}
exp.learnObjects(
objects.provideObjectsToLearn(learnConfig, plot=True),
reset=True
)
if profile:
exp.printProfile()
inferConfig = {
"numSteps": 4,
"noiseLevel": noiseLevel,
"objectName": "cube",
"pairs": {
0: ["face", "face", "edge", "edge"],
1: ["edge", "face", "face", "edge"]
}
}
exp.infer(
objects.provideObjectToInfer(inferConfig, plot=True),
objectName="cube",
reset=True
)
if profile:
exp.printProfile()
exp.plotInferenceStats(
fields=["L2 Representation",
"Overlap L2 with object",
"L4 Representation"],
) | python | def runBasic(noiseLevel=None, profile=False):
"""
Runs a basic experiment on continuous locations, learning a few locations on
four basic objects, and inferring one of them.
This experiment is mostly used for testing the pipeline, as the learned
locations are too random and sparse to actually perform inference.
Parameters:
----------------------------
@param noiseLevel (float)
Noise level to add to the locations and features during inference
@param profile (bool)
If True, the network will be profiled after learning and inference
"""
exp = L4L2Experiment(
"basic_continuous",
numCorticalColumns=2
)
objects = createObjectMachine(
machineType="continuous",
numInputBits=21,
sensorInputSize=1024,
externalInputSize=1024,
numCorticalColumns=2,
)
objects.addObject(Sphere(radius=20), name="sphere")
objects.addObject(Cylinder(height=50, radius=20), name="cylinder")
objects.addObject(Box(dimensions=[10, 20, 30,]), name="box")
objects.addObject(Cube(width=20), name="cube")
learnConfig = {
"sphere": [("surface", 10)],
# the two learning config below will be exactly the same
"box": [("face", 5), ("edge", 5), ("vertex", 5)],
"cube": [(feature, 5) for feature in objects["cube"].getFeatures()],
"cylinder": [(feature, 5) for feature in objects["cylinder"].getFeatures()]
}
exp.learnObjects(
objects.provideObjectsToLearn(learnConfig, plot=True),
reset=True
)
if profile:
exp.printProfile()
inferConfig = {
"numSteps": 4,
"noiseLevel": noiseLevel,
"objectName": "cube",
"pairs": {
0: ["face", "face", "edge", "edge"],
1: ["edge", "face", "face", "edge"]
}
}
exp.infer(
objects.provideObjectToInfer(inferConfig, plot=True),
objectName="cube",
reset=True
)
if profile:
exp.printProfile()
exp.plotInferenceStats(
fields=["L2 Representation",
"Overlap L2 with object",
"L4 Representation"],
) | [
"def",
"runBasic",
"(",
"noiseLevel",
"=",
"None",
",",
"profile",
"=",
"False",
")",
":",
"exp",
"=",
"L4L2Experiment",
"(",
"\"basic_continuous\"",
",",
"numCorticalColumns",
"=",
"2",
")",
"objects",
"=",
"createObjectMachine",
"(",
"machineType",
"=",
"\"c... | Runs a basic experiment on continuous locations, learning a few locations on
four basic objects, and inferring one of them.
This experiment is mostly used for testing the pipeline, as the learned
locations are too random and sparse to actually perform inference.
Parameters:
----------------------------
@param noiseLevel (float)
Noise level to add to the locations and features during inference
@param profile (bool)
If True, the network will be profiled after learning and inference | [
"Runs",
"a",
"basic",
"experiment",
"on",
"continuous",
"locations",
"learning",
"a",
"few",
"locations",
"on",
"four",
"basic",
"objects",
"and",
"inferring",
"one",
"of",
"them",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/l2_pooling/continuous_location.py#L35-L107 | train | 198,785 |
numenta/htmresearch | htmresearch/support/sp_paper_utils.py | plotBoostTrace | def plotBoostTrace(sp, inputVectors, columnIndex):
"""
Plot boostfactor for a selected column
Note that learning is ON for SP here
:param sp: sp instance
:param inputVectors: input data
:param columnIndex: index for the column of interest
"""
numInputVector, inputSize = inputVectors.shape
columnNumber = np.prod(sp.getColumnDimensions())
boostFactorsTrace = np.zeros((columnNumber, numInputVector))
activeDutyCycleTrace = np.zeros((columnNumber, numInputVector))
minActiveDutyCycleTrace = np.zeros((columnNumber, numInputVector))
for i in range(numInputVector):
outputColumns = np.zeros(sp.getColumnDimensions(), dtype=uintType)
inputVector = copy.deepcopy(inputVectors[i][:])
sp.compute(inputVector, True, outputColumns)
boostFactors = np.zeros((columnNumber, ), dtype=realDType)
sp.getBoostFactors(boostFactors)
boostFactorsTrace[:, i] = boostFactors
activeDutyCycle = np.zeros((columnNumber, ), dtype=realDType)
sp.getActiveDutyCycles(activeDutyCycle)
activeDutyCycleTrace[:, i] = activeDutyCycle
minActiveDutyCycle = np.zeros((columnNumber, ), dtype=realDType)
sp.getMinActiveDutyCycles(minActiveDutyCycle)
minActiveDutyCycleTrace[:, i] = minActiveDutyCycle
plt.figure()
plt.subplot(2, 1, 1)
plt.plot(boostFactorsTrace[columnIndex, :])
plt.ylabel('Boost Factor')
plt.subplot(2, 1, 2)
plt.plot(activeDutyCycleTrace[columnIndex, :])
plt.plot(minActiveDutyCycleTrace[columnIndex, :])
plt.xlabel(' Time ')
plt.ylabel('Active Duty Cycle') | python | def plotBoostTrace(sp, inputVectors, columnIndex):
"""
Plot boostfactor for a selected column
Note that learning is ON for SP here
:param sp: sp instance
:param inputVectors: input data
:param columnIndex: index for the column of interest
"""
numInputVector, inputSize = inputVectors.shape
columnNumber = np.prod(sp.getColumnDimensions())
boostFactorsTrace = np.zeros((columnNumber, numInputVector))
activeDutyCycleTrace = np.zeros((columnNumber, numInputVector))
minActiveDutyCycleTrace = np.zeros((columnNumber, numInputVector))
for i in range(numInputVector):
outputColumns = np.zeros(sp.getColumnDimensions(), dtype=uintType)
inputVector = copy.deepcopy(inputVectors[i][:])
sp.compute(inputVector, True, outputColumns)
boostFactors = np.zeros((columnNumber, ), dtype=realDType)
sp.getBoostFactors(boostFactors)
boostFactorsTrace[:, i] = boostFactors
activeDutyCycle = np.zeros((columnNumber, ), dtype=realDType)
sp.getActiveDutyCycles(activeDutyCycle)
activeDutyCycleTrace[:, i] = activeDutyCycle
minActiveDutyCycle = np.zeros((columnNumber, ), dtype=realDType)
sp.getMinActiveDutyCycles(minActiveDutyCycle)
minActiveDutyCycleTrace[:, i] = minActiveDutyCycle
plt.figure()
plt.subplot(2, 1, 1)
plt.plot(boostFactorsTrace[columnIndex, :])
plt.ylabel('Boost Factor')
plt.subplot(2, 1, 2)
plt.plot(activeDutyCycleTrace[columnIndex, :])
plt.plot(minActiveDutyCycleTrace[columnIndex, :])
plt.xlabel(' Time ')
plt.ylabel('Active Duty Cycle') | [
"def",
"plotBoostTrace",
"(",
"sp",
",",
"inputVectors",
",",
"columnIndex",
")",
":",
"numInputVector",
",",
"inputSize",
"=",
"inputVectors",
".",
"shape",
"columnNumber",
"=",
"np",
".",
"prod",
"(",
"sp",
".",
"getColumnDimensions",
"(",
")",
")",
"boost... | Plot boostfactor for a selected column
Note that learning is ON for SP here
:param sp: sp instance
:param inputVectors: input data
:param columnIndex: index for the column of interest | [
"Plot",
"boostfactor",
"for",
"a",
"selected",
"column"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/support/sp_paper_utils.py#L166-L206 | train | 198,786 |
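The `plotBoostTrace` row above collects one column-vector of boost factors per compute step into a `(numColumns, numInputVectors)` trace matrix before plotting slices of it. A minimal sketch of that bookkeeping, with a hypothetical stand-in for the spatial pooler's boost query (the decay model is illustrative, not nupic's):

```python
import numpy as np

def record_traces(get_boost, num_columns, num_steps):
    # Collect one column-vector of boost factors per time step into a
    # (num_columns, num_steps) trace matrix, as plotBoostTrace does.
    trace = np.zeros((num_columns, num_steps))
    for t in range(num_steps):
        trace[:, t] = get_boost(t)
    return trace

# Toy boost model (hypothetical): boost decays toward 1.0 over time.
trace = record_traces(lambda t: 1.0 + 0.5 ** t * np.ones(4),
                      num_columns=4, num_steps=5)
```

A single column's trajectory is then just `trace[columnIndex, :]`, which is exactly what the function hands to `plt.plot`.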
numenta/htmresearch | htmresearch/frameworks/pytorch/speech_commands_dataset.py | PreprocessedSpeechDataset.next_epoch | def next_epoch(self):
"""
Load next epoch from disk
"""
epoch = next(self._all_epochs)
folder = os.path.join(self._root, str(epoch), self._subset)
self.data = []
silence = None
gc.disable()
for filename in os.listdir(folder):
command = os.path.splitext(os.path.basename(filename))[0]
with open(os.path.join(folder, filename), "r") as pkl_file:
audio = pickle.load(pkl_file)
# Check for 'silence'
if command == "silence":
silence = audio
else:
target = self.classes.index(os.path.basename(command))
self.data.extend(itertools.product(audio, [target]))
gc.enable()
target = self.classes.index("silence")
self.data += [(silence, target)] * int(len(self.data) * self._silence_percentage)
return epoch | python | def next_epoch(self):
"""
Load next epoch from disk
"""
epoch = next(self._all_epochs)
folder = os.path.join(self._root, str(epoch), self._subset)
self.data = []
silence = None
gc.disable()
for filename in os.listdir(folder):
command = os.path.splitext(os.path.basename(filename))[0]
with open(os.path.join(folder, filename), "r") as pkl_file:
audio = pickle.load(pkl_file)
# Check for 'silence'
if command == "silence":
silence = audio
else:
target = self.classes.index(os.path.basename(command))
self.data.extend(itertools.product(audio, [target]))
gc.enable()
target = self.classes.index("silence")
self.data += [(silence, target)] * int(len(self.data) * self._silence_percentage)
return epoch | [
"def",
"next_epoch",
"(",
"self",
")",
":",
"epoch",
"=",
"next",
"(",
"self",
".",
"_all_epochs",
")",
"folder",
"=",
"os",
".",
"path",
".",
"join",
"(",
"self",
".",
"_root",
",",
"str",
"(",
"epoch",
")",
",",
"self",
".",
"_subset",
")",
"se... | Load next epoch from disk | [
"Load",
"next",
"epoch",
"from",
"disk"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/speech_commands_dataset.py#L199-L226 | train | 198,787 |
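Two details of `next_epoch` are worth isolating: epochs are consumed from an iterator over folder indices, and the "silence" class is padded in proportion to the loaded data. A sketch of both, with `pad_with_silence` as a hypothetical helper mirroring the final padding step:

```python
import itertools

def pad_with_silence(data, silence_audio, silence_target, silence_percentage):
    # The padding step at the end of next_epoch: append (silence, target)
    # pairs in proportion to the number of loaded samples.
    return data + [(silence_audio, silence_target)] * int(
        len(data) * silence_percentage)

# Epochs live in numbered folders; cycling revisits them in order 0, 1, 2, 0, ...
epoch_order = itertools.cycle(range(3))
```

With a 10% silence ratio, 20 loaded samples gain 2 silence entries, so the class stays a small fixed fraction of each epoch.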
numenta/htmresearch | htmresearch/frameworks/pytorch/speech_commands_dataset.py | PreprocessedSpeechDataset.isValid | def isValid(folder, epoch=0):
"""
Check if the given folder is a valid preprocessed dataset
"""
# Validate by checking for the training 'silence.pkl' on the given epoch
# This file is unique to our pre-processed dataset generated by 'process_dataset.py'
return os.path.exists(os.path.join(folder, str(epoch), "train", "silence.pkl")) | python | def isValid(folder, epoch=0):
"""
Check if the given folder is a valid preprocessed dataset
"""
# Validate by checking for the training 'silence.pkl' on the given epoch
# This file is unique to our pre-processed dataset generated by 'process_dataset.py'
return os.path.exists(os.path.join(folder, str(epoch), "train", "silence.pkl")) | [
"def",
"isValid",
"(",
"folder",
",",
"epoch",
"=",
"0",
")",
":",
"# Validate by checking for the training 'silence.pkl' on the given epoch",
"# This file is unique to our pre-processed dataset generated by 'process_dataset.py'",
"return",
"os",
".",
"path",
".",
"exists",
"(",
... | Check if the given folder is a valid preprocessed dataset | [
"Check",
"if",
"the",
"given",
"folder",
"is",
"a",
"valid",
"preprocessed",
"dataset"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/frameworks/pytorch/speech_commands_dataset.py#L246-L252 | train | 198,788 |
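`isValid` relies on a sentinel file that only the project's preprocessor writes. The pattern is easy to exercise against a throwaway directory tree:

```python
import os
import tempfile

def is_valid(folder, epoch=0):
    # Mirror of isValid above: look for the marker file that the
    # preprocessing script is assumed to write for each epoch.
    return os.path.exists(os.path.join(folder, str(epoch), "train",
                                       "silence.pkl"))

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "0", "train"))
open(os.path.join(root, "0", "train", "silence.pkl"), "w").close()
```

Epoch 0 now validates, while any epoch whose folder (or marker file) is missing does not.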
numenta/htmresearch | htmresearch/algorithms/faulty_temporal_memory.py | FaultyTemporalMemory.burstColumn | def burstColumn(self, column, columnMatchingSegments, prevActiveCells,
prevWinnerCells, learn):
"""
Activates all of the cells in an unpredicted active column, chooses a winner
cell, and, if learning is turned on, learns on one segment, growing a new
segment if necessary.
@param column (int)
Index of bursting column.
@param columnMatchingSegments (iter)
Matching segments in this column, or None if there aren't any.
@param prevActiveCells (list)
Active cells in `t-1`.
@param prevWinnerCells (list)
Winner cells in `t-1`.
@param learn (bool)
Whether or not learning is enabled.
@return (tuple) Contains:
`cells` (iter),
`winnerCell` (int),
"""
start = self.cellsPerColumn * column
# Strip out destroyed cells before passing along to base _burstColumn()
cellsForColumn = [cellIdx
for cellIdx
in xrange(start, start + self.cellsPerColumn)
if cellIdx not in self.deadCells]
return self._burstColumn(
self.connections, self._random, self.lastUsedIterationForSegment, column,
columnMatchingSegments, prevActiveCells, prevWinnerCells, cellsForColumn,
self.numActivePotentialSynapsesForSegment, self.iteration,
self.maxNewSynapseCount, self.initialPermanence, self.permanenceIncrement,
self.permanenceDecrement, self.maxSegmentsPerCell,
self.maxSynapsesPerSegment, learn) | python | def burstColumn(self, column, columnMatchingSegments, prevActiveCells,
prevWinnerCells, learn):
"""
Activates all of the cells in an unpredicted active column, chooses a winner
cell, and, if learning is turned on, learns on one segment, growing a new
segment if necessary.
@param column (int)
Index of bursting column.
@param columnMatchingSegments (iter)
Matching segments in this column, or None if there aren't any.
@param prevActiveCells (list)
Active cells in `t-1`.
@param prevWinnerCells (list)
Winner cells in `t-1`.
@param learn (bool)
Whether or not learning is enabled.
@return (tuple) Contains:
`cells` (iter),
`winnerCell` (int),
"""
start = self.cellsPerColumn * column
# Strip out destroyed cells before passing along to base _burstColumn()
cellsForColumn = [cellIdx
for cellIdx
in xrange(start, start + self.cellsPerColumn)
if cellIdx not in self.deadCells]
return self._burstColumn(
self.connections, self._random, self.lastUsedIterationForSegment, column,
columnMatchingSegments, prevActiveCells, prevWinnerCells, cellsForColumn,
self.numActivePotentialSynapsesForSegment, self.iteration,
self.maxNewSynapseCount, self.initialPermanence, self.permanenceIncrement,
self.permanenceDecrement, self.maxSegmentsPerCell,
self.maxSynapsesPerSegment, learn) | [
"def",
"burstColumn",
"(",
"self",
",",
"column",
",",
"columnMatchingSegments",
",",
"prevActiveCells",
",",
"prevWinnerCells",
",",
"learn",
")",
":",
"start",
"=",
"self",
".",
"cellsPerColumn",
"*",
"column",
"# Strip out destroyed cells before passing along to base... | Activates all of the cells in an unpredicted active column, chooses a winner
cell, and, if learning is turned on, learns on one segment, growing a new
segment if necessary.
@param column (int)
Index of bursting column.
@param columnMatchingSegments (iter)
Matching segments in this column, or None if there aren't any.
@param prevActiveCells (list)
Active cells in `t-1`.
@param prevWinnerCells (list)
Winner cells in `t-1`.
@param learn (bool)
Whether or not learning is enabled.
@return (tuple) Contains:
`cells` (iter),
`winnerCell` (int), | [
"Activates",
"all",
"of",
"the",
"cells",
"in",
"an",
"unpredicted",
"active",
"column",
"chooses",
"a",
"winner",
"cell",
"and",
"if",
"learning",
"is",
"turned",
"on",
"learns",
"on",
"one",
"segment",
"growing",
"a",
"new",
"segment",
"if",
"necessary",
... | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/faulty_temporal_memory.py#L81-L122 | train | 198,789 |
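The only faulty-TM-specific work in `burstColumn` is filtering destroyed cells out of the column's cell range before delegating to the base implementation. That filter in isolation:

```python
def cells_for_column(column, cells_per_column, dead_cells):
    # Indices of live cells in a column; dead cells are stripped before
    # the base burst logic runs, as in FaultyTemporalMemory.burstColumn.
    start = cells_per_column * column
    return [c for c in range(start, start + cells_per_column)
            if c not in dead_cells]
```

For example, column 2 with 4 cells per column spans cells 8..11; killing cells 9 and 11 leaves only 8 and 10 as burst candidates.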
numenta/htmresearch | htmresearch/algorithms/faulty_temporal_memory.py | FaultyTemporalMemory.printDeadCells | def printDeadCells(self):
"""
Print statistics for the dead cells
"""
columnCasualties = numpy.zeros(self.numberOfColumns())
for cell in self.deadCells:
col = self.columnForCell(cell)
columnCasualties[col] += 1
for col in range(self.numberOfColumns()):
print col, columnCasualties[col] | python | def printDeadCells(self):
"""
Print statistics for the dead cells
"""
columnCasualties = numpy.zeros(self.numberOfColumns())
for cell in self.deadCells:
col = self.columnForCell(cell)
columnCasualties[col] += 1
for col in range(self.numberOfColumns()):
print col, columnCasualties[col] | [
"def",
"printDeadCells",
"(",
"self",
")",
":",
"columnCasualties",
"=",
"numpy",
".",
"zeros",
"(",
"self",
".",
"numberOfColumns",
"(",
")",
")",
"for",
"cell",
"in",
"self",
".",
"deadCells",
":",
"col",
"=",
"self",
".",
"columnForCell",
"(",
"cell",... | Print statistics for the dead cells | [
"Print",
"statistics",
"for",
"the",
"dead",
"cells"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/faulty_temporal_memory.py#L125-L134 | train | 198,790 |
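`printDeadCells` tallies casualties per column via `columnForCell`. Assuming the usual TM layout where `columnForCell(cell)` is integer division by `cellsPerColumn`, the tally reduces to:

```python
import numpy as np

def column_casualties(dead_cells, num_columns, cells_per_column):
    # Count dead cells per column; columnForCell is assumed to be
    # cell // cells_per_column, the standard TM cell layout.
    counts = np.zeros(num_columns, dtype=int)
    for cell in dead_cells:
        counts[cell // cells_per_column] += 1
    return counts
```

With 2 cells per column, dead cells {0, 1, 5} put two casualties in column 0 and one in column 2.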
numenta/htmresearch | htmresearch/algorithms/simple_union_pooler.py | SimpleUnionPooler.reset | def reset(self):
"""
Reset Union Pooler, clear active cell history
"""
self._unionSDR = numpy.zeros(shape=(self._numInputs,))
self._activeCellsHistory = [] | python | def reset(self):
"""
Reset Union Pooler, clear active cell history
"""
self._unionSDR = numpy.zeros(shape=(self._numInputs,))
self._activeCellsHistory = [] | [
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"_unionSDR",
"=",
"numpy",
".",
"zeros",
"(",
"shape",
"=",
"(",
"self",
".",
"_numInputs",
",",
")",
")",
"self",
".",
"_activeCellsHistory",
"=",
"[",
"]"
] | Reset Union Pooler, clear active cell history | [
"Reset",
"Union",
"Pooler",
"clear",
"active",
"cell",
"history"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/simple_union_pooler.py#L51-L56 | train | 198,791 |
numenta/htmresearch | htmresearch/algorithms/simple_union_pooler.py | SimpleUnionPooler.getSparsity | def getSparsity(self):
"""
Return the sparsity of the current union SDR
"""
sparsity = numpy.sum(self._unionSDR) / self._numInputs
return sparsity | python | def getSparsity(self):
"""
Return the sparsity of the current union SDR
"""
sparsity = numpy.sum(self._unionSDR) / self._numInputs
return sparsity | [
"def",
"getSparsity",
"(",
"self",
")",
":",
"sparsity",
"=",
"numpy",
".",
"sum",
"(",
"self",
".",
"_unionSDR",
")",
"/",
"self",
".",
"_numInputs",
"return",
"sparsity"
] | Return the sparsity of the current union SDR | [
"Return",
"the",
"sparsity",
"of",
"the",
"current",
"union",
"SDR"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/htmresearch/algorithms/simple_union_pooler.py#L123-L128 | train | 198,792 |
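The `reset` and `getSparsity` rows above are two pieces of the same bookkeeping: a dense union SDR plus an active-cell history, with sparsity as the fraction of set bits. A minimal sketch of that state machine (the `union` method is a hypothetical stand-in for the pooler's compute step):

```python
import numpy as np

class MiniUnionPooler:
    # Minimal sketch of SimpleUnionPooler's union/sparsity bookkeeping.
    def __init__(self, num_inputs):
        self._num_inputs = num_inputs
        self.reset()

    def reset(self):
        # Clear the union SDR and the active-cell history, as in reset().
        self._union_sdr = np.zeros(self._num_inputs)
        self._history = []

    def union(self, active_cells):
        # OR the new active cells into the union and remember them.
        self._history.append(active_cells)
        self._union_sdr[active_cells] = 1

    def sparsity(self):
        # Fraction of input bits currently set, as in getSparsity().
        return np.sum(self._union_sdr) / self._num_inputs
```

Overlapping activations are only counted once, so the sparsity grows with the union of inputs, not their sum.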
numenta/htmresearch | projects/speech_commands/analyze_nonzero.py | plotDataframe | def plotDataframe(table, title, plotPath):
"""
Plot Panda dataframe.
:param table: Panda dataframe returned by :func:`analyzeWeightPruning`
:type table: :class:`pandas.DataFrame`
:param title: Plot title
:type title: str
:param plotPath: Plot full path
:type plotPath: str
"""
plt.figure()
axes = table.T.plot(subplots=True, sharex=True, grid=True, legend=True,
title=title, figsize=(8, 11))
# Use fixed scale for "accuracy"
accuracy = next(ax for ax in axes if ax.lines[0].get_label() == 'accuracy')
accuracy.set_ylim(0.0, 1.0)
plt.savefig(plotPath)
plt.close() | python | def plotDataframe(table, title, plotPath):
"""
Plot Panda dataframe.
:param table: Panda dataframe returned by :func:`analyzeWeightPruning`
:type table: :class:`pandas.DataFrame`
:param title: Plot title
:type title: str
:param plotPath: Plot full path
:type plotPath: str
"""
plt.figure()
axes = table.T.plot(subplots=True, sharex=True, grid=True, legend=True,
title=title, figsize=(8, 11))
# Use fixed scale for "accuracy"
accuracy = next(ax for ax in axes if ax.lines[0].get_label() == 'accuracy')
accuracy.set_ylim(0.0, 1.0)
plt.savefig(plotPath)
plt.close() | [
"def",
"plotDataframe",
"(",
"table",
",",
"title",
",",
"plotPath",
")",
":",
"plt",
".",
"figure",
"(",
")",
"axes",
"=",
"table",
".",
"T",
".",
"plot",
"(",
"subplots",
"=",
"True",
",",
"sharex",
"=",
"True",
",",
"grid",
"=",
"True",
",",
"... | Plot Panda dataframe.
:param table: Panda dataframe returned by :func:`analyzeWeightPruning`
:type table: :class:`pandas.DataFrame`
:param title: Plot title
:type title: str
:param plotPath: Plot full path
:type plotPath: str | [
"Plot",
"Panda",
"dataframe",
"."
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/speech_commands/analyze_nonzero.py#L169-L190 | train | 198,793 |
numenta/htmresearch | projects/sequence_prediction/continuous_sequence/plotPerformance.py | getDatetimeAxis | def getDatetimeAxis():
"""
use datetime as x-axis
"""
dataSet = 'nyc_taxi'
filePath = './data/' + dataSet + '.csv'
data = pd.read_csv(filePath, header=0, skiprows=[1, 2],
names=['datetime', 'value', 'timeofday', 'dayofweek'])
xaxisDate = pd.to_datetime(data['datetime'])
return xaxisDate | python | def getDatetimeAxis():
"""
use datetime as x-axis
"""
dataSet = 'nyc_taxi'
filePath = './data/' + dataSet + '.csv'
data = pd.read_csv(filePath, header=0, skiprows=[1, 2],
names=['datetime', 'value', 'timeofday', 'dayofweek'])
xaxisDate = pd.to_datetime(data['datetime'])
return xaxisDate | [
"def",
"getDatetimeAxis",
"(",
")",
":",
"dataSet",
"=",
"'nyc_taxi'",
"filePath",
"=",
"'./data/'",
"+",
"dataSet",
"+",
"'.csv'",
"data",
"=",
"pd",
".",
"read_csv",
"(",
"filePath",
",",
"header",
"=",
"0",
",",
"skiprows",
"=",
"[",
"1",
",",
"2",
... | use datetime as x-axis | [
"use",
"datetime",
"as",
"x",
"-",
"axis"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_prediction/continuous_sequence/plotPerformance.py#L45-L55 | train | 198,794 |
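`getDatetimeAxis` skips the two NuPIC metadata rows under the header and parses the first column into timestamps. The same read, fed from an in-memory CSV so it is self-contained (the two `x,x,x,x` lines stand in for the skipped metadata rows):

```python
import io
import pandas as pd

csv = io.StringIO(
    "datetime,value,timeofday,dayofweek\n"
    "x,x,x,x\n"            # NuPIC type row (skipped)
    "x,x,x,x\n"            # NuPIC flags row (skipped)
    "2014-07-01 00:00:00,10844,0.0,1\n"
    "2014-07-01 00:30:00,8127,30.0,1\n"
)
data = pd.read_csv(csv, header=0, skiprows=[1, 2],
                   names=["datetime", "value", "timeofday", "dayofweek"])
xaxis = pd.to_datetime(data["datetime"])
```

`xaxis` is then a datetime-typed series suitable for the x-axis of the performance plots.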
numenta/htmresearch | projects/nik/nik_htm.py | NIK.encodeDeltas | def encodeDeltas(self, dx,dy):
"""Return the SDR for dx,dy"""
dxe = self.dxEncoder.encode(dx)
dye = self.dyEncoder.encode(dy)
ex = numpy.outer(dxe,dye)
return ex.flatten().nonzero()[0] | python | def encodeDeltas(self, dx,dy):
"""Return the SDR for dx,dy"""
dxe = self.dxEncoder.encode(dx)
dye = self.dyEncoder.encode(dy)
ex = numpy.outer(dxe,dye)
return ex.flatten().nonzero()[0] | [
"def",
"encodeDeltas",
"(",
"self",
",",
"dx",
",",
"dy",
")",
":",
"dxe",
"=",
"self",
".",
"dxEncoder",
".",
"encode",
"(",
"dx",
")",
"dye",
"=",
"self",
".",
"dyEncoder",
".",
"encode",
"(",
"dy",
")",
"ex",
"=",
"numpy",
".",
"outer",
"(",
... | Return the SDR for dx,dy | [
"Return",
"the",
"SDR",
"for",
"dx",
"dy"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/nik/nik_htm.py#L192-L197 | train | 198,795 |
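`encodeDeltas` forms a conjunctive code: the outer product of the two dense encodings, flattened to active indices, so every (dx-bit, dy-bit) pair gets its own output bit. The combination step in isolation:

```python
import numpy as np

def combine_sdrs(a, b):
    # Outer-product combination of two dense binary encodings, flattened
    # to active indices, as encodeDeltas does with the dx/dy encodings.
    return np.outer(a, b).flatten().nonzero()[0]

dx = np.array([0, 1, 1, 0])
dy = np.array([1, 0, 0, 1])
joint = combine_sdrs(dx, dy)
```

The joint code has `dx.sum() * dy.sum()` active bits, and bit `i * len(dy) + j` is active exactly when `dx[i]` and `dy[j]` both are.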
numenta/htmresearch | projects/nik/nik_htm.py | NIK.encodeThetas | def encodeThetas(self, theta1, theta2):
"""Return the SDR for theta1 and theta2"""
# print >> sys.stderr, "encoded theta1 value = ", theta1
# print >> sys.stderr, "encoded theta2 value = ", theta2
t1e = self.theta1Encoder.encode(theta1)
t2e = self.theta2Encoder.encode(theta2)
# print >> sys.stderr, "encoded theta1 = ", t1e.nonzero()[0]
# print >> sys.stderr, "encoded theta2 = ", t2e.nonzero()[0]
ex = numpy.outer(t2e,t1e)
return ex.flatten().nonzero()[0] | python | def encodeThetas(self, theta1, theta2):
"""Return the SDR for theta1 and theta2"""
# print >> sys.stderr, "encoded theta1 value = ", theta1
# print >> sys.stderr, "encoded theta2 value = ", theta2
t1e = self.theta1Encoder.encode(theta1)
t2e = self.theta2Encoder.encode(theta2)
# print >> sys.stderr, "encoded theta1 = ", t1e.nonzero()[0]
# print >> sys.stderr, "encoded theta2 = ", t2e.nonzero()[0]
ex = numpy.outer(t2e,t1e)
return ex.flatten().nonzero()[0] | [
"def",
"encodeThetas",
"(",
"self",
",",
"theta1",
",",
"theta2",
")",
":",
"# print >> sys.stderr, \"encoded theta1 value = \", theta1",
"# print >> sys.stderr, \"encoded theta2 value = \", theta2",
"t1e",
"=",
"self",
".",
"theta1Encoder",
".",
"encode",
"(",
"theta1",
")... | Return the SDR for theta1 and theta2 | [
"Return",
"the",
"SDR",
"for",
"theta1",
"and",
"theta2"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/nik/nik_htm.py#L200-L209 | train | 198,796 |
numenta/htmresearch | projects/nik/nik_htm.py | NIK.decodeThetas | def decodeThetas(self, predictedCells):
"""
Given the set of predicted cells, return the predicted theta1 and theta2
"""
a = numpy.zeros(self.bottomUpInputSize)
a[predictedCells] = 1
a = a.reshape((self.theta1Encoder.getWidth(), self.theta1Encoder.getWidth()))
theta1PredictedBits = a.mean(axis=0).nonzero()[0]
theta2PredictedBits = a.mean(axis=1).nonzero()[0]
# To decode it we need to create a flattened array again and pass it
# to encoder.
# TODO: We use encoder's topDownCompute method - not sure if that is best.
t1 = numpy.zeros(self.theta1Encoder.getWidth())
t1[theta1PredictedBits] = 1
t1Prediction = self.theta1Encoder.topDownCompute(t1)[0].value
t2 = numpy.zeros(self.theta2Encoder.getWidth())
t2[theta2PredictedBits] = 1
t2Prediction = self.theta2Encoder.topDownCompute(t2)[0].value
# print >> sys.stderr, "predicted cells = ", predictedCells
# print >> sys.stderr, "decoded theta1 bits = ", theta1PredictedBits
# print >> sys.stderr, "decoded theta2 bits = ", theta2PredictedBits
# print >> sys.stderr, "decoded theta1 value = ", t1Prediction
# print >> sys.stderr, "decoded theta2 value = ", t2Prediction
return t1Prediction, t2Prediction | python | def decodeThetas(self, predictedCells):
"""
Given the set of predicted cells, return the predicted theta1 and theta2
"""
a = numpy.zeros(self.bottomUpInputSize)
a[predictedCells] = 1
a = a.reshape((self.theta1Encoder.getWidth(), self.theta1Encoder.getWidth()))
theta1PredictedBits = a.mean(axis=0).nonzero()[0]
theta2PredictedBits = a.mean(axis=1).nonzero()[0]
# To decode it we need to create a flattened array again and pass it
# to encoder.
# TODO: We use encoder's topDownCompute method - not sure if that is best.
t1 = numpy.zeros(self.theta1Encoder.getWidth())
t1[theta1PredictedBits] = 1
t1Prediction = self.theta1Encoder.topDownCompute(t1)[0].value
t2 = numpy.zeros(self.theta2Encoder.getWidth())
t2[theta2PredictedBits] = 1
t2Prediction = self.theta2Encoder.topDownCompute(t2)[0].value
# print >> sys.stderr, "predicted cells = ", predictedCells
# print >> sys.stderr, "decoded theta1 bits = ", theta1PredictedBits
# print >> sys.stderr, "decoded theta2 bits = ", theta2PredictedBits
# print >> sys.stderr, "decoded theta1 value = ", t1Prediction
# print >> sys.stderr, "decoded theta2 value = ", t2Prediction
return t1Prediction, t2Prediction | [
"def",
"decodeThetas",
"(",
"self",
",",
"predictedCells",
")",
":",
"a",
"=",
"numpy",
".",
"zeros",
"(",
"self",
".",
"bottomUpInputSize",
")",
"a",
"[",
"predictedCells",
"]",
"=",
"1",
"a",
"=",
"a",
".",
"reshape",
"(",
"(",
"self",
".",
"theta1... | Given the set of predicted cells, return the predicted theta1 and theta2 | [
"Given",
"the",
"set",
"of",
"predicted",
"cells",
"return",
"the",
"predicted",
"theta1",
"and",
"theta2"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/nik/nik_htm.py#L212-L239 | train | 198,797 |
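`decodeThetas` inverts the outer-product encoding used by `encodeThetas`: reshape the flat bit array back into a matrix, then recover the theta1 bits from the column means and the theta2 bits from the row means. A self-contained round trip over raw bit indices (the width is a hypothetical stand-in for the encoders' width, and the encoder's `topDownCompute` value-decoding step is omitted):

```python
import numpy as np

WIDTH = 8  # hypothetical encoder width shared by both angles

def encode(theta1_bits, theta2_bits):
    # encodeThetas: outer(t2, t1), flattened to active indices.
    t1 = np.zeros(WIDTH); t1[theta1_bits] = 1
    t2 = np.zeros(WIDTH); t2[theta2_bits] = 1
    return np.outer(t2, t1).flatten().nonzero()[0]

def decode(active):
    # decodeThetas: reshape, then recover column bits (theta1) and
    # row bits (theta2) from the axis means.
    a = np.zeros(WIDTH * WIDTH)
    a[active] = 1
    a = a.reshape((WIDTH, WIDTH))
    theta1_bits = a.mean(axis=0).nonzero()[0]
    theta2_bits = a.mean(axis=1).nonzero()[0]
    return theta1_bits, theta2_bits

b1, b2 = decode(encode([2, 3], [5]))
```

Because every (row, column) pair of active bits produces a distinct joint bit, the axis projections recover both original bit sets exactly.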
numenta/htmresearch | projects/nik/nik_htm.py | NIK.inferTM | def inferTM(self, bottomUp, externalInput):
"""
Run inference and return the set of predicted cells
"""
self.reset()
# print >> sys.stderr, "Bottom up: ", bottomUp
# print >> sys.stderr, "ExternalInput: ",externalInput
self.tm.compute(bottomUp,
basalInput=externalInput,
learn=False)
# print >> sys.stderr, ("new active cells " + str(self.tm.getActiveCells()))
# print >> sys.stderr, ("new predictive cells " + str(self.tm.getPredictiveCells()))
return self.tm.getPredictiveCells() | python | def inferTM(self, bottomUp, externalInput):
"""
Run inference and return the set of predicted cells
"""
self.reset()
# print >> sys.stderr, "Bottom up: ", bottomUp
# print >> sys.stderr, "ExternalInput: ",externalInput
self.tm.compute(bottomUp,
basalInput=externalInput,
learn=False)
# print >> sys.stderr, ("new active cells " + str(self.tm.getActiveCells()))
# print >> sys.stderr, ("new predictive cells " + str(self.tm.getPredictiveCells()))
return self.tm.getPredictiveCells() | [
"def",
"inferTM",
"(",
"self",
",",
"bottomUp",
",",
"externalInput",
")",
":",
"self",
".",
"reset",
"(",
")",
"# print >> sys.stderr, \"Bottom up: \", bottomUp",
"# print >> sys.stderr, \"ExternalInput: \",externalInput",
"self",
".",
"tm",
".",
"compute",
"(",
"botto... | Run inference and return the set of predicted cells | [
"Run",
"inference",
"and",
"return",
"the",
"set",
"of",
"predicted",
"cells"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/nik/nik_htm.py#L261-L273 | train | 198,798 |
numenta/htmresearch | projects/sequence_prediction/discrete_sequences/lstm/suite.py | BasicEncoder.classify | def classify(self, encoding, num=1):
"""
Classify with basic one-hot local incoding
"""
probDist = numpy.exp(encoding) / numpy.sum(numpy.exp(encoding))
sortIdx = numpy.argsort(probDist)
return sortIdx[-num:].tolist() | python | def classify(self, encoding, num=1):
"""
Classify with basic one-hot local incoding
"""
probDist = numpy.exp(encoding) / numpy.sum(numpy.exp(encoding))
sortIdx = numpy.argsort(probDist)
return sortIdx[-num:].tolist() | [
"def",
"classify",
"(",
"self",
",",
"encoding",
",",
"num",
"=",
"1",
")",
":",
"probDist",
"=",
"numpy",
".",
"exp",
"(",
"encoding",
")",
"/",
"numpy",
".",
"sum",
"(",
"numpy",
".",
"exp",
"(",
"encoding",
")",
")",
"sortIdx",
"=",
"numpy",
"... | Classify with basic one-hot local incoding | [
"Classify",
"with",
"basic",
"one",
"-",
"hot",
"local",
"incoding"
] | 70c096b09a577ea0432c3f3bfff4442d4871b7aa | https://github.com/numenta/htmresearch/blob/70c096b09a577ea0432c3f3bfff4442d4871b7aa/projects/sequence_prediction/discrete_sequences/lstm/suite.py#L73-L79 | train | 198,799 |
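`classify` is a softmax over the raw encoding followed by a top-`num` pick; since `argsort` sorts ascending, the tail of the index array holds the most likely classes. A standalone version:

```python
import numpy as np

def classify(encoding, num=1):
    # Softmax over the raw encoding, then take the `num` most likely
    # classes; argsort is ascending, so the tail holds the top scores.
    prob = np.exp(encoding) / np.sum(np.exp(encoding))
    return np.argsort(prob)[-num:].tolist()
```

Note the result is ordered ascending by probability, so the single best class is the *last* element of the returned list.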