<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plotlyFrequencyHistogram(counts):
""" x-axis is a count of how many times a bit was active y-axis is number of bits that have that frequency """ |
data = [
go.Histogram(
x=tuple(count for _, _, count in counts.getNonZerosSorted())
)
]
py.plot(data, filename=os.environ.get("HEATMAP_NAME",
str(datetime.datetime.now()))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getSparseTensor(numNonzeros, inputSize, outputSize, onlyPositive=False, fixedRange=1.0/24):
""" Return a random tensor that is initialized like a weight matrix Size is outputSize X inputSize, where weightSparsity% of each row is non-zero """ |
# Initialize weights in the typical fashion.
w = torch.Tensor(outputSize, inputSize, )
if onlyPositive:
w.data.uniform_(0, fixedRange)
else:
w.data.uniform_(-fixedRange, fixedRange)
# Zero out weights for sparse weight matrices
if numNonzeros < inputSize:
numZeros = inputSize - numNonzeros
outputIndices = np.arange(outputSize)
inputIndices = np.array([np.random.permutation(inputSize)[:numZeros]
for _ in outputIndices], dtype=np.long)
    # Create tensor indices for all weights that will be zeroed
    zeroIndices = np.empty((outputSize, numZeros, 2), dtype=np.long)
zeroIndices[:, :, 0] = outputIndices[:, None]
zeroIndices[:, :, 1] = inputIndices
zeroIndices = torch.LongTensor(zeroIndices.reshape(-1, 2))
zeroWts = (zeroIndices[:, 0], zeroIndices[:, 1])
w.data[zeroWts] = 0.0
return w |
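The per-row zeroing logic above can be illustrated without torch. This numpy-only sketch (the helper name `sparseWeights` is ours, not from the source) initializes uniform weights and zeroes `inputSize - numNonzeros` randomly chosen entries in each row:

```python
import numpy as np

def sparseWeights(numNonzeros, inputSize, outputSize, fixedRange=1.0 / 24):
    """Numpy sketch of the sparse weight initialization above."""
    w = np.random.uniform(-fixedRange, fixedRange, (outputSize, inputSize))
    numZeros = inputSize - numNonzeros
    for row in range(outputSize):
        # Pick numZeros random columns in this row and zero them out
        zeroCols = np.random.permutation(inputSize)[:numZeros]
        w[row, zeroCols] = 0.0
    return w

w = sparseWeights(4, 10, 3)
```

Each row then has exactly `numNonzeros` non-zero entries (a uniform draw is never exactly zero in practice).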
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPermutedTensors(W, kw, n, m2, noisePct):
""" Generate m2 noisy versions of W. Noisy version of W is generated by randomly permuting noisePct of the non-zero components to other components. :param W: :param n: :param m2: :param noisePct: :return: """ |
W2 = W.repeat(m2, 1)
nz = W[0].nonzero()
numberToZero = int(round(noisePct * kw))
for i in range(m2):
indices = np.random.permutation(kw)[0:numberToZero]
for j in indices:
W2[i,nz[j]] = 0
return W2 |
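The same perturbation can be sketched in numpy (helper name `permutedRows` is ours): tile the row vector, then zero a random `noisePct` fraction of its `kw` non-zero positions in each copy.

```python
import numpy as np

def permutedRows(w, kw, m2, noisePct):
    """Tile w into m2 rows; zero round(noisePct * kw) of the kw
    non-zero components in each copy (numpy sketch)."""
    w2 = np.tile(w, (m2, 1))
    nz = np.flatnonzero(w)                 # positions of non-zero entries
    numberToZero = int(round(noisePct * kw))
    for i in range(m2):
        idx = np.random.permutation(kw)[:numberToZero]
        w2[i, nz[idx]] = 0.0
    return w2

w = np.zeros(10)
w[[1, 4, 7, 9]] = 0.5
noisy = permutedRows(w, kw=4, m2=5, noisePct=0.5)
```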
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTheta(k, nTrials=100000):
""" Estimate a reasonable value of theta for this k. """ |
theDots = np.zeros(nTrials)
w1 = getSparseTensor(k, k, nTrials, fixedRange=1.0/k)
for i in range(nTrials):
theDots[i] = w1[i].dot(w1[i])
dotMean = theDots.mean()
print("k=", k, "min/mean/max diag of w dot products",
theDots.min(), dotMean, theDots.max())
theta = dotMean / 2.0
print("Using theta as mean / 2.0 = ", theta)
return theta, theDots |
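The theta estimate above is just half the mean self dot product of random sparse weight vectors. A numpy sketch (assuming fully dense k-dimensional vectors with components uniform in [-1/k, 1/k], which is what `getSparseTensor(k, k, nTrials)` produces):

```python
import numpy as np

def estimateTheta(k, nTrials=10000):
    """Estimate theta as half the mean self dot product of random
    k-dimensional vectors with components uniform in [-1/k, 1/k]."""
    w = np.random.uniform(-1.0 / k, 1.0 / k, (nTrials, k))
    dots = np.einsum('ij,ij->i', w, w)   # row-wise self dot products
    return dots.mean() / 2.0

theta = estimateTheta(24)
```

Analytically, E[x^2] = 1/(3k^2) for each component, so the mean dot product is 1/(3k) and theta is approximately 1/(6k).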
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def returnFalseNegatives(kw, noisePct, n, theta):
""" Generate a weight vector W, with kw non-zero components. Generate 1000 noisy versions of W and return the match statistics. Noisy version of W is generated by randomly setting noisePct of the non-zero components to zero. :param kw: k for the weight vectors :param noisePct: percent noise, from 0 to 1 :param n: dimensionality of input vector :param theta: threshold for matching after dot product :return: percent that matched, number that matched, total match comparisons """ |
W = getSparseTensor(kw, n, 1, fixedRange=1.0 / kw)
# Get permuted versions of W and see how many match
m2 = 10
inputVectors = getPermutedTensors(W, kw, n, m2, noisePct)
dot = inputVectors.matmul(W.t())
numMatches = ((dot >= theta).sum()).item()
pctMatches = numMatches / float(m2)
return pctMatches, numMatches, m2 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def computeScaledProbabilities( listOfScales=[1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0], listofkValues=[64, 128, 256], kw=32, n=1000, numWorkers=10, nTrials=1000, ):
""" Compute the impact of S on match probabilities for a fixed value of n. """ |
# Create arguments for the possibilities we want to test
args = []
theta, _ = getTheta(kw)
for ki, k in enumerate(listofkValues):
for si, s in enumerate(listOfScales):
args.append({
"k": k, "kw": kw, "n": n, "theta": theta,
"nTrials": nTrials, "inputScaling": s,
"errorIndex": [ki, si],
})
result = computeMatchProbabilityParallel(args, numWorkers)
errors = np.zeros((len(listofkValues), len(listOfScales)))
for r in result:
errors[r["errorIndex"][0], r["errorIndex"][1]] = r["pctMatches"]
print("Errors using scaled inputs, for kw=", kw)
print(repr(errors))
plotScaledMatches(listofkValues, listOfScales, errors,
"images/scalar_effect_of_scale_kw" + str(kw) + ".pdf") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def computeMatchProbabilityOmega(k, bMax, theta, nTrials=100):
""" The Omega match probability estimates the probability of matching when both vectors have exactly b components in common. This function computes this probability for b=1 to bMax. For each value of b this function: 1) Creates nTrials instances of Xw(b) which are vectors with b components where each component is uniform in [-1/k, 1/k]. 2) Creates nTrials instances of Xi(b) which are vectors with b components where each component is uniform in [0, 2/k]. 3) Does every possible dot product of Xw(b) dot Xi(b), i.e. nTrials * nTrials dot products. 4) Counts the fraction of cases where Xw(b) dot Xi(b) >= theta Returns an array with bMax entries, where each entry contains the probability computed in 4). """ |
omegaProb = np.zeros(bMax+1)
for b in range(1, bMax+1):
xwb = getSparseTensor(b, b, nTrials, fixedRange=1.0/k)
xib = getSparseTensor(b, b, nTrials, onlyPositive=True, fixedRange=2.0/k)
r = xwb.matmul(xib.t())
numMatches = ((r >= theta).sum()).item()
omegaProb[b] = numMatches / float(nTrials * nTrials)
print(omegaProb)
return omegaProb |
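Steps 1-4 for a single value of b can be sketched directly in numpy (helper name `omegaMatchProbability` is ours):

```python
import numpy as np

def omegaMatchProbability(b, k, theta, nTrials=200):
    """Fraction of dot products xw . xi >= theta, where xw rows have b
    components uniform in [-1/k, 1/k] and xi rows have b components
    uniform in [0, 2/k]; all nTrials * nTrials pairs are compared."""
    xw = np.random.uniform(-1.0 / k, 1.0 / k, (nTrials, b))
    xi = np.random.uniform(0.0, 2.0 / k, (nTrials, b))
    dots = xw @ xi.T                     # (nTrials, nTrials) dot products
    return float((dots >= theta).mean())

p = omegaMatchProbability(b=8, k=24, theta=0.007)
```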
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plotMatches2(listofNValues, errors, listOfScales, scaleErrors, fileName = "images/scalar_matches.pdf"):
""" Plot two figures side by side in an aspect ratio appropriate for the paper. """ |
w, h = figaspect(0.4)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(w,h))
plotMatches(listofNValues, errors, fileName=None, fig=fig, ax=ax1)
plotScaledMatches(listOfScales, scaleErrors, fileName=None, fig=fig, ax=ax2)
plt.savefig(fileName)
plt.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createPregeneratedGraphs():
""" Creates graphs based on previous runs of the scripts. Useful for editing graph format for writeups. """ |
# Graph for computeMatchProbabilities(kw=32, nTrials=3000)
listofNValues = [250, 500, 1000, 1500, 2000, 2500]
kw = 32
errors = np.array([
[3.65083333e-03, 3.06166667e-04, 1.89166667e-05,
4.16666667e-06, 1.50000000e-06, 9.16666667e-07],
[2.44633333e-02, 3.64491667e-03, 3.16083333e-04,
6.93333333e-05, 2.16666667e-05, 8.66666667e-06],
[7.61641667e-02, 2.42496667e-02, 3.75608333e-03,
9.78333333e-04, 3.33250000e-04, 1.42250000e-04],
[2.31302500e-02, 2.38609167e-02, 2.28072500e-02,
2.33225000e-02, 2.30650000e-02, 2.33988333e-02]
])
# Graph for computeScaledProbabilities(nTrials=3000)
listOfScales = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
scaleErrors = np.array([
[1.94166667e-05, 1.14900000e-03, 7.20725000e-03, 1.92405833e-02,
3.60794167e-02, 5.70276667e-02, 7.88510833e-02],
[3.12500000e-04, 7.07616667e-03, 2.71600000e-02, 5.72415833e-02,
8.95497500e-02, 1.21294333e-01, 1.50582500e-01],
[3.97708333e-03, 3.31468333e-02, 8.04755833e-02, 1.28687750e-01,
1.71220000e-01, 2.07019250e-01, 2.34703167e-01]
])
plotMatches2(listofNValues, errors,
listOfScales, scaleErrors,
"images/scalar_matches_kw" + str(kw) + ".pdf") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _learn(connections, rng, learningSegments, activeInput, potentialOverlaps, initialPermanence, sampleSize, permanenceIncrement, permanenceDecrement, maxSynapsesPerSegment):
""" Adjust synapse permanences, grow new synapses, and grow new segments. @param learningActiveSegments (numpy array) @param learningMatchingSegments (numpy array) @param segmentsToPunish (numpy array) @param activeInput (numpy array) @param potentialOverlaps (numpy array) """ |
# Learn on existing segments
connections.adjustSynapses(learningSegments, activeInput,
permanenceIncrement, -permanenceDecrement)
# Grow new synapses. Calculate "maxNew", the maximum number of synapses to
# grow per segment. "maxNew" might be a number or it might be a list of
# numbers.
if sampleSize == -1:
maxNew = len(activeInput)
else:
maxNew = sampleSize - potentialOverlaps[learningSegments]
if maxSynapsesPerSegment != -1:
synapseCounts = connections.mapSegmentsToSynapseCounts(
learningSegments)
numSynapsesToReachMax = maxSynapsesPerSegment - synapseCounts
maxNew = np.where(maxNew <= numSynapsesToReachMax,
maxNew, numSynapsesToReachMax)
connections.growSynapsesToSample(learningSegments, activeInput,
maxNew, initialPermanence, rng) |
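The `np.where` clamp above caps per-segment growth by the room each segment has left; a minimal sketch with made-up numbers (equivalent to `np.minimum`):

```python
import numpy as np

# Cap the number of new synapses per segment by how many more each
# segment can hold (sketch of the np.where clamp used above).
maxNew = np.array([15, 3, 40])       # desired growth per segment
roomLeft = np.array([10, 10, 10])    # maxSynapsesPerSegment - current count
capped = np.where(maxNew <= roomLeft, maxNew, roomLeft)
# capped is the elementwise minimum of maxNew and roomLeft
```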
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, sensorToBodyByColumn, sensorToSpecificObjectByColumn):
""" Compute the "body's location relative to a specific object" from an array of "sensor's location relative to a specific object" and an array of "sensor's location relative to body" These arrays consist of one module per cortical column. This is a metric computation, similar to that of the SensorToSpecificObjectModule, but with voting. In effect, the columns vote on "the body's location relative to a specific object". Note: Each column can vote for an arbitrary number of cells, but it can't vote for a single cell more than once. This is necessary because we don't want ambiguity in a column to cause some cells to get extra votes. There are a few ways that this could be biologically plausible: - Explanation 1: Nearby dendritic segments are independent coincidence detectors, but perhaps their dendritic spikes don't sum. Meanwhile, maybe dendritic spikes from far away dendritic segments do sum. - Explanation 2: Dendritic spikes from different columns are separated temporally, not spatially. All the spikes from one column "arrive" at the cell at the same time, but the dendritic spikes from other columns arrive at other times. With each of these temporally-separated dendritic spikes, the unsupported cells are inhibited, or the spikes' effects are summed. - Explanation 3: Another population of cells within the cortical column might calculate the "body's location relative to a specific object" in this same "metric" way, but without tallying any votes. Then it relays this SDR subcortically, voting 0 or 1 times for each cell. @param sensorToBodyInputs (list of numpy arrays) The "sensor's location relative to the body" input from each cortical column @param sensorToSpecificObjectInputs (list of numpy arrays) The "sensor's location relative to specific object" input from each cortical column """ |
votesByCell = np.zeros(self.cellCount, dtype="int")
self.activeSegmentsByColumn = []
for (connections,
activeSensorToBodyCells,
activeSensorToSpecificObjectCells) in zip(self.connectionsByColumn,
sensorToBodyByColumn,
sensorToSpecificObjectByColumn):
overlaps = connections.computeActivity({
"sensorToBody": activeSensorToBodyCells,
"sensorToSpecificObject": activeSensorToSpecificObjectCells,
})
activeSegments = np.where(overlaps >= 2)[0]
votes = connections.mapSegmentsToCells(activeSegments)
votes = np.unique(votes) # Only allow a column to vote for a cell once.
votesByCell[votes] += 1
self.activeSegmentsByColumn.append(activeSegments)
candidates = np.where(votesByCell == np.max(votesByCell))[0]
# If possible, select only from current active cells.
#
# If we were to always activate all candidates, there would be an explosive
# back-and-forth between this layer and the sensorToSpecificObject layer.
self.activeCells = np.intersect1d(self.activeCells, candidates)
if self.activeCells.size == 0:
# Otherwise, activate all cells with the maximum number of active
# segments.
self.activeCells = candidates
self.inhibitedCells = np.setdiff1d(np.where(votesByCell > 0)[0],
self.activeCells) |
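The vote-tallying core of the method above, stripped of the connections machinery, looks like this (column vote arrays are made-up values; note `np.unique` is what enforces one vote per cell per column):

```python
import numpy as np

# Each column votes at most once per cell; cells with the maximum
# vote count become the candidates.
cellCount = 8
votesByCell = np.zeros(cellCount, dtype=int)
columnVotes = [np.array([1, 3, 3, 5]),   # duplicate vote for cell 3
               np.array([3, 5]),
               np.array([5, 7])]
for votes in columnVotes:
    votesByCell[np.unique(votes)] += 1   # dedupe within a column
candidates = np.where(votesByCell == votesByCell.max())[0]
```

Here cell 5 receives one vote from each of the three columns, so it is the sole candidate; cell 3's duplicate vote in the first column counts only once.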
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def metricCompute(self, sensorToBody, bodyToSpecificObject):
""" Compute the "sensor's location relative to a specific object" from the "body's location relative to a specific object" and the "sensor's location relative to body" @param sensorToBody (numpy array) Active cells of a single module that represents the sensor's location relative to the body @param bodyToSpecificObject (numpy array) Active cells of a single module that represents the body's location relative to a specific object """ |
overlaps = self.metricConnections.computeActivity({
"bodyToSpecificObject": bodyToSpecificObject,
"sensorToBody": sensorToBody,
})
self.activeMetricSegments = np.where(overlaps >= 2)[0]
self.activeCells = np.unique(
self.metricConnections.mapSegmentsToCells(
self.activeMetricSegments)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def anchorCompute(self, anchorInput, learn):
""" Compute the "sensor's location relative to a specific object" from the feature-location pair. @param anchorInput (numpy array) Active cells in the feature-location pair layer @param learn (bool) If true, maintain current cell activity and learn this input on the currently active cells """ |
if learn:
self._anchorComputeLearningMode(anchorInput)
else:
overlaps = self.anchorConnections.computeActivity(
anchorInput, self.connectedPermanence)
self.activeSegments = np.where(overlaps >= self.activationThreshold)[0]
self.activeCells = np.unique(
self.anchorConnections.mapSegmentsToCells(self.activeSegments)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, egocentricLocation):
""" Compute the new active cells from the given "sensor location relative to body" vector. @param egocentricLocation (pair of floats) [di, dj] offset of the sensor from the body. """ |
offsetInCellFields = (np.matmul(self.rotationMatrix, egocentricLocation) *
self.cellFieldsPerUnitDistance)
np.mod(offsetInCellFields, self.cellDimensions, out=offsetInCellFields)
self.activeCells = np.unique(
np.ravel_multi_index(np.floor(offsetInCellFields).T.astype('int'),
self.cellDimensions)) |
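The wrap-and-index step above maps a continuous offset onto a toroidal grid of cells. A minimal sketch with an identity rotation matrix assumed (values are made up):

```python
import numpy as np

# Map a 2-D offset into a cell index on a toroidal module: wrap the
# offset into the module's dimensions, floor to a cell coordinate,
# then flatten to a single index.
cellDimensions = (5, 5)
offset = np.array([6.2, -1.5])            # offset in cell-field units
wrapped = np.mod(offset, cellDimensions)  # wraps to [1.2, 3.5]
cell = np.ravel_multi_index(np.floor(wrapped).astype(int), cellDimensions)
```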
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def htmresearchCorePrereleaseInstalled():
""" Make an attempt to determine if a pre-release version of htmresearch-core is installed already. @return: boolean """ |
try:
coreDistribution = pkg_resources.get_distribution("htmresearch-core")
if pkg_resources.parse_version(coreDistribution.version).is_prerelease:
# A pre-release dev version of htmresearch-core is installed.
return True
except pkg_resources.DistributionNotFound:
pass # Silently ignore. The absence of htmresearch-core will be handled by
# setuptools by default
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def infer(self, sensationList, reset=True, objectName=None):
""" Infer on a given set of sensations for a single object. The provided sensationList is a list of sensations, and each sensation is a mapping from cortical column to a tuple of three SDR's respectively corresponding to the locationInput, the coarseSensorInput, and the sensorInput. For example, the input can look as follows, if we are inferring a simple object with two sensations (with very few active bits for simplicity):
sensationList = [ { # location, coarse feature, fine feature for CC0, sensation 1 0: ( [1, 5, 10], [9, 32, 75], [6, 12, 52] ), # location, coarse feature, fine feature for CC1, sensation 1 1: ( [6, 2, 15], [11, 42, 92], [7, 11, 50] ), }, { # location, coarse feature, fine feature for CC0, sensation 2 0: ( [2, 9, 10], [10, 35, 78], [6, 12, 52] ), # location, coarse feature, fine feature for CC1, sensation 2 1: ( [1, 4, 12], [10, 32, 52], [6, 10, 52] ), }, ] If the object is known by the caller, an object name can be specified as an optional argument, and must match the objects given while learning. This is used later when evaluating inference statistics. Parameters: @param objects (dict) Objects to learn, in the canonical format specified above @param reset (bool) If set to True (which is the default value), the network will be reset after learning. @param objectName (str) Name of the objects (must match the names given during learning). """ |
self._unsetLearningMode()
statistics = collections.defaultdict(list)
if objectName is not None:
if objectName not in self.objectRepresentationsL2:
raise ValueError("The provided objectName was not given during"
" learning")
for sensations in sensationList:
# feed all columns with sensations
for col in xrange(self.numColumns):
location, coarseFeature, fineFeature = sensations[col]
self.locationInputs[col].addDataToQueue(list(location), 0, 0)
self.coarseSensors[col].addDataToQueue(list(coarseFeature), 0, 0)
self.sensors[col].addDataToQueue(list(fineFeature), 0, 0)
self.network.run(1)
self._updateInferenceStats(statistics, objectName)
if reset:
# send reset signal
self._sendReset()
# save statistics
statistics["numSteps"] = len(sensationList)
statistics["object"] = objectName if objectName is not None else "Unknown"
self.statistics.append(statistics) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getDefaultParams(self):
""" Returns a good default set of parameters to use in L2456 regions """ |
return {
"sensorParams": {
"outputWidth": self.sensorInputSize,
},
"coarseSensorParams": {
"outputWidth": self.sensorInputSize,
},
"locationParams": {
"activeBits": 41,
"outputWidth": self.sensorInputSize,
"radius": 2,
"verbosity": 0,
},
"L4Params": {
"columnCount": self.sensorInputSize,
"cellsPerColumn": 8,
"learn": True,
"learnOnOneCell": False,
"initialPermanence": 0.51,
"connectedPermanence": 0.6,
"permanenceIncrement": 0.1,
"permanenceDecrement": 0.02,
"minThreshold": 10,
"basalPredictedSegmentDecrement": 0.002,
"activationThreshold": 13,
"sampleSize": 20,
"implementation": "ApicalTiebreakCPP",
},
"L2Params": {
"inputWidth": self.sensorInputSize * 8,
"cellCount": 4096,
"sdrSize": 40,
"synPermProximalInc": 0.1,
"synPermProximalDec": 0.001,
"initialProximalPermanence": 0.6,
"minThresholdProximal": 10,
"sampleSizeProximal": 20,
"connectedPermanenceProximal": 0.5,
"synPermDistalInc": 0.1,
"synPermDistalDec": 0.001,
"initialDistalPermanence": 0.41,
"activationThresholdDistal": 13,
"sampleSizeDistal": 20,
"connectedPermanenceDistal": 0.5,
"learningMode": True,
},
"L6Params": {
"columnCount": self.sensorInputSize,
"cellsPerColumn": 8,
"learn": True,
"learnOnOneCell": False,
"initialPermanence": 0.51,
"connectedPermanence": 0.6,
"permanenceIncrement": 0.1,
"permanenceDecrement": 0.02,
"minThreshold": 10,
"basalPredictedSegmentDecrement": 0.004,
"activationThreshold": 13,
"sampleSize": 20,
},
"L5Params": {
"inputWidth": self.sensorInputSize * 8,
"cellCount": 4096,
"sdrSize": 40,
"synPermProximalInc": 0.1,
"synPermProximalDec": 0.001,
"initialProximalPermanence": 0.6,
"minThresholdProximal": 10,
"sampleSizeProximal": 20,
"connectedPermanenceProximal": 0.5,
"synPermDistalInc": 0.1,
"synPermDistalDec": 0.001,
"initialDistalPermanence": 0.41,
"activationThresholdDistal": 13,
"sampleSizeDistal": 20,
"connectedPermanenceDistal": 0.5,
"learningMode": True,
},
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _retrieveRegions(self):
""" Retrieve and store Python region instances for each column """ |
self.sensors = []
self.coarseSensors = []
self.locationInputs = []
self.L4Columns = []
self.L2Columns = []
self.L5Columns = []
self.L6Columns = []
for i in xrange(self.numColumns):
self.sensors.append(
self.network.regions["sensorInput_" + str(i)].getSelf()
)
self.coarseSensors.append(
self.network.regions["coarseSensorInput_" + str(i)].getSelf()
)
self.locationInputs.append(
self.network.regions["locationInput_" + str(i)].getSelf()
)
self.L4Columns.append(
self.network.regions["L4Column_" + str(i)].getSelf()
)
self.L2Columns.append(
self.network.regions["L2Column_" + str(i)].getSelf()
)
self.L5Columns.append(
self.network.regions["L5Column_" + str(i)].getSelf()
)
self.L6Columns.append(
self.network.regions["L6Column_" + str(i)].getSelf()
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plotSuccessRate_varyNumColumns(noiseSigma, noiseEverywhere):
""" Run and plot the experiment, varying the number of cortical columns. """ |
#
# Run the experiment
#
noiseLevels = [x * 0.01 for x in xrange(0, 101, 5)]
l2Overrides = {"sampleSizeDistal": 20}
columnCounts = [1, 2, 3, 4]
results = defaultdict(list)
for trial in xrange(1):
print "trial", trial
objectDescriptions = createRandomObjectDescriptions(10, 10)
for numColumns in columnCounts:
print "numColumns", numColumns
for noiseLevel in noiseLevels:
r = doExperiment(numColumns, l2Overrides, objectDescriptions,
noiseLevel, noiseSigma, numInitialTraversals=6,
noiseEverywhere=noiseEverywhere)
results[(numColumns, noiseLevel)].extend(r)
#
# Plot it
#
numCorrectActiveThreshold = 30
numIncorrectActiveThreshold = 10
plt.figure()
colors = dict(zip(columnCounts,
('r', 'k', 'g', 'b')))
markers = dict(zip(columnCounts,
('o', '*', 'D', 'x')))
for numColumns in columnCounts:
y = []
for noiseLevel in noiseLevels:
trials = results[(numColumns, noiseLevel)]
numPassed = len([True for numCorrect, numIncorrect in trials
if numCorrect >= numCorrectActiveThreshold
and numIncorrect <= numIncorrectActiveThreshold])
y.append(numPassed / float(len(trials)))
plt.plot(noiseLevels, y,
color=colors[numColumns],
marker=markers[numColumns])
lgnd = plt.legend(["%d columns" % numColumns
for numColumns in columnCounts],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.0)
plt.xlabel("Mean feedforward noise level")
plt.xticks([0.01 * n for n in xrange(0, 101, 10)])
plt.ylabel("Success rate")
plt.yticks([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
plt.title("Inference with normally distributed noise (stdev=%.2f)" % noiseSigma)
plotPath = os.path.join("plots",
"successRate_varyColumnCount_sigma%.2f_%s.pdf"
% (noiseSigma, time.strftime("%Y%m%d-%H%M%S")))
plt.savefig(plotPath, bbox_extra_artists=(lgnd,), bbox_inches="tight")
print "Saved file %s" % plotPath |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def randomTraversal(sensations, numTraversals):
""" Given a list of sensations, return the SDRs that would be obtained by numTraversals random traversals of that set of sensations. Each sensation is a dict mapping cortical column index to a pair of SDR's (one location and one feature). """ |
newSensations = []
for _ in range(numTraversals):
s = copy.deepcopy(sensations)
random.shuffle(s)
newSensations += s
return newSensations |
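The traversal logic above is independent of the sensation format; this self-contained sketch (helper name `randomTraversalSketch` is ours) shows the shuffle-and-concatenate behavior with toy sensations:

```python
import copy
import random

def randomTraversalSketch(sensations, numTraversals):
    """Concatenate numTraversals independently shuffled copies of the
    sensation list (sketch of randomTraversal above)."""
    out = []
    for _ in range(numTraversals):
        s = copy.deepcopy(sensations)
        random.shuffle(s)
        out += s
    return out

sensations = [{0: ([1], [2])}, {0: ([3], [4])}, {0: ([5], [6])}]
result = randomTraversalSketch(sensations, 4)
```

The result contains each original sensation exactly `numTraversals` times, in a freshly shuffled order per traversal.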
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, xt1, yt1, xt, yt, theta1t1, theta2t1, theta1, theta2):
""" Accumulate the various inputs. """ |
dx = xt - xt1
dy = yt - yt1
if self.numPoints < self.maxPoints:
self.dxValues[self.numPoints,0] = dx
self.dxValues[self.numPoints,1] = dy
self.thetaValues[self.numPoints,0] = theta1
self.thetaValues[self.numPoints,1] = theta2
self.numPoints += 1
# print >>sys.stderr, "Xt's: ", xt1, yt1, xt, yt, "Delta's: ", dx, dy
# print >>sys.stderr, "Theta t-1: ", theta1t1, theta2t1, "t:",theta1, theta2
elif self.numPoints == self.maxPoints:
print >> sys.stderr,"Max points exceeded, analyzing ",self.maxPoints,"points only"
self.numPoints += 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bind(cell1, cell2, moduleDimensions):
"""Return transform index for given cells. Convert to coordinate space, calculate transform, and convert back to an index. In coordinate space, the transform represents `C2 - C1`. """ |
cell1Coords = np.unravel_index(cell1, moduleDimensions)
cell2Coords = np.unravel_index(cell2, moduleDimensions)
transformCoords = [(c2 - c1) % m
for c1, c2, m in itertools.izip(cell1Coords, cell2Coords,
moduleDimensions)]
return np.ravel_multi_index(transformCoords, moduleDimensions) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unbind(cell1, transform, moduleDimensions):
"""Return the cell index corresponding to the other half of the transform. Assumes that `transform = bind(cell1, cell2)` and, given `cell1` and `transform`, returns `cell2`. """ |
cell1Coords = np.unravel_index(cell1, moduleDimensions)
transformCoords = np.unravel_index(transform, moduleDimensions)
cell2Coords = [(t + c1) % m
for c1, t, m in itertools.izip(cell1Coords, transformCoords,
moduleDimensions)]
return np.ravel_multi_index(cell2Coords, moduleDimensions) |
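`bind` and `unbind` are inverses: `unbind(c1, bind(c1, c2)) == c2`. A self-contained sketch of the pair (written for Python 3, so `zip` replaces `itertools.izip`; helper names are ours):

```python
import numpy as np

def bindCells(cell1, cell2, dims):
    """Transform index representing C2 - C1 in module coordinates,
    wrapped modulo the module dimensions."""
    c1 = np.unravel_index(cell1, dims)
    c2 = np.unravel_index(cell2, dims)
    t = [(b - a) % m for a, b, m in zip(c1, c2, dims)]
    return np.ravel_multi_index(t, dims)

def unbindCells(cell1, transform, dims):
    """Recover cell2 from cell1 and the transform."""
    c1 = np.unravel_index(cell1, dims)
    t = np.unravel_index(transform, dims)
    c2 = [(x + a) % m for a, x, m in zip(c1, t, dims)]
    return np.ravel_multi_index(c2, dims)

dims = (4, 4)
t = bindCells(5, 11, dims)   # cell 5 -> (1, 1), cell 11 -> (2, 3)
```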
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def updatePlaceWeights(self):
""" We use a simplified version of Hebbian learning to learn place weights. Cells above the boost target are wired to the currently-active places, cells below it have their connection strength to them reduced. """ |
self.weightsPI += np.outer(self.activationsI - self.boostTarget,
self.activationsP)*self.dt*\
self.learnFactorP*self.learningRate
self.weightsPEL += np.outer(self.activationsEL - self.boostTarget,
self.activationsP)*self.dt*\
self.learnFactorP*self.learningRate
self.weightsPER += np.outer(self.activationsER - self.boostTarget,
self.activationsP)*self.dt*\
self.learnFactorP*self.learningRate
np.minimum(self.weightsPI, 1, self.weightsPI)
np.minimum(self.weightsPEL, 1, self.weightsPEL)
np.minimum(self.weightsPER, 1, self.weightsPER) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_union_mnist_dataset():
""" Create a UnionDataset composed of two versions of the MNIST datasets where each item in the dataset contains 2 distinct images superimposed """ |
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))])
mnist1 = datasets.MNIST('data', train=False, download=True, transform=transform)
data1 = zip(mnist1.test_data, mnist1.test_labels)
# Randomize second dataset
mnist2 = datasets.MNIST('data', train=False, download=True, transform=transform)
data2 = zip(mnist2.test_data, mnist2.test_labels)
random.shuffle(data2)
# Reorder images of second dataset with same label as first dataset
for i in range(len(data2)):
if data1[i][1] == data2[i][1]:
# Swap indices with same label to a location with different label
for j in range(len(data1)):
if data1[i][1] != data2[j][1] and data2[i][1] != data1[j][1]:
swap = data2[j]
data2[j] = data2[i]
data2[i] = swap
break
# Update second dataset with new item order
mnist2.test_data, mnist2.test_labels = zip(*data2)
# Combine the images of both datasets using the maximum value for each pixel
return UnionDataset(datasets=[mnist1, mnist2],
transform=lambda x, y: torch.max(x, y)) |
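The superimposition step in `UnionDataset` is a pixel-wise elementwise maximum; the same idea shown with numpy arrays standing in for image tensors (values are made up):

```python
import numpy as np

# Pixel-wise union of two images via elementwise max, analogous to the
# torch.max(x, y) transform passed to UnionDataset above.
img1 = np.array([[0.0, 0.8], [0.2, 0.0]])
img2 = np.array([[0.5, 0.1], [0.0, 0.9]])
union = np.maximum(img1, img2)
```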
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def noisy(pattern, noiseLevel, totalNumCells):
""" Generate a noisy copy of a pattern. Given number of active bits w = len(pattern), deactivate noiseLevel*w cells, and activate noiseLevel*w other cells. @param pattern (set) A set of active indices @param noiseLevel (float) The percentage of the bits to shuffle @param totalNumCells (int) The number of cells in the SDR, active and inactive @return (set) A noisy set of active indices """ |
n = int(noiseLevel * len(pattern))
noised = set(pattern)
noised.difference_update(random.sample(noised, n))
for _ in xrange(n):
while True:
v = random.randint(0, totalNumCells - 1)
if v not in pattern and v not in noised:
noised.add(v)
break
return noised |
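A Python 3 sketch of the same noising procedure (helper name `noisySketch` is ours; `random.sample` on a set is deprecated in Python 3.11, hence the `sorted` call). The size of the result and its overlap with the original pattern are fully determined by `noiseLevel`:

```python
import random

def noisySketch(pattern, noiseLevel, totalNumCells):
    """Deactivate int(noiseLevel * len(pattern)) active bits and
    activate the same number of previously inactive bits."""
    n = int(noiseLevel * len(pattern))
    noised = set(pattern)
    noised.difference_update(random.sample(sorted(noised), n))
    while len(noised) < len(pattern):
        v = random.randint(0, totalNumCells - 1)
        if v not in pattern and v not in noised:
            noised.add(v)
    return noised

pattern = set(range(0, 40, 2))            # 20 active bits
noised = noisySketch(pattern, 0.25, 100)  # flip 5 off, 5 new on
```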
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createRandomObjects(numObjects, locationsPerObject, featurePoolSize):
""" Generate random objects. @param numObjects (int) The number of objects to generate @param locationsPerObject (int) The number of points on each object @param featurePoolSize (int) The number of possible features @return For example, { 0: [0, 1, 2], 1: [0, 2, 1], 2: [2, 0, 1], } is 3 objects. The first object has Feature 0 and Location 0, Feature 1 at Location 1, Feature 2 at location 2, etc. """ |
allFeatures = range(featurePoolSize)
allLocations = range(locationsPerObject)
objects = dict((name,
[random.choice(allFeatures) for _ in xrange(locationsPerObject)])
for name in xrange(numObjects))
return objects |
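A compact Python 3 equivalent (helper name is ours) that produces the same structure, mapping each object name to one random feature id per location:

```python
import random

def createRandomObjectsSketch(numObjects, locationsPerObject, featurePoolSize):
    """Each object is a list of random feature ids, one per location."""
    return {name: [random.randrange(featurePoolSize)
                   for _ in range(locationsPerObject)]
            for name in range(numObjects)}

objects = createRandomObjectsSketch(3, 5, 7)
```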
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createL4L2Column(network, networkConfig, suffix=""):
""" Create a a single column containing one L4 and one L2. networkConfig is a dict that must contain the following keys (additional keys ok):
{ "enableFeedback": True, "externalInputSize": 1024, "sensorInputSize": 1024, "L4RegionType": "py.ApicalTMPairRegion", "L4Params": { <constructor parameters for the L4 region> }, "L2Params": { <constructor parameters for ColumnPoolerRegion> }, "lateralSPParams": { <constructor parameters for optional SPRegion> }, "feedForwardSPParams": { <constructor parameters for optional SPRegion> } } Region names are externalInput, sensorInput, L4Column, and ColumnPoolerRegion. Each name has an optional string suffix appended to it. Configuration options: "lateralSPParams" and "feedForwardSPParams" are optional. If included appropriate spatial pooler regions will be added to the network. If externalInputSize is 0, the externalInput sensor (and SP if appropriate) will NOT be created. In this case it is expected that L4 is a sequence memory region (e.g. ApicalTMSequenceRegion) """ |
externalInputName = "externalInput" + suffix
sensorInputName = "sensorInput" + suffix
L4ColumnName = "L4Column" + suffix
L2ColumnName = "L2Column" + suffix
L4Params = copy.deepcopy(networkConfig["L4Params"])
L4Params["basalInputWidth"] = networkConfig["externalInputSize"]
L4Params["apicalInputWidth"] = networkConfig["L2Params"]["cellCount"]
if networkConfig["externalInputSize"] > 0:
network.addRegion(
externalInputName, "py.RawSensor",
json.dumps({"outputWidth": networkConfig["externalInputSize"]}))
network.addRegion(
sensorInputName, "py.RawSensor",
json.dumps({"outputWidth": networkConfig["sensorInputSize"]}))
# Fixup network to include SP, if defined in networkConfig
if networkConfig["externalInputSize"] > 0:
_addLateralSPRegion(network, networkConfig, suffix)
_addFeedForwardSPRegion(network, networkConfig, suffix)
network.addRegion(
L4ColumnName, networkConfig["L4RegionType"],
json.dumps(L4Params))
network.addRegion(
L2ColumnName, "py.ColumnPoolerRegion",
json.dumps(networkConfig["L2Params"]))
# Set phases appropriately so regions are executed in the proper sequence
# This is required when we create multiple columns - the order of execution
# is not the same as the order of region creation.
if networkConfig["externalInputSize"] > 0:
network.setPhases(externalInputName,[0])
network.setPhases(sensorInputName,[0])
_setLateralSPPhases(network, networkConfig)
_setFeedForwardSPPhases(network, networkConfig)
# L4 and L2 regions always have phases 2 and 3, respectively
network.setPhases(L4ColumnName,[2])
network.setPhases(L2ColumnName,[3])
# Link SP region(s), if applicable
if networkConfig["externalInputSize"] > 0:
_linkLateralSPRegion(network, networkConfig, externalInputName, L4ColumnName)
_linkFeedForwardSPRegion(network, networkConfig, sensorInputName, L4ColumnName)
# Link L4 to L2
network.link(L4ColumnName, L2ColumnName, "UniformLink", "",
srcOutput="activeCells", destInput="feedforwardInput")
network.link(L4ColumnName, L2ColumnName, "UniformLink", "",
srcOutput="predictedActiveCells",
destInput="feedforwardGrowthCandidates")
# Link L2 feedback to L4
if networkConfig.get("enableFeedback", True):
network.link(L2ColumnName, L4ColumnName, "UniformLink", "",
srcOutput="feedForwardOutput", destInput="apicalInput",
propagationDelay=1)
# Link reset output to L2 and L4
network.link(sensorInputName, L2ColumnName, "UniformLink", "",
srcOutput="resetOut", destInput="resetIn")
network.link(sensorInputName, L4ColumnName, "UniformLink", "",
srcOutput="resetOut", destInput="resetIn")
enableProfiling(network)
return network |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createMultipleL4L2Columns(network, networkConfig):
""" Create a network consisting of multiple columns. Each column contains one L4 and one L2, is identical in structure to the network created by createL4L2Column. In addition all the L2 columns are fully connected to each other through their lateral inputs. Region names have a column number appended as in externalInput_0, externalInput_1, etc. networkConfig must be of the following format (see createL4L2Column for further documentation):
{ "networkType": "MultipleL4L2Columns", "numCorticalColumns": 3, "externalInputSize": 1024, "sensorInputSize": 1024, "L4Params": { <constructor parameters for ApicalTMPairRegion }, "L2Params": { <constructor parameters for ColumnPoolerRegion> }, "lateralSPParams": { <constructor parameters for optional SPRegion> }, "feedForwardSPParams": { <constructor parameters for optional SPRegion> } } """ |
# Create each column
numCorticalColumns = networkConfig["numCorticalColumns"]
for i in xrange(numCorticalColumns):
networkConfigCopy = copy.deepcopy(networkConfig)
layerConfig = networkConfigCopy["L2Params"]
layerConfig["seed"] = layerConfig.get("seed", 42) + i
layerConfig["numOtherCorticalColumns"] = numCorticalColumns - 1
suffix = "_" + str(i)
network = createL4L2Column(network, networkConfigCopy, suffix)
# Now connect the L2 columns laterally
for i in range(networkConfig["numCorticalColumns"]):
suffixSrc = "_" + str(i)
for j in range(networkConfig["numCorticalColumns"]):
if i != j:
suffixDest = "_" + str(j)
network.link(
"L2Column" + suffixSrc, "L2Column" + suffixDest,
"UniformLink", "",
srcOutput="feedForwardOutput", destInput="lateralInput",
propagationDelay=1)
enableProfiling(network)
return network |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createMultipleL4L2ColumnsWithTopology(network, networkConfig):
""" Create a network consisting of multiple columns. Each column contains one L4 and one L2, is identical in structure to the network created by createL4L2Column. In addition the L2 columns are connected to each other through their lateral inputs, based on the topological information provided. Region names have a column number appended as in externalInput_0, externalInput_1, etc. networkConfig must be of the following format (see createL4L2Column for further documentation):
{ "networkType": "MultipleL4L2Columns", "numCorticalColumns": 3, "externalInputSize": 1024, "sensorInputSize": 1024, "columnPositions": a list of 2D coordinates, one for each column. Used to calculate the connections between columns. By convention, coordinates are integers. "maxConnectionDistance": should be a value >= 1. Determines how distant of columns will be connected to each other. Useful specific values are 1 and 1.5, which typically create grids without and with diagonal connections, respectively. "longDistanceConnections": Should be a value in [0,1). This is the probability that a column forms a connection with a distant column (i.e. beyond its normal connection distance). If this value is not provided, it defaults to 0, and all connections will be in the local vicinity. "L4Params": { <constructor parameters for ApicalTMPairRegion> }, "L2Params": { <constructor parameters for ColumnPoolerRegion> }, "lateralSPParams": { <constructor parameters for optional SPRegion> }, "feedForwardSPParams": { <constructor parameters for optional SPRegion> } } """ |
numCorticalColumns = networkConfig["numCorticalColumns"]
output_lateral_connections = [[] for _ in xrange(numCorticalColumns)]
input_lateral_connections = [[] for _ in xrange(numCorticalColumns)]
# If no column positions are provided, create a grid by default.
# This is typically what the user wants, so it makes sense to have it as
# a default.
columnPositions = networkConfig.get("columnPositions", None)
if columnPositions is None:
columnPositions = []
side_length = int(numpy.ceil(numpy.sqrt(numCorticalColumns)))
for i in range(side_length):
for j in range(side_length):
columnPositions.append((i, j))
columnPositions = columnPositions[:numCorticalColumns]
# Determine which columns will be mutually connected.
# This has to be done before the actual creation of the network, as each
# individual column need to know how many columns it is laterally connected
# to. These results are then used to actually connect the columns, once
# the network is created. It's awkward, but unavoidable.
longDistanceConnections = networkConfig.get("longDistanceConnections", 0.)
for i, src_pos in enumerate(columnPositions):
for j, dest_pos in enumerate(columnPositions):
if i != j:
if (numpy.linalg.norm(numpy.asarray(src_pos) -
numpy.asarray(dest_pos)) <=
networkConfig["maxConnectionDistance"] or
numpy.random.rand() < longDistanceConnections):
output_lateral_connections[i].append(j)
input_lateral_connections[j].append(i)
# Create each column
for i in xrange(numCorticalColumns):
networkConfigCopy = copy.deepcopy(networkConfig)
layerConfig = networkConfigCopy["L2Params"]
layerConfig["seed"] = layerConfig.get("seed", 42) + i
layerConfig["numOtherCorticalColumns"] = len(input_lateral_connections[i])
suffix = "_" + str(i)
network = createL4L2Column(network, networkConfigCopy, suffix)
# Now connect the L2 columns laterally
for i, connections in enumerate(output_lateral_connections):
suffixSrc = "_" + str(i)
for j in connections:
suffixDest = "_" + str(j)
network.link("L2Column" + suffixSrc, "L2Column" + suffixDest,
"UniformLink", "", srcOutput="feedForwardOutput",
destInput="lateralInput", propagationDelay=1)
enableProfiling(network)
return network |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def loadExperimentData(folder, area):
""" Loads the experiment's data from a MATLAB file into a python friendly structure. :param folder: Experiment data folder :param area: Experiament area to load. It should be 'V1' or 'AL' :return: The data as scipy matlab structure with the following fields: :Spiketrain.st: spike timing during stimulus (grating or naturalistic movie). :Spiketrain.st_gray: the spike timing during gray screen. The unit for spike timing is sampling frame. And spike timing is a 20X3 cell (corresponding to 20 repeats, and 3 stimulus (grating, and two naturalistic stimuli)). The same for the spike timing of gray screen. :imgPara.stim_type: 3 type of stimulus, grating and two naturalistic stimuli. :imgPara.stim_time: the length of each stimulus is 32 sec. :imgPara.updatefr: the frame rate of stimulus on screen is 60 Hz. :imgPara.intertime: the time between two stimulus, or gray screen is 8 sec. :imgPara.dt: the sample rate, ~0.075 :imgPara.F: number of sampling frames during stimulus is 32/0.075~=426 :imgPara.F_gray: number of sampling frames during gray screen is 8/0.075~=106 :ROI: the location of each neuron in the field. """ |
filename = os.path.join(folder, "Combo3_{}.mat".format(area))
contents = sio.loadmat(filename, variable_names=['data'],
struct_as_record=False, squeeze_me=True)
return contents['data'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def classifierPredict(testVector, storedVectors):
""" Return overlap of the testVector with stored representations for each object. """ |
numClasses = storedVectors.shape[0]
output = np.zeros((numClasses,))
for i in range(numClasses):
output[i] = np.sum(np.minimum(testVector, storedVectors[i, :]))
return output |
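For binary vectors, summing element-wise minima is just counting shared active bits. A minimal pure-Python sketch of the same overlap classifier (`classifier_predict` is a hypothetical list-based stand-in for the NumPy version above):

```python
def classifier_predict(test_vector, stored_vectors):
    # Overlap = sum of element-wise minima. For binary SDRs this simply
    # counts the active bits shared with each stored representation.
    return [sum(min(t, s) for t, s in zip(test_vector, row))
            for row in stored_vectors]

# The test vector shares 1 bit with class 0 and 2 bits with class 1.
scores = classifier_predict([1, 0, 1, 1], [[1, 1, 0, 0], [1, 0, 1, 0]])
```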
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_multiple_column_experiment():
""" Compare the ideal observer against a multi-column sensorimotor network. """ |
# Create the objects
featureRange = [5, 10, 20, 30]
pointRange = 1
objectRange = [100]
numLocations = [10]
numPoints = 10
numTrials = 10
columnRange = [1, 2, 3, 4, 5, 6, 7, 8]
useLocation = 1
resultsDir = os.path.dirname(os.path.realpath(__file__))
args = []
for c in reversed(columnRange):
for o in reversed(objectRange):
for l in numLocations:
for f in featureRange:
for t in range(numTrials):
args.append(
{"numObjects": o,
"numLocations": l,
"numFeatures": f,
"numColumns": c,
"trialNum": t,
"pointRange": pointRange,
"numPoints": numPoints,
"useLocation": useLocation
}
)
print "Number of experiments:",len(args)
idealResultsFile = os.path.join(resultsDir,
"ideal_multi_column_useLocation_{}.pkl".format(useLocation))
pool = Pool(processes=cpu_count())
result = pool.map(run_ideal_classifier, args)
# Pickle results for later use
with open(idealResultsFile, "wb") as f:
cPickle.dump(result, f)
htmResultsFile = os.path.join(resultsDir, "column_convergence_results.pkl")
runExperimentPool(
numObjects=objectRange,
numLocations=[10],
numFeatures=featureRange,
numColumns=columnRange,
numPoints=10,
nTrials=numTrials,
numWorkers=cpu_count(),
resultsName=htmResultsFile)
with open(htmResultsFile, "rb") as f:
results = cPickle.load(f)
with open(idealResultsFile, "rb") as f:
resultsIdeal = cPickle.load(f)
plt.figure()
plotConvergenceByColumn(results, columnRange, featureRange, numTrials)
plotConvergenceByColumn(resultsIdeal, columnRange, featureRange, numTrials,
"--")
plt.savefig('plots/ideal_observer_multiple_column.pdf') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(callLog, logFilename):
""" Save the call log history into this file. @param logFilename (path) Filename in which to save a pickled version of the call logs. """ |
with open(logFilename, "wb") as outp:
cPickle.dump(callLog, outp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _getDefaultCombinedL4Params(self, numInputBits, inputSize, numExternalInputBits, externalInputSize, L2CellCount):
""" Returns a good default set of parameters to use in a combined L4 region. """ |
sampleSize = numExternalInputBits + numInputBits
activationThreshold = int(max(numExternalInputBits, numInputBits) * .6)
minThreshold = activationThreshold
return {
"columnCount": inputSize,
"cellsPerColumn": 16,
"learn": True,
"learnOnOneCell": False,
"initialPermanence": 0.41,
"connectedPermanence": 0.6,
"permanenceIncrement": 0.1,
"permanenceDecrement": 0.02,
"minThreshold": minThreshold,
"basalPredictedSegmentDecrement": 0.001,
"apicalPredictedSegmentDecrement": 0.0,
"reducedBasalThreshold": int(activationThreshold*0.6),
"activationThreshold": activationThreshold,
"sampleSize": sampleSize,
"implementation": "ApicalTiebreak",
"seed": self.seed,
"basalInputWidth": inputSize*16 + externalInputSize,
"apicalInputWidth": L2CellCount,
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def findBinomialNsWithExpectedSampleMinimum(desiredValuesSorted, p, numSamples, nMax):
""" For each desired value, find an approximate n for which the sample minimum has a expected value equal to this value. For each value, find an adjacent pair of n values whose expected sample minima are below and above the desired value, respectively, and return a linearly-interpolated n between these two values. @param p (float) The p if the binomial distribution. @param numSamples (int) The number of samples in the sample minimum distribution. @return A list of results. Each result contains (interpolated_n, lower_value, upper_value). where each lower_value and upper_value are the expected sample minimum for floor(interpolated_n) and ceil(interpolated_n) """ |
# mapping from n -> expected value
actualValues = [
getExpectedValue(
SampleMinimumDistribution(numSamples,
BinomialDistribution(n, p, cache=True)))
for n in xrange(nMax + 1)]
results = []
n = 0
for desiredValue in desiredValuesSorted:
while n + 1 <= nMax and actualValues[n + 1] < desiredValue:
n += 1
if n + 1 > nMax:
break
interpolated = n + ((desiredValue - actualValues[n]) /
(actualValues[n+1] - actualValues[n]))
result = (interpolated, actualValues[n], actualValues[n + 1])
results.append(result)
return results |
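The search-and-interpolate step can be isolated into a small helper. This sketch assumes `actual_values` is monotonically increasing in n, as the expected sample minimum is; the helper name is an assumption for illustration:

```python
def interpolate_n(actual_values, desired_value):
    # Walk to the last n whose expected value is still below the target,
    # then linearly interpolate between n and n + 1, mirroring the loop
    # in findBinomialNsWithExpectedSampleMinimum.
    n = 0
    while n + 1 < len(actual_values) and actual_values[n + 1] < desired_value:
        n += 1
    lower, upper = actual_values[n], actual_values[n + 1]
    return (n + (desired_value - lower) / (upper - lower), lower, upper)

# Target 3.0 falls between n=2 (value 2.0) and n=3 (value 4.0).
result = interpolate_n([0.0, 1.0, 2.0, 4.0], 3.0)
```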
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def findBinomialNsWithLowerBoundSampleMinimum(confidence, desiredValuesSorted, p, numSamples, nMax):
""" For each desired value, find an approximate n for which the sample minimum has a probabilistic lower bound equal to this value. For each value, find an adjacent pair of n values whose lower bound sample minima are below and above the desired value, respectively, and return a linearly-interpolated n between these two values. @param confidence (float) For the probabilistic lower bound, this specifies the probability. If this is 0.8, that means that there's an 80% chance that the sample minimum is >= the desired value, and 20% chance that it's < the desired value. @param p (float) The p if the binomial distribution. @param numSamples (int) The number of samples in the sample minimum distribution. @return A list of results. Each result contains (interpolated_n, lower_value, upper_value). where each lower_value and upper_value are the probabilistic lower bound sample minimum for floor(interpolated_n) and ceil(interpolated_n) respectively. """ |
def P(n, numOccurrences):
"""
Given n, return the probability that the sample minimum is >= numOccurrences
"""
return 1 - SampleMinimumDistribution(numSamples, BinomialDistribution(n, p)).cdf(
numOccurrences - 1)
results = []
n = 0
for desiredValue in desiredValuesSorted:
while n + 1 <= nMax and P(n + 1, desiredValue) < confidence:
n += 1
if n + 1 > nMax:
break
left = P(n, desiredValue)
right = P(n + 1, desiredValue)
interpolated = n + ((confidence - left) /
(right - left))
result = (interpolated, left, right)
results.append(result)
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tempoAdjust1(self, tempoFactor):
""" Adjust tempo based on recent active apical input only :param tempoFactor: scaling signal to MC clock from last sequence item :return: adjusted scaling signal """ |
if self.apicalIntersect.any():
tempoFactor = tempoFactor * 0.5
else:
tempoFactor = tempoFactor * 2
return tempoFactor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tempoAdjust2(self, tempoFactor):
""" Adjust tempo by aggregating active basal cell votes for pre vs. post :param tempoFactor: scaling signal to MC clock from last sequence item :return: adjusted scaling signal """ |
late_votes = (len(self.adtm.getNextBasalPredictedCells()) - len(self.apicalIntersect)) * -1
early_votes = len(self.apicalIntersect)
votes = late_votes + early_votes
print 'vote tally', votes
if votes > 0:
tempoFactor = tempoFactor * 0.5
print 'speed up'
elif votes < 0:
tempoFactor = tempoFactor * 2
print 'slow down'
elif votes == 0:
print 'pick randomly'
if random.random() > 0.5:
tempoFactor = tempoFactor * 0.5
print 'random pick: speed up'
else:
tempoFactor = tempoFactor * 2
print 'random pick: slow down'
return tempoFactor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _countWhereGreaterEqualInRows(sparseMatrix, rows, threshold):
""" Like countWhereGreaterOrEqual, but for an arbitrary selection of rows, and without any column filtering. """ |
return sum(sparseMatrix.countWhereGreaterOrEqual(row, row+1,
0, sparseMatrix.nCols(),
threshold)
for row in rows) |
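The same count can be written against dense nested lists. A pure-Python analogue of the SparseMatrix helper (the function name mirrors the one above; the matrix values are illustrative):

```python
def count_where_greater_equal_in_rows(matrix, rows, threshold):
    # Dense-list analogue of _countWhereGreaterEqualInRows: count entries
    # >= threshold, restricted to the selected rows only.
    return sum(1 for r in rows for v in matrix[r] if v >= threshold)

# Rows 0 and 2 contribute 1 and 2 qualifying entries respectively.
permanences = [[0.1, 0.6], [0.7, 0.2], [0.9, 0.8]]
count = count_where_greater_equal_in_rows(permanences, [0, 2], 0.5)
```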
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, feedforwardInput=(), lateralInputs=(), feedforwardGrowthCandidates=None, learn=True, predictedInput = None,):
""" Runs one time step of the column pooler algorithm. @param feedforwardInput (sequence) Sorted indices of active feedforward input bits @param lateralInputs (list of sequences) For each lateral layer, a list of sorted indices of active lateral input bits @param feedforwardGrowthCandidates (sequence or None) Sorted indices of feedforward input bits that active cells may grow new synapses to. If None, the entire feedforwardInput is used. @param learn (bool) If True, we are learning a new object @param predictedInput (sequence) Sorted indices of predicted cells in the TM layer. """ |
if feedforwardGrowthCandidates is None:
feedforwardGrowthCandidates = feedforwardInput
# inference step
if not learn:
self._computeInferenceMode(feedforwardInput, lateralInputs)
# learning step
elif not self.onlineLearning:
self._computeLearningMode(feedforwardInput, lateralInputs,
feedforwardGrowthCandidates)
# online learning step
else:
if (predictedInput is not None and
len(predictedInput) > self.predictedInhibitionThreshold):
predictedActiveInput = numpy.intersect1d(feedforwardInput,
predictedInput)
predictedGrowthCandidates = numpy.intersect1d(
feedforwardGrowthCandidates, predictedInput)
self._computeInferenceMode(predictedActiveInput, lateralInputs)
self._computeLearningMode(predictedActiveInput, lateralInputs,
feedforwardGrowthCandidates)
elif not self.minSdrSize <= len(self.activeCells) <= self.maxSdrSize:
# If the pooler doesn't have a single representation, try to infer one,
# before actually attempting to learn.
self._computeInferenceMode(feedforwardInput, lateralInputs)
self._computeLearningMode(feedforwardInput, lateralInputs,
feedforwardGrowthCandidates)
else:
# If there isn't predicted input and we have a single SDR,
# we are extending that representation and should just learn.
self._computeLearningMode(feedforwardInput, lateralInputs,
feedforwardGrowthCandidates) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def numberOfConnectedProximalSynapses(self, cells=None):
""" Returns the number of proximal connected synapses on these cells. Parameters: @param cells (iterable) Indices of the cells. If None return count for all cells. """ |
if cells is None:
cells = xrange(self.numberOfCells())
return _countWhereGreaterEqualInRows(self.proximalPermanences, cells,
self.connectedPermanenceProximal) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def numberOfProximalSynapses(self, cells=None):
""" Returns the number of proximal synapses with permanence>0 on these cells. Parameters: @param cells (iterable) Indices of the cells. If None return count for all cells. """ |
if cells is None:
cells = xrange(self.numberOfCells())
n = 0
for cell in cells:
n += self.proximalPermanences.nNonZerosOnRow(cell)
return n |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def numberOfDistalSegments(self, cells=None):
""" Returns the total number of distal segments for these cells. A segment "exists" if its row in the matrix has any permanence values > 0. Parameters: @param cells (iterable) Indices of the cells """ |
if cells is None:
cells = xrange(self.numberOfCells())
n = 0
for cell in cells:
if self.internalDistalPermanences.nNonZerosOnRow(cell) > 0:
n += 1
for permanences in self.distalPermanences:
if permanences.nNonZerosOnRow(cell) > 0:
n += 1
return n |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def numberOfConnectedDistalSynapses(self, cells=None):
""" Returns the number of connected distal synapses on these cells. Parameters: @param cells (iterable) Indices of the cells. If None return count for all cells. """ |
if cells is None:
cells = xrange(self.numberOfCells())
n = _countWhereGreaterEqualInRows(self.internalDistalPermanences, cells,
self.connectedPermanenceDistal)
for permanences in self.distalPermanences:
n += _countWhereGreaterEqualInRows(permanences, cells,
self.connectedPermanenceDistal)
return n |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _learn(# mutated args permanences, rng, # activity activeCells, activeInput, growthCandidateInput, # configuration sampleSize, initialPermanence, permanenceIncrement, permanenceDecrement, connectedPermanence):
""" For each active cell, reinforce active synapses, punish inactive synapses, and grow new synapses to a subset of the active input bits that the cell isn't already connected to. Parameters: @param permanences (SparseMatrix) Matrix of permanences, with cells as rows and inputs as columns @param rng (Random) Random number generator @param activeCells (sorted sequence) Sorted list of the cells that are learning @param activeInput (sorted sequence) Sorted list of active bits in the input @param growthCandidateInput (sorted sequence) Sorted list of active bits in the input that the activeCells may grow new synapses to For remaining parameters, see the __init__ docstring. """ |
permanences.incrementNonZerosOnOuter(
activeCells, activeInput, permanenceIncrement)
permanences.incrementNonZerosOnRowsExcludingCols(
activeCells, activeInput, -permanenceDecrement)
permanences.clipRowsBelowAndAbove(
activeCells, 0.0, 1.0)
if sampleSize == -1:
permanences.setZerosOnOuter(
activeCells, activeInput, initialPermanence)
else:
existingSynapseCounts = permanences.nNonZerosPerRowOnCols(
activeCells, activeInput)
maxNewByCell = numpy.empty(len(activeCells), dtype="int32")
numpy.subtract(sampleSize, existingSynapseCounts, out=maxNewByCell)
permanences.setRandomZerosOnOuter(
activeCells, growthCandidateInput, maxNewByCell, initialPermanence, rng) |
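The first three SparseMatrix calls in `_learn` (reinforce, punish, clip) have a straightforward dense equivalent. This is a sketch only, using plain nested lists; synapse growth via `setRandomZerosOnOuter` is deliberately omitted:

```python
def learn_step(permanences, active_cells, active_input,
               increment=0.1, decrement=0.02):
    # Reinforce existing synapses onto active inputs, punish existing
    # synapses onto inactive inputs, and clip permanences to [0, 1].
    active = set(active_input)
    for cell in active_cells:
        row = permanences[cell]
        for j, p in enumerate(row):
            if p > 0.0:  # only existing (nonzero) synapses are adjusted
                delta = increment if j in active else -decrement
                row[j] = min(1.0, max(0.0, p + delta))

# One cell with synapses to inputs 0 and 1; only input 0 is active.
perms = [[0.5, 0.5, 0.0]]
learn_step(perms, active_cells=[0], active_input=[0])
```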
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def runExperiment(n, w, threshold, cellsPerColumn, folder, numTrials=5, cleverTMSDRs=False):
""" Run a PoolOfPairsLocation1DExperiment various union sizes. """ |
if not os.path.exists(folder):
try:
os.makedirs(folder)
except OSError:
# Multiple parallel tasks might create the folder. That's fine.
pass
filename = "{}/n_{}_w_{}_threshold_{}_cellsPerColumn_{}.json".format(
folder, n, w, threshold, cellsPerColumn)
if len(glob.glob(filename)) == 0:
print("Starting: {}/n_{}_w_{}_threshold_{}_cellsPerColumn_{}".format(
folder, n, w, threshold, cellsPerColumn))
result = defaultdict(list)
for _ in xrange(numTrials):
exp = PoolOfPairsLocation1DExperiment(**{
"numMinicolumns": n,
"numActiveMinicolumns": w,
"poolingThreshold": threshold,
"cellsPerColumn": cellsPerColumn,
"minicolumnSDRs": generateMinicolumnSDRs(n=n, w=w, threshold=threshold),
})
if cleverTMSDRs:
exp.trainWithSpecificPairSDRs(carefullyCollideContexts(
numContexts=25, numCells=cellsPerColumn, numMinicolumns = n))
else:
exp.train()
for unionSize in [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25]:
additionalSDRCounts = exp.testInferenceOnUnions(unionSize)
result[unionSize] += additionalSDRCounts
with open(filename, "w") as fOut:
json.dump(sorted(result.items(), key=lambda x: x[0]),
fOut)
print("Wrote:", filename) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def train(self):
""" Train the pair layer and pooling layer. """ |
for iDriving, cDriving in enumerate(self.drivingOperandSDRs):
minicolumnSDR = self.minicolumnSDRs[iDriving]
self.pairLayerProximalConnections.associate(minicolumnSDR, cDriving)
for iContext, cContext in enumerate(self.contextOperandSDRs):
iResult = (iContext + iDriving) % self.numLocations
cResult = self.resultSDRs[iResult]
self.pairLayer.compute(minicolumnSDR, basalInput=cContext)
cPair = self.pairLayer.getWinnerCells()
self.poolingLayer.associate(cResult, cPair) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_noise_experiment(num_neurons = 1, a = 128, dim = 6000, test_noise_levels = range(15, 100, 5), num_samples = 500, num_dendrites = 500, dendrite_length = 30, theta = 8, num_trials = 100):
""" Tests the impact of noise on a neuron, using an HTM approach to a P&M model of a neuron. Nonlinearity is a simple threshold at theta, as in the original version of this experiment, and each dendrite is bound by the initialization to a single pattern. Only one neuron is used, unlike in the P&M classification experiment, and a successful identification is simply defined as at least one dendrite having theta active synapses. Training is done via HTM-style initialization. In the event that the init fails to produce an error rate of 0 without noise (which anecdotally never occurs), we simple reinitialize. Results are saved to the file noise_FN_{theta}.txt. This corresponds to the false negative vs. noise level figure in the paper. To generate the results shown, we used theta = 8, theta = 12 and theta = 16, with noise levels in range(15, 85, 5), 500 dendrites and 30 synapses per dendrite. We generated 500 sample SDRs, one per dendrite, and ran 100 trials at each noise level. Each SDR had a = 128, dim = 6000. """ |
nonlinearity = threshold_nonlinearity(theta)
for noise in test_noise_levels:
fps = []
fns = []
for trial in range(num_trials):
successful_initialization = False
while not successful_initialization:
neuron = Neuron(size = dendrite_length*num_dendrites, num_dendrites = num_dendrites, dendrite_length = dendrite_length, dim = dim, nonlinearity = nonlinearity)
data = generate_evenly_distributed_data_sparse(dim = dim, num_active = a, num_samples = num_samples)
labels = [1 for i in range(num_samples)]
neuron.HTM_style_initialize_on_data(data, labels)
error, fp, fn = get_error(data, labels, [neuron])
print "Initialization error is {}, with {} false positives and {} false negatives".format(error, fp, fn)
if error == 0:
successful_initialization = True
else:
print "Repeating to get a successful initialization"
apply_noise(data, noise)
error, fp, fn = get_error(data, labels, [neuron])
fps.append(fp)
fns.append(fn)
print "Error at noise {} is {}, with {} false positives and {} false negatives".format(noise, error, fp, fn)
with open("noise_FN_{}.txt".format(theta), "a") as f:
f.write(str(noise) + ", " + str(numpy.sum(fns)) + ", " + str(num_trials*num_samples) + "\n") |
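One plausible reading of the `apply_noise` step on a sparse SDR is to replace a percentage of the active bits with randomly chosen inactive bits. The helper below is an assumption for illustration (`apply_noise_to_sdr` and its signature are hypothetical, not the library's API):

```python
import random

def apply_noise_to_sdr(active_bits, dim, noise_pct, rng):
    # Replace noise_pct percent of the active bits with bits drawn from
    # the inactive part of the input space; return the noisy SDR sorted.
    active = list(active_bits)
    n_flip = int(len(active) * noise_pct / 100.0)
    inactive = sorted(set(range(dim)) - set(active))
    for i in rng.sample(range(len(active)), n_flip):
        active[i] = rng.choice(inactive)
    return sorted(set(active))

# 25% noise on a 128-bit SDR flips 32 bits out of the original set.
noisy = apply_noise_to_sdr(range(128), dim=6000, noise_pct=25,
                           rng=random.Random(0))
```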
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reset(self, params, repetition):
"""Called at the beginning of each experiment and each repetition""" |
pprint.pprint(params)
self.initialize(params, repetition)
# Load CIFAR dataset
dataDir = params.get('dataDir', 'data')
self.transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
self.trainset = datasets.CIFAR10(root=dataDir, train=True, download=True,
transform=self.transform_train)
self.createModel(params, repetition)
print("Torch reports", torch.cuda.device_count(), "GPUs available")
if torch.cuda.device_count() > 1:
self.model = torch.nn.DataParallel(self.model)
self.model.to(self.device)
self.optimizer = self.createOptimizer(self.model)
self.lr_scheduler = self.createLearningRateScheduler(self.optimizer)
self.test_loaders = self.createTestLoaders(self.noise_values) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def killCellRegion(self, centerColumn, radius):
""" Kill cells around a centerColumn, within radius """ |
self.deadCols = topology.wrappingNeighborhood(centerColumn,
radius,
self._columnDimensions)
self.deadColumnInputSpan = self.getConnectedSpan(self.deadCols)
self.removeDeadColumns() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, inputVector, learn, activeArray):
""" This is the primary public method of the SpatialPooler class. This function takes a input vector and outputs the indices of the active columns. If 'learn' is set to True, this method also updates the permanences of the columns. @param inputVector: A numpy array of 0's and 1's that comprises the input to the spatial pooler. The array will be treated as a one dimensional array, therefore the dimensions of the array do not have to match the exact dimensions specified in the class constructor. In fact, even a list would suffice. The number of input bits in the vector must, however, match the number of bits specified by the call to the constructor. Therefore there must be a '0' or '1' in the array for every input bit. @param learn: A boolean value indicating whether learning should be performed. Learning entails updating the permanence values of the synapses, and hence modifying the 'state' of the model. Setting learning to 'off' freezes the SP and has many uses. For example, you might want to feed in various inputs and examine the resulting SDR's. @param activeArray: An array whose size is equal to the number of columns. Before the function returns this array will be populated with 1's at the indices of the active columns, and 0's everywhere else. """ |
if not isinstance(inputVector, numpy.ndarray):
raise TypeError("Input vector must be a numpy array, not %s" %
str(type(inputVector)))
if inputVector.size != self._numInputs:
raise ValueError(
"Input vector dimensions don't match. Expecting %s but got %s" % (
inputVector.size, self._numInputs))
self._updateBookeepingVars(learn)
inputVector = numpy.array(inputVector, dtype=realDType)
inputVector = inputVector.reshape(-1)
self._overlaps = self._calculateOverlap(inputVector)
# self._overlaps[self.deadCols] = 0
# Apply boosting when learning is on
if learn:
self._boostedOverlaps = self._boostFactors * self._overlaps
else:
self._boostedOverlaps = self._overlaps
# Apply inhibition to determine the winning columns
activeColumns = self._inhibitColumns(self._boostedOverlaps)
if learn:
self._adaptSynapses(inputVector, activeColumns)
self._updateDutyCycles(self._overlaps, activeColumns)
self._bumpUpWeakColumns()
self._updateTargetActivityDensity()
self._updateBoostFactors()
if self._isUpdateRound():
self._updateInhibitionRadius()
self._updateMinDutyCycles()
# self.growRandomSynapses()
activeArray.fill(0)
activeArray[activeColumns] = 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getConstructorArguments():
""" Return constructor argument associated with ColumnPooler. @return defaults (list) a list of args and default values for each argument """ |
argspec = inspect.getargspec(ColumnPooler.__init__)
return argspec.args[1:], argspec.defaults |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def initialize(self):
""" Initialize the internal objects. """ |
if self._pooler is None:
params = {
"inputWidth": self.inputWidth,
"lateralInputWidths": [self.cellCount] * self.numOtherCorticalColumns,
"cellCount": self.cellCount,
"sdrSize": self.sdrSize,
"onlineLearning": self.onlineLearning,
"maxSdrSize": self.maxSdrSize,
"minSdrSize": self.minSdrSize,
"synPermProximalInc": self.synPermProximalInc,
"synPermProximalDec": self.synPermProximalDec,
"initialProximalPermanence": self.initialProximalPermanence,
"minThresholdProximal": self.minThresholdProximal,
"sampleSizeProximal": self.sampleSizeProximal,
"connectedPermanenceProximal": self.connectedPermanenceProximal,
"predictedInhibitionThreshold": self.predictedInhibitionThreshold,
"synPermDistalInc": self.synPermDistalInc,
"synPermDistalDec": self.synPermDistalDec,
"initialDistalPermanence": self.initialDistalPermanence,
"activationThresholdDistal": self.activationThresholdDistal,
"sampleSizeDistal": self.sampleSizeDistal,
"connectedPermanenceDistal": self.connectedPermanenceDistal,
"inertiaFactor": self.inertiaFactor,
"seed": self.seed,
}
self._pooler = ColumnPooler(**params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, inputs, outputs):
""" Run one iteration of compute. Note that if the reset signal is True (1) we assume this iteration represents the *end* of a sequence. The output will contain the representation to this point and any history will then be reset. The output at the next compute will start fresh, presumably with bursting columns. """ |
# Handle reset first (should be sent with an empty signal)
if "resetIn" in inputs:
assert len(inputs["resetIn"]) == 1
if inputs["resetIn"][0] != 0:
# send empty output
self.reset()
outputs["feedForwardOutput"][:] = 0
outputs["activeCells"][:] = 0
return
feedforwardInput = numpy.asarray(inputs["feedforwardInput"].nonzero()[0],
dtype="uint32")
if "feedforwardGrowthCandidates" in inputs:
feedforwardGrowthCandidates = numpy.asarray(
inputs["feedforwardGrowthCandidates"].nonzero()[0], dtype="uint32")
else:
feedforwardGrowthCandidates = feedforwardInput
if "lateralInput" in inputs:
lateralInputs = tuple(numpy.asarray(singleInput.nonzero()[0],
dtype="uint32")
for singleInput
in numpy.split(inputs["lateralInput"],
self.numOtherCorticalColumns))
else:
lateralInputs = ()
if "predictedInput" in inputs:
predictedInput = numpy.asarray(
inputs["predictedInput"].nonzero()[0], dtype="uint32")
else:
predictedInput = None
# Send the inputs into the Column Pooler.
self._pooler.compute(feedforwardInput, lateralInputs,
feedforwardGrowthCandidates, learn=self.learningMode,
predictedInput = predictedInput)
# Extract the active / predicted cells and put them into binary arrays.
outputs["activeCells"][:] = 0
outputs["activeCells"][self._pooler.getActiveCells()] = 1
# Send appropriate output to feedForwardOutput.
if self.defaultOutputType == "active":
outputs["feedForwardOutput"][:] = outputs["activeCells"]
else:
raise Exception("Unknown outputType: " + self.defaultOutputType) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def progress(params, rep):
""" Helper function to calculate the progress made on one experiment. """ |
name = params['name']
fullpath = os.path.join(params['path'], params['name'])
logname = os.path.join(fullpath, '%i.log'%rep)
if os.path.exists(logname):
logfile = open(logname, 'r')
lines = logfile.readlines()
logfile.close()
return int(100 * len(lines) / params['iterations'])
else:
return 0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_param_to_dirname(param):
""" Helper function to convert a parameter value to a valid directory name. """ |
if type(param) == types.StringType:
return param
else:
return re.sub("0+$", '0', '%f'%param) |
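As a sanity check on the regex above: it collapses the run of trailing zeros produced by `'%f'` formatting down to a single zero. A Python 3-compatible sketch of the same behavior:

```python
import re

def convert_param_to_dirname(param):
    # Strings pass through; numbers are formatted with '%f' and their
    # trailing zeros are collapsed to a single '0'.
    if isinstance(param, str):
        return param
    return re.sub("0+$", "0", "%f" % param)

# '%f' % 0.25 -> '0.250000', which collapses to '0.250'
# '%f' % 1.0  -> '1.000000', which collapses to '1.0'
```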
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_opt(self):
""" parses the command line options for different settings. """ |
optparser = optparse.OptionParser()
optparser.add_option('-c', '--config',
action='store', dest='config', type='string', default='experiments.cfg',
help="your experiments config file")
optparser.add_option('-n', '--numcores',
action='store', dest='ncores', type='int', default=cpu_count(),
help="number of processes you want to use, default is %i"%cpu_count())
optparser.add_option('-d', '--del',
action='store_true', dest='delete', default=False,
help="delete experiment folder if it exists")
optparser.add_option('-e', '--experiment',
action='append', dest='experiments', type='string',
help="run only selected experiments, by default run all experiments in config file.")
optparser.add_option('-b', '--browse',
action='store_true', dest='browse', default=False,
help="browse existing experiments.")
optparser.add_option('-B', '--Browse',
action='store_true', dest='browse_big', default=False,
help="browse existing experiments, more verbose than -b")
optparser.add_option('-p', '--progress',
action='store_true', dest='progress', default=False,
help="like browse, but only shows name and progress bar")
options, args = optparser.parse_args()
self.options = options
return options, args |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_cfg(self):
""" parses the given config file for experiments. """ |
self.cfgparser = ConfigParser()
if not self.cfgparser.read(self.options.config):
raise SystemExit('config file %s not found.'%self.options.config)
# Change the current working directory to be relative to 'experiments.cfg'
projectDir = os.path.dirname(self.options.config)
projectDir = os.path.abspath(projectDir)
os.chdir(projectDir) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mkdir(self, path):
""" create a directory if it does not exist. """ |
if not os.path.exists(path):
os.makedirs(path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_config_file(self, params, path):
""" write a config file for this single exp in the folder path. """ |
cfgp = ConfigParser()
cfgp.add_section(params['name'])
for p in params:
if p == 'name':
continue
cfgp.set(params['name'], p, params[p])
f = open(os.path.join(path, 'experiment.cfg'), 'w')
cfgp.write(f)
f.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_history(self, exp, rep, tags):
""" returns the whole history for one experiment and one repetition. tags can be a string or a list of strings. if tags is a string, the history is returned as list of values, if tags is a list of strings or 'all', history is returned as a dictionary of lists of values. """ |
params = self.get_params(exp)
if params is None:
raise SystemExit('experiment %s not found.'%exp)
# make list of tags, even if it is only one
if tags != 'all' and not hasattr(tags, '__iter__'):
tags = [tags]
results = {}
logfile = os.path.join(exp, '%i.log'%rep)
try:
f = open(logfile)
except IOError:
if len(tags) == 1:
return []
else:
return {}
for line in f:
dic = json.loads(line)
for tag in tags:
if not tag in results:
results[tag] = []
if tag in dic:
results[tag].append(dic[tag])
else:
results[tag].append(None)
f.close()
if len(results) == 0:
if len(tags) == 1:
return []
else:
return {}
# raise ValueError('tag(s) not found: %s'%str(tags))
if len(tags) == 1:
return results[tags[0]]
else:
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_dir(self, params, delete=False):
""" creates a subdirectory for the experiment, and deletes existing files, if the delete flag is true. then writes the current experiment.cfg file in the folder. """ |
# create experiment path and subdir
fullpath = os.path.join(params['path'], params['name'])
self.mkdir(fullpath)
# delete old histories if --del flag is active
if delete and os.path.exists(fullpath):
os.system('rm %s/*' % fullpath)
# write a config file for this single exp. in the folder
self.write_config_file(params, fullpath) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
""" starts the experiments as given in the config file. """ |
self.parse_opt()
self.parse_cfg()
# if -b, -B or -p option is set, only show information, don't
# start the experiments
if self.options.browse or self.options.browse_big or self.options.progress:
self.browse()
raise SystemExit
# read main configuration file
paramlist = []
for exp in self.cfgparser.sections():
if not self.options.experiments or exp in self.options.experiments:
params = self.items_to_params(self.cfgparser.items(exp))
params['name'] = exp
paramlist.append(params)
self.do_experiment(paramlist) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_rep(self, params, rep):
""" run a single repetition including directory creation, log files, etc. """ |
try:
name = params['name']
fullpath = os.path.join(params['path'], params['name'])
logname = os.path.join(fullpath, '%i.log'%rep)
# check if repetition exists and has been completed
restore = 0
if os.path.exists(logname):
logfile = open(logname, 'r')
lines = logfile.readlines()
logfile.close()
# if completed, continue loop
if 'iterations' in params and len(lines) == params['iterations']:
return False
# if not completed, check if restore_state is supported
if not self.restore_supported:
# not supported, delete repetition and start over
# print 'restore not supported, deleting %s' % logname
os.remove(logname)
restore = 0
else:
restore = len(lines)
self.reset(params, rep)
if restore:
logfile = open(logname, 'a')
self.restore_state(params, rep, restore)
else:
logfile = open(logname, 'w')
# loop through iterations and call iterate
for it in xrange(restore, params['iterations']):
dic = self.iterate(params, rep, it) or {}
dic['iteration'] = it
if self.restore_supported:
self.save_state(params, rep, it)
if dic is not None:
json.dump(dic, logfile)
logfile.write('\n')
logfile.flush()
logfile.close()
self.finalize(params, rep)
except:
import traceback
traceback.print_exc()
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getClusterPrototypes(self, numClusters, numPrototypes=1):
""" Create numClusters flat clusters and find approximately numPrototypes prototypes per flat cluster. Returns an array with each row containing the indices of the prototypes for a single flat cluster. @param numClusters (int) Number of flat clusters to return (approximate). @param numPrototypes (int) Number of prototypes to return per cluster. @returns (tuple of numpy.ndarray) The first element is an array with rows containing the indices of the prototypes for a single flat cluster. If a cluster has less than numPrototypes members, missing indices are filled in with -1. The second element is an array of number of elements in each cluster. """ |
linkage = self.getLinkageMatrix()
linkage[:, 2] -= linkage[:, 2].min()
clusters = scipy.cluster.hierarchy.fcluster(
linkage, numClusters, criterion="maxclust")
prototypes = []
clusterSizes = []
for cluster_id in numpy.unique(clusters):
ids = numpy.arange(len(clusters))[clusters == cluster_id]
clusterSizes.append(len(ids))
if len(ids) > numPrototypes:
cluster_prototypes = HierarchicalClustering._getPrototypes(
ids, self._overlaps, numPrototypes)
else:
cluster_prototypes = numpy.ones(numPrototypes) * -1
cluster_prototypes[:len(ids)] = ids
prototypes.append(cluster_prototypes)
return numpy.vstack(prototypes).astype(int), numpy.array(clusterSizes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _getPrototypes(indices, overlaps, topNumber=1):
""" Given a compressed overlap array and a set of indices specifying a subset of those in that array, return the set of topNumber indices of vectors that have maximum average overlap with other vectors in `indices`. @param indices (arraylike) Array of indices for which to get prototypes. @param overlaps (numpy.ndarray) Condensed array of overlaps of the form returned by _computeOverlaps(). @param topNumber (int) The number of prototypes to return. Optional, defaults to 1. @returns (numpy.ndarray) Array of indices of prototypes """ |
# find the number of data points based on the length of the overlap array
# solves for n: len(overlaps) = n(n-1)/2
n = int(round(numpy.roots([1, -1, -2 * len(overlaps)]).max()))
k = len(indices)
indices = numpy.array(indices, dtype=int)
rowIdxs = numpy.ndarray((k, k-1), dtype=int)
colIdxs = numpy.ndarray((k, k-1), dtype=int)
for i in xrange(k):
rowIdxs[i, :] = indices[i]
colIdxs[i, :i] = indices[:i]
colIdxs[i, i:] = indices[i+1:]
idx = HierarchicalClustering._condensedIndex(rowIdxs, colIdxs, n)
subsampledOverlaps = overlaps[idx]
meanSubsampledOverlaps = subsampledOverlaps.mean(1)
biggestOverlapSubsetIdxs = numpy.argsort(
-meanSubsampledOverlaps)[:topNumber]
return indices[biggestOverlapSubsetIdxs] |
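The `numpy.roots` call above recovers the number of data points `n` from the length of a condensed (upper-triangular) overlap array, since `len(overlaps) = n*(n-1)/2`. A standalone sketch of that inversion (the helper name here is ours, not part of the source):

```python
import numpy as np

def pointsFromCondensedLength(length):
    # Solve n*(n-1)/2 == length for n, i.e. the roots of
    # n^2 - n - 2*length = 0; the positive root is the point count.
    return int(round(np.roots([1, -1, -2 * length]).max()))
```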
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def analyzeParameters(expName, suite):
""" Analyze the impact of each list parameter in this experiment """ |
print("\n================",expName,"=====================")
try:
expParams = suite.get_params(expName)
pprint.pprint(expParams)
for p in ["boost_strength", "k", "learning_rate", "weight_sparsity",
"k_inference_factor", "boost_strength_factor",
"c1_out_channels", "c1_k", "learning_rate_factor",
"batches_in_epoch",
]:
if p in expParams and type(expParams[p]) == list:
print("\n",p)
for v1 in expParams[p]:
# Retrieve the last totalCorrect from each experiment
# Print them sorted from best to worst
values, params = suite.get_values_fix_params(
expName, 0, "testerror", "last", **{p:v1})
v = np.array(values)
try:
print("Average/min/max for", p, v1, "=", v.mean(), v.min(), v.max())
# sortedIndices = v.argsort()
# for i in sortedIndices[::-1]:
# print(v[i],params[i]["name"])
except:
print("Can't compute stats for",p)
except:
print("Couldn't load experiment",expName) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def summarizeResults(expName, suite):
""" Summarize the totalCorrect value from the last iteration for each experiment in the directory tree. """ |
print("\n================",expName,"=====================")
try:
# Retrieve the last totalCorrect from each experiment
# Print them sorted from best to worst
values, params = suite.get_values_fix_params(
expName, 0, "totalCorrect", "last")
v = np.array(values)
sortedIndices = v.argsort()
for i in sortedIndices[::-1]:
print(v[i], params[i]["name"])
print()
except:
print("Couldn't analyze experiment",expName)
try:
# Retrieve the last totalCorrect from each experiment
# Print them sorted from best to worst
values, params = suite.get_values_fix_params(
expName, 0, "testerror", "last")
v = np.array(values)
sortedIndices = v.argsort()
for i in sortedIndices[::-1]:
print(v[i], params[i]["name"])
print()
except:
print("Couldn't analyze experiment",expName) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def learningCurve(expPath, suite):
""" Print the test and overall noise errors from each iteration of this experiment """ |
print("\nLEARNING CURVE ================",expPath,"=====================")
try:
headers=["testerror","totalCorrect","elapsedTime","entropy"]
result = suite.get_value(expPath, 0, headers, "all")
info = []
for i,v in enumerate(zip(result["testerror"],result["totalCorrect"],
result["elapsedTime"],result["entropy"])):
info.append([i, v[0], v[1], int(v[2]), v[3]])
headers.insert(0,"iteration")
print(tabulate(info, headers=headers, tablefmt="grid"))
except:
print("Couldn't load experiment",expPath) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, inputs, outputs):
""" Get the next record from the queue and encode it. @param inputs This parameter is ignored. The data comes from the queue @param outputs See definition in the spec above. """ |
if len(self.queue) > 0:
data = self.queue.pop()
else:
raise Exception("CoordinateSensor: No data to encode: queue is empty")
outputs["resetOut"][0] = data["reset"]
outputs["sequenceIdOut"][0] = data["sequenceId"]
sdr = self.encoder.encode((numpy.array(data["coordinate"]), self.radius))
outputs["dataOut"][:] = sdr
if self.verbosity > 1:
print "CoordinateSensor outputs:"
print "Coordinate = ", data["coordinate"]
print "sequenceIdOut: ", outputs["sequenceIdOut"]
print "resetOut: ", outputs["resetOut"]
print "dataOut: ", outputs["dataOut"].nonzero()[0] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def printDiagnostics(exp, sequences, objects, args, verbosity=0):
"""Useful diagnostics for debugging.""" |
print "Experiment start time:", time.ctime()
print "\nExperiment arguments:"
pprint.pprint(args)
r = sequences.objectConfusion()
print "Average common pairs in sequences=", r[0],
print ", features=",r[2]
r = objects.objectConfusion()
print "Average common pairs in objects=", r[0],
print ", locations=",r[1],
print ", features=",r[2]
# For detailed debugging
if verbosity > 0:
print "\nObjects are:"
for o in objects:
pairs = objects[o]
pairs.sort()
print str(o) + ": " + str(pairs)
print "\nSequences:"
for i in sequences:
print i,sequences[i]
print "\nNetwork parameters:"
pprint.pprint(exp.config) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createArgs(**kwargs):
""" Each kwarg is a list. Return a list of dicts representing all possible combinations of the kwargs. """ |
if len(kwargs) == 0: return [{}]
kargs = deepcopy(kwargs)
k1 = kargs.keys()[0]
values = kargs.pop(k1)
args = []
# Get all other combinations
otherArgs = createArgs(**kargs)
# Create combinations for values associated with k1
for v in values:
newArgs = deepcopy(otherArgs)
arg = {k1: v}
for newArg in newArgs:
newArg.update(arg)
args.append(newArg)
return args |
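The recursion above is equivalent to a Cartesian product over the keyword lists; a sketch of the same contract using `itertools.product` (an alternative formulation, not the original implementation):

```python
from itertools import product

def createArgs(**kwargs):
    # Each kwarg is a list; return one dict per combination of values.
    if not kwargs:
        return [{}]
    keys = sorted(kwargs)
    return [dict(zip(keys, combo))
            for combo in product(*(kwargs[k] for k in keys))]
```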
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def randomizeSequence(sequence, symbolsPerSequence, numColumns, sparsity, p = 0.25):
""" Takes a sequence as input and randomizes a percentage p of it by choosing SDRs at random while preserving the remaining invariant. @param sequence (array) sequence to be randomized @param symbolsPerSequence (int) number of symbols per sequence @param numColumns (int) number of columns in the TM @param sparsity (float) percentage of sparsity @p (float) percentage of symbols to be replaced @return randomizedSequence (array) sequence that contains p percentage of new SDRs """ |
randomizedSequence = []
sparseCols = int(numColumns * sparsity)
numSymbolsToChange = int(symbolsPerSequence * p)
symIndices = np.random.permutation(np.arange(symbolsPerSequence))
for symbol in range(symbolsPerSequence):
randomizedSequence.append(sequence[symbol])
i = 0
while numSymbolsToChange > 0:
randomizedSequence[symIndices[i]] = generateRandomSymbol(numColumns, sparseCols)
i += 1
numSymbolsToChange -= 1
return randomizedSequence |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generateHOSequence(sequence, symbolsPerSequence, numColumns, sparsity):
""" Generates a high-order sequence by taking an initial sequence and the changing its first and last SDRs by random SDRs @param sequence (array) sequence to be randomized @param symbolsPerSequence (int) number of symbols per sequence @param numColumns (int) number of columns in the TM @param sparsity (float) percentage of sparsity @return randomizedSequence (array) sequence that contains p percentage of new SDRs """ |
sequenceHO = []
sparseCols = int(numColumns * sparsity)
for symbol in range(symbolsPerSequence):
if symbol == 0 or symbol == (symbolsPerSequence - 1):
sequenceHO.append(generateRandomSymbol(numColumns, sparseCols))
else:
sequenceHO.append(sequence[symbol])
return sequenceHO |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def percentOverlap(x1, x2, numColumns):
""" Calculates the percentage of overlap between two SDRs @param x1 (array) SDR @param x2 (array) SDR @return percentageOverlap (float) percentage overlap between x1 and x2 """ |
nonZeroX1 = np.count_nonzero(x1)
nonZeroX2 = np.count_nonzero(x2)
sparseCols = min(nonZeroX1, nonZeroX2)
# transform input vector specifying columns into binary vector
binX1 = np.zeros(numColumns, dtype="uint32")
binX2 = np.zeros(numColumns, dtype="uint32")
for i in range(sparseCols):
binX1[x1[i]] = 1
binX2[x2[i]] = 1
return float(np.dot(binX1, binX2))/float(sparseCols) |
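`percentOverlap` expects its arguments as lists of active-column indices. A sketch of the same computation that sizes the comparison with `len` rather than `count_nonzero` (over an index list, `count_nonzero` ignores an active column 0, which can undercount):

```python
import numpy as np

def percentOverlap(x1, x2, numColumns):
    # Expand index lists into binary column vectors, then take the
    # dot product over the smaller active set.
    binX1 = np.zeros(numColumns, dtype="uint32")
    binX2 = np.zeros(numColumns, dtype="uint32")
    binX1[list(x1)] = 1
    binX2[list(x2)] = 1
    sparseCols = min(len(x1), len(x2))
    return float(np.dot(binX1, binX2)) / float(sparseCols)
```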
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generateRandomSymbol(numColumns, sparseCols):
""" Generates a random SDR with sparseCols number of active columns @param numColumns (int) number of columns in the temporal memory @param sparseCols (int) number of sparse columns for desired SDR @return symbol (list) SDR """ |
symbol = list()
remainingCols = sparseCols
while remainingCols > 0:
col = random.randrange(numColumns)
if col not in symbol:
symbol.append(col)
remainingCols -= 1
return symbol |
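The rejection loop above draws `sparseCols` distinct column indices; `random.sample` yields the same distribution in one call. A sketch of this drop-in alternative (not the original implementation):

```python
import random

def generateRandomSymbol(numColumns, sparseCols):
    # Sample sparseCols distinct column indices without replacement.
    return random.sample(range(numColumns), sparseCols)
```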
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generateRandomSequence(numSymbols, numColumns, sparsity):
""" Generate a random sequence comprising numSymbols SDRs @param numSymbols (int) number of SDRs in random sequence @param numColumns (int) number of columns in the temporal memory @param sparsity (float) percentage of sparsity (real number between 0 and 1) @return sequence (array) random sequence generated """ |
sequence = []
sparseCols = int(numColumns * sparsity)
for _ in range(numSymbols):
sequence.append(generateRandomSymbol(numColumns, sparseCols))
return sequence |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def accuracy(current, predicted):
""" Computes the accuracy of the TM at time-step t based on the prediction at time-step t-1 and the current active columns at time-step t. @param current (array) binary vector containing current active columns @param predicted (array) binary vector containing predicted active columns @return acc (float) prediction accuracy of the TM at time-step t """ |
acc = 0
if np.count_nonzero(predicted) > 0:
acc = float(np.dot(current, predicted))/float(np.count_nonzero(predicted))
return acc |
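A small worked example of the accuracy formula, using a hypothetical 8-column SDR pair in which 3 of the 4 predicted columns are actually active:

```python
import numpy as np

# Hypothetical binary column vectors (not from the source data):
current   = np.array([1, 1, 1, 1, 0, 0, 0, 0])
predicted = np.array([1, 1, 1, 0, 1, 0, 0, 0])

# dot(current, predicted) = 3 correct predictions out of 4 predicted columns
acc = float(np.dot(current, predicted)) / float(np.count_nonzero(predicted))
```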
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sampleCellsWithinColumns(numCellPairs, cellsPerColumn, numColumns, seed=42):
""" Generate indices of cell pairs, each pair of cells are from the same column @return cellPairs (list) list of cell pairs """ |
np.random.seed(seed)
cellPairs = []
for i in range(numCellPairs):
randCol = np.random.randint(numColumns)
randCells = np.random.choice(np.arange(cellsPerColumn), (2, ), replace=False)
cellsPair = randCol * cellsPerColumn + randCells
cellPairs.append(cellsPair)
return cellPairs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sampleCellsAcrossColumns(numCellPairs, cellsPerColumn, numColumns, seed=42):
""" Generate indices of cell pairs, each pair of cells are from different column @return cellPairs (list) list of cell pairs """ |
np.random.seed(seed)
cellPairs = []
for i in range(numCellPairs):
randCols = np.random.choice(np.arange(numColumns), (2, ), replace=False)
randCells = np.random.choice(np.arange(cellsPerColumn), (2, ), replace=False)
cellsPair = np.zeros((2, ))
for j in range(2):
cellsPair[j] = randCols[j] * cellsPerColumn + randCells[j]
cellPairs.append(cellsPair.astype('int32'))
return cellPairs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def subSample(spikeTrains, numCells, totalCells, currentTS, timeWindow):
""" Obtains a random sample of cells from the whole spike train matrix consisting of numCells cells from the start of simulation time up to currentTS @param spikeTrains (array) array containing the spike trains of cells in the TM @param numCells (int) number of cells to be sampled from the matrix of spike trains @param totalCells (int) total number of cells in the TM @param currentTS (int) time-step upper bound of sample (sample will go from time-step 0 up to currentTS) @param timeWindow (int) number of time-steps to sample from the spike trains @return subSpikeTrains (array) spike train matrix sampled from the total spike train matrix """ |
indices = np.random.permutation(np.arange(totalCells))
if currentTS > 0 and currentTS < timeWindow:
subSpikeTrains = np.zeros((numCells, currentTS), dtype = "uint32")
for i in range(numCells):
subSpikeTrains[i,:] = spikeTrains[indices[i],:]
elif currentTS > 0 and currentTS >= timeWindow:
subSpikeTrains = np.zeros((numCells, timeWindow), dtype = "uint32")
for i in range(numCells):
subSpikeTrains[i,:] = spikeTrains[indices[i],(currentTS-timeWindow):currentTS]
elif currentTS == 0:
# This option takes the whole spike train history
totalTS = np.shape(spikeTrains)[1]
subSpikeTrains = np.zeros((numCells, totalTS), dtype = "uint32")
for i in range(numCells):
subSpikeTrains[i,:] = spikeTrains[indices[i],:]
elif currentTS < 0:
# This option takes a timestep at random and a time window
# specified by the user after the chosen time step
totalTS = np.shape(spikeTrains)[1]
subSpikeTrains = np.zeros((numCells, timeWindow), dtype = "uint32")
rnd = random.randrange(totalTS - timeWindow)
print "Starting from timestep: " + str(rnd)
for i in range(numCells):
subSpikeTrains[i,:] = spikeTrains[indices[i],rnd:(rnd+timeWindow)]
return subSpikeTrains |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def subSampleWholeColumn(spikeTrains, colIndices, cellsPerColumn, currentTS, timeWindow):
""" Obtains subsample from matrix of spike trains by considering the cells in columns specified by colIndices. Thus, it returns a matrix of spike trains of cells within the same column. @param spikeTrains (array) array containing the spike trains of cells in the TM @param colIndices (array) array containing the indices of columns whose spike trains should be sampled @param cellsPerColumn (int) number of cells per column in the TM @param currentTS (int) time-step upper bound of sample (sample will go from time-step 0 up to currentTS) @param timeWindow (int) number of time-steps to sample from the spike trains @return subSpikeTrains (array) spike train matrix sampled from the total spike train matrix """ |
numColumns = np.shape(colIndices)[0]
numCells = numColumns * cellsPerColumn
if currentTS > 0 and currentTS < timeWindow:
subSpikeTrains = np.zeros((numCells, currentTS), dtype = "uint32")
for i in range(numColumns):
currentCol = colIndices[i]
initialCell = cellsPerColumn * currentCol
for j in range(cellsPerColumn):
subSpikeTrains[(cellsPerColumn*i) + j,:] = spikeTrains[initialCell + j,:]
elif currentTS > 0 and currentTS >= timeWindow:
subSpikeTrains = np.zeros((numCells, timeWindow), dtype = "uint32")
for i in range(numColumns):
currentCol = colIndices[i]
initialCell = cellsPerColumn * currentCol
for j in range(cellsPerColumn):
subSpikeTrains[(cellsPerColumn*i) + j,:] = spikeTrains[initialCell + j,(currentTS-timeWindow):currentTS]
elif currentTS == 0:
# This option takes the whole spike train history
totalTS = np.shape(spikeTrains)[1]
subSpikeTrains = np.zeros((numCells, totalTS), dtype = "uint32")
for i in range(numColumns):
currentCol = colIndices[i]
initialCell = cellsPerColumn * currentCol
for j in range(cellsPerColumn):
subSpikeTrains[(cellsPerColumn*i) + j,:] = spikeTrains[initialCell + j,:]
elif currentTS < 0:
totalTS = np.shape(spikeTrains)[1]
subSpikeTrains = np.zeros((numCells, timeWindow), dtype = "uint32")
rnd = random.randrange(totalTS - timeWindow)
print "Starting from timestep: " + str(rnd)
for i in range(numColumns):
currentCol = colIndices[i]
initialCell = cellsPerColumn * currentCol
for j in range(cellsPerColumn):
subSpikeTrains[(cellsPerColumn*i) + j,:] = spikeTrains[initialCell + j,rnd:(rnd+timeWindow)]
return subSpikeTrains |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def computeEntropy(spikeTrains):
""" Estimates entropy in spike trains. @param spikeTrains (array) matrix of spike trains @return entropy (float) entropy """ |
MIN_ACTIVATION_PROB = 0.000001
activationProb = np.mean(spikeTrains, 1)
activationProb[activationProb < MIN_ACTIVATION_PROB] = MIN_ACTIVATION_PROB
activationProb = activationProb / np.sum(activationProb)
entropy = -np.dot(activationProb, np.log2(activationProb))
return entropy |
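The same estimate can be sketched without numpy: per-cell mean firing probability, clipped at a small floor, normalized to a distribution, then Shannon entropy in bits.

```python
import math

# Pure-Python sketch of the entropy estimate above.
def entropy(spike_trains, min_p=1e-6):
    probs = [max(sum(row) / float(len(row)), min_p) for row in spike_trains]
    total = sum(probs)
    probs = [p / total for p in probs]
    return -sum(p * math.log(p, 2) for p in probs)
```

Two cells firing equally often give exactly 1 bit; a single dominant cell drives the estimate toward 0.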
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def computeISI(spikeTrains):
""" Estimates the inter-spike interval from a spike train matrix. @param spikeTrains (array) matrix of spike trains @return isi (array) matrix with the inter-spike interval obtained from the spike train. Each entry in this matrix represents the number of time-steps in-between 2 spikes as the algorithm scans the spike train matrix. """ |
zeroCount = 0
isi = []
cells = 0
for i in range(np.shape(spikeTrains)[0]):
if cells > 0 and cells % 250 == 0:
print str(cells) + " cells processed"
for j in range(np.shape(spikeTrains)[1]):
if spikeTrains[i][j] == 0:
zeroCount += 1
elif zeroCount > 0:
isi.append(zeroCount)
zeroCount = 0
zeroCount = 0
cells += 1
print "**All cells processed**"
return isi |
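The scan above reduces to counting runs of zeros between spikes within each row. This sketch reproduces that behavior, including the two edge cases of the original: the gap counter resets between cells, and a trailing run of zeros is discarded.

```python
# Pure-Python sketch of the ISI scan above.
def isi_intervals(spike_trains):
    intervals = []
    for row in spike_trains:
        gap = 0  # resets at the start of each cell, as in the original
        for bit in row:
            if bit == 0:
                gap += 1
            elif gap > 0:
                intervals.append(gap)
                gap = 0
    return intervals
```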
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def poissonSpikeGenerator(firingRate, nBins, nTrials):
""" Generates a Poisson spike train. @param firingRate (int) firing rate of sample of Poisson spike trains to be generated @param nBins (int) number of bins or timesteps for the Poisson spike train @param nTrials (int) number of trials (or cells) in the spike train @return poissonSpikeTrain (array) """ |
dt = 0.001 # each bin represents 1 ms, i.e. 1 sec = 1000 bins
poissonSpikeTrain = np.zeros((nTrials, nBins), dtype = "uint32")
for i in range(nTrials):
for j in range(int(nBins)):
if random.random() < firingRate*dt:
poissonSpikeTrain[i,j] = 1
return poissonSpikeTrain |
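The generator above is a Bernoulli approximation to a Poisson process: each 1 ms bin fires independently with probability `firingRate * dt`. A seeded sketch makes the expected count checkable:

```python
import random

# Seeded sketch of the Bernoulli approximation used above.
def poisson_train(rate_hz, n_bins, n_trials, seed=42):
    rng = random.Random(seed)
    dt = 0.001  # 1 ms bins, as in the original
    return [[1 if rng.random() < rate_hz * dt else 0 for _ in range(n_bins)]
            for _ in range(n_trials)]

train = poisson_train(50, 1000, 5)
print(sum(map(sum, train)))  # total spikes; expected near rate*dt*bins*trials = 250
```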
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def raster(event_times_list, color='k'):
""" Creates a raster from spike trains. @param event_times_list (array) matrix containing times in which a cell fired @param color (string) color of spike in raster @return ax (int) position of plot axes """ |
ax = plt.gca()
for ith, trial in enumerate(event_times_list):
plt.vlines(trial, ith + .5, ith + 1.5, color=color)
plt.ylim(.5, len(event_times_list) + .5)
return ax |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rasterPlot(spikeTrain, model):
""" Plots raster and saves figure in working directory @param spikeTrain (array) matrix of spike trains @param model (string) string specifying the name of the origin of the spike trains for the purpose of concatenating it to the filename (either TM or Poisson) """ |
nTrials = np.shape(spikeTrain)[0]
spikes = []
for i in range(nTrials):
spikes.append(spikeTrain[i].nonzero()[0].tolist())
plt.figure()
ax = raster(spikes)
plt.xlabel('Time')
plt.ylabel('Neuron')
# plt.show()
plt.savefig("raster" + str(model))
plt.close() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def saveTM(tm):
""" Saves the temporal memory and the sequences generated for its training. @param tm (TemporalMemory) temporal memory used during the experiment """ |
# Save the TM to a file for future use
proto1 = TemporalMemoryProto_capnp.TemporalMemoryProto.new_message()
tm.write(proto1)
# Write the proto to a file and read it back into a new proto
with open('tm.nta', 'wb') as f:
proto1.write(f) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mapLabelRefs(dataDict):
""" Replace the label strings in dataDict with corresponding ints. @return (tuple) (ordered list of category names, dataDict with names replaced by array of category indices) """ |
labelRefs = [label for label in set(
itertools.chain.from_iterable([x[1] for x in dataDict.values()]))]
for recordNumber, data in dataDict.iteritems():
dataDict[recordNumber] = (data[0], numpy.array(
[labelRefs.index(label) for label in data[1]]), data[2])
return labelRefs, dataDict |
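A Python 3 sketch of the same label-mapping step (the original is Python 2 and relies on `iteritems`). Unlike the original, which keeps arbitrary set order for `labelRefs`, this version sorts the unique labels for determinism; that ordering choice is an assumption, not the original behavior.

```python
# Python 3 sketch of mapLabelRefs: unique label strings -> integer indices.
def map_label_refs(data_dict):
    label_refs = sorted({label for _, labels, _ in data_dict.values()
                         for label in labels})
    mapped = {rec: (text, [label_refs.index(l) for l in labels], doc_id)
              for rec, (text, labels, doc_id) in data_dict.items()}
    return label_refs, mapped
```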
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bucketCSVs(csvFile, bucketIdx=2):
"""Write the individual buckets in csvFile to their own CSV files.""" |
try:
with open(csvFile, "rU") as f:
reader = csv.reader(f)
headers = next(reader, None)
dataDict = OrderedDict()
for lineNumber, line in enumerate(reader):
if line[bucketIdx] in dataDict:
dataDict[line[bucketIdx]].append(line)
else:
# new bucket
dataDict[line[bucketIdx]] = [line]
except IOError as e:
print e
return [] # without this, the code below would hit unbound headers/dataDict
filePaths = []
for i, (_, lines) in enumerate(dataDict.iteritems()):
bucketFile = csvFile.replace(".", "_"+str(i)+".")
writeCSV(lines, headers, bucketFile)
filePaths.append(bucketFile)
return filePaths |
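The grouping step inside `bucketCSVs` is a standard group-by on one column. A minimal sketch, keeping the first-seen bucket order that the original's `OrderedDict` preserves:

```python
from collections import OrderedDict

# Sketch of the row-bucketing step above: group rows by the value at
# bucket_idx, preserving first-seen order of buckets.
def bucket_rows(rows, bucket_idx=2):
    buckets = OrderedDict()
    for row in rows:
        buckets.setdefault(row[bucket_idx], []).append(row)
    return buckets
```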
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def readDir(dirPath, numLabels, modify=False):
""" Reads in data from a directory of CSV files; assumes the directory only contains CSV files. @param dirPath (str) Path to the directory. @param numLabels (int) Number of columns of category labels. @param modify (bool) Map the unix friendly category names to the actual names. 0 -> /, _ -> " " @return samplesDict (defaultdict) Keys are CSV names, values are OrderedDicts, where the keys/values are as specified in readCSV(). """ |
samplesDict = defaultdict(list)
for _, _, files in os.walk(dirPath):
for f in files:
basename, extension = os.path.splitext(os.path.basename(f))
if "." in basename and extension == ".csv":
category = basename.split(".")[-1]
if modify:
category = category.replace("0", "/")
category = category.replace("_", " ")
samplesDict[category] = readCSV(
os.path.join(dirPath, f), numLabels=numLabels)
return samplesDict |
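The filename-to-category logic above can be isolated: the category is the last dot-separated token of the basename, and with `modify` set, "0" maps back to "/" and underscores back to spaces.

```python
import os

# Sketch of the filename-to-category step in readDir above.
def category_from_filename(path, modify=False):
    basename, ext = os.path.splitext(os.path.basename(path))
    if "." not in basename or ext != ".csv":
        return None  # the original silently skips such files
    category = basename.split(".")[-1]
    if modify:
        category = category.replace("0", "/").replace("_", " ")
    return category

print(category_from_filename("data/samples.arts_0_crafts.csv", modify=True))  # arts / crafts
```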
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def writeCSV(data, headers, csvFile):
"""Write data with column headers to a CSV.""" |
with open(csvFile, "wb") as f:
writer = csv.writer(f, delimiter=",")
writer.writerow(headers)
writer.writerows(data) |
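A Python 3 sketch of the same helper; the original opens files with mode `"wb"`, which is the Python 2 csv idiom. Any writable text handle works here, so the demo uses an in-memory buffer:

```python
import csv
import io

# Python 3 sketch of writeCSV above: headers first, then the data rows.
def write_csv(rows, headers, handle):
    writer = csv.writer(handle, delimiter=",")
    writer.writerow(headers)
    writer.writerows(rows)

buf = io.StringIO()
write_csv([["1", "a"], ["2", "b"]], ["id", "val"], buf)
print(buf.getvalue())
```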
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def writeFromDict(dataDict, headers, csvFile):
""" Write dictionary to a CSV, where keys are row numbers and values are a list. """ |
with open(csvFile, "wb") as f:
writer = csv.writer(f, delimiter=",")
writer.writerow(headers)
for row in sorted(dataDict.keys()):
writer.writerow(dataDict[row]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def readDataAndReshuffle(args, categoriesInOrderOfInterest=None):
""" Read data file specified in args, optionally reshuffle categories, print out some statistics, and return various data structures. This routine is pretty specific and only used in some simple test scripts. categoriesInOrderOfInterest (list) Optional list of integers representing the priority order of various categories. The categories in the original data file will be reshuffled to the order in this array, up to args.numLabels, if specified. Returns the tuple: (dataset, labelRefs, documentCategoryMap, documentTextMap) Return format: dataset = [ ["fox eats carrots", [0], docId], ["fox eats peppers", [0], docId], ["carrots are healthy", [1], docId], ["peppers is healthy", [1], docId], ] documentCategoryMap = { : } documentTextMap = { docId: documentText, docId: documentText, : } """ |
# Read data
dataDict = readCSV(args.dataPath, 1)
labelRefs, dataDict = mapLabelRefs(dataDict)
if "numLabels" in args:
numLabels = args.numLabels
else:
numLabels = len(labelRefs)
if categoriesInOrderOfInterest is None:
categoriesInOrderOfInterest = range(0,numLabels)
else:
categoriesInOrderOfInterest = categoriesInOrderOfInterest[0:numLabels]
# Select data based on categories of interest. Shift category indices down
# so we go from 0 to numLabels-1
dataSet = []
documentTextMap = {}
counts = numpy.zeros(len(labelRefs))
for document in dataDict.itervalues():
try:
docId = int(document[2])
except ValueError:
raise RuntimeError("docId "+str(document[2])+" is not an integer")
oldCategoryIndex = document[1][0]
documentTextMap[docId] = document[0]
if oldCategoryIndex in categoriesInOrderOfInterest:
newIndex = categoriesInOrderOfInterest.index(oldCategoryIndex)
dataSet.append([document[0], [newIndex], docId])
counts[newIndex] += 1
# For each document, figure out which categories it belongs to
# Include the shifted category index
documentCategoryMap = {}
for doc in dataDict.iteritems():
docId = int(doc[1][2])
oldCategoryIndex = doc[1][1][0]
if oldCategoryIndex in categoriesInOrderOfInterest:
newIndex = categoriesInOrderOfInterest.index(oldCategoryIndex)
v = documentCategoryMap.get(docId, [])
v.append(newIndex)
documentCategoryMap[docId] = v
labelRefs = [labelRefs[i] for i in categoriesInOrderOfInterest]
print "Total number of unique documents",len(documentCategoryMap)
print "Category counts: ",counts
print "Categories in training/test data:", labelRefs
return dataSet, labelRefs, documentCategoryMap, documentTextMap |
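The category reshuffle above reduces to one rule: a document's new label is the position of its old label within `categoriesInOrderOfInterest`, and documents whose old label is absent from that list are dropped (represented as `None` in this sketch).

```python
# Sketch of the category-remapping rule used above.
def remap_category(old_index, order):
    return order.index(old_index) if old_index in order else None

print(remap_category(5, [5, 2, 7]))  # 0
```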
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createModel(modelName, **kwargs):
""" Return a classification model of the appropriate type. The model could be any supported subclass of ClassficationModel based on modelName. @param modelName (str) A supported temporal memory type @param kwargs (dict) Constructor argument for the class that will be instantiated. Keyword parameters specific to each model type should be passed in here. """ |
if modelName not in TemporalMemoryTypes.getTypes():
raise RuntimeError("Unknown model type: " + modelName)
return getattr(TemporalMemoryTypes, modelName)(**kwargs) |
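The factory pattern above (look a class up by name on a registry, then instantiate with the caller's kwargs) can be shown generically. `ModelA` and `Registry` are stand-ins, not classes from the original codebase.

```python
# Generic sketch of the createModel factory pattern above.
class ModelA(object):
    def __init__(self, size=10):
        self.size = size

class Registry(object):
    ModelA = ModelA  # name -> class, like TemporalMemoryTypes

def create_model(name, **kwargs):
    if not hasattr(Registry, name):
        raise RuntimeError("Unknown model type: " + name)
    return getattr(Registry, name)(**kwargs)

m = create_model("ModelA", size=32)
print(m.size)  # 32
```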
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getConstructorArguments(modelName):
""" Return constructor arguments and associated default values for the given model type. @param modelName (str) A supported temporal memory type @return argNames (list of str) a list of strings corresponding to constructor arguments for the given model type, excluding 'self'. @return defaults (list) a list of default values for each argument """ |
if modelName not in TemporalMemoryTypes.getTypes():
raise RuntimeError("Unknown model type: " + modelName)
argspec = inspect.getargspec(
getattr(TemporalMemoryTypes, modelName).__init__)
return (argspec.args[1:], argspec.defaults) |
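The same introspection works on any class; note that Python 3 removed the deprecated `inspect.getargspec` used above, so this sketch uses `getfullargspec`. `Demo` is a stand-in class.

```python
import inspect

# Sketch of the constructor introspection above, Python 3 style.
class Demo(object):
    def __init__(self, a, b=2, c=3):
        pass

spec = inspect.getfullargspec(Demo.__init__)
print(spec.args[1:], spec.defaults)  # ['a', 'b', 'c'] (2, 3)
```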
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getTypes(cls):
""" Get sequence of acceptable model types. Iterates through class attributes and separates the user-defined enumerations from the default attributes implicit to Python classes. i.e. this function returns the names of the attributes explicitly defined above. """ |
for attrName in dir(cls):
attrValue = getattr(cls, attrName)
if isinstance(attrValue, type):
yield attrName |
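A sketch of the same attribute-enumeration trick. Private and dunder names are skipped here, since a class's `__class__` is itself a type and would otherwise be yielded too:

```python
# Sketch of the getTypes enumeration above: yield class attributes whose
# values are classes, skipping implicit dunder attributes.
class Types(object):
    A = int
    B = str
    notAType = 3

def get_types(cls):
    for name in dir(cls):
        if not name.startswith("_") and isinstance(getattr(cls, name), type):
            yield name

print(sorted(get_types(Types)))  # ['A', 'B']
```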
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def initialize_dendrites(self):
""" Initialize all the dendrites of the neuron to a set of random connections """ |
# Wipe any preexisting connections by creating a new connection matrix
self.dendrites = SM32()
self.dendrites.reshape(self.dim, self.num_dendrites)
for row in range(self.num_dendrites):
synapses = numpy.random.choice(self.dim, self.dendrite_length, replace = False)
for synapse in synapses:
self.dendrites[synapse, row] = 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_activation(self, datapoint):
""" Only for a single datapoint """ |
activations = datapoint * self.dendrites
activations = self.nonlinearity(activations)
return activations.sum() |
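Without the sparse-matrix dependency, the activation is: dot the datapoint with each dendrite (a column of the connection matrix), apply the nonlinearity per dendrite, and sum. The threshold nonlinearity below is an assumption; the original takes the nonlinearity as a neuron attribute.

```python
# Pure-Python sketch of the dendritic activation above.
def calculate_activation(datapoint, dendrites, nonlinearity):
    sums = [sum(x * w for x, w in zip(datapoint, column))
            for column in dendrites]
    return sum(nonlinearity(s) for s in sums)

threshold = lambda s: 1 if s >= 2 else 0  # assumed nonlinearity
print(calculate_activation([1, 1, 0], [[1, 1, 0], [0, 0, 1]], threshold))  # 1
```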