<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.variance"> <img align=left src="files/images/pyspark-page32.svg" width=500 height=500 /> </a>
# variance
x = sc.parallelize([1,3,2])
y = x.variance()  # divides by N
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.stdev"> <img align=left src="files/images/pyspark-page33.svg" width=500 height=500 /> </a>
# stdev
x = sc.parallelize([1,3,2])
y = x.stdev()  # divides by N
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sampleStdev"> <img align=left src="files/images/pyspark-page34.svg" width=500 height=500 /> </a>
# sampleStdev
x = sc.parallelize([1,3,2])
y = x.sampleStdev()  # divides by N-1
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sampleVariance"> <img align=left src="files/images/pyspark-page35.svg" width=500 height=500 /> </a>
# sampleVariance
x = sc.parallelize([1,3,2])
y = x.sampleVariance()  # divides by N-1
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
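The divisor difference between `variance`/`stdev` (divide by N) and `sampleVariance`/`sampleStdev` (divide by N-1) can be checked without Spark. A small sketch using Python's standard `statistics` module on the same data as the cells above:

```python
import statistics

data = [1, 3, 2]

# Population statistics divide by N, like RDD.variance / RDD.stdev
pvar = statistics.pvariance(data)   # sum((x - mean)**2) / N
pstd = statistics.pstdev(data)

# Sample statistics divide by N - 1, like RDD.sampleVariance / RDD.sampleStdev
svar = statistics.variance(data)    # sum((x - mean)**2) / (N - 1)
sstd = statistics.stdev(data)

print(pvar, svar)  # 0.6666666666666666 1.0
```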
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.countByValue"> <img align=left src="files/images/pyspark-page36.svg" width=500 height=500 /> </a>
# countByValue
x = sc.parallelize([1,3,1,2,3])
y = x.countByValue()
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.top"> <img align=left src="files/images/pyspark-page37.svg" width=500 height=500 /> </a>
# top
x = sc.parallelize([1,3,1,2,3])
y = x.top(num=3)
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.takeOrdered"> <img align=left src="files/images/pyspark-page38.svg" width=500 height=500 /> </a>
# takeOrdered
x = sc.parallelize([1,3,1,2,3])
y = x.takeOrdered(num=3)
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.take"> <img align=left src="files/images/pyspark-page39.svg" width=500 height=500 /> </a>
# take
x = sc.parallelize([1,3,1,2,3])
y = x.take(num=3)
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.first"> <img align=left src="files/images/pyspark-page40.svg" width=500 height=500 /> </a>
# first
x = sc.parallelize([1,3,1,2,3])
y = x.first()
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.collectAsMap"> <img align=left src="files/images/pyspark-page41.svg" width=500 height=500 /> </a>
# collectAsMap
x = sc.parallelize([('C',3),('A',1),('B',2)])
y = x.collectAsMap()
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.keys"> <img align=left src="files/images/pyspark-page42.svg" width=500 height=500 /> </a>
# keys
x = sc.parallelize([('C',3),('A',1),('B',2)])
y = x.keys()
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.values"> <img align=left src="files/images/pyspark-page43.svg" width=500 height=500 /> </a>
# values
x = sc.parallelize([('C',3),('A',1),('B',2)])
y = x.values()
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.reduceByKey"> <img align=left src="files/images/pyspark-page44.svg" width=500 height=500 /> </a>
# reduceByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.reduceByKey(lambda agg, obj: agg + obj)
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.reduceByKeyLocally"> <img align=left src="files/images/pyspark-page45.svg" width=500 height=500 /> </a>
# reduceByKeyLocally
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.reduceByKeyLocally(lambda agg, obj: agg + obj)
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.countByKey"> <img align=left src="files/images/pyspark-page46.svg" width=500 height=500 /> </a>
# countByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
y = x.countByKey()
print(x.collect())
print(y)
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.join"> <img align=left src="files/images/pyspark-page47.svg" width=500 height=500 /> </a>
# join
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.join(y)
print(x.collect())
print(y.collect())
print(z.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.leftOuterJoin"> <img align=left src="files/images/pyspark-page48.svg" width=500 height=500 /> </a>
# leftOuterJoin
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.leftOuterJoin(y)
print(x.collect())
print(y.collect())
print(z.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.rightOuterJoin"> <img align=left src="files/images/pyspark-page49.svg" width=500 height=500 /> </a>
# rightOuterJoin
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',5)])
z = x.rightOuterJoin(y)
print(x.collect())
print(y.collect())
print(z.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.partitionBy"> <img align=left src="files/images/pyspark-page50.svg" width=500 height=500 /> </a>
# partitionBy
x = sc.parallelize([(0,1),(1,2),(2,3)], 2)
y = x.partitionBy(numPartitions=3, partitionFunc=lambda x: x)  # only the key is passed to partitionFunc
print(x.glom().collect())
print(y.glom().collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
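The routing rule behind partitionBy is easy to mimic outside Spark: each pair goes to partition `partitionFunc(key) % numPartitions`. A pure-Python sketch of that rule (`partition_by` is a hypothetical helper, not Spark code):

```python
def partition_by(pairs, num_partitions, partition_func=hash):
    """Route each (key, value) pair to partition
    partition_func(key) % num_partitions, like RDD.partitionBy."""
    partitions = [[] for _ in range(num_partitions)]
    for key, value in pairs:
        partitions[partition_func(key) % num_partitions].append((key, value))
    return partitions

# Same data as the cell above; the identity function on integer keys
# sends key k to partition k % 3.
print(partition_by([(0, 1), (1, 2), (2, 3)], 3, lambda k: k))
# [[(0, 1)], [(1, 2)], [(2, 3)]]
```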
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.combineByKey"> <img align=left src="files/images/pyspark-page51.svg" width=500 height=500 /> </a>
# combineByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
createCombiner = (lambda el: [(el,el**2)])
mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)])  # append to aggregated
mergeComb = (lambda agg1,agg2: agg1 + agg2)  # append agg1 with agg2
y = x.combineByKey(createCombiner,mergeVal,m...
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.aggregateByKey"> <img align=left src="files/images/pyspark-page52.svg" width=500 height=500 /> </a>
# aggregateByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
zeroValue = []  # empty list is 'zero value' for append operation
mergeVal = (lambda aggregated, el: aggregated + [(el,el**2)])
mergeComb = (lambda agg1,agg2: agg1 + agg2)
y = x.aggregateByKey(zeroValue,mergeVal,mergeComb)
print(x.collect())...
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.foldByKey"> <img align=left src="files/images/pyspark-page53.svg" width=500 height=500 /> </a>
# foldByKey
x = sc.parallelize([('B',1),('B',2),('A',3),('A',4),('A',5)])
zeroValue = 1  # one is 'zero value' for multiplication
y = x.foldByKey(zeroValue, lambda agg,x: agg*x)  # computes cumulative product within each key
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.groupByKey"> <img align=left src="files/images/pyspark-page54.svg" width=500 height=500 /> </a>
# groupByKey
x = sc.parallelize([('B',5),('B',4),('A',3),('A',2),('A',1)])
y = x.groupByKey()
print(x.collect())
print([(j[0],[i for i in j[1]]) for j in y.collect()])
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
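The semantics of groupByKey and reduceByKey can be sketched locally with plain dictionaries (the helpers `group_by_key` and `reduce_by_key` below are hypothetical, single-machine stand-ins, not Spark API):

```python
from collections import defaultdict

def group_by_key(pairs):
    """Local analogue of RDD.groupByKey: collect values per key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return dict(groups)

def reduce_by_key(pairs, f):
    """Local analogue of RDD.reduceByKey: fold each key's values with f."""
    out = {}
    for key, values in group_by_key(pairs).items():
        acc = values[0]
        for v in values[1:]:
            acc = f(acc, v)
        out[key] = acc
    return out

pairs = [('B', 5), ('B', 4), ('A', 3), ('A', 2), ('A', 1)]
print(group_by_key(pairs))                               # {'B': [5, 4], 'A': [3, 2, 1]}
print(reduce_by_key(pairs, lambda agg, obj: agg + obj))  # {'B': 9, 'A': 6}
```

In Spark itself, reduceByKey is usually preferred over groupByKey followed by a reduce, since it combines values per partition before shuffling.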
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.flatMapValues"> <img align=left src="files/images/pyspark-page55.svg" width=500 height=500 /> </a>
# flatMapValues
x = sc.parallelize([('A',(1,2,3)),('B',(4,5))])
y = x.flatMapValues(lambda x: [i**2 for i in x])  # function is applied to entire value, then result is flattened
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.mapValues"> <img align=left src="files/images/pyspark-page56.svg" width=500 height=500 /> </a>
# mapValues
x = sc.parallelize([('A',(1,2,3)),('B',(4,5))])
y = x.mapValues(lambda x: [i**2 for i in x])  # function is applied to entire value
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.groupWith"> <img align=left src="files/images/pyspark-page57.svg" width=500 height=500 /> </a>
# groupWith
x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))])
y = sc.parallelize([('B',(7,7)),('A',6),('D',(5,5))])
z = sc.parallelize([('D',9),('B',(8,8))])
a = x.groupWith(y,z)
print(x.collect())
print(y.collect())
print(z.collect())
print("Result:")
for key,val in list(a.collect()):
    print(key, [list...
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.cogroup"> <img align=left src="files/images/pyspark-page58.svg" width=500 height=500 /> </a>
# cogroup
x = sc.parallelize([('C',4),('B',(3,3)),('A',2),('A',(1,1))])
y = sc.parallelize([('A',8),('B',7),('A',6),('D',(5,5))])
z = x.cogroup(y)
print(x.collect())
print(y.collect())
for key,val in list(z.collect()):
    print(key, [list(i) for i in val])
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.sampleByKey"> <img align=left src="files/images/pyspark-page59.svg" width=500 height=500 /> </a>
# sampleByKey
x = sc.parallelize([('A',1),('B',2),('C',3),('B',4),('A',5)])
y = x.sampleByKey(withReplacement=False, fractions={'A':0.5, 'B':1, 'C':0.2})
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.subtractByKey"> <img align=left src="files/images/pyspark-page60.svg" width=500 height=500 /> </a>
# subtractByKey
x = sc.parallelize([('C',1),('B',2),('A',3),('A',4)])
y = sc.parallelize([('A',5),('D',6),('A',7),('D',8)])
z = x.subtractByKey(y)
print(x.collect())
print(y.collect())
print(z.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.subtract"> <img align=left src="files/images/pyspark-page61.svg" width=500 height=500 /> </a>
# subtract
x = sc.parallelize([('C',4),('B',3),('A',2),('A',1)])
y = sc.parallelize([('C',8),('A',2),('D',1)])
z = x.subtract(y)
print(x.collect())
print(y.collect())
print(z.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.keyBy"> <img align=left src="files/images/pyspark-page62.svg" width=500 height=500 /> </a>
# keyBy
x = sc.parallelize([1,2,3])
y = x.keyBy(lambda x: x**2)
print(x.collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.repartition"> <img align=left src="files/images/pyspark-page63.svg" width=500 height=500 /> </a>
# repartition
x = sc.parallelize([1,2,3,4,5], 2)
y = x.repartition(numPartitions=3)
print(x.glom().collect())
print(y.glom().collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.coalesce"> <img align=left src="files/images/pyspark-page64.svg" width=500 height=500 /> </a>
# coalesce
x = sc.parallelize([1,2,3,4,5], 2)
y = x.coalesce(numPartitions=1)
print(x.glom().collect())
print(y.glom().collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.zip"> <img align=left src="files/images/pyspark-page65.svg" width=500 height=500 /> </a>
# zip
x = sc.parallelize(['B','A','A'])
y = x.map(lambda x: ord(x))  # zip expects x and y to have same #partitions and #elements/partition
z = x.zip(y)
print(x.collect())
print(y.collect())
print(z.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.zipWithIndex"> <img align=left src="files/images/pyspark-page66.svg" width=500 height=500 /> </a>
# zipWithIndex
x = sc.parallelize(['B','A','A'], 2)
y = x.zipWithIndex()
print(x.glom().collect())
print(y.collect())
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
<a href="http://spark.apache.org/docs/1.2.0/api/python/pyspark.html#pyspark.RDD.zipWithUniqueId"> <img align=left src="files/images/pyspark-page67.svg" width=500 height=500 /> </a>
# zipWithUniqueId
x = sc.parallelize(['B','A','A'], 2)
y = x.zipWithUniqueId()
print(x.glom().collect())
print(y.collect())

# stop the spark context
sc.stop()
notes/2-pyspark-rdd-examples.ipynb
dsiufl/2015-Fall-Hadoop
mit
Example 2 - Lists can also hold multiple object types
list1 = [1,'a',"This is a list",5.25]
print(list1)
Lists.ipynb
vravishankar/Jupyter-Books
mit
Example 3
# Find the length of the list
len(list1)
Lists.ipynb
vravishankar/Jupyter-Books
mit
Example 4 - Slicing & Indexing
# Get the element using the index
print(list1[0])
print(list1[2])
# Grab index 1 and everything after it
print(list1[1:])
# Grab the elements from index position 1 to 3 (1 less than given)
print(list1[1:3])
# Grab elements up to the 3rd item
print(list1[:3])
# Grab the last item in the list
print(list1[-1])
print(list1[...
Lists.ipynb
vravishankar/Jupyter-Books
mit
List Methods
list4 = ['a','b','d','e']
list4.insert(2,'c')
list4
list4.append(['f','g'])
list4
popped_item = list4.pop()
popped_item
print(list4)
# sort elements
list4.sort()
list4
# reverse elements
list4.reverse()
list4
list4.remove('a')
list4
list5 = ['a','f']
list4.extend(list5)
list4
del list4[2]
list4
# count the ite...
Lists.ipynb
vravishankar/Jupyter-Books
mit
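Since the cell above is truncated, here is a self-contained sketch of the same list methods; the exact sequence of operations is assumed, and only loosely mirrors the original:

```python
list4 = ['a', 'b', 'd', 'e']
list4.insert(2, 'c')          # insert 'c' at index 2
print(list4)                  # ['a', 'b', 'c', 'd', 'e']
list4.append(['f', 'g'])      # append adds ONE element (here, a nested list)
popped_item = list4.pop()     # pop removes and returns the last element
print(popped_item)            # ['f', 'g']
list4.extend(['f', 'g'])      # extend adds each element individually
print(list4)                  # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
list4.remove('a')             # remove the first occurrence of 'a'
list4.reverse()               # reverse in place
print(list4)                  # ['g', 'f', 'e', 'd', 'c', 'b']
list4.sort()                  # sort in place
del list4[2]                  # delete the element at index 2 ('d')
print(list4)                  # ['b', 'c', 'e', 'f', 'g']
print(list4.count('g'))       # 1
```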
1. ReLU A great default choice for hidden layers. It is frequently used in industry and is almost always adequate to solve a problem. Although this function is not differentiable at z=0, this is not usually a problem in practice since an exact value of 0 is rare. The derivative at z=0 can usually be set to 0 or 1 without a ...
relu = np.maximum(z, 0)
draw_activation_plot(relu)
content/deep-learning/Activation Functions.ipynb
jlawman/jlawman.github.io
mit
2. Leaky ReLU Can be better than ReLU, but it is used less often in practice. It provides a differentiable point at 0 to address the concern mentioned above.
leaky_ReLU = np.maximum(0.01*z, z)
draw_activation_plot(leaky_ReLU)
content/deep-learning/Activation Functions.ipynb
jlawman/jlawman.github.io
mit
3. sigmoid Almost never used except in the output layer when dealing with binary classification. Its most useful feature is that it guarantees an output between 0 and 1. However, when z is very small or very large, the derivative of the sigmoid function is very small, which can slow down gradient descent.
sigmoid = 1/(1+np.exp(-z))
draw_activation_plot(sigmoid, y_ticks=[0,1], two_quad_y_lim=[0,1])
content/deep-learning/Activation Functions.ipynb
jlawman/jlawman.github.io
mit
4. tanh This is essentially a shifted version of the sigmoid function which is usually strictly better. The mean of activations is closer to 0 which makes training on centered data easier. tanh is also a great default choice for hidden layers.
tanh = (np.exp(z)-np.exp(-z))/(np.exp(z)+np.exp(-z))
draw_activation_plot(tanh, y_ticks=[-1,0,1], quadrants=4)
content/deep-learning/Activation Functions.ipynb
jlawman/jlawman.github.io
mit
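The four cells above rely on a grid `z` and a `draw_activation_plot` helper defined earlier in the notebook. A self-contained NumPy sketch of just the four functions (plotting omitted; the grid here is an assumption):

```python
import numpy as np

z = np.linspace(-5, 5, 11)  # stand-in for the notebook's grid; index 5 is z = 0

relu = np.maximum(z, 0)
leaky_relu = np.maximum(0.01 * z, z)
sigmoid = 1 / (1 + np.exp(-z))
tanh = (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

print(relu.min())   # 0.0 -- ReLU clips negatives to zero
print(sigmoid[5])   # 0.5 -- sigmoid(0) = 0.5
print(tanh[5])      # 0.0 -- tanh(0) = 0
```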
5.1. Logging Simulations with Observers E-Cell4 provides special classes for logging, named Observers. An Observer is passed when you call the run function of a Simulator.
def create_simulator(f=gillespie.Factory()):
    m = NetworkModel()
    A, B, C = Species('A', 0.005, 1), Species('B', 0.005, 1), Species('C', 0.005, 1)
    m.add_species_attribute(A)
    m.add_species_attribute(B)
    m.add_species_attribute(C)
    m.add_reaction_rule(create_binding_reaction_rule(A, B, C, 0.01))
    m...
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
One of the most popular Observers is FixedIntervalNumberObserver, which logs the number of molecules at the given time interval. FixedIntervalNumberObserver requires an interval and a list of serials of Species for logging.
obs1 = FixedIntervalNumberObserver(0.1, ['A', 'B', 'C'])
sim = create_simulator()
sim.run(1.0, obs1)
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
The data function of FixedIntervalNumberObserver returns the logged data.
print(obs1.data())
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
targets() returns the list of Species that you specified as an argument to the constructor.
print([sp.serial() for sp in obs1.targets()])
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
NumberObserver logs the number of molecules after every step in which a reaction occurs. This observer is useful for logging all reactions, but it is not available for ode.
obs1 = NumberObserver(['A', 'B', 'C'])
sim = create_simulator()
sim.run(1.0, obs1)
print(obs1.data())
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
TimingNumberObserver allows you to give the times for logging as an argument of its constructor.
obs1 = TimingNumberObserver([0.0, 0.1, 0.2, 0.5, 1.0], ['A', 'B', 'C'])
sim = create_simulator()
sim.run(1.0, obs1)
print(obs1.data())
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
The run function accepts multiple Observers at once.
obs1 = NumberObserver(['C'])
obs2 = FixedIntervalNumberObserver(0.1, ['A', 'B'])
sim = create_simulator()
sim.run(1.0, [obs1, obs2])
print(obs1.data())
print(obs2.data())
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
FixedIntervalHDF5Observer logs the whole data in a World to an output file at a fixed interval. Its second argument is a prefix for output filenames. filename() returns the name of the file scheduled to be saved next. At most one format string like %02d is allowed, to embed a step count in the file name. When you do no...
obs1 = FixedIntervalHDF5Observer(0.2, 'test%02d.h5')
print(obs1.filename())
sim = create_simulator()
sim.run(1.0, obs1)
# Now you have stepped 5 (1.0/0.2) times
print(obs1.filename())
w = load_world('test05.h5')
print(w.t(), w.num_molecules(Species('C')))
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
The usage of FixedIntervalCSVObserver is almost the same as that of FixedIntervalHDF5Observer. It saves the positions (x, y, z) of particles along with the radius (r) and the serial number of the Species (sid) to a CSV file.
obs1 = FixedIntervalCSVObserver(0.2, "test%02d.csv")
print(obs1.filename())
sim = create_simulator()
sim.run(1.0, obs1)
print(obs1.filename())
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
Here are the first 10 lines of the output CSV file.
print(''.join(open("test05.csv").readlines()[: 10]))
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
For particle simulations, E-Cell4 also provides an Observer to trace the trajectory of a molecule, named FixedIntervalTrajectoryObserver. When no ParticleID is specified, it logs all trajectories. Once a ParticleID is lost to a reaction during a simulation, the observer simply stops tracing that particle.
sim = create_simulator(spatiocyte.Factory(0.005))
obs1 = FixedIntervalTrajectoryObserver(0.01)
sim.run(0.1, obs1)
print([tuple(pos) for pos in obs1.data()[0]])
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
Generally, World assumes a periodic boundary on each plane. To avoid a big jump of a particle at the edge due to the boundary condition, FixedIntervalTrajectoryObserver tries to keep track of the shift in positions. Thus, the positions stored in the Observer are not necessarily limited to the cuboid given for the World. To t...
obs1 = NumberObserver(['C'])
obs2 = FixedIntervalNumberObserver(0.1, ['A', 'B'])
sim = create_simulator()
sim.run(10.0, [obs1, obs2])
plotting.plot_number_observer(obs1, obs2, step=True)
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
You can set the style for plotting, and even add an arbitrary function to plot.
plotting.plot_number_observer(obs1, '-', obs2, ':', lambda t: 60 * (1 + 2 * math.exp(-0.9 * t)) / (2 + math.exp(-0.9 * t)), '--', step=True)
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
Plotting in the phase plane is also available by specifying the x-axis and y-axis.
plotting.plot_number_observer(obs2, 'o', x='A', y='B')
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
For spatial simulations, plotting.plot_world is available to visualize the state of a World. This function plots the positions of particles in a three-dimensional volume in an interactive way. You can save the image by right-clicking on the drawing region.
sim = create_simulator(spatiocyte.Factory(0.005))
plotting.plot_world(sim.world())
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
You can also make a movie from a series of HDF5 files logged by a FixedIntervalHDF5Observer. plotting.plot_movie requires an extra library, ffmpeg.
sim = create_simulator(spatiocyte.Factory(0.005))
obs1 = FixedIntervalHDF5Observer(0.02, 'test%02d.h5')
sim.run(1.0, obs1)
plotting.plot_movie(obs1)
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
Finally, corresponding to FixedIntervalTrajectoryObserver, plotting.plot_trajectory provides a visualization of particle trajectories.
sim = create_simulator(spatiocyte.Factory(0.005))
obs1 = FixedIntervalTrajectoryObserver(1e-3)
sim.run(1, obs1)
plotting.plot_trajectory(obs1)
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
show internally calls the plotting function corresponding to the given observer. Thus, you can simply do as follows:
show(obs1)
en/tutorials/tutorial05.ipynb
ecell/ecell4-notebooks
gpl-2.0
Range Range is simply the difference between the maximum and minimum values in a dataset. Not surprisingly, it is very sensitive to outliers.
print 'Range of X:', np.ptp(X)
lectures/drafts/Measures of Dispersion.ipynb
bspalding/research_public
apache-2.0
Mean absolute deviation The mean absolute deviation is the average of the distances of observations from the arithmetic mean. We use the absolute value of the deviation, so that 5 above the mean and 5 below the mean both contribute 5, because otherwise the deviations always sum to 0. $$ MAD = \frac{\sum_{i=1}^n |X_i - ...
abs_dispersion = [abs(mu - x) for x in X]
MAD = sum(abs_dispersion)/len(abs_dispersion)
print 'Mean absolute deviation of X:', MAD
lectures/drafts/Measures of Dispersion.ipynb
bspalding/research_public
apache-2.0
Variance and standard deviation The variance $\sigma^2$ is defined as the average of the squared deviations around the mean: $$ \sigma^2 = \frac{\sum_{i=1}^n (X_i - \mu)^2}{n} $$ This is sometimes more convenient than the mean absolute deviation because absolute value is not differentiable, while squaring is smooth, an...
print 'Variance of X:', np.var(X)
print 'Standard deviation of X:', np.std(X)
lectures/drafts/Measures of Dispersion.ipynb
bspalding/research_public
apache-2.0
One way to interpret standard deviation is by referring to Chebyshev's inequality. This tells us that the proportion of samples within $k$ standard deviations (that is, within a distance of $k \cdot$ standard deviation) of the mean is at least $1 - 1/k^2$ for all $k>1$. Let's check that this is true for our data set.
k = 1.25
dist = k*np.std(X)
l = [x for x in X if abs(x - mu) <= dist]
print 'Observations within', k, 'stds of mean:', l
print 'Confirming that', float(len(l))/len(X), '>', 1 - 1/k**2
lectures/drafts/Measures of Dispersion.ipynb
bspalding/research_public
apache-2.0
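The check above depends on a dataset X and mean mu defined earlier in the lecture. A self-contained version of the same Chebyshev check on made-up data (the stand-in X here is arbitrary):

```python
import numpy as np

X = np.arange(20)      # arbitrary stand-in dataset
mu = X.mean()

k = 1.25
dist = k * np.std(X)   # k standard deviations
within = [x for x in X if abs(x - mu) <= dist]
fraction = len(within) / len(X)

# Chebyshev's inequality guarantees fraction >= 1 - 1/k^2 for any dataset
print(fraction, '>=', 1 - 1/k**2)
```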
The bound given by Chebyshev's inequality seems fairly loose in this case. This bound is rarely strict, but it is useful because it holds for all data sets and distributions. Semivariance and semideviation Although variance and standard deviation tell us how volatile a quantity is, they do not differentiate between dev...
# Because there is no built-in semideviation, we'll compute it ourselves
lows = [e for e in X if e <= mu]
semivar = sum(map(lambda x: (x - mu)**2, lows))/len(lows)
print 'Semivariance of X:', semivar
print 'Semideviation of X:', math.sqrt(semivar)
lectures/drafts/Measures of Dispersion.ipynb
bspalding/research_public
apache-2.0
A related notion is target semivariance (and target semideviation), where we average the distance from a target of values which fall below that target: $$ \frac{\sum_{X_i < B} (X_i - B)^2}{n_{<B}} $$
B = 19
lows_B = [e for e in X if e <= B]
semivar_B = sum(map(lambda x: (x - B)**2, lows_B))/len(lows_B)
print 'Target semivariance of X:', semivar_B
print 'Target semideviation of X:', math.sqrt(semivar_B)
lectures/drafts/Measures of Dispersion.ipynb
bspalding/research_public
apache-2.0
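Both dispersion measures above can be checked by hand on toy data. A self-contained sketch (the data and target B here are arbitrary; like the original cells, it uses <= for "below"):

```python
import math

X = [1, 2, 3, 4, 5]   # arbitrary stand-in data
mu = sum(X) / len(X)  # 3.0

# Semivariance: average squared deviation of below-mean observations
lows = [x for x in X if x <= mu]
semivar = sum((x - mu) ** 2 for x in lows) / len(lows)
print(semivar, math.sqrt(semivar))   # 5/3 and its square root

# Target semivariance: only observations below the target B contribute
B = 4
lows_B = [x for x in X if x <= B]
semivar_B = sum((x - B) ** 2 for x in lows_B) / len(lows_B)
print(semivar_B)   # (9 + 4 + 1 + 0) / 4 = 3.5
```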
HMM class code Here we create the HMM class. Everything is compiled in the constructor. We also provide methods for all the individual algorithms:
class DiscreteHMM:
    def __init__(self, N=3, M=4):
        updates={}
        pi = theano.shared((np.ones(N)/N).astype(theano.config.floatX))
        a = theano.shared((np.ones((N,N))/(N*np.ones(N))).astype(theano.config.floatX))
        b = theano.shared((np.ones((N,M))/(N*np.o...
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Model creation We can either use the default (all equally probable) or some other random values to begin with. Here we will read the model parameters from a file created in my other notebook. It will allow us to make sure all the calculations match the ones there:
with open('../data/hmm.pkl') as f:
    O,pi,a,b,N,M,Time = pickle.load(f)
print 'Number of states: {}'.format(N)
print 'Number of observation classes: {}'.format(M)
print 'Number of time steps: {}'.format(Time) # T is taken by theano.tensor
print 'Observation sequence: {}'.format(O)
print 'Priors: {}'.format(pi)
prin...
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Here we will construct the HMM object. The constructor needs to compile everything, and since we have a few functions, it may take a little while:
%time hmm=DiscreteHMM()
# we can also set the model parameters
hmm.setModel(pi,a,b,N,M)
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Algorithms Let's test the methods now. You can compare the values with the ones from my other notebook:
print 'Forward probabilities:\n{}'.format(hmm.forward(O))
print 'Backward probabilities:\n{}'.format(hmm.backward(O))
print 'Full model probability: {}'.format(hmm.full_prob(O))
print 'Complete state probability:\n{}'.format(hmm.gamma(O))
seq,vite_prob=hmm.viterbi(O)
print 'Viterbi sequence: {} its probability {}'.form...
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
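The Theano class itself is truncated above, but the forward recursion it compiles can be sketched in plain NumPy (this is not the notebook's Theano code; the model numbers below are made up, not the ones loaded from hmm.pkl):

```python
import numpy as np

def forward(O, pi, a, b):
    """Forward algorithm: alpha[i] = P(o_1..o_t, state_t = i),
    updated one observation at a time."""
    alpha = pi * b[:, O[0]]            # initialization with priors
    for o in O[1:]:
        alpha = (alpha @ a) * b[:, o]  # propagate, then weight by emission
    return alpha

pi = np.array([0.6, 0.4])                 # made-up priors
a = np.array([[0.7, 0.3], [0.4, 0.6]])    # made-up transition matrix
b = np.array([[0.5, 0.5], [0.1, 0.9]])    # made-up emission matrix
O = [0, 1]                                # made-up observation sequence

alpha = forward(O, pi, a, b)
print(alpha.sum())   # full model probability P(O) = 0.2156
```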
Expected values
exp_pi,exp_a,exp_b = hmm.exp_values(O)
print 'Expected priors: {}'.format(exp_pi)
print 'Expected transitions:\n{}'.format(exp_a)
print 'Expected observations:\n{}'.format(exp_b)
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Baum-Welch We will run 15 iterations of the Baum-Welch EM reestimation here. We will also output the model probability (which should increase with each iteration) and also the mean difference between the model parameters and their expected values (which will decrease to 0 as the model converges on the optimum).
hmm.setModel(pi,a,b,N,M)
for i in range(15):
    prob,exp_err = hmm.baum_welch(O)
    print 'Iteration #{} P={} delta_exp={}'.format(i+1,prob,exp_err)
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Gradient Descent Since this is Theano, we can easily implement GD using the built-in grad method. The parameters are updated by multiplying them with their gradients. The updated values have to also be renormalized to keep the stochasticity of the parameters.
hmm.setModel(pi,a,b,N,M)
for i in range(20):
    prob = hmm.gradient_descent(O,0.2)
    print 'Iteration #{} P={}'.format(i+1,prob)
print hmm.full_prob(O)
pi_n,a_n,b_n,N_n,M_n = hmm.getModel()
np.set_printoptions(suppress=True)
print 'PI: {}'.format(pi_n)
print 'A:\n{}'.format(a_n)
print 'B:\n{}'.format(b_n)
np.set_pri...
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
This method quickly converges to the optimum, although in this example the optimum is not a very useful model because it stays mostly in one state all the time. Having several different sequences would probably serve as a better test for this method... Log model This is the same class as above, but moved into the log doma...
from pylearn2.expr.basic import log_sum_exp

def LogDot(a, b):
    return log_sum_exp(a + b.T, axis=1)

def LogSum(a, axis=None):
    return log_sum_exp(a, axis)

def LogAdd(a, b):
    return T.log(T.exp(a)+T.exp(b))

def LogSub(a, b):
    return T.log(T.exp(a)-T.exp(b))
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Here is the actual class in the log domain:
class LogDiscreteHMM:
    def __init__(self, N=3, M=4):
        updates={}
        pi = theano.shared((np.zeros(N)/N).astype(theano.config.floatX))
        a = theano.shared((np.zeros((N,N))/(N*np.ones(N))).astype(theano.config.floatX))
        b = theano.shared((np.zeros((...
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Here we construct the object. It is not much more complicated than the one above:
%time loghmm=LogDiscreteHMM()
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Since all the parameters are in the log domain, we have to take logarithms of all the values that were used above:
loghmm.setModel(np.log(pi),np.log(a),np.log(b),N,M)
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
And we have to compute the exponential of the results to get back into the normal domain. Nevertheless, the results are the same as above:
print 'Forward probabilities:\n{}'.format(np.exp(loghmm.forward(O)))
print 'Backward probabilities:\n{}'.format(np.exp(loghmm.backward(O)))
print 'Full model probability: {}'.format(np.exp(loghmm.full_prob(O)))
print 'Complete state probability:\n{}'.format(np.exp(loghmm.gamma(O)))
seq,vite_prob=loghmm.viterbi(O)
print...
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
The expected values for Baum-Welch are also correct:
exp_pi,exp_a,exp_b = loghmm.exp_values(O)
print 'Expected priors: {}'.format(np.exp(exp_pi))
print 'Expected transitions:\n{}'.format(np.exp(exp_a))
print 'Expected observations:\n{}'.format(np.exp(exp_b))
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
And the Baum-Welch procedure works the same as well. The only exception here is that the exp_err value is not retrieved in the log domain, since it's more convenient this way:
loghmm.setModel(np.log(pi),np.log(a),np.log(b),N,M) for i in range(15): prob,exp_err=loghmm.baum_welch(O) print 'Iteration #{} P={} delta_exp={}'.format(i+1,np.exp(prob),exp_err)
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Finally, gradient descent works similarly to the one above:
loghmm.setModel(np.log(pi),np.log(a),np.log(b),N,M) for i in range(20): prob=loghmm.gradient_descent(O,0.2) print 'Iteration #{} P={}'.format(i+1,np.exp(prob))
notebooks/Theano_HMM.ipynb
danijel3/ASRDemos
apache-2.0
Class 17: A Centralized Real Business Cycle Model without Labor The Model Setup A representative household lives for an infinite number of periods. The expected present value of lifetime utility to the household from consuming $C_0, C_1, C_2, \ldots $ is denoted by $U_0$: \begin{align} U_0 & = \log (C_0) + \beta E_0 \...
# 1. Input model parameters and print # 2. Compute the steady state of the model directly # 3. Define a function that evaluates the equilibrium conditions def equilibrium_equations(variables_forward,variables_current,parameters): # Parameters p = parameters # Variables fwd = variables_for...
winter2017/econ129/python/Econ129_Class_17.ipynb
letsgoexploring/teaching
mit
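The steady state mentioned in step 2 can be computed directly from the model's equilibrium conditions. The parameter values below are illustrative assumptions, not the ones used in the notebook:

```python
# Illustrative parameter values (assumptions, not taken from the notebook)
alpha, beta, delta = 0.35, 0.99, 0.025

# In the nonstochastic steady state, log A = rho*log A implies A = 1, and the
# Euler equation 1/C = beta*(alpha*A*K**(alpha-1) + 1 - delta)/C reduces to
# 1 = beta*(alpha*K**(alpha-1) + 1 - delta), which pins down the capital stock:
K = (alpha / (1/beta - 1 + delta))**(1/(1 - alpha))
Y = K**alpha      # production function with A = 1
I = delta * K     # steady-state investment just replaces depreciation
C = Y - I         # resource constraint
print(K, Y, C, I)
```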
Add Output and Investment Recall the three equilibrium conditions of the model: \begin{align} \frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha - 1} + 1 - \delta}{C_{t+1}}\right]\\ C_t + K_{t+1} & = A_{t} K_t^{\alpha} + (1-\delta) K_t\\ \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1} \end{align} Appen...
# 1. Compute the steady state values of Y and I # 2. Define a function that evaluates the equilibrium conditions # 3. Initialize the model # 4. Set the steady state of the model directly. # 5. Find the log-linear approximation around the non-stochastic steady state and solve # 6(a) Compute stochastic simulat...
winter2017/econ129/python/Econ129_Class_17.ipynb
letsgoexploring/teaching
mit
Evaluation
# Compute the standard deviations of A, Y, C, and I # Compute the coefficients of correlation for A, Y, C, and I
winter2017/econ129/python/Econ129_Class_17.ipynb
letsgoexploring/teaching
mit
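The two evaluation steps above amount to one pandas call each. A sketch with hypothetical simulated series (in the notebook these would come from the stochastic simulation of the solved model):

```python
import numpy as np
import pandas as pd

# Hypothetical simulated series standing in for the model's output
rng = np.random.default_rng(0)
sim = pd.DataFrame(rng.standard_normal((200, 4)), columns=['A', 'Y', 'C', 'I'])

std_devs = sim.std()       # standard deviations of A, Y, C, and I
corr_matrix = sim.corr()   # coefficients of correlation for A, Y, C, and I
print(std_devs)
print(corr_matrix)
```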
Initialize Kafka API You need a merchant token to use the Kafka API. To get one, register the merchant at the marketplace.
from api import Marketplace marketplace = Marketplace() registration = marketplace.register( 'http://nobody:55000/', merchant_name='kafka_notebook_merchant', algorithm_name='human') registration
docs/Working with Kafka data.ipynb
hpi-epic/pricewars-merchant
mit
If you got the following error, the connection to the marketplace failed: ConnectionError: HTTPConnectionPool(host='marketplace', port=8080) In that case, make sure that the marketplace is running and that host and port are correct. If host or port are wrong, you can change it by creating a marketplace object with t...
from api import Kafka kafka = Kafka(token=registration.merchant_token)
docs/Working with Kafka data.ipynb
hpi-epic/pricewars-merchant
mit
Request topic You can request data for specific topics. The most important topics are buyOffer, which contains your own sales, and marketSituation, which contains a history of market situations. The call will return the data in the form of a pandas DataFrame. Depending on how active the simulation is and how much data is logg...
sales_data = kafka.download_topic_data('buyOffer') sales_data.head()
docs/Working with Kafka data.ipynb
hpi-epic/pricewars-merchant
mit
This method may return None if it was not possible to obtain the data. For example, this happens if the merchant doesn't have any sales.
len(sales_data) market_situations = kafka.download_topic_data('marketSituation') print(len(market_situations)) market_situations.head()
docs/Working with Kafka data.ipynb
hpi-epic/pricewars-merchant
mit
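Because the download can yield None, it is worth guarding before calling len or head. A small self-contained sketch of that guard (the lambdas below are hypothetical stand-ins for kafka.download_topic_data):

```python
def topic_length(download, topic):
    """Treat a None result as an empty topic (e.g. no sales recorded yet)."""
    data = download(topic)
    return 0 if data is None else len(data)

# Hypothetical downloaders standing in for kafka.download_topic_data
print(topic_length(lambda topic: None, 'buyOffer'))       # 0
print(topic_length(lambda topic: [1, 2, 3], 'buyOffer'))  # 3
```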
There are two other keyboard shortcuts for running code: Alt-Enter runs the current cell and inserts a new one below. Ctrl-Enter runs the current cell and enters command mode. Managing the Kernel Code is run in a separate process called the Kernel. The Kernel can be interrupted or restarted. Try running the followin...
import time time.sleep(10)
papermill/tests/notebooks/gcs/gcs_in/gcs-simple_notebook.ipynb
nteract/papermill
bsd-3-clause
If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter:
import sys from ctypes import CDLL # This will crash a Linux or Mac system # equivalent calls can be made on Windows # Uncomment these lines if you would like to see the segfault # dll = 'dylib' if sys.platform == 'darwin' else 'so.6' # libc = CDLL("libc.%s" % dll) # libc.time(-1) # BOOM!!
papermill/tests/notebooks/gcs/gcs_in/gcs-simple_notebook.ipynb
nteract/papermill
bsd-3-clause
Cell menu The "Cell" menu has a number of menu items for running code in different ways. These include: Run and Select Below Run and Insert Below Run All Run All Above Run All Below Restarting the kernel The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. ...
print("hi, stdout") print('hi, stderr', file=sys.stderr)
papermill/tests/notebooks/gcs/gcs_in/gcs-simple_notebook.ipynb
nteract/papermill
bsd-3-clause
Output is asynchronous All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.
import time, sys for i in range(8): print(i) time.sleep(0.5)
papermill/tests/notebooks/gcs/gcs_in/gcs-simple_notebook.ipynb
nteract/papermill
bsd-3-clause
Download and prepare the MS-COCO dataset We will use the MS-COCO dataset to train our model. This dataset contains >82,000 images, each of which has been annotated with at least 5 different captions. The code below will download and extract the dataset automatically. Caution: large download ahead. We'll use the train...
annotation_zip = tf.keras.utils.get_file('captions.zip', cache_subdir=os.path.abspath('.'), origin = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip', extract = True) an...
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
Optionally, limit the size of the training set for faster training For this example, we'll select a subset of 30,000 captions and use these and the corresponding images to train our model. As always, captioning quality will improve if you choose to use more data.
# read the json file with open(annotation_file, 'r') as f: annotations = json.load(f) # storing the captions and the image name in vectors all_captions = [] all_img_name_vector = [] for annot in annotations['annotations']: caption = '<start> ' + annot['caption'] + ' <end>' image_id = annot['image_id'] ...
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
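The subsampling step can be sketched with toy data and the standard library: shuffle the caption/image pairs together, then keep the first num_examples (30,000 in the text; smaller here for illustration). The notebook itself uses different helper functions; this is only the idea:

```python
import random

# Toy stand-ins for the real caption and image-name lists
all_captions = ['<start> caption %d <end>' % i for i in range(10)]
all_img_name_vector = ['img%d.jpg' % i for i in range(10)]

pairs = list(zip(all_captions, all_img_name_vector))
random.seed(1)
random.shuffle(pairs)  # shuffle captions and image names together

num_examples = 5
train_captions, img_name_vector = map(list, zip(*pairs[:num_examples]))
print(len(train_captions), len(img_name_vector))  # 5 5
```

Shuffling the zipped pairs (rather than each list separately) keeps every caption aligned with its image.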
Preprocess the images using InceptionV3 Next, we will use InceptionV3 (pretrained on Imagenet) to classify each image. We will extract features from the last convolutional layer. First, we will need to convert the images into the format inceptionV3 expects by: * Resizing the image to (299, 299) * Using the preprocess_...
def load_image(image_path): img = tf.read_file(image_path) img = tf.image.decode_jpeg(img, channels=3) img = tf.image.resize_images(img, (299, 299)) img = tf.keras.applications.inception_v3.preprocess_input(img) return img, image_path
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
Initialize InceptionV3 and load the pretrained Imagenet weights To do so, we'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. * Each image is forwarded through the network and the vector that we get at the end is stored in a dictionary (image_name --> f...
image_model = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet') new_input = image_model.input hidden_layer = image_model.layers[-1].output image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
Caching the features extracted from InceptionV3 We will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this would exceed the memory limitations of Colab (although these...
# getting the unique images encode_train = sorted(set(img_name_vector)) # feel free to change the batch_size according to your system configuration image_dataset = tf.data.Dataset.from_tensor_slices( encode_train).map(load_image).batch(16) for img, path in image_dataset: batch_featur...
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
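The caching idea itself can be sketched without TensorFlow: persist one (8, 8, 2048) feature array per image and reload it on demand, trading disk space for RAM. The image filename below is hypothetical:

```python
import os
import tempfile
import numpy as np

cache_dir = tempfile.mkdtemp()

def cache_features(image_path, features):
    # np.save appends '.npy' to names that do not already end with it
    np.save(os.path.join(cache_dir, os.path.basename(image_path)), features)

def load_cached(image_path):
    return np.load(os.path.join(cache_dir, os.path.basename(image_path)) + '.npy')

feats = np.zeros((8, 8, 2048), dtype=np.float32)  # shape mentioned in the text
cache_features('train2014/COCO_train2014_000000000001.jpg', feats)
print(load_cached('train2014/COCO_train2014_000000000001.jpg').shape)
```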
Preprocess and tokenize the captions First, we'll tokenize the captions (e.g., by splitting on spaces). This will give us a vocabulary of all the unique words in the data (e.g., "surfing", "football", etc). Next, we'll limit the vocabulary size to the top 5,000 words to save memory. We'll replace all other words with...
# This will find the maximum length of any caption in our dataset def calc_max_length(tensor): return max(len(t) for t in tensor) # The steps above are a general process for dealing with text preprocessing # choosing the top 5000 words from the vocabulary top_k = 5000 tokenizer = tf.keras.preprocessing.text.Tokenizer(...
tensorflow/contrib/eager/python/examples/generative_examples/image_captioning_with_attention.ipynb
apark263/tensorflow
apache-2.0
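The two steps described above can be sketched in plain Python in place of tf.keras's Tokenizer: split each caption on spaces, then find the longest one. The captions are made-up examples:

```python
captions = ['<start> a man surfing <end>',
            '<start> a football game on a grass field <end>']
tokenized = [c.split() for c in captions]  # naive space-based tokenization

def calc_max_length(tensor):
    # Maximum caption length, used later to pad all sequences to equal length
    return max(len(t) for t in tensor)

print(calc_max_length(tokenized))
```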