```python
english = ["hello", "goodbye", "no", "yes", "please", "thank you"]
spanish = ["hola", "adios", "no", "si", "por favor", "gracias"]
list(zip(english, spanish))
```
notebooks/1.4_make_clustered_heatmap.ipynb
olgabot/cshl-singlecell-2017
mit
Now we'll use a dictionary (dict) to make a lookup table from the pairing made by zip, using the first item as the "key" (what you use to look up) and the second item as the "value" (the result of the lookup). You can think of it as a translator -- use the word in English to look up the word in Spanish.
```python
english_to_spanish = dict(zip(english, spanish))
english_to_spanish
```
Now we can use English words to look up the word in Spanish! We use square brackets with the English word we want to translate to look up the Spanish word.
```python
english_to_spanish['hello']
```
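One caveat worth knowing: looking up a word that isn't in the dictionary raises a KeyError. A small sketch (reusing the word pairs above) of dict.get, which returns a fallback value instead:

```python
english_to_spanish = {"hello": "hola", "please": "por favor"}

# Square brackets raise KeyError for a missing key;
# .get returns None (or a default you supply) instead.
print(english_to_spanish.get("hello"))             # present key: works like []
print(english_to_spanish.get("cat", "<unknown>"))  # missing key: the fallback
```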
Exercise 12 Make a spanish_to_english dictionary and look up the English word for "por favor".
```python
# YOUR CODE HERE
```
```python
spanish_to_english = dict(zip(spanish, english))
spanish_to_english['por favor']
```
Okay, detour over! Switching from linguistics back to biology :) Exercise 13 Use dict and zip to create a variable called id_to_color that assigns each label in cluster_ids_unique to a color in set2.
```python
# YOUR CODE HERE
```
```python
id_to_color = dict(zip(cluster_ids_unique, set2))
id_to_color
```
Now we want to use this id_to_color lookup table to make a long list of colors for each cell.
```python
cell_metadata.head()
```
As an example, let's use the celltype column to make a list of each celltype's color first. Notice that we can use cell_metadata.celltype or cell_metadata['celltype'] to get the column we want. We can only use the 'dot' notation because our column name has no unfriendly characters like spaces, dashes, or dots -- characters that mean something special in Python.
```python
celltypes = cell_metadata.celltype.unique()  # Could also use cell_metadata['celltype'].unique()
celltypes

celltype_to_color = dict(zip(celltypes, sns.color_palette('Accent', n_colors=len(celltypes))))
celltype_to_color
```
Now we'll use the existing column cell_metadata.celltype to make a list of colors for each celltype.
```python
per_cell_celltype_color = [celltype_to_color[celltype] for celltype in cell_metadata.celltype]

# Since this list is as long as our number of cells (300!), let's slice it and only look at the first 5
per_cell_celltype_color[:5]
```
Exercise 14 Make a variable called per_cell_cluster_color that uses the id_to_color dictionary to look up the color for each value in the cluster_celltype_with_id column of cell_metadata.
```python
# YOUR CODE HERE
```
```python
per_cell_cluster_color = [id_to_color[i] for i in cell_metadata.cluster_celltype_with_id]

# Since this list is as long as our number of cells (300!), let's slice it and only look at the first 10
per_cell_cluster_color[:10]
```
Exercise 15 Now use the cluster colors to label the rows and columns in sns.clustermap.
```python
# YOUR CODE HERE
```
```python
sns.clustermap(expression_corr,
               row_colors=per_cell_cluster_color,
               col_colors=per_cell_cluster_color)
```
We can also combine the celltype and cluster colors we created to create a double-layer colormap!
```python
combined_colors = [per_cell_cluster_color, per_cell_celltype_color]
len(combined_colors)

sns.clustermap(expression_corr, row_colors=combined_colors, col_colors=combined_colors)
```
Create sample The following assumes you've already created your validation set - remember that the training and validation sets should contain different drivers, as mentioned on the Kaggle competition page.
```python
%cd data/state
%cd train

%mkdir ../sample
%mkdir ../sample/train
%mkdir ../sample/valid

for d in glob('c?'):
    os.mkdir('../sample/train/' + d)
    os.mkdir('../sample/valid/' + d)

from shutil import copyfile

g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1500):
    copyfile(shuf[i], '../sample/train/' + shuf[i])

%cd ../valid

g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1000):
    copyfile(shuf[i], '../sample/valid/' + shuf[i])

%cd ../../..
%mkdir data/state/results
%mkdir data/state/sample/test
```
deeplearning1/nbs/statefarm-sample.ipynb
yingchi/fastai-notes
apache-2.0
Create batches
```python
batches = get_batches(path + 'train', batch_size=batch_size)
val_batches = get_batches(path + 'valid', batch_size=batch_size * 2, shuffle=False)

(val_classes, trn_classes, val_labels, trn_labels,
 val_filenames, filenames, test_filename) = get_classes(path)
```
Basic models Linear model First, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.
```python
model = Sequential([
    BatchNormalization(axis=1, input_shape=(3, 224, 224)),
    Flatten(),
    Dense(10, activation='softmax')
])
```
As you can see below, this training is going nowhere...
```python
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
Let's first check the number of parameters to see that there are enough parameters to find some useful relationships:
```python
model.summary()
```
Over 1.5 million parameters - that should be enough. Incidentally, it's worth checking you understand why this is the number of parameters in this layer:
```python
10*3*224*224
```
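Spelling out that arithmetic as a quick sketch: the Dense layer's weight matrix connects the 3×224×224 flattened input to the 10 outputs, plus one bias per output (the bias term is my addition; the cell above counts only the weights):

```python
inputs = 3 * 224 * 224      # flattened input size
outputs = 10                # one per class
weights = inputs * outputs  # Dense layer weight matrix
biases = outputs            # one bias per output unit
print(weights, biases, weights + biases)  # 1505280 10 1505290
```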
Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and that is likely where we would end up with a high learning rate. So let's check:
```python
np.round(model.predict_generator(batches, batches.N)[:10], 2)
```
Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate:
```python
model = Sequential([
    BatchNormalization(axis=1, input_shape=(3, 224, 224)),
    Flatten(),
    Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
Great - we found our way out of that hole... Now we can increase the learning rate and see where we can get to.
```python
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
We're stabilizing at validation accuracy of 0.39. Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results:
```python
rnd_batches = get_batches(path + 'valid', batch_size=batch_size * 2, shuffle=True)
val_res = [model.evaluate_generator(rnd_batches, rnd_batches.nb_sample) for i in range(10)]
np.round(val_res, 2)
```
Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples. L2 regularization The previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting by adding L2 regularization (i.e., adding the sum of squares of the weights to our loss function):
```python
model = Sequential([
    BatchNormalization(axis=1, input_shape=(3, 224, 224)),
    Flatten(),
    Dense(10, activation='softmax', W_regularizer=l2(0.01))
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)

model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
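The penalty that l2(0.01) contributes to the loss is simply the sum of squared weights scaled by the coefficient. A minimal numpy sketch (the weight values here are made up for illustration, not taken from the model):

```python
import numpy as np

W = np.array([[0.5, -1.0],
              [2.0, 0.0]])   # hypothetical weight matrix
lam = 0.01                   # regularization coefficient

penalty = lam * np.sum(W ** 2)  # this term is added to the cross-entropy loss
print(penalty)  # 0.0525
```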
Looks like we can get a bit over 50% accuracy this way. This will be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach. Single hidden layer The next simplest model is to add a single hidden layer.
```python
model = Sequential([
    BatchNormalization(axis=1, input_shape=(3, 224, 224)),
    Flatten(),
    Dense(100, activation='relu'),
    BatchNormalization(),
    Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)

model.optimizer.lr = 0.01
model.fit_generator(batches, batches.nb_sample, nb_epoch=5,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
Not looking very encouraging... which isn't surprising, since we know that CNNs are a much better choice for computer vision problems. So we'll try one. Single conv layer Two conv layers with max pooling followed by a simple dense network make a good simple CNN to start with:
```python
def conv1(batches):
    model = Sequential([
        BatchNormalization(axis=1, input_shape=(3, 224, 224)),
        Convolution2D(32, 3, 3, activation='relu'),
        BatchNormalization(axis=1),
        MaxPooling2D((3, 3)),
        Convolution2D(64, 3, 3, activation='relu'),
        BatchNormalization(axis=1),
        MaxPooling2D((3, 3)),
        Flatten(),
        Dense(200, activation='relu'),
        BatchNormalization(),
        Dense(10, activation='softmax')
    ])
    model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit_generator(batches, batches.nb_sample, nb_epoch=2,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
    model.optimizer.lr = 0.001
    model.fit_generator(batches, batches.nb_sample, nb_epoch=4,
                        validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
    return model

conv1(batches)
```
The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable result. So, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation. Data augmentation To find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best result we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately. Width shift: move the image left and right -
```python
gen_t = image.ImageDataGenerator(width_shift_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
```
Height shift: move the image up and down -
```python
gen_t = image.ImageDataGenerator(height_shift_range=0.05)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
```
Random shear angles (max in radians) -
```python
gen_t = image.ImageDataGenerator(shear_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
```
Rotation: max in degrees -
```python
gen_t = image.ImageDataGenerator(rotation_range=15)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
```
Channel shift: randomly changing the R,G,B colors -
```python
gen_t = image.ImageDataGenerator(channel_shift_range=20)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
```
And finally, putting it all together!
```python
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
                                 shear_range=0.1, channel_shift_range=20,
                                 width_shift_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
```
At first glance, this isn't looking encouraging, since the validation accuracy is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs before we make a decision.
```python
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=5,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
Lucky we tried that - we're starting to make progress! Let's keep going.
```python
model.fit_generator(batches, batches.nb_sample, nb_epoch=25,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
```
Four-way Multiply-Accumulate These cells demonstrate the evolution of a full four-way multiply-accumulate CFU instruction. SingleMultiply Demonstrates a simple calculation: (a+128)*b
```python
class SingleMultiply(SimpleElaboratable):
    def __init__(self):
        self.a = Signal(signed(8))
        self.b = Signal(signed(8))
        self.result = Signal(signed(32))

    def elab(self, m):
        m.d.comb += self.result.eq((self.a + 128) * self.b)


class SingleMultiplyTest(TestBase):
    def create_dut(self):
        return SingleMultiply()

    def test(self):
        TEST_CASE = [
            (1 - 128, 1, 1),
            (33 - 128, -25, 33 * -25),
        ]

        def process():
            for (a, b, expected) in TEST_CASE:
                yield self.dut.a.eq(a)
                yield self.dut.b.eq(b)
                yield Delay(0.1)
                self.assertEqual(expected, (yield self.dut.result))
                yield

        self.run_sim(process)

runTests(SingleMultiplyTest)
```
proj/fccm_tutorial/Amaranth_for_CFUs.ipynb
google/CFU-Playground
apache-2.0
WordMultiplyAdd Performs four (a + 128) * b operations in parallel, and adds the results.
```python
class WordMultiplyAdd(SimpleElaboratable):
    def __init__(self):
        self.a_word = Signal(32)
        self.b_word = Signal(32)
        self.result = Signal(signed(32))

    def elab(self, m):
        a_bytes = [self.a_word[i:i+8].as_signed() for i in range(0, 32, 8)]
        b_bytes = [self.b_word[i:i+8].as_signed() for i in range(0, 32, 8)]
        m.d.comb += self.result.eq(
            sum((a + 128) * b for a, b in zip(a_bytes, b_bytes)))


class WordMultiplyAddTest(TestBase):
    def create_dut(self):
        return WordMultiplyAdd()

    def test(self):
        def a(a, b, c, d):
            return pack_vals(a, b, c, d, offset=-128)

        def b(a, b, c, d):
            return pack_vals(a, b, c, d, offset=0)

        TEST_CASE = [
            (a(99, 22, 2, 1), b(-2, 6, 7, 111), 59),
            (a(63, 161, 15, 0), b(29, 13, 62, -38), 4850),
        ]

        def process():
            for (a, b, expected) in TEST_CASE:
                yield self.dut.a_word.eq(a)
                yield self.dut.b_word.eq(b)
                yield Delay(0.1)
                self.assertEqual(expected, (yield self.dut.result))
                yield

        self.run_sim(process)

runTests(WordMultiplyAddTest)
```
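The a_word[i:i+8].as_signed() slicing can be mimicked in plain Python, which is handy for checking test values by hand. A small sketch (the helper names here are mine, not from the notebook):

```python
def unpack_signed_bytes(word):
    """Split a 32-bit word into four signed 8-bit values, LSB first."""
    out = []
    for shift in range(0, 32, 8):
        b = (word >> shift) & 0xFF
        out.append(b - 256 if b >= 128 else b)  # two's-complement sign extension
    return out

def macc_word(a_word, b_word):
    """Model of the four-way (a + 128) * b multiply-add."""
    return sum((a + 128) * b
               for a, b in zip(unpack_signed_bytes(a_word),
                               unpack_signed_bytes(b_word)))

# 0xFF as a signed byte is -1, so the low lane computes (-1 + 128) * 2 = 254
print(macc_word(0x000000FF, 0x00000002))  # 254
```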
WordMultiplyAccumulate Adds an accumulator to the four-way multiply and add operation. Includes an enable signal to control when accumulation takes place and a clear signal to reset the accumulator.
```python
class WordMultiplyAccumulate(SimpleElaboratable):
    def __init__(self):
        self.a_word = Signal(32)
        self.b_word = Signal(32)
        self.accumulator = Signal(signed(32))
        self.enable = Signal()
        self.clear = Signal()

    def elab(self, m):
        a_bytes = [self.a_word[i:i+8].as_signed() for i in range(0, 32, 8)]
        b_bytes = [self.b_word[i:i+8].as_signed() for i in range(0, 32, 8)]
        calculations = ((a + 128) * b for a, b in zip(a_bytes, b_bytes))
        summed = sum(calculations)
        with m.If(self.enable):
            m.d.sync += self.accumulator.eq(self.accumulator + summed)
        with m.If(self.clear):
            m.d.sync += self.accumulator.eq(0)


class WordMultiplyAccumulateTest(TestBase):
    def create_dut(self):
        return WordMultiplyAccumulate()

    def test(self):
        def a(a, b, c, d):
            return pack_vals(a, b, c, d, offset=-128)

        def b(a, b, c, d):
            return pack_vals(a, b, c, d, offset=0)

        DATA = [
            # (a_word, b_word, enable, clear), expected accumulator
            ((a(0, 0, 0, 0), b(0, 0, 0, 0), 0, 0), 0),
            # Simple tests: with just first byte
            ((a(10, 0, 0, 0), b(3, 0, 0, 0), 1, 0), 0),
            ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 1, 0), 30),
            ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 0, 0), -14),
            # Since was not enabled last cycle, accumulator will not change
            ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 1, 0), -14),
            # Since was enabled last cycle, will change accumulator
            ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 0, 1), -58),
            # Accumulator cleared
            ((a(11, 0, 0, 0), b(-4, 0, 0, 0), 0, 0), 0),
            # Uses all bytes (calculated on a spreadsheet)
            ((a(99, 22, 2, 1), b(-2, 6, 7, 111), 1, 0), 0),
            ((a(2, 45, 79, 22), b(-33, 6, -97, -22), 1, 0), 59),
            ((a(23, 34, 45, 56), b(-128, -121, 119, 117), 1, 0), -7884),
            ((a(188, 34, 236, 246), b(-87, 56, 52, -117), 1, 0), -3035),
            ((a(131, 92, 21, 83), b(-114, -72, -31, -44), 1, 0), -33997),
            ((a(74, 68, 170, 39), b(102, 12, 53, -128), 1, 0), -59858),
            ((a(16, 63, 1, 198), b(29, 36, 106, 62), 1, 0), -47476),
            ((a(0, 0, 0, 0), b(0, 0, 0, 0), 0, 1), -32362),
            # Interesting bug
            ((a(128, 0, 0, 0), b(-104, 0, 0, 0), 1, 0), 0),
            ((a(0, 51, 0, 0), b(0, 43, 0, 0), 1, 0), -13312),
            ((a(0, 0, 97, 0), b(0, 0, -82, 0), 1, 0), -11119),
            ((a(0, 0, 0, 156), b(0, 0, 0, -83), 1, 0), -19073),
            ((a(0, 0, 0, 0), b(0, 0, 0, 0), 1, 0), -32021),
        ]

        dut = self.dut

        def process():
            for (a_word, b_word, enable, clear), expected in DATA:
                yield dut.a_word.eq(a_word)
                yield dut.b_word.eq(b_word)
                yield dut.enable.eq(enable)
                yield dut.clear.eq(clear)
                yield Delay(0.1)  # Wait for input values to settle
                # Check on accumulator, as calculated last cycle
                self.assertEqual(expected, (yield dut.accumulator))
                yield Tick()

        self.run_sim(process)

runTests(WordMultiplyAccumulateTest)
```
CFU Wrapper Wraps the preceding logic in a CFU. Uses funct7 to determine what function the WordMultiplyAccumulate unit should perform.
```python
class Macc4Instruction(InstructionBase):
    """Simple instruction that provides access to a WordMultiplyAccumulate

    The supported functions are:
    * 0: Reset accumulator
    * 1: 4-way multiply accumulate.
    * 2: Read accumulator
    """
    def elab(self, m):
        # Build the submodule
        m.submodules.macc4 = macc4 = WordMultiplyAccumulate()

        # Inputs to the macc4
        m.d.comb += macc4.a_word.eq(self.in0)
        m.d.comb += macc4.b_word.eq(self.in1)

        # Only function 2 has a defined response, so we can
        # unconditionally set it.
        m.d.comb += self.output.eq(macc4.accumulator)

        with m.If(self.start):
            m.d.comb += [
                # We can always return control to the CPU on next cycle
                self.done.eq(1),
                # clear on function 0, enable on function 1
                macc4.clear.eq(self.funct7 == 0),
                macc4.enable.eq(self.funct7 == 1),
            ]


def make_cfu():
    return simple_cfu({0: Macc4Instruction()})


class CfuTest(CfuTestBase):
    def create_dut(self):
        return make_cfu()

    def test(self):
        "Tests CFU plumbs to Macc4 correctly"
        def a(a, b, c, d):
            return pack_vals(a, b, c, d, offset=-128)

        def b(a, b, c, d):
            return pack_vals(a, b, c, d, offset=0)

        # These values were calculated with a spreadsheet
        DATA = [
            # ((fn3, fn7, op1, op2), result)
            ((0, 0, 0, 0), None),  # reset
            ((0, 1, a(130, 7, 76, 47), b(104, -14, -24, 71)), None),  # calculate
            ((0, 1, a(84, 90, 36, 191), b(109, 57, -50, -1)), None),
            ((0, 1, a(203, 246, 89, 178), b(-87, 26, 77, 71)), None),
            ((0, 1, a(43, 27, 78, 167), b(-24, -8, 65, 124)), None),
            ((0, 2, 0, 0), 59986),  # read result
            ((0, 0, 0, 0), None),  # reset
            ((0, 1, a(67, 81, 184, 130), b(81, 38, -116, 65)), None),
            ((0, 1, a(208, 175, 180, 198), b(-120, -70, 8, 11)), None),
            ((0, 1, a(185, 81, 101, 108), b(90, 6, -92, 83)), None),
            ((0, 1, a(219, 216, 114, 236), b(-116, -9, -109, -16)), None),
            ((0, 2, 0, 0), -64723),  # read result
            ((0, 0, 0, 0), None),  # reset
            ((0, 1, a(128, 0, 0, 0), b(-104, 0, 0, 0)), None),
            ((0, 1, a(0, 51, 0, 0), b(0, 43, 0, 0)), None),
            ((0, 1, a(0, 0, 97, 0), b(0, 0, -82, 0)), None),
            ((0, 1, a(0, 0, 0, 156), b(0, 0, 0, -83)), None),
            ((0, 2, a(0, 0, 0, 0), b(0, 0, 0, 0)), -32021),
        ]
        self.run_ops(DATA)

runTests(CfuTest)
```
Amaranth to Verilog Examples These examples show Amaranth and the Verilog it is translated into. SyncAndComb Demonstrates synchronous and combinatorial logic with a simple component that outputs the high bit of a 12 bit counter.
```python
class SyncAndComb(Elaboratable):
    def __init__(self):
        self.out = Signal(1)
        self.ports = [self.out]

    def elaborate(self, platform):
        m = Module()
        counter = Signal(12)
        m.d.sync += counter.eq(counter + 1)
        m.d.comb += self.out.eq(counter[-1])
        return m

print(convert_elaboratable(SyncAndComb()))
```
Conditional Enable Demonstrates Amaranth's equivalent to Verilog's if statement. A five bit counter is incremented when input signal up is high or decremented when down is high.
```python
class ConditionalEnable(Elaboratable):
    def __init__(self):
        self.up = Signal()
        self.down = Signal()
        self.value = Signal(5)
        self.ports = [self.value, self.up, self.down]

    def elaborate(self, platform):
        m = Module()
        with m.If(self.up):
            m.d.sync += self.value.eq(self.value + 1)
        with m.Elif(self.down):
            m.d.sync += self.value.eq(self.value - 1)
        return m

print(convert_elaboratable(ConditionalEnable()))
```
EdgeDetector Simple edge detector, along with a test case.
```python
class EdgeDetector(SimpleElaboratable):
    """Detects low-high transitions in a signal"""
    def __init__(self):
        self.input = Signal()
        self.detected = Signal()
        self.ports = [self.input, self.detected]

    def elab(self, m):
        last = Signal()
        m.d.sync += last.eq(self.input)
        m.d.comb += self.detected.eq(self.input & ~last)


class EdgeDetectorTestCase(TestBase):
    def create_dut(self):
        return EdgeDetector()

    def test_with_table(self):
        TEST_CASE = [
            (0, 0),
            (1, 1),
            (0, 0),
            (0, 0),
            (1, 1),
            (1, 0),
            (0, 0),
        ]

        def process():
            for (input, expected) in TEST_CASE:
                # Set input
                yield self.dut.input.eq(input)
                # Allow some time for signals to propagate
                yield Delay(0.1)
                self.assertEqual(expected, (yield self.dut.detected))
                yield

        self.run_sim(process)

runTests(EdgeDetectorTestCase)
print(convert_elaboratable(EdgeDetector()))
```
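The detector's rule, detected = input & ~last, is easy to model in plain Python, which is a nice way to sanity-check the test table above. A sketch (the function name is mine):

```python
def edge_detect(samples):
    """Return 1 for each sample that is a low-to-high transition."""
    last = 0  # the register starts at 0
    out = []
    for s in samples:
        out.append(s & (1 - last))  # combinational: input & ~last
        last = s                    # synchronous: last <= input
    return out

# Matches the TEST_CASE table: only 0 -> 1 transitions are flagged
print(edge_detect([0, 1, 0, 0, 1, 1, 0]))  # [0, 1, 0, 0, 1, 0, 0]
```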
The notion of doing a sequence of operations in order is familiar to us from cooking with recipes. For example, a recipe might direct us to: put ingredients in bowl<br> mix ingredients together<br> pour into baking pan<br> bake at 375 degrees for 40 minutes We naturally assume the steps are given in execution order. Conditional execution Some recipes give conditional instructions, such as if not sweet enough, add some sugar Similarly, processors can conditionally execute one or a group of operations.
```python
x = -3
if x < 0:
    x = 0
    print("was negative")
print(x)
```
notes/computation.ipynb
parrt/msan501
mit
Conditional operations execute only if the conditional expression is true. To be clear, the processor does not execute all of the operations present in the program. When mapping a real-world problem to a conditional statement, your goal is to identify these key elements: the conditional expression the operation(s) to perform if the condition is true A template for conditional execution looks like: if condition:<br> &nbsp;&nbsp;&nbsp;&nbsp;operation 1<br> &nbsp;&nbsp;&nbsp;&nbsp;operation 2<br> &nbsp;&nbsp;&nbsp;&nbsp;... The condition must be actionable by a computer. For example, the condition "the cat is hungry" is not actionable by computer because there's nothing in its memory a computer can test that is equivalent to the cat being hungry. Conditions almost always consist of equality or relational operators for arithmetic, such as cost > 10, cost + tax < 100, or quantity == 4. It's time to introduce a new data type: boolean, which holds either a true or false value. The result (value) of any equality or relational operator is boolean. For example, "3>2" evaluates to true and "3>4" evaluates to false. In some cases, we want to execute an operation in either case, one operation if the condition is true and a different operation if the condition is false. For example, we can express a conditional operation to find the larger of two values, x and y, as:
```python
x = 99
y = 210
if x > y:
    max = x
else:
    max = y
print(max)
```
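Python also has a one-line conditional expression that does the same thing (and a built-in max() function, which the variable above shadows):

```python
x = 99
y = 210

# conditional expression: value-if-true if condition else value-if-false
biggest = x if x > y else y
print(biggest)  # 210
```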
Conditional execution is kind of like executing an operation zero or one times, depending on the conditional expression. We also need to execute operations multiple times in some cases. Repeated execution Most of the programming you will do involves applying operations to data, which means operating on data elements one by one. This means we need to be able to repeat instructions, but we rarely use such a low level construct as: while condition, do this or that. For example, it's rare that we want to print hello five times:
```python
for i in range(5):
    print("Hello")
```
(range(n) goes from 0 to n-1) On the other hand, generic loops that go around until the condition is met can be useful, such as "read data from the network until the data stops." Or, we can do computations such as the following rough approximation to log base 10:
```python
n = 1000
log = 0
i = n
while i > 1:
    log += 1
    print(i)
    i = i / 10
print(f"Log of {n} is {log}")
```
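As a sanity check, for powers of ten this loop agrees with the floor of math.log10; a quick sketch wrapping it in a function (the function name is mine):

```python
import math

def rough_log10(n):
    """Count how many times n can be divided by 10 before reaching 1 or less."""
    log = 0
    i = n
    while i > 1:
        log += 1
        i = i / 10
    return log

for n in [10, 1000, 100000]:
    print(n, rough_log10(n), math.floor(math.log10(n)))
```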
Most of the time, though, we are scanning through data, which means a "for each loop" or "indexed for loop." For-each loops The for-each loop iterates through a sequence of elements, such as a list, but it can iterate over any iterable object, such as a file on disk. It is the most common kind of loop you will use. Here is the pattern: for x in iterable_object: process x in some way For example:
```python
for name in ['parrt', 'mary', 'tombu']:
    print(name.upper())
```
We can iterate through more complicated objects too, such as the following numpy matrix:
```python
import numpy as np

A = np.array([[19, 11],
              [21, 15],
              [103, 18],
              [99, 13],
              [8, 2]])
for row in A:
    print(row)
```
Or even a dataframe:
```python
from pandas import DataFrame

df = DataFrame(data=[[99, 'parrt'], [101, 'sri'], [42, 'kayla']],
               columns=['ID', 'user'])
df

for row in df.itertuples():
    print(f"Info {row.ID:04d}: {row.user}")
```
A useful bit of Python magic gives us the iteration index as well as the iterated value:
```python
for i, row in enumerate(A):
    print(f"{i+1}. {row}")

# same as the enumerate
i = 1
for row in A:
    print(f"{i}. {row}")
    i += 1
```
With this code pattern, our goal is to choose a good name for the iterated variable, identify the conditional, and then identify the operation(s) to repeat. Indexed loops It's sometimes useful to execute an indexed loop. These are useful when we have to iterate through multiple lists at the same time. For example, if we have a list of names and their phone numbers, we might want to use an indexed loop:
```python
names = ['parrt', 'mary', 'tombu']
phones = ['5707', '1001', '3412']
for i in range(len(names)):
    name = names[i]
    phone = phones[i]
    print(f"{name:>8}: {phone}")
```
zip'd loops And here is how the cool kids do the same thing without an indexed loop (using a foreach):
```python
for name, phone in zip(names, phones):
    print(f"{name:>8}: {phone}")

for i, (name, phone) in enumerate(zip(names, phones)):
    print(f"{i}. {name:>8}: {phone}")
```
List comprehensions
```python
names = ['parrt', 'mary', 'tombu']
[name.upper() for name in names]
[name for name in names]  # make (shallow) copy of list
[name for name in names if name.startswith('m')]
[name.upper() for name in names if name.startswith('m')]

Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
[q*10 for q in Quantity]

Quantity2 = [6, 49, 0, 30, -19, 21, 12, 22, 21]
[q*10 for q in Quantity2 if q > 0]

# Find indexes of all values <= 0 in Quantity2
[i for i, q in enumerate(Quantity2) if q <= 0]

[f"{name}: {phone}" for name, phone in zip(names, phones)]
```
Reading images
```python
from scipy import misc
misc.imread('fname.png')

import matplotlib.pyplot as plt
plt.imread('fname.png')
```
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Text files: numpy.loadtxt() / numpy.savetxt(). txt/csv files: numpy.genfromtxt() / numpy.recfromcsv(). Binary files: numpy.load() / numpy.save(). 2 scipy.linalg Computing the determinant
```python
from scipy import linalg

arr = np.array([[1, 2],
                [3, 4]])
linalg.det(arr)

arr = np.array([[3, 2],
                [6, 4]])
linalg.det(arr)

linalg.det(np.ones((3, 4)))  # raises ValueError: det requires a square matrix
```
Computing the matrix inverse
```python
arr = np.array([[1, 2], [3, 4]])
iarr = linalg.inv(arr)
iarr

# Verify the result
np.allclose(np.dot(arr, iarr), np.eye(2))

# Inverting a singular matrix raises an exception
arr = np.array([[3, 2], [6, 4]])
linalg.inv(arr)
```
Singular value decomposition
```python
arr = np.arange(9).reshape((3, 3)) + np.diag([1, 0, 1])
uarr, spec, vharr = linalg.svd(arr)
spec

sarr = np.diag(spec)
svd_mat = uarr.dot(sarr).dot(vharr)
np.allclose(svd_mat, arr)
```
SVD is widely used in statistics and signal processing. Several other standard decompositions (QR, LU, Cholesky, Schur) are also available in scipy.linalg. 3 Optimization
```python
from scipy import optimize

def f(x):
    return x**2 + 10*np.sin(x)

x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
plt.show()
```
This function has a global minimum around -1.3 and a local minimum around 3.8. To search for a minimum, pick a starting point and use a gradient-based method; BFGS works well here.
```python
optimize.fmin_bfgs(f, 0)

# The drawback of this method is that it can get trapped in a local minimum
optimize.fmin_bfgs(f, 5)
```
We can also restrict the search for a minimum to an interval:
```python
xmin_local = optimize.fminbound(f, 0, 10)
xmin_local
```
Finding the zeros of a function
```python
# guess 1
root = optimize.fsolve(f, 1)
root

# guess -2.5
root = optimize.fsolve(f, -2.5)
root
```
Curve fitting Sample some noisy data from the function f:
```python
xdata = np.linspace(-10, 10, num=20)
ydata = f(xdata) + np.random.randn(xdata.size)
```
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
We already know the form of the function, $ax^2 + b\sin(x)$, but the coefficient of each term is unknown, so we fit them:
def f2(x,a,b): return a*x**2 + b*np.sin(x) guess=[3,2] params,params_covariance = optimize.curve_fit(f2, xdata, ydata, guess) params
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Plot the results
x = np.arange(-10,10,0.1) def f(x): return x**2 + 10 * np.sin(x) grid = (-10,10,0.1) xmin_global = optimize.brute(f,(grid,)) xmin_local = optimize.fminbound(f,0,10) root = optimize.fsolve(f,1) root2 = optimize.fsolve(f,-2.5) xdata = np.linspace(-10,10,num=20) np.random.seed(1234) ydata = f(xdata)+np.random.randn(xdata.size) def f2(x,a,b): return a*x**2 + b * np.sin(x) guess=[2,2] params,_ =optimize.curve_fit(f2,xdata,ydata,guess) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(x,f(x),'b-',label='f(x)') ax.plot(x,f2(x,*params),'r--',label='Curve fit result') xmins = np.array([xmin_global[0],xmin_local]) ax.plot(xmins,f(xmins),'go',label='minize') roots = np.array([root,root2]) ax.plot(roots,f(roots),'kv',label='Roots') ax.legend() ax.set_xlabel('x') ax.set_ylabel('f(x)') plt.show()
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
4 Statistics Histograms and probability density
a = np.random.normal(size=1000)
bins = np.arange(-4, 5)
bins
histogram = np.histogram(a, bins=bins, density=True)[0]
bins = 0.5 * (bins[1:] + bins[:-1])
bins
from scipy import stats
b = stats.norm.pdf(bins)
plt.plot(bins, histogram)
plt.plot(bins, b)
plt.show()
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Percentiles A percentile is an estimator of the cumulative probability distribution function
np.median(a) stats.scoreatpercentile(a,50) stats.scoreatpercentile(a,90)
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Statistical tests The result of a statistical test is often used as a decision indicator. For example, given two sets of observations, both assumed to come from Gaussian processes, we can use a t-test to decide whether their means differ significantly:
a = np.random.normal(0,1,size=100) b = np.random.normal(0,1,size=10) stats.ttest_ind(a,b)
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
The result has two parts: the t-statistic, i.e. the value of the test statistic, and the p-value, where a value close to 1 means the samples are consistent with identical means and a value close to 0 means they are not. 5 Interpolation
measured_time = np.linspace(0, 1, 10)
noise = (np.random.random(10) * 2 - 1) * 1e-1
measure = np.sin(2 * np.pi * measured_time) + noise
from scipy.interpolate import interp1d
linear_interp = interp1d(measured_time, measure)
computed_time = np.linspace(0, 1, 50)
linear_results = linear_interp(computed_time)
cubic_interp = interp1d(measured_time, measure, kind='cubic')
cubic_results = cubic_interp(computed_time)
plt.plot(measured_time, measure, 'o', label='points')
plt.plot(computed_time, linear_results, 'r-', label='linear interp')
plt.plot(computed_time, cubic_results, 'y-', label='cubic interp')
plt.legend()
plt.show()
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Exercise: fitting a temperature curve. The monthly minimum and maximum temperatures in Alaska are given in the table below:

Min | Max | Min | Max
--- | --- | --- | ---
-62 | 17 | -9 | 37
-59 | 19 | -13 | 37
-56 | 21 | -25 | 31
-46 | 28 | -46 | 23
-32 | 33 | -52 | 19
-18 | 38 | -48 | 18

Tasks:
+ Plot the temperature data
+ Fit a function curve to it
+ Use scipy.optimize.curve_fit() to do the fitting
+ Plot the fitted function
+ Judge whether the offsets of the maxima and minima are reasonable
import numpy as np import matplotlib.pyplot as plt months = np.arange(1,13) mins = [-62,-59,-56,-46,-32,-18,-9,-13,-25,-46,-52,-48] maxes = [17,19,21,28,33,38,37,37,31,23,19,18] fig,ax = plt.subplots() plt.plot(months,mins,'b-',label='min') plt.plot(months,maxes,'r-',label='max') plt.ylim(-80,80) plt.xlim(0.5,12.5) plt.xlabel('month') plt.ylabel('temperature') plt.xticks([1,2,3,4,5,6,7,8,9,10,11,12], ['Jan.','Feb.','Mar.','Apr.','May.','Jun.','Jul.','Aug.','Sep,','Oct.','Nov.','Dec.']) plt.legend() plt.title('Alaska temperature') plt.show()
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
From the plot, both the maximum and minimum temperatures follow a roughly quadratic shape, $y = at^2 + bt + c$, where $c$ is the offset at time $t$.
from scipy import optimize def f(t,a,b,c): return a * t**2+b*t+c guess = [-1,8,50] params_min,_ = optimize.curve_fit(f,months,mins,guess) params_max,_ = optimize.curve_fit(f,months,maxes,guess) times = np.linspace(1,12,30) plt.plot(times,f(times,*params_min),'b--',label='min_fit') plt.plot(times,f(times,*params_max),'r--',label='max_fit') plt.plot(months,mins,'bo',label='min') plt.plot(months,maxes,'ro',label='max') plt.ylim(-80,80) plt.xlim(0.5,12.5) plt.xlabel('month') plt.ylabel('temperature') plt.xticks([1,2,3,4,5,6,7,8,9,10,11,12], ['Jan.','Feb.','Mar.','Apr.','May.','Jun.','Jul.','Aug.','Sep,','Oct.','Nov.','Dec.']) plt.title('Alaska temperature') plt.show()
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
The fit to the maximum temperatures is fairly good, but the fit to the minimum temperatures is not. Finding a minimum. The six-hump camelback function $$f(x,y)=\left(4-2.1x^2+\frac{x^4}{3}\right)x^2+xy+(4y^2-4)y^2$$
+ Restrict the variables to the range $-2<x<2$, $-1<y<1$
+ Use numpy.meshgrid() and pylab.imshow() to visually locate the region containing the minimum
+ Use scipy.optimize.fmin_bfgs() or another algorithm that can minimize multidimensional functions
import numpy as np from scipy import optimize import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D def sixhump(x): return (4 - 2.1*x[0]**2 + x[0]**4 / 3.) * x[0]**2 + x[0] * x[1] + (-4 + 4*x[1]**2) * x[1] **2 x = np.linspace(-2, 2) y = np.linspace(-1, 1) xg, yg = np.meshgrid(x, y) #plt.figure() # simple visualization for use in tutorial #plt.imshow(sixhump([xg, yg])) #plt.colorbar() fig = plt.figure() ax = fig.add_subplot(111, projection='3d') surf = ax.plot_surface(xg, yg, sixhump([xg, yg]), rstride=1, cstride=1, cmap=plt.cm.jet, linewidth=0, antialiased=False) ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('f(x, y)') ax.set_title('Six-hump Camelback function') plt.show() min1 = optimize.fmin_bfgs(sixhump,[0,-0.5]) min2 = optimize.fmin_bfgs(sixhump,[0,0.5]) min3 = optimize.fmin_bfgs(sixhump,[-1.4,1.0]) local1 = sixhump(min1) local2 = sixhump(min2) local3 = sixhump(min3) print local1,local2,local3
python-statatics-tutorial/basic-theme/scipy_basic/details.ipynb
gaufung/Data_Analytics_Learning_Note
mit
Why do we get the answer above rather than what we would expect? The answer has to do with the type of number being used. Python is a "dynamically" typed language and automatically determines what kind of number to allocate for us. Above, because we did not include a decimal, Python automatically treated the expression as integers (int type) and according to integer arithmetic, 1 / 2 = 0. Now if we include a decimal we get:
1.0 / 2
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note that Python will make the output a float in this case. What happens for the following though?
4.0 + 4**(3/2) 4.0 + 4.0**(3.0 / 2.0)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Good practice to just add a decimal after any number you really want to treat as a float. Additional types of numbers include complex, Decimal and Fraction.
3+5j
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
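The Decimal and Fraction types mentioned above live in the standard library and sidestep binary floating-point surprises:

```python
from decimal import Decimal
from fractions import Fraction

# 0.1 and 0.2 have no exact binary representation...
print(0.1 + 0.2)

# ...but Decimal works in base 10
print(Decimal('0.1') + Decimal('0.2'))

# Fraction does exact rational arithmetic
print(Fraction(1, 3) + Fraction(1, 6))
```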
Note that to use "named" functions such as sqrt or sin we need to import a module so that we have access to those functions. When you import a module (or package) in Python we are asking Python to go look for the code that is named and make them active in our workspace (also called a namespace in more general parlance). Here is an example where we use Python's builtin math module:
import math math.sqrt(4) math.sin(math.pi / 2.0) math.exp(-math.pi / 4.0)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note that in order to access these functions we need to prepend the math. to the functions and the constant $\pi$. We can forgo this and import all of what math holds if we do the following:
from math import * sin(pi / 2.0)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note that many of these functions always return a float number regardless of their input. Variables Assign variables like you would in any other language:
num_students = 80 room_capacity = 85 (room_capacity - num_students) / room_capacity * 100.0
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note that we do not get what we expect from this expression as we expected from above. What would we have to change to get this to work? We could go back to change our initializations but we could also use the function float to force these values to be of float type:
float(room_capacity - num_students) / float(room_capacity) * 100.0
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note here we have left the defined variables as integers as it makes sense that they remain that way (fractional students aside).
a = 10 b = a + 2 print b
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Lists One of the most useful data structures in Python is the list.
grades = [90.0, 67.0, 85.0, 76.0, 98.0, 70.0]
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Lists are defined with square brackets and delineated by commas. Note that there is another data type called tuples, denoted by ( ), which are immutable (cannot be changed) once created. Let's try to do some list manipulations with our list of grades above. Access a single value in a list
grades[3]
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note that Python is 0-indexed, i.e. the first value in the list is accessed with index 0. Find the length of a list
len(grades)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Add values to a list
grades = grades + [62.0, 82.0, 59.0] print grades
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Slicing is another important operation
grades[2:5] grades[0:4] grades[:4] grades[4:]
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Note that the range of values does not include the element at the last index! This is important to remember for more than lists but we will get to that later.
grades[4:11]
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Another property of lists is that you can put different types in them at the same time. This can be important to remember if you may have both int and float types.
remember = ["2", 2, 2.0] remember[0] / 1 remember[1] / 1 remember[2] / 1
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Finally, one of the more useful list creation functions is range which creates a list with the bounds requested
count = range(3,7) print count
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Control Flow if Most basic logical control
x = 4 if x > 5: print "x is greater than 5" elif x < 5: print "x is less than 5" else: print "x is equal to 5"
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
for The for statements provide the most common type of loops in Python (there is also a while construct).
for i in range(5): print i for i in range(3,7): print i for animal in ['cat', 'dog', 'chinchilla']: print animal
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Related to the for statement are the control statements break and continue. Ideally we can create a loop with logic that can avoid these but sometimes code can be more readable with judicious use of these statements.
for n in range(2, 10): is_prime = True for x in range(2, n): if n % x == 0: print n, 'equals', x, '*', n / x is_prime = False break if is_prime: print "%s is a prime number" % (n)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
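To contrast with break above, here is a small sketch of continue, which skips the rest of the current iteration rather than leaving the loop:

```python
odds = []
for n in range(10):
    if n % 2 == 0:
        continue  # skip even numbers, keep looping
    if n == 7:
        break     # leave the loop entirely
    odds.append(n)
print(odds)  # [1, 3, 5]
```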
The pass statement might appear fairly useless as it simply does nothing but can provide a stub to remember to come back and implement something
def my_func(x): # Remember to implement this later! pass
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Defining Functions The last statement above defines a function in Python with an argument called x. Functions can be defined and do lots of different things, here are a few examples.
def my_print_function(x): print x my_print_function(3) def my_add_function(a, b): return a + b my_add_function(3.0, 5.0) def my_crazy_function(a, b, c=1.0): d = a + b**c return d my_crazy_function(2.0, 3.0), my_crazy_function(2.0, 3.0, 2.0), my_crazy_function(2.0, 3.0, c=2.0) def my_other_function(a, b, c=1.0): return a + b, a + b**c, a + b**(3.0 / 7.0) my_other_function(2.0, 3.0, c=2.0)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Let's try writing a bit more of a complex (and useful) function. The Fibonacci sequence is formed by adding the previous two numbers of the sequence to get the next value (starting with [0, 1]).
def fibonacci(n):
    """Return a list of the Fibonacci sequence up to n"""
    values = [0, 1]
    while values[-1] + values[-2] <= n:
        values.append(values[-1] + values[-2])
    print values
    return values
fibonacci(100)
1_intro_to_python.ipynb
detcitty/intro-numerical-methods
mit
Some global data
capital = 10000 start = datetime.datetime(2020, 1, 1) end = datetime.datetime.now()
examples/160.merge-trades/strategy.ipynb
fja05680/pinkfish
mit
Define symbols
symbols = ['SPY', 'SPY_merged']
examples/160.merge-trades/strategy.ipynb
fja05680/pinkfish
mit
Options
options = { 'use_adj' : False, 'use_cache' : True, 'stop_loss_pct' : 1.0, 'margin' : 1, 'period' : 7, 'max_open_trades' : 4, 'enable_scale_in' : True, 'enable_scale_out' : True, 'merge_trades' : False }
examples/160.merge-trades/strategy.ipynb
fja05680/pinkfish
mit
Run Strategy
options['merge_trades'] = False
strategies = pd.Series(dtype=object)
for symbol in symbols:
    print(symbol, end=" ")
    strategies[symbol] = strategy.Strategy(symbols[0], capital, start, end, options)
    strategies[symbol].run()
    # toggle inside the loop so the second pass runs with merged trades
    options['merge_trades'] = True
# View all columns and rows in dataframe
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# View raw log
strategies[symbols[0]].rlog
# View unmerged trade log
strategies[symbols[0]].tlog
# View merged trade log
strategies[symbols[1]].tlog
examples/160.merge-trades/strategy.ipynb
fja05680/pinkfish
mit
Summarize results - compare statistics between the unmerged and merged views of the trade log.
metrics = strategies[symbol].stats.index df = pf.optimizer_summary(strategies, metrics) df
examples/160.merge-trades/strategy.ipynb
fja05680/pinkfish
mit
Read the data
column_names = ['user_id', 'item_id', 'rating', 'timestamp'] df = pd.read_csv('u.data', sep='\t', names=column_names) df.head()
Machine Learning/Recommender Systems/Recommender Systems with Python.ipynb
luizhsda10/Data-Science-Projectcs
mit