GenBank files don't have any per-letter annotations:
record.letter_annotations
notebooks/04 - Sequence Annotation objects.ipynb
tiagoantao/biopython-notebook
mit
Most of the annotation information gets recorded in the `annotations` dictionary, for example:
len(record.annotations)
record.annotations["source"]
The dbxrefs list gets populated from any PROJECT or DBLINK lines:
record.dbxrefs
Finally, and perhaps most interestingly, all the entries in the features table (e.g. the genes or CDS features) get recorded as SeqFeature objects in the features list.
len(record.features)
Feature, location and position objects

SeqFeature objects

Sequence features are an essential part of describing a sequence. Once you get beyond the sequence itself, you need some way to organize and easily get at the more 'abstract' information that is known about the sequence. While it is probably impossible to develo...
from Bio import SeqFeature
start_pos = SeqFeature.AfterPosition(5)
end_pos = SeqFeature.BetweenPosition(9, left=8, right=9)
my_location = SeqFeature.FeatureLocation(start_pos, end_pos)
Note that the details of some of the fuzzy locations changed in Biopython 1.59; in particular, for BetweenPosition and WithinPosition you must now make it explicit which integer position should be used for slicing etc. For a start position this is generally the lower (left) value, while for an end position this would ge...
print(my_location)
We can access the fuzzy start and end positions using the start and end attributes of the location:
my_location.start
print(my_location.start)
my_location.end
print(my_location.end)
If you don't want to deal with fuzzy positions and just want numbers, they are actually subclasses of integers so should work like integers:
int(my_location.start)
int(my_location.end)
For compatibility with older versions of Biopython you can ask for the `nofuzzy_start` and `nofuzzy_end` attributes of the location, which are plain integers:
my_location.nofuzzy_start
my_location.nofuzzy_end
Notice that this just gives you back the position attributes of the fuzzy locations. Similarly, to make it easy to create a position without worrying about fuzzy positions, you can just pass in numbers to the FeaturePosition constructors, and you'll get back out ExactPosition objects:
exact_location = SeqFeature.FeatureLocation(5, 9)
print(exact_location)
exact_location.start
print(int(exact_location.start))
exact_location.nofuzzy_start
That is most of the nitty-gritty about dealing with fuzzy positions in Biopython. It has been designed so that dealing with fuzziness is not that much more complicated than dealing with exact positions, and hopefully you find that true!

Location testing

You can use the Python keyword in with a SeqFeature or location ob...
from Bio import SeqIO

my_snp = 4350
record = SeqIO.read("data/NC_005816.gb", "genbank")
for feature in record.features:
    if my_snp in feature:
        print("%s %s" % (feature.type, feature.qualifiers.get('db_xref')))
Note that gene and CDS features from GenBank or EMBL files defined with joins are the union of the exons -- they do not cover any introns.

Sequence described by a feature or location

A SeqFeature or location object doesn't directly contain a sequence, instead the location describes how to get this from the parent seque...
from Bio.Seq import Seq
from Bio.SeqFeature import SeqFeature, FeatureLocation

seq = Seq("ACCGAGACGGCAAAGGCTAGCATAGGTATGAGACTTCCTTCCTGCCAGTGCTGAGGAACTGGGAGCCTAC")
feature = SeqFeature(FeatureLocation(5, 18), type="gene", strand=-1)
You could take the parent sequence, slice it to extract 5:18, and then take the reverse complement. If you are using Biopython 1.59 or later, the feature location's start and end are integer like so this works:
feature_seq = seq[feature.location.start:feature.location.end].reverse_complement()
print(feature_seq)
This is a simple example so this isn't too bad -- however once you have to deal with compound features (joins) this is rather messy. Instead, the SeqFeature object has an extract method to take care of all this (and since Biopython 1.78 can handle trans-splicing by supplying a dictionary of referenced sequences):
feature_seq = feature.extract(seq)
print(feature_seq)
The length of a SeqFeature or location matches that of the region of sequence it describes.
print(len(feature_seq))
print(len(feature))
print(len(feature.location))
For simple FeatureLocation objects the length is just the difference between the start and end positions. However, for a CompoundLocation the length is the sum of the constituent regions.

Comparison

The SeqRecord objects can be very complex, but here's a simple example:
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

record1 = SeqRecord(Seq("ACGT"), id="test")
record2 = SeqRecord(Seq("ACGT"), id="test")
What happens when you try to compare these "identical" records?
record1 == record2
Perhaps surprisingly, older versions of Biopython would use Python's default object comparison for the SeqRecord, meaning record1 == record2 would only return True if these variables pointed at the same object in memory. In this example, record1 == record2 would have returned False!
record1 == record2 # on old versions of Biopython!
As of Biopython 1.67, SeqRecord comparison like record1 == record2 will instead raise an explicit error to avoid people being caught out by this:
record1 == record2
Instead you should check the attributes you are interested in, for example the identifier and the sequence:
record1.id == record2.id
record1.seq == record2.seq
Beware that comparing complex objects quickly gets complicated.

References

Another common annotation related to a sequence is a reference to a journal or other published work dealing with the sequence. We have a fairly simple way of representing a Reference in Biopython -- we have a Bio.SeqFeature.Reference class that ...
record = SeqRecord(
    Seq(
        "MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD"
        "GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK"
        "NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM"
        "SSAC"
    ),
    id="gi|14150838|gb|AAK54648.1|AF376133_1",
    descr...
This format method takes a single mandatory argument, a lower case string which is supported by Bio.SeqIO as an output format. However, some of the file formats Bio.SeqIO can write to require more than one record (typically the case for multiple sequence alignment formats), and thus won't work via this format() method....
record = SeqIO.read("data/NC_005816.gb", "genbank")
print(record)
len(record)
len(record.features)
For this example we're going to focus in on the pim gene, YP_pPCP05. If you have a look at the GenBank file directly you'll find this gene/CDS has location string 4343..4780, or in Python counting 4342:4780. From looking at the file you can work out that these are the twelfth and thirteenth entries in the file, so in P...
print(record.features[20])
print(record.features[21])
Let's slice this parent record from 4300 to 4800 (enough to include the pim gene/CDS), and see how many features we get:
sub_record = record[4300:4800]
sub_record
len(sub_record)
len(sub_record.features)
Our sub-record just has two features, the gene and CDS entries for YP_pPCP05:
print(sub_record.features[0])
print(sub_record.features[1])
Notice that their locations have been adjusted to reflect the new parent sequence! While Biopython has done something sensible and hopefully intuitive with the features (and any per-letter annotation), for the other annotation it is impossible to know if this still applies to the sub-sequence or not. To avoid guessing,...
print(sub_record.annotations)
print(sub_record.dbxrefs)
The same point could be made about the record id, name and description, but for practicality these are preserved:
print(sub_record.id)
print(sub_record.name)
print(sub_record.description)
This illustrates the problem nicely, though: our new sub-record is not the complete sequence of the plasmid, so the description is wrong! Let's fix this and then view the sub-record as a reduced FASTA file using the format method described above:
sub_record.description = "Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, partial."
print(sub_record.format("fasta"))
Adding SeqRecord objects

You can add SeqRecord objects together, giving a new SeqRecord. What is important here is that any common per-letter annotations are also added, all the features are preserved (with their locations adjusted), and any other common annotation is also kept (like the id, name and description). For ...
record = next(SeqIO.parse("data/example.fastq", "fastq"))
print(len(record))
print(record.seq)
print(record.letter_annotations["phred_quality"])
Let's suppose this was Roche 454 data, and that from other information you think the TTT should be only TT. We can make a new edited record by first slicing the SeqRecord before and after the 'extra' third T:
left = record[:20]
print(left.seq)
print(left.letter_annotations["phred_quality"])
right = record[21:]
print(right.seq)
print(right.letter_annotations["phred_quality"])
Now add the two parts together:
edited = left + right
print(len(edited))
print(edited.seq)
print(edited.letter_annotations["phred_quality"])
Easy and intuitive? We hope so! You can make this shorter with just:
edited = record[:20] + record[21:]
Now, for an example with features, we'll use a GenBank file. Suppose you have a circular genome:
record = SeqIO.read("data/NC_005816.gb", "genbank")
print(record)
First, have a look at the record's length, feature count, database cross references and annotation keys:
print(len(record))
print(len(record.features))
print(record.dbxrefs)
print(record.annotations.keys())
You can shift the origin like this:
shifted = record[2000:] + record[:2000]
print(shifted)
print(len(shifted))
Note that this isn't perfect in that some annotation like the database cross references and one of the features (the source feature) have been lost:
print(len(shifted.features))
print(shifted.dbxrefs)
print(shifted.annotations.keys())
This is because the SeqRecord slicing step is cautious in what annotation it preserves (erroneously propagating annotation can cause major problems). If you want to keep the database cross references or the annotations dictionary, this must be done explicitly:
shifted.dbxrefs = record.dbxrefs[:]
shifted.annotations = record.annotations.copy()
print(shifted.dbxrefs)
print(shifted.annotations.keys())
Also note that in an example like this, you should probably change the record identifiers since the NCBI references refer to the original unmodified sequence.

Reverse-complementing SeqRecord objects

One of the new features in Biopython 1.57 was the SeqRecord object's reverse_complement method. This tries to balance eas...
record = SeqIO.read("data/NC_005816.gb", "genbank")
print("%s %i %i %i %i" % (record.id, len(record), len(record.features), len(record.dbxrefs), len(record.annotations)))
Here we take the reverse complement and specify a new identifier - but notice how most of the annotation is dropped (but not the features):
rc = record.reverse_complement(id="TESTING")
print("%s %i %i %i %i" % (rc.id, len(rc), len(rc.features), len(rc.dbxrefs), len(rc.annotations)))
Banana-shaped target distribution
dtarget = lambda x: np.exp(-x[0]**2 / 200.0 - 0.5 * (x[1] + 0.05 * x[0]**2 - 100.0 * 0.05)**2)
x1 = np.linspace(-20, 20, 101)
x2 = np.linspace(-15, 10, 101)
X, Y = np.meshgrid(x1, x2)
Z = np.array([dtarget(p) for p in zip(X.flat, Y.flat)]).reshape(101, 101)  # list comprehension works on Python 3, unlike bare map()
plt.figure(figsize=(10, 7))
plt.contour(X, Y, Z)
plt.show()
start = np.array([...
Hamiltonian MCMC (HMC).ipynb
erickpeirson/statistical-computing
cc0-1.0
Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll cal...
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
trainX[0]
tutorials/intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
wbbeyourself/cn-deep-learning
mit
Visualize the training data

Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline

# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
    label = trainY[index].argmax(axis=0)
    # Reshape 784 array into 28x28 image
    image = trainX[index].reshape([28, 28])
    plt.title('Training...
Building the network

TFLearn lets you build the network by defining the layers in that network. For this example, you'll define:

- The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
- Hidden layers, which recognize patterns in data and connect the input to the ou...
# Define the neural network
def build_model():
    # This resets all parameters and variables, leave this here
    tf.reset_default_graph()
    #### Your code ####
    # Include the input layer, hidden layer(s), and set how you want to train the model
    net = tflearn.input_data([None, 784])
    net = tflearn.ful...
'Hello World!' is a data structure called a string.
type('Hello World')
4 + 5
type(9.)
4 * 5
4**5  # exponentiation

# naming things and storing them in memory for later use
x = 2**10
print(x)
whos

# with explanation
print('The value of x is {:,}.'.format(x))  # you can change the formatting of x inside the brackets
type(3.14159)
print('The value of pi is approxim...
notebooks/introduction_to_python.ipynb
jwjohnson314/data-801
mit
Lists

Lists are a commonly used Python data structure.
x = [1, 2, 3]
type(x)
whos
x.append(4)
x

# throws an error
x.prepend(0)

y = [0]
x + y
y + x
whos

# didn't save it - let's do it again
y = y + x
y

# Exercise: there is a more efficient way - find the reference in the docs for the insert command.
# insert the value 2.5 into the list into the appropriate spot
# your ...
Exercise: take a few minutes to read the docs for text strings here: https://docs.python.org/3/library/stdtypes.html#textseq

Immutable means 'can't be changed'. So if you want to change a string, you need to make a copy of some sort.
x * 5 + (y + str(' ')) * 3
Tuples

Exercise: Find the doc page for the tuples datatype. What is the difference between a tuple and a list?
# Exercise: write a tuple consisting of the first five letters of the alphabet (lower-case) in reversed order
# your code here
tup = ('z', 'y', 'x', 'w', 'v')
type(tup)
tup[3]
Dicts

The dictionary data structure consists of key-value pairs. This shows up a lot; for instance, when reading JSON files (http://www.json.org/).
x = ['Bob', 'Amy', 'Fred']
y = [32, 27, 19]
z = dict(zip(x, y))
type(z)
z
z[1]  # raises KeyError -- the keys here are names, not positions
z['Bob']
z.keys()
z.values()

detailed = {'amy': {'age': 32, 'school': 'UNH', 'GPA': 4.0},
            'bob': {'age': 27, 'school': 'UNC', 'GPA': 3.4}}
detailed['amy']['school']

# less trivial example
# library imports; ignore for now
from urllib...
Control structures: the 'for' loop
# indents matter in Python
for i in range(20):
    print('%s: %s' % (d[i]['title'], d[i]['completitionYear']))

# exercises: print the sizes and titles of the last ten paintings in this list.
# The statement should print as 'title: width pixels x height pixels'
# your code here:
The 'if-then' statement
data = [1.2, 2.4, 23.3, 4.5]
new_data = []
for i in range(len(data)):
    if round(data[i]) % 2 == 0:  # modular arithmetic, remainder of 0
        new_data.append(round(data[i]))
    else:
        new_data.append(0)
print(new_data)
Digression - list comprehensions

Rather than a for loop, in a situation like that above, Python has a method called a list comprehension for creating lists. Sometimes this is more efficient. It's often nicer syntactically, as long as the number of conditions is not too large (<= 2 is a good guideline).
print(data)
new_new_data = [round(i) if round(i) % 2 == 0 else 0 for i in data]
print(new_new_data)

data = list(range(20))
for i in data:
    if i % 2 == 0:
        print(i)
    elif i >= 10:
        print('wow, that\'s a big odd number - still no fun')
    else:
        print('odd num no fun')
The 'while' loop
# beware loops that don't terminate
counter = 0
tmp = 2
while counter < 10:
    tmp = tmp**2
    counter += 1
print('{:,}'.format(tmp))
print('tmp is %d digits long, that\'s huge!' % len(str(tmp)))

# the 'pass' command
for i in range(10):
    if i % 2 == 0:
        print(i)
    else:
        pass

# the continue comma...
Functions

Functions take in inputs and produce outputs.
def square(x):
    '''input: a numerical value x
       output: the square of x
    '''
    return x**2

square(3.14)

# Exercise: write a function called 'reverse' to take in a string and reverse it
# your code here:

# test
reverse('Hi, my name is Joan Jett')

def raise_to_power(x, n=2):  # 2 is the default for n
    ...
Make a PMF of `numkdhh`, the number of children under 18 in the respondent's household.
numkdhh = thinkstats2.Pmf(resp.numkdhh)
numkdhh
code/chap03ex.ipynb
goodwordalchemy/thinkstats_notes_and_exercises
gpl-3.0
Display the PMF.
thinkplot.Hist(numkdhh, label='actual')
thinkplot.Config(title="PMF of num children under 18",
                 xlabel="number of children under 18",
                 ylabel="probability")
Make the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.
biased_pmf = BiasPmf(numkdhh, label='biased')
thinkplot.Hist(biased_pmf)
thinkplot.Config(title="PMF of num children under 18",
                 xlabel="number of children under 18",
                 ylabel="probability")
Display the actual Pmf and the biased Pmf on the same axes.
width = 0.45
thinkplot.PrePlot(2)
thinkplot.Hist(biased_pmf, align="right", label="biased", width=width)
thinkplot.Hist(numkdhh, align="left", label="actual", width=width)
thinkplot.Config(title="PMFs of children under 18 in a household",
                 xlabel='number of children',
                 ylabel='probabilit...
Compute the means of the two Pmfs.
print("actual mean:", numkdhh.Mean())
print("biased mean:", biased_pmf.Mean())
Verification of the FUSED-Wind wrapper common inputs
v80 = wt.WindTurbine('Vestas v80 2MW offshore', 'V80_2MW_offshore.dat', 70, 40)
HR1 = wf.WindFarm('Horns Rev 1', 'HR_coordinates.dat', v80)
WD = range(0, 360, 1)
examples/Script.ipynb
rethore/FUSED-Wake
agpl-3.0
The following figure shows the distribution of the sum of three dice, pmf_3d6, and the distribution of the best three out of four, pmf_best3.
pmf_3d6.plot(label='sum of 3 dice')
pmf_best3.plot(label='best 3 of 4', style='--')
decorate_dice('Distribution of attributes')
notebooks/chap07.ipynb
AllenDowney/ThinkBayes2
mit
Most characters have at least one attribute greater than 12; almost 10% of them have an 18. The following figure shows the CDFs for the three distributions we have computed.
import matplotlib.pyplot as plt

cdf_3d6 = pmf_3d6.make_cdf()
cdf_3d6.plot(label='sum of 3 dice')
cdf_best3 = pmf_best3.make_cdf()
cdf_best3.plot(label='best 3 of 4 dice', style='--')
cdf_max6.plot(label='max of 6 attributes', style=':')
decorate_dice('Distribution of attributes')
plt.ylabel('CDF');
Here's what it looks like, along with the distribution of the maximum.
cdf_min6.plot(color='C4', label='minimum of 6')
cdf_max6.plot(color='C2', label='maximum of 6', style=':')
decorate_dice('Minimum and maximum of six attributes')
plt.ylabel('CDF');
We can compare it to the distribution of attributes you get by rolling four dice and adding up the best three.
cdf_best3.plot(label='best 3 of 4', color='C1', style='--')
cdf_standard.step(label='standard set', color='C7')
decorate_dice('Distribution of attributes')
plt.ylabel('CDF');
I plotted cdf_standard as a step function to show more clearly that it contains only a few quantities.
# Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here # Solution goes here
Exercise: Suppose you are fighting three monsters:

- One is armed with a short sword that causes one 6-sided die of damage,
- One is armed with a battle axe that causes one 8-sided die of damage, and
- One is armed with a bastard sword that causes one 10-sided die of damage.

One of the monsters, chosen at random, att...
# Solution goes here # Solution goes here # Solution goes here # Solution goes here
Exercise: Henri Poincaré was a French mathematician who taught at the Sorbonne around 1900. The following anecdote about him is probably fiction, but it makes an interesting probability problem. Supposedly Poincaré suspected that his local bakery was selling loaves of bread that were lighter than the advertised weight ...
mean = 950
std = 50
np.random.seed(17)
sample = np.random.normal(mean, std, size=365)
# Solution goes here
# Solution goes here
In the meantime, let's learn more about pandas. In the following sections we use the value_counts method to inspect each feature's values. This method counts how many times each distinct value occurs for a given feature.
housing['total_rooms'].value_counts()
housing['ocean_proximity'].value_counts()
ml/housing/Housing.ipynb
1995parham/Learning
gpl-2.0
See the difference between loc and iloc methods in a simple pandas DataFrame.
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).iloc[1]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1, ['b']]
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[...
Here we want to see the pandas apply function used on a specific feature.
pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}])['a'].apply(lambda a: a > 10)
The following function helps to split the given dataset into test and train sets.
from zlib import crc32
import numpy as np

def test_set_check(identifier, test_ratio):
    return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32

def split_train_test_by_id(data, test_ratio, id_column):
    ids = data[id_column]
    in_test_set = ids.apply(lambda _id: test_set_check(_id, test_ratio))
    ...
Below is a plot of the signal.
plt.figure(figsize=(figWidth, 4))
plt.plot(signalTime, signalSamples)
plt.xlabel("t")
plt.ylabel("Amplitude")
plt.suptitle('Source Signal')
plt.show()
src/articles/PDMPlayground/index.ipynb
bradhowes/keystrokecountdown
mit
To verify that the signal really has only two frequency components, here is the output of the FFT for it.
fftFreqs = np.arange(bandwidth)
fftValues = (np.fft.fft(signalSamples) / sampleFrequency)[:int(bandwidth)]
plt.plot(fftFreqs, np.absolute(fftValues))
plt.xlim(0, bandwidth)
plt.ylim(0, 0.3)
plt.xlabel("Frequency")
plt.ylabel("Magnitude")
plt.suptitle("Source Signal Frequency Components")
plt.show()
PDM Modulation

Now that we have a signal to work with, the next step is to generate a pulse train from it. The code below is a simple hack that generates 64 samples for every one in the original signal. Normally, this would involve interpolation so that the 63 additional samples vary linearly from the previous sample to th...
pdmFreq = 64
pdmPulses = np.empty(sampleFrequency * pdmFreq)
pdmTime = np.arange(0, pdmPulses.size)
pdmIndex = 0
signalIndex = 0
quantizationError = 0
while pdmIndex < pdmPulses.size:
    sample = signalSamples[signalIndex]
    signalIndex += 1
    for tmp in range(pdmFreq):
        if sample >= quantizationError:
            ...
Visualize the first 4K PDM samples. We should be able to clearly see the pulsing.
from matplotlib.ticker import MultipleLocator

span = 1024
plt.figure(figsize=(16, 6))
counter = 1
for pos in range(0, pdmIndex, span):
    plt.subplot(4, 1, counter)
    counter += 1
    # Generate a set of time values that correspond to pulses with +1 values. Remove the rest
    # and plot.
    plt.vlines(np....
Low-pass Filter

A fundamental nature of high-frequency sampling for PDM is that the noise from the quantization performed by the PDM modulator is also of high frequency (in a real system, there is also low-freq noise from clock jitter, heat, etc). When we decimate the signal, we do not want to bring the noise into th...
import LowPassFilter
lpf = LowPassFilter.LowPassFilter()
PDM Decimation

Our PDM signal has a sampling frequency of 64 × sampleFrequency or 65.536 kHz. To get to our original sampleFrequency we need to ultimately use one sample out of every 64 we see in the PDM pulse train. Since we want to filter out high-frequency noise, and our filter is tuned for 2 × sampleFre...
derivedSamples = []
pdmIndex = 0
while pdmIndex < pdmPulses.size:
    lpf(pdmPulses[int(pdmIndex)])
    pdmIndex += pdmFreq / 2
    filtered = lpf(pdmPulses[int(pdmIndex)])
    pdmIndex += pdmFreq / 2
    derivedSamples.append(filtered)
derivedSamples = np.array(derivedSamples)
signalSamples.size, derivedSamples.size
Now for plots of the resulting signal in both the time and frequency domains:
plt.figure(figsize=(figWidth, 4))
plt.plot(signalTime, derivedSamples)
plt.xlabel("t")
plt.ylabel("Amplitude")
plt.suptitle('Derived Signal')
plt.show()

fftFreqs = np.arange(bandwidth)
fftValues = (np.fft.fft(derivedSamples) / sampleFrequency)[:int(bandwidth)]
plt.plot(fftFreqs, np.absolute(fftValues))
plt.xlim(0, ban...
Filtering Test

Let's redo the PDM modulation / decimation steps but this time while injecting a high-frequency (32.767 kHz) signal with 30% intensity during the modulation. Hopefully, we will not see this noise appear in the final result.
pdmFreq = 64
pdmPulses = np.empty(sampleFrequency * pdmFreq)
pdmTime = np.arange(0, pdmPulses.size)
pdmIndex = 0
signalIndex = 0
quantizationError = 0
noiseFreq = 32767  # Hz
noiseAmplitude = .30
noiseSampleDuration = 1.0 / (sampleFrequency * pdmFreq)
noiseTime = np.arange(0, 1, noiseSampleDuration)
noiseSamples = np....
Using sklearn for k-means clustering: sklearn.cluster.KMeans provides an interface for doing k-means clustering.
from sklearn.cluster import KMeans
import numpy as np

X = np.array([[1, 2], [1, 4], [1, 0], [4, 2], [4, 4], [4, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
ipynbs/unsupervised/Kmeans.ipynb
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
mit
Inspect the label assigned to each vector after training:
kmeans.labels_
After training, the model can predict labels for new vectors:
kmeans.predict([[0, 0], [4, 4]])
The center of each cluster after training:
kmeans.cluster_centers_
Process MEG data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference()  # set EEG average reference
events = mne.find_events(raw, stim_channel='STI 014')
event_id = dict(aud_r=1)  # event trigger and conditions
tmin = -0.2  # s...
0.14/_downloads/plot_mne_dspm_source_localization.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This is an alternative way of calculating the capacity by approximating the integral using the Gauss-Hermite Quadrature (https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature). The Gauss-Hermite quadrature states that \begin{equation} \int_{-\infty}^\infty e^{-x^2}f(x)\mathrm{d}x \approx \sum_{i=1}^nw_if(x_i) ...
# alternative method using Gauss-Hermite Quadrature (see https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature) # use 40 components to approximate the integral, should be sufficiently exact x_GH, w_GH = np.polynomial.hermite.hermgauss(40) print(w_GH) def C_BIAWGN_GH(sigman): integral_xplus1 = np.sum(w_GH ...
SC468/BIAWGN_Capacity.ipynb
kit-cel/wt
gpl-2.0
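As a quick sanity check of the quadrature rule above (a sketch added here, not part of the original notebook), the weighted sum should reproduce integrals with known closed forms, e.g. $\int_{-\infty}^\infty e^{-x^2}\,\mathrm{d}x = \sqrt{\pi}$ and $\int_{-\infty}^\infty e^{-x^2}\cos(x)\,\mathrm{d}x = \sqrt{\pi}\,e^{-1/4}$:

```python
import numpy as np

# Gauss-Hermite nodes and weights, as used above
x_GH, w_GH = np.polynomial.hermite.hermgauss(40)

# f(x) = 1: the weights alone must sum to sqrt(pi)
approx_const = np.sum(w_GH)

# f(x) = cos(x): closed form is sqrt(pi) * exp(-1/4)
approx_cos = np.sum(w_GH * np.cos(x_GH))
```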
Plot the capacity curves as a function of $E_s/N_0$ (in dB) and $E_b/N_0$ (in dB). In order to calculate $E_b/N_0$, we recall from the lecture that \begin{equation} \frac{E_s}{N_0} = r\cdot \frac{E_b}{N_0}\qquad\Rightarrow\qquad\frac{E_b}{N_0} = \frac{1}{r}\cdot \frac{E_s}{N_0} \end{equation} Next, we know that the bes...
fig = plt.figure(1,figsize=(15,7)) plt.subplot(121) plt.plot(esno_dB_range, capacity_AWGN) plt.plot(esno_dB_range, capacity_BIAWGN) plt.xlim((-10,10)) plt.ylim((0,2)) plt.xlabel('$E_s/N_0$ (dB)',fontsize=16) plt.ylabel('Capacity (bit/channel use)',fontsize=16) plt.grid(True) plt.legend(['AWGN','BI-AWGN'],fontsize=14) ...
SC468/BIAWGN_Capacity.ipynb
kit-cel/wt
gpl-2.0
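In the dB domain the conversion above reduces to a subtraction, since $E_s = r\cdot E_b$ implies $E_b/N_0\,[\mathrm{dB}] = E_s/N_0\,[\mathrm{dB}] - 10\log_{10}(r)$. A minimal sketch (the rate value is illustrative):

```python
import numpy as np

def esno_to_ebno_dB(esno_dB, r):
    """Convert Es/N0 (dB) to Eb/N0 (dB) for code rate r, using Es = r * Eb."""
    return esno_dB - 10 * np.log10(r)

# rate-1/2 code at Es/N0 = 0 dB -> Eb/N0 is about 3.01 dB
ebno = esno_to_ebno_dB(0.0, 0.5)
```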
Time evolution of the Spin Squeezing Parameter $\xi^2= \frac{N \langle\Delta J_y^2\rangle}{\langle J_z\rangle^2}$
#set initial state for spins (Dicke basis) nt = 1001 td0 = 1/(N*Lambda) tmax = 10 * td0 t = np.linspace(0, tmax, nt) excited = dicke(N, N/2, N/2) load_file = False if load_file == False: # cycle over all states in Dicke space xi2_1_list = [] xi2_2_list = [] xi2_1_min_list = [] xi2_2_min_list = [] ...
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
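Once the expectation values are known, the squeezing parameter itself is a simple ratio. A minimal sketch (the numbers below are the coherent-spin-state values $\langle\Delta J_y^2\rangle = N/4$, $\langle J_z\rangle = N/2$, for which $\xi^2 = 1$; they are illustrative, not taken from the simulation above):

```python
def spin_squeezing_parameter(N, delta_jy2, jz_mean):
    """xi^2 = N * <Delta Jy^2> / <Jz>^2; values below 1 indicate squeezing."""
    return N * delta_jy2 / jz_mean**2

# coherent spin state: <Delta Jy^2> = N/4, <Jz> = N/2  ->  xi^2 = 1
N = 20
xi2 = spin_squeezing_parameter(N, delta_jy2=N / 4, jz_mean=N / 2)
```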
Visualization
label_size2 = 20 lw = 3 texplot = False # if texplot == True: # plt.rc('text', usetex = True) # plt.rc('xtick', labelsize=label_size) # plt.rc('ytick', labelsize=label_size) fig1 = plt.figure(figsize = (10,6)) for xi2_1 in xi2_1_list: plt.plot(t*(N*Lambda), xi2_1, '-', label = r' $\gamma_\Downarrow=0....
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
Visualization
plt.rc('text', usetex = True) label_size = 20 label_size2 = 20 label_size3 = 20 plt.rc('xtick', labelsize=label_size) plt.rc('ytick', labelsize=label_size) lw = 3 i0 = -3 i0s=2 fig1 = plt.figure(figsize = (8,5)) # excited state spin squeezing plt.plot(t*(N*Lambda), xi2_1_list[-1], 'k-', label = r'$|\frac{N}...
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
The plot shows the spin squeezing parameter for two different dynamics -- only collective de-excitation, black curves; only local de-excitation, red curves -- and for two different initial states, the maximally excited state (thin curves) and another Dicke state with longer squeezing time (thick curves). This study, per...
# plot the dt matrix in the Dicke space plt.rc('text', usetex = True) label_size = 20 label_size2 = 20 label_size3 = 20 plt.rc('xtick', labelsize=label_size) plt.rc('ytick', labelsize=label_size) lw = 3 i0 = 7 i0s=2 ratio_squeezing_local = 3 fig1 = plt.figure(figsize = (6,8)) ds = dicke_space(N) value_excited = 3 ds[...
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
The plot above shows the two initial states (darker dots) $|\frac{N}{2},\frac{N}{2}\rangle$ (top edge of the Dicke triangle, red dot) and $|j,j\rangle$, with $j=\frac{N}{2}-3=7$ (black dot). A study of the Dicke triangle (dark yellow space) and state engineering is performed in Ref. [8] for different initial states. Re...
qutip.about()
examples/piqs-spin-squeezing-noise.ipynb
qutip/qutip-notebooks
lgpl-3.0
2) Create classes/bins Instead of keeping a continuous range of values, you can discretize it into classes/bins. Make use of pandas' qcut, which discretizes a variable into equal-sized buckets.
data['height'].hist(bins=100) plt.title('Height population distribution') plt.xlabel('cm') plt.ylabel('freq')
course/class2/01-clean/examples/00-kill.ipynb
hershaw/data-science-101
mit
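A minimal qcut sketch on synthetic heights (the bin count, labels, and distribution parameters are illustrative, not taken from the course data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
heights = pd.Series(rng.normal(170, 10, size=1000))

# qcut splits on quantiles, so each bucket gets (roughly) the same count
bins = pd.qcut(heights, q=4, labels=["short", "medium", "tall", "very tall"])
counts = bins.value_counts()
```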
Step 1: Fit the Initial Random Forest Just fit every feature with equal weights per the usual random forest code, e.g. RandomForestClassifier in scikit-learn
# Load the iris data iris = load_iris() # Create the train-test datasets X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target) np.random.seed(1039) # Just fit a simple random forest classifier with 2 decision trees rf = RandomForestClassifier(n_estimators = 2) rf.fit(X = X_train, y = y_train) ...
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
Yu-Group/scikit-learn-sandbox
mit
Get the second decision tree to use for testing
estimator = rf.estimators_[1] from sklearn.tree import _tree estimator.tree_.node_count estimator.tree_.children_left[0] estimator.tree_.children_right[0] _tree.TREE_LEAF
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
Yu-Group/scikit-learn-sandbox
mit
Write down an efficient Binary Tree Traversal Function
# Now plot the trees individually utils.draw_tree(inp_tree = estimator) def binaryTreePaths(dtree, root_node_id = 0): # Use these lists to parse the tree structure children_left = dtree.tree_.children_left children_right = dtree.tree_.children_right if root_node_id is None: paths =...
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
Yu-Group/scikit-learn-sandbox
mit
Options
## Retrieve the bounding box of the specified county - if no county is specified, the bounding boxes for all NM counties will be requested countyBBOXlink = "http://gstore.unm.edu/apps/epscor/search/nm_counties.json?limit=100&query=" + county_name ## define the request URL print(countyBBOXlink) ## print the request URL ...
presentations/2014-04-CI-day/examples/notebook_02-Copy1.ipynb
karlbenedict/karlbenedict.github.io
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> The result is a list of all the methods that can be applied to <i>str</i>.<br> At this stage, I recommend ignoring the methods in the list whose names start with an underscore. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> Another trick, which you will probably find more convenient, is available in environments...
# Place the cursor after the period, then press the Tab key on your keyboard str. # This also works: "Hello". # Or like this: s = "Hello" s.
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
<span style="text-align: right; direction: rtl; float: right; clear: both;">Documentation for a method or a function</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> If we want to look up more details about one of the functions or methods (say, <code>len</code>, or <code dir="ltr" style="direction: ltr">str.upper(...
len?
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
<p style="text-align: right; direction: rtl; float: right; clear: both;"> As soon as we run the cell, a window with additional information about the function will pop up.<br> If we want information about a method, we write the type of the value on which we want to apply it (say, str): </p>
# str - the name of the data type (the kind of the value) # . - the period marks that the method written after it belongs to the type written before it # upper - the name of the method we want help on # ? - requests the information about the method str.upper?
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
<div class="align-center" style="display: flex; text-align: right; direction: rtl;"> <div style="display: flex; width: 10%; float: right; "> <img src="images/warning.png" style="height: 50px !important;" alt="Warning!"> </div> <div style="width: 90%"> <p style="text-align: right; direction: r...
numbers = [2, 9, 10, 8, 7, 4, 3, 5, 6, 1]
week02/6_Documentation.ipynb
PythonFreeCourse/Notebooks
mit
In this example, it is True that our variable m is larger than zero, and therefore, the print call ('if code' in the above figure) is executed. Now, what if the condition were not True? Well...
n = -5 if n > 0: print("Larger than zero.")
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
marburg-open-courseware/gmoc
mit
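When the condition is not True, the indented block is simply skipped and nothing is printed. An else branch handles that case explicitly — a minimal sketch building on the cell above:

```python
n = -5
if n > 0:
    message = "Larger than zero."
else:
    message = "Not larger than zero."
print(message)  # the else branch runs because -5 > 0 is False
```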