These are shown below. Most sigils fit into a bounding box (as given by the default BOX sigil), either above or below the axis for the forward or reverse strand, or straddling it (double the height) for strand-less features. The BIGARROW sigil is different, always straddling the axis with the direction taken from the f...
#Full height shafts, giving pointed boxes: gd_feature_set.add_feature(feature, sigil="ARROW", color="brown", arrowshaft_height=1.0) #Or, thin shafts: gd_feature_set.add_feature(feature, sigil="ARROW", color="teal", arrowshaft_height=0.2) #Or, v...
notebooks/17 - Graphics including GenomeDiagram.ipynb
tiagoantao/biopython-notebook
mit
The results are shown below: Secondly, the length of the arrow head - given as a proportion of the height of the bounding box (defaulting to 0.5, or 50%):
#Short arrow heads: gd_feature_set.add_feature(feature, sigil="ARROW", color="blue", arrowhead_length=0.25) #Or, longer arrow heads: gd_feature_set.add_feature(feature, sigil="ARROW", color="orange", arrowhead_length=1) #Or, very very long arrow heads (i.e. all head...
The results are shown below: Biopython 1.61 adds a new BIGARROW sigil which always straddles the axis, pointing left for the reverse strand or right otherwise:
#A large arrow straddling the axis:
gd_feature_set.add_feature(feature, sigil="BIGARROW")
All the shaft and arrow head options shown above for the ARROW sigil can be used for the BIGARROW sigil too.

A nice example

Now let’s return to the pPCP1 plasmid from Yersinia pestis biovar Microtus, and the top down approach used above, but take advantage of the sigil options we’ve now discussed. This time we’ll use a...
record = SeqIO.read("data/NC_005816.gb", "genbank") gd_diagram = GenomeDiagram.Diagram(record.id) gd_track_for_features = gd_diagram.new_track(1, name="Annotated Features") gd_feature_set = gd_track_for_features.new_set() for feature in record.features: if feature.type != "gene": #Exclude this feature ...
Multiple tracks

All the examples so far have used a single track, but you can have more than one track – for example, showing the genes on one and repeat regions on another. In this example we’re going to show three phage genomes side by side to scale, inspired by Figure 6 in Proux et al. (2002). We’ll need the GenBank fi...
A_rec = SeqIO.read("data/NC_002703.gbk", "gb")
B_rec = SeqIO.read("data/AF323668.gbk", "gb")
The figure we are imitating used different colors for different gene functions. One way to do this is to edit the GenBank file to record color preferences for each feature - something Sanger’s Artemis editor does, and which GenomeDiagram should understand. Here however, we’ll just hard code three lists of colors. Note ...
from reportlab.lib.colors import red, grey, orange, green, brown, blue, lightblue, purple A_colors = [red]*5 + [grey]*7 + [orange]*2 + [grey]*2 + [orange] + [grey]*11 + [green]*4 \ + [grey] + [green]*2 + [grey, green] + [brown]*5 + [blue]*4 + [lightblue]*5 \ + [grey, lightblue] + [purple]*2 + [grey] ...
Now to draw them – this time we add three tracks to the diagram, and also notice they are given different start/end values to reflect their different lengths.
name = "data/Proux Fig 6" gd_diagram = GenomeDiagram.Diagram(name) max_len = 0 for record, gene_colors in zip([A_rec, B_rec], [A_colors, B_colors]): max_len = max(max_len, len(record)) gd_track_for_features = gd_diagram.new_track(1, name=record.name, greyt...
I did wonder why in the original manuscript there were no red or orange genes marked in the bottom phage. Another important point: here the phage are shown with different lengths because they are all drawn to the same scale. The key difference from the published figure is they h...
#Tuc2009 (NC_002703) vs bIL285 (AF323668) A_vs_B = [ (99, "Tuc2009_01", "int"), (33, "Tuc2009_03", "orf4"), (94, "Tuc2009_05", "orf6"), (100,"Tuc2009_06", "orf7"), (97, "Tuc2009_07", "orf8"), (98, "Tuc2009_08", "orf9"), (98, "Tuc2009_09", "orf10"), (100,"Tuc2009_10", "orf12"), (100,"...
For the first and last phage these identifiers are locus tags; for the middle phage there are no locus tags, so I’ve used gene names instead. The following little helper function lets us look up a feature using either a locus tag or a gene name:
def get_feature(features, id, tags=["locus_tag", "gene"]): """Search list of SeqFeature objects for an identifier under the given tags.""" for f in features: for key in tags: #tag may not be present in this feature for x in f.qualifiers.get(key, []): if x == id: ...
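The helper cell above is truncated. A completed sketch, under the assumption that it follows the usual Biopython-tutorial shape and raises KeyError when no feature matches:

```python
def get_feature(features, id, tags=("locus_tag", "gene")):
    """Search a list of SeqFeature objects for an identifier under the given tags."""
    for f in features:
        for key in tags:
            # the tag may not be present in this feature
            for x in f.qualifiers.get(key, []):
                if x == id:
                    return f
    raise KeyError(id)
```

Only the `.qualifiers` dictionary is used, so the function works on any SeqFeature-like object.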
We can now turn those lists of identifier pairs into SeqFeature pairs, and thus find their location co-ordinates. We can then add all that code and the following snippet to the previous example (just before the gd_diagram.draw(...) line – see the finished example script <a href="data/Proux_et_al_2002_Figure_6.py">Proux_e...
from Bio.Graphics.GenomeDiagram import CrossLink from reportlab.lib import colors #Note it might have been clearer to assign the track numbers explicitly... for rec_X, tn_X, rec_Y, tn_Y, X_vs_Y in [(A_rec, 2, B_rec, 1, A_vs_B)]: track_X = gd_diagram.tracks[t...
There are several important pieces to this code. First the GenomeDiagram object has a cross_track_links attribute which is just a list of CrossLink objects. Each CrossLink object takes two sets of track-specific co-ordinates (here given as tuples; you can alternatively use a GenomeDiagram.Feature object instead). You c...
from ftplib import FTP ftp = FTP('ftp.ncbi.nlm.nih.gov') print("Logging in") ftp.login() ftp.cwd('genomes/archive/old_genbank/A_thaliana/OLD/') print("Starting download - This can be slow!") for chro, name in [ ("CHR_I", "NC_003070.fna"), ("CHR_I", "NC_003070.gbk"), ("CHR_II", "NC_003071.fna"), ("CHR_II...
Here is a very simple example - for which we’ll use Arabidopsis thaliana. You can skip this bit, but first I downloaded the five sequenced chromosomes from the NCBI’s FTP site (per the code above) and then parsed them with Bio.SeqIO to find out their lengths. You could use the GenBank files for this, but it is faster t...
from Bio import SeqIO entries = [("Chr I", "NC_003070.fna"), ("Chr II", "NC_003071.fna"), ("Chr III", "NC_003072.fna"), ("Chr IV", "NC_003073.fna"), ("Chr V", "NC_003074.fna")] for (name, filename) in entries: record = SeqIO.read("data/" + filename, "fasta") print(name,...
This gave the lengths of the five chromosomes, which we’ll now use in the following short demonstration of the BasicChromosome module:
from reportlab.lib.units import cm from Bio.Graphics import BasicChromosome entries = [("Chr I", 30432563), ("Chr II", 19705359), ("Chr III", 23470805), ("Chr IV", 18585042), ("Chr V", 26992728)] max_len = 30432563 #Could compute this telomere_length = 1000000 #For illustra...
This example is deliberately short and sweet. The next example shows the location of features of interest. Continuing from the previous example, let’s also show the tRNA genes. We’ll get their locations by parsing the GenBank files for the five Arabidopsis thaliana chromosomes. You’ll need to download these files from ...
entries = [("Chr I", "NC_003070.gbk"), ("Chr II", "NC_003071.gbk"), ("Chr III", "NC_003072.gbk"), ("Chr IV", "NC_003073.gbk"), ("Chr V", "NC_003074.gbk")] max_len = 30432563 #Could compute this telomere_length = 1000000 #For illustration chr_diagram = BasicChromosome.Organi...
Nestly: assuming fragments are already simulated, and the Day1_rep10 notebook has already run.
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/' buildDir = os.path.join(workDir, 'Day1_rep10') R_dir = '/home/nick/notebook/SIPSim/lib/R/' fragFile= '/home/nick/notebook/SIPSim/dev/bac_genome1147/validation/ampFrags.pkl' targetFile = '/home/nick/notebook/SIPSim/dev/fullCyc/CD-HIT/target_ta...
ipynb/bac_genome/fullCyc/Day1_fullDataset/rep10_noPCR.ipynb
nick-youngblut/SIPSim
mit
BD min/max: what is the min/max BD that we care about?
%%R
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036

min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD

cat('Min BD:', min_BD, '\n')
cat('Max BD:', max_BD, '\n')
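The same BD bounds can be checked with a quick Python mirror of the R cell above (the linear GC-to-BD relation and the 0.036 maximum 13C shift are taken directly from that cell):

```python
def gc_to_bd(gc_percent):
    # linear relation used in the R cell above: BD = GC%/100 * 0.098 + 1.66
    return gc_percent / 100.0 * 0.098 + 1.66

min_BD = gc_to_bd(13.5)        # min G+C cutoff
max_BD = gc_to_bd(80) + 0.036  # max G+C cutoff plus max 13C shift
print('Min BD:', min_BD)  # ~1.673
print('Max BD:', max_BD)  # ~1.774
```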
Loading data: empirical SIP data
%%R -i physeqDir -i physeq_SIP_core -i bulk_days # bulk core samples F = file.path(physeqDir, physeq_SIP_core) physeq.SIP.core = readRDS(F) physeq.SIP.core.m = physeq.SIP.core %>% sample_data physeq.SIP.core = prune_samples(physeq.SIP.core.m$Substrate == '12C-Con' & physeq.SIP.core.m...
bulk soil samples
%%R physeq.dir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/' physeq.bulk = 'bulk-core' physeq.file = file.path(physeq.dir, physeq.bulk) physeq.bulk = readRDS(physeq.file) physeq.bulk.m = physeq.bulk %>% sample_data physeq.bulk = prune_samples(physeq.bulk.m$Exp_type == 'microcosm_bulk' & ...
Simulated
OTU_files = !find $buildDir -name "OTU_abs1e9_sub.txt" #OTU_files = !find $buildDir -name "OTU_abs1e9.txt" OTU_files %%R -i OTU_files # loading files df.SIM = list() for (x in OTU_files){ SIM_rep = gsub('/home/nick/notebook/SIPSim/dev/fullCyc/n1147_frag_norm_9_2.5_n5/Day1_rep10/', '', x) #SIM_rep = gsub('/OTU...
'bulk soil' community files
# loading comm files comm_files = !find $buildDir -name "bulk-core_comm_target.txt" comm_files %%R -i comm_files df.comm = list() for (f in comm_files){ rep = gsub('.+/Day1_rep10/([0-9]+)/.+', '\\1', f) df.comm[[rep]] = read.delim(f, sep='\t') %>% dplyr::select(library, taxon_name, rel_abund_perc) %>%...
BD span of just overlapping taxa: taxa overlapping between the empirical data and the genomes in the dataset. These taxa should have the same relative abundances in both datasets. The comm file was created from the empirical dataset's phyloseq file.
%%R -i targetFile df.target = read.delim(targetFile, sep='\t') df.target %>% nrow %>% print df.target %>% head(n=3) %%R # filtering to just target taxa df.j.t = df.j %>% filter(OTU %in% df.target$OTU) df.j %>% nrow %>% print df.j.t %>% nrow %>% print ## plotting ggplot(df.j.t, aes(mean_rel_abund, BD_range_perc...
Correlation between relative abundance and BD_range diff: are low-abundance taxa more variable in their BD span?
%%R # formatting data df.1 = df.j.t %>% filter(dataset == 'simulated') %>% select(SIM_rep, OTU, mean_rel_abund, BD_range, BD_range_perc) df.2 = df.j.t %>% filter(dataset == 'emperical') %>% select(SIM_rep, OTU, mean_rel_abund, BD_range, BD_range_perc) df.12 = inner_join(df.1, df.2, c('OTU' = 'OTU')) ...
Notes: between Day1_rep10, Day1_richFromTarget_rep10, and Day1_add_Rich_rep10, Day1_rep10 has the most accurate representation of BD span (% of gradient spanned by taxa). Accuracy drops at ~1e-3 to ~5e-4, but this is caused by detection limits (the veil-line effect).

Comparing abundance distributions of overlapping taxa
%%R join_abund_dists = function(df.EMP.j, df.SIM.j, df.target){ ## emperical df.EMP.j.f = df.EMP.j %>% filter(abundance > 0) %>% #filter(!OTU %in% c('OTU.32', 'OTU.2', 'OTU.4')) %>% # TEST dplyr::select(OTU, sample, abundance, Buoyant_density, bulk_abund) %>% mutate(data...
Calculating center of mass for overlapping taxa: the weighted mean BD, where the weights are relative abundances.
%%R center_mass = function(df){ df = df %>% group_by(dataset, SIM_rep, OTU) %>% summarize(center_mass = weighted.mean(Buoyant_density, rel_abund_c, na.rm=T), median_rel_abund_c = median(rel_abund_c)) %>% ungroup() return(df) } df.j.cm = center_mass(df.j) %%R -w 650 ...
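The weighted-mean definition used by center_mass above can be stated in a few lines of plain Python (the values below are illustrative, not from the dataset):

```python
def center_of_mass(buoyant_density, rel_abund):
    # weighted mean BD, weights = relative abundances (mirrors R's weighted.mean)
    total = sum(rel_abund)
    return sum(bd * w for bd, w in zip(buoyant_density, rel_abund)) / total

center_of_mass([1.70, 1.72, 1.74], [0.2, 0.5, 0.3])  # 1.722
```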
Notes: leaving out the PCR simulation does not help with simulation accuracy for center of mass on overlapping taxa.

Plotting taxon abundance vs. the difference between empirical & simulated
%%R df.j.cm.s.f = df.j.cm.s %>% mutate(CM_diff = emperical - simulated) ggplot(df.j.cm.s.f, aes(median_rel_abund_c, CM_diff)) + geom_point() + scale_x_log10() + labs(x='Relative abundance', y='Center of mass (Emperical - Simulated)', title='Center of mass') + theme_bw() + theme( text =...
Load Data from CSVs
import unicodecsv

## Longer version of code (replaced with shorter, equivalent version below)
# enrollments = []
# f = open('enrollments.csv', 'rb')
# reader = unicodecsv.DictReader(f)
# for row in reader:
#     enrollments.append(row)
# f.close()

with open('enrollments.csv', 'rb') as f:
    reader = unicodecsv.Dict...
udacity_data_science_notes/intro_data_analysis/lesson_01/L1_Starter_Code.ipynb
anshbansal/anshbansal.github.io
mit
This page contains documentation for Python's csv module. Instead of csv, you'll be using unicodecsv in this course. unicodecsv works exactly the same as csv, but it comes with Anaconda and has support for unicode. The csv documentation page is still the best way to learn how to use the unicodecsv library, since the tw...
from datetime import datetime as dt # Takes a date as a string, and returns a Python datetime object. # If there is no date given, returns None def parse_date(date): if date == '': return None else: return dt.strptime(date, '%Y-%m-%d') # Takes a string which is either an empty string or r...
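The parsing cell above is truncated. A completed sketch of the two parsers; the integer parser's name and behaviour are an assumption based on the surrounding text:

```python
from datetime import datetime as dt

def parse_date(date):
    # '' -> None, otherwise a datetime parsed from a YYYY-MM-DD string
    if date == '':
        return None
    return dt.strptime(date, '%Y-%m-%d')

def parse_maybe_int(i):
    # '' -> None, otherwise an int (hypothetical completion of the truncated cell)
    if i == '':
        return None
    return int(i)
```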
Note when running the above cells that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur.

Questions: How long does it take to submit projects? How do students who submit their projects differ from the students who don't?

10 Investig...
##################################### # 3 # ##################################### ## Rename the "acct" column in the daily_engagement table to "account_key". # NOTE Added later after finding the problems in the data for engagement_record in daily_engagement: # Rename the "acct" col...
The reason total enrollments and unique enrollments differ is that students can enroll, cancel, and then re-enroll. There are many more daily engagement records (136,240) than enrollments; that is expected, as each student will have an entry for every day.

11 Problems in the Data

The number of unique engagements a...
##################################### # 4 # ##################################### ## Find any one student enrollments where the student ## is missing from the daily engagement table. ## Output that enrollment. def get_one(data): for row in data: return row def get_accoun...
I notice that is_udacity is True for the missing record while it is False for the present record. The instructor talks with a Udacity Data Scientist, and they share that these are Udacity test accounts, which may not have data in the daily engagement table. So we will go ahead and remove these test accounts from t...
# Create a set of the account keys for all Udacity test accounts udacity_test_accounts = set() for enrollment in enrollments: if enrollment['is_udacity']: udacity_test_accounts.add(enrollment['account_key']) len(udacity_test_accounts) # Given some data with an account_key field, # removes any records corr...
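The removal helper above is cut off. One plausible completion, parameterising the test-account set rather than using a module-level variable (a small, deliberate deviation from the notebook):

```python
def remove_udacity_accounts(data, udacity_test_accounts):
    # keep only records whose account_key is not a Udacity test account
    return [row for row in data
            if row['account_key'] not in udacity_test_accounts]

records = [{'account_key': '1'}, {'account_key': '2'}]
remove_udacity_accounts(records, {'1'})  # [{'account_key': '2'}]
```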
At this point we repeat the check and ensure that this resolves all of the surprises related to our earlier observation. This is a common process during data analysis.

Checking for More Problem Records

So we run the earlier code again and ensure that we do not have any more surprises.
##################################### # 5 # ##################################### ## Find the number of surprising data points (enrollments missing from ## the engagement table) that remain, if any. print_total_and_unique("enrollments", non_udacity_enrollments) print_total_and_unique("d...
Tracking Down the Remaining Problems

We see that there is still something in the data that we are not quite sure about, as the unique numbers still do not match. So we repeat the process and try to find what problem remains.
audit_all_enrollment_should_have_engagement(non_udacity_enrollments, non_udacity_engagement)
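The audit function called above is not shown in this excerpt. A minimal sketch of what such a check might do; the return value is an addition for illustration, and the real notebook version may simply print:

```python
def audit_all_enrollment_should_have_engagement(enrollments, engagements):
    # flag enrollments whose account_key never appears in the engagement table
    engagement_keys = {e['account_key'] for e in engagements}
    surprises = [e for e in enrollments
                 if e['account_key'] not in engagement_keys]
    for s in surprises:
        print(s)
    return len(surprises)
```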
Looking at the above data we see that days_to_cancel is 0 for the missing account: the join_date is the same as the cancel_date. Probably a person needs to be enrolled for at least a day for there to be an engagement record. Now we repeat to see whether, after excluding these accounts, there are no more surprises. For this we need to firstl...
#Make a list of people who cancelled the same day people_who_cancelled_same_day = set() for enrollment in non_udacity_enrollments: if enrollment['days_to_cancel'] == 0: people_who_cancelled_same_day.add(enrollment['account_key']) len(people_who_cancelled_same_day) def remove_people_who_cancelled_same_day(d...
Now that we have done the filtering, we will see whether our check passes or there are more surprises left.
audit_all_enrollment_should_have_engagement(enrollments_2, engagement_2)
Finally we can see that we have no more surprises left, at least none related to an enrolled student having no engagement records. Now we may or may not want to exclude these people when analysing further; it depends on what questions we are trying to answer.

Refining the Question

Now that we don't have any ...
##################################### # 6 # ##################################### ## Create a dictionary named paid_students containing all students who either ## haven't canceled yet or who remained enrolled for more than 7 days. The keys ## should be account keys, and the values shoul...
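The cell above that builds paid_students is truncated. A hedged sketch of the usual approach, keeping the most recent join date per student; the exact field handling is an assumption:

```python
def build_paid_students(enrollments):
    # account_key -> latest join_date, for students who haven't cancelled
    # or who stayed enrolled for more than 7 days
    paid = {}
    for e in enrollments:
        if e['days_to_cancel'] is None or e['days_to_cancel'] > 7:
            key = e['account_key']
            # keep only the most recent join date per student
            if key not in paid or e['join_date'] > paid[key]:
                paid[key] = e['join_date']
    return paid
```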
Thinking about it, the name paid_students isn't really good, as someone who has not cancelled may or may not be a paying student. But I'll go with it so that the rest of the lesson stays in sync with the videos. Now we will filter to keep only these students and proceed based on them.
def keep_paid(data): result = [] for row in data: account_key = row['account_key'] if account_key in paid_students: result.append(row) return result # Filter data to keep only for paid enrollments paid_enrollments = keep_paid(non_udacity_enrollments) paid_engagements = keep_paid...
Getting Data from the First Week

We will filter the data to keep only engagement up to the first week. I added a function to keep data within n days rather than one week only; what if I want to change it later? An additional parameter helps.
# Takes a student's join date and the date of a specific engagement record, # and returns True if that engagement record happened within one week # of the student joining. def within_one_week(join_date, engagement_date): time_delta = engagement_date - join_date return time_delta.days < 7 def within_n_days(join...
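A self-contained sketch of the generalised within_n_days mentioned above. Whether the original also guards against negative deltas is not visible here, so the 0 <= check is an assumption:

```python
from datetime import datetime

def within_n_days(join_date, engagement_date, n=7):
    # True if the engagement happened on or after joining and within n days
    delta = engagement_date - join_date
    return 0 <= delta.days < n

within_n_days(datetime(2015, 1, 1), datetime(2015, 1, 5))   # True (4 days)
within_n_days(datetime(2015, 1, 1), datetime(2015, 1, 10))  # False (9 days)
```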
Ignore the following code block until you get to the section Number of Visits in the First Week; it adds a has_visited column to the engagement records for use later.
for engagement in paid_engagements: if engagement['num_courses_visited'] > 0: engagement['has_visited'] = 1 else: engagement['has_visited'] = 0 ##################################### # 7 # ##################################### ## Create a list of rows from the en...
At this point we would like to divide the data into two parts: students who pass the project, and students who don't. But as we have this data about student engagement in the first week, why don't we explore it a bit? That will help us understand it better.

Exploring Student Engagement

Let us explore the average t...
from collections import defaultdict def group_by(data, key): grouped = defaultdict(list) for record in data: _key = record[key] grouped[_key].append(record) return grouped # Create a dictionary of engagement grouped by student. # The keys are account keys, and the values are lists of engag...
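A tiny, self-contained illustration of the group_by helper above, using made-up records:

```python
from collections import defaultdict

def group_by(data, key):
    # bucket records by the value of the given field
    grouped = defaultdict(list)
    for record in data:
        grouped[record[key]].append(record)
    return grouped

rows = [{'account_key': 'a', 'total_minutes_visited': 10},
        {'account_key': 'a', 'total_minutes_visited': 5},
        {'account_key': 'b', 'total_minutes_visited': 7}]
by_account = group_by(rows, 'account_key')
len(by_account['a'])  # 2
```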
Now we sum the time spent by each student:
# Create a dictionary with the total minutes each student spent # in the classroom during the first week. # The keys are account keys, and the values are numbers (total minutes) def sum_grouped_by_key(data, key): total_by_account = {} for account_key, engagement_for_student in data.items(): total = 0 ...
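The summing cell above is truncated. A completed sketch consistent with its comment (keys are account keys, values are the totals of the chosen field):

```python
def sum_grouped_by_key(grouped_data, key):
    # total the numeric field `key` for each account's list of records
    totals = {}
    for account_key, records in grouped_data.items():
        totals[account_key] = sum(r[key] for r in records)
    return totals

grouped = {'a': [{'total_minutes_visited': 10}, {'total_minutes_visited': 5}]}
sum_grouped_by_key(grouped, 'total_minutes_visited')  # {'a': 15}
```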
Now we output the average. While we are looking at the mean, we will also look at some other statistics. Even though we know the mean, standard deviation, maximum, and minimum of various metrics, there are a lot of other facts about each metric that would be nice to know. Are more values close to the minimum or the maxi...
%pylab inline import seaborn as sns import matplotlib.pyplot as plt import numpy as np def summarize(data_dict): # Summarize the data about minutes spent in the classroom data_vals = data_dict.values() print 'Mean:', np.mean(data_vals) print 'Standard deviation:', np.std(data_vals) print 'Minimum:...
The line %matplotlib inline is specifically for IPython notebook, and causes your plots to appear in your notebook rather than a new window. If you are not using IPython notebook, you should not include this line, and instead you should add the line plt.show() at the bottom to show the plot in a new window. To change h...
summarize(total_minutes_by_account)
Debugging Data Analysis Code
#####################################
#                 8                 #
#####################################

## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.
26 - Lessons Completed in First Week
##################################### # 9 # ##################################### ## Adapt the code above to find the mean, standard deviation, minimum, and maximum for ## the number of lessons completed by each student during the first week. Try creating ## one or more functions to re-...
28 - Number of Visits in the First Week

We want to analyze on how many days a student visited the classroom at all, so we will add a has_visited field.
###################################### # 10 # ###################################### ## Find the mean, standard deviation, minimum, and maximum for the number of ## days each student visits the classroom during the first week. days_visited_by_account = sum_grouped_by_key( engagement...
Splitting out Passing Students

Now we get to the part where we split the data into two parts: those who pass and those who don't. Then we will try to figure out the difference in their engagement.
paid_submissions[0] ###################################### # 11 # ###################################### ## Create two lists of engagement data for paid students in the first week. ## The first list should contain data for students who eventually pass the ## subway project, and the sec...
Comparing the Two Student Groups
###################################### # 12 # ###################################### ## Compute some metrics you're interested in and see how they differ for ## students who pass the subway project vs. students who don't. A good ## starting point would be the metrics we looked at earlie...
We can see that the mean is much higher. We would expect passing students to spend more time than non-passing students: about 2.5 hours for non-passing vs. 6.5 hours for passing students. Let's now do a comparison for lessons_completed.
summarize_data_for_key(passing_engagement_by_account, 'lessons_completed')
summarize_data_for_key(non_passing_engagement_by_account, 'lessons_completed')
Again we can see that the average is higher for students who passed. Now let's see what kind of difference visits made for passing vs. non-passing students.
summarize_data_for_key(passing_engagement_by_account, 'has_visited')
summarize_data_for_key(non_passing_engagement_by_account, 'has_visited')
Again the mean is higher. Out of all of these, the minutes spent seems the most striking. But we need to understand that spending more time does not by itself mean the student will pass. In other words, we have just found a correlation between the two, not causation. To establish causation we will need...
###################################### # 13 # ###################################### ## Make histograms of the three metrics we looked at earlier for both ## students who passed the subway project and students who didn't. You ## might also want to make histograms of any other metrics yo...
Making Predictions

We may also want to find which students are most likely to pass their project, based on the data we have so far.

38 - Communication

Which of your findings are most interesting? How will you present them? e.g., the number of minutes spent can be communicated simply by saying that on an ave...
non_passing_visits = sum_grouped_by_key(non_passing_engagement_by_account, 'has_visited') plt.hist(non_passing_visits.values(), bins=8) plt.xlabel('Number of days') plt.title('Distribution of classroom visits in the first week ' + 'for students who do not pass the subway project') passing_visits = sum_grou...
Example Prov-JSON export and import
from prov.model import ProvDocument d1 = ProvDocument() %%writefile wps-prov.json { "prefix": { "enes": "http://www.enes.org/enes_entitiy/", "workflow": "http://www.enes.org/enes/workflow/#", "dc": "http://dublin-core.org/", "user": "http://www.enes.org/enes_entity/user/", ...
neo4j_prov/notebooks/ENES-prov-1.ipynb
stephank16/enes_graph_use_case
gpl-3.0
Example Transformation to Neo4j graph

The transformation code is based on the prov_to_dot() function in the dot.py module of the prov Python package mentioned above ( https://github.com/trungdong/prov ). The code was simplified and modified to generate Neo4j nodes and relations instead of dot nodes and relations.
## d2 graph is input parameter for this cell .. import six from py2neo import Graph, Node, Relationship, authenticate node_map = {} count = [0, 0, 0, 0] # counters for node ids records = d2.get_records() relations = [] use_labels = True show_relation_attributes = True other_attributes = True show_nary = True def _ad...
Generate the Neo4j graph from the generated Node map and Relationship list.
# connect to authenticated graph database
authenticate("localhost:7474", "neo4j", "prolog16")
graph = Graph("http://localhost:7474/db/data/")
graph.delete_all()
for rel in neo_rels:
    graph.create(rel)

%load_ext cypher
results = %cypher http://neo4j:prolog16@localhost:7474/db/data MATCH (a)-[r]-(b) RETURN a,r, b ...
"remember" cells
# example info calls on nodes and relations ..
der = d2.get_records()[0]
print der.get_type()._str + "tst"
print der.attributes
print der.is_relation()
print der.label
print der.value
print der.args
print der.is_element()
print der.formal_attributes
print der.get_asserted_types()
print der.get_provn()
1. Lissajous curves

In NumPy, all the standard trigonometric functions such as sin, cos, and tan have corresponding universal functions (ufuncs). Lissajous curves are an interesting way to use trigonometric functions. A Lissajous curve is defined by the following parametric equations:
- x = A sin(at + π/2)
- y = B sin(bt)
# For simplicity, let A and B equal 1
t = np.linspace(-np.pi, np.pi, 201)
a = 9
b = 8
x = np.sin(a*t + np.pi/2)
y = np.sin(b*t)
plot(x, y)
show()

def lissajous(a, b):
    t = np.linspace(-np.pi, np.pi, 201)
    x = np.sin(a*t + np.pi/2)
    y = np.sin(b*t)
    return x, y

# matplotlib.gridspec.GridSpecBase
# specifies the position of subplots within the figure
gs = grid...
Visualization/(3)special_curves_plot.ipynb
ymero/pyDataScienceToolkits_Base
mit
2. Plotting a square wave

A square wave can be approximated by a superposition of sine waves; in fact, any square-wave signal can be represented by an infinite Fourier series.
Latex(r"$\sum_{k=1}^\infty\frac{4\sin((2k-1)t)}{(2k-1)\pi}$") t = np.linspace(-np.pi, np.pi, 201) k = np.arange(1,99) k = 2*k - 1 f = np.zeros_like(t) for i in range(len(t)): f[i] = np.sum(np.sin(k * t[i])/k) f = (4/np.pi) * f plot(t, f) show()
Visualization/(3)special_curves_plot.ipynb
ymero/pyDataScienceToolkits_Base
mit
Sawtooth and triangle waves Sawtooth and triangle waves are also common waveforms. As with the square wave, they can be represented as infinite Fourier series. Taking the absolute value of a sawtooth wave yields a triangle wave.
# Infinite series expression for the sawtooth wave Latex(r"$\sum_{k=1}^\infty\frac{-2\sin(2\pi kt)}{k\pi}$") t = np.linspace(-np.pi, np.pi, 201) k = np.arange(1,99) f = np.zeros_like(t) for i in range(len(t)): f[i] = np.sum(np.sin(2*np.pi*k * t[i])/k) f = (-2/np.pi) * f plot(t, f) show() plot(t, np.abs(f),c='g',lw=2.0) show()
Visualization/(3)special_curves_plot.ipynb
ymero/pyDataScienceToolkits_Base
mit
Hello, many worlds <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/hello_many_worlds"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.re...
!pip install tensorflow==2.7.0
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
1. The Basics 1.1 Cirq and parameterized quantum circuits Before exploring TensorFlow Quantum (TFQ), let's look at some <a target="_blank" href="https://github.com/quantumlib/Cirq" class="external">Cirq</a> basics. Cirq is a Python library for quantum computing from Google. You use it to define circuits, including stat...
a, b = sympy.symbols('a b')
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
The following code creates a two-qubit circuit using your parameters:
# Create two qubits q0, q1 = cirq.GridQubit.rect(1, 2) # Create a circuit on these qubits using the parameters you created above. circuit = cirq.Circuit( cirq.rx(a).on(q0), cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1)) SVGCircuit(circuit)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
To evaluate circuits, you can use the cirq.Simulator interface. You replace free parameters in a circuit with specific numbers by passing in a cirq.ParamResolver object. The following code calculates the raw state vector output of your parameterized circuit:
# Calculate a state vector with a=0.5 and b=-0.5. resolver = cirq.ParamResolver({a: 0.5, b: -0.5}) output_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector output_state_vector
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
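For intuition about what the parameterized gates compute, the single-qubit X rotation used above has the standard matrix Rx(θ) = [[cos(θ/2), −i·sin(θ/2)], [−i·sin(θ/2), cos(θ/2)]]. A hedged plain-NumPy sketch of this matrix (ordinary linear algebra, not the Cirq API):

```python
import numpy as np

def rx(theta):
    """Standard single-qubit X-rotation matrix (the same convention cirq.rx uses)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

ket0 = np.array([1.0, 0.0])           # |0>
# Rx(pi) flips |0> to |1> up to a global phase:
state = rx(np.pi) @ ket0
print(np.abs(state) ** 2)             # probabilities ~ [0, 1]
```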
State vectors are not directly accessible outside of simulation (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the <a...
z0 = cirq.Z(q0) qubit_map={q0: 0, q1: 1} z0.expectation_from_state_vector(output_state_vector, qubit_map).real z0x1 = 0.5 * z0 + cirq.X(q1) z0x1.expectation_from_state_vector(output_state_vector, qubit_map).real
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
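For a single qubit in state α|0⟩ + β|1⟩, the Z expectation value reduces to |α|² − |β|², which is the quantity expectation_from_state_vector computes. A minimal NumPy illustration of that inner product (no Cirq required):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
# <psi|Z|psi>; np.vdot conjugates its first argument.
expectation = np.vdot(plus, Z @ plus).real
print(expectation)  # ~ 0.0: the |+> state is unbiased in the Z basis
```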
1.2 Quantum circuits as tensors TensorFlow Quantum (TFQ) provides tfq.convert_to_tensor, a function that converts Cirq objects into tensors. This allows you to send Cirq objects to our <a target="_blank" href="https://www.tensorflow.org/quantum/api_docs/python/tfq/layers">quantum layers</a> and <a target="_blank" href=...
# Rank 1 tensor containing 1 circuit. circuit_tensor = tfq.convert_to_tensor([circuit]) print(circuit_tensor.shape) print(circuit_tensor.dtype)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
This encodes the Cirq objects as tf.string tensors that tfq operations decode as needed.
# Rank 1 tensor containing 2 Pauli operators. pauli_tensor = tfq.convert_to_tensor([z0, z0x1]) pauli_tensor.shape
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
1.3 Batching circuit simulation TFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on expectation values. The highest-level interface for calculating expectation values is the tfq.layers.Expectation layer, which is a tf.keras.Layer. In its simplest form, this layer i...
batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Batching circuit execution over parameter values in Cirq requires a loop:
cirq_results = [] cirq_simulator = cirq.Simulator() for vals in batch_vals: resolver = cirq.ParamResolver({a: vals[0], b: vals[1]}) final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector cirq_results.append( [z0.expectation_from_state_vector(final_state_vector, { ...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
The same operation is simplified in TFQ:
tfq.layers.Expectation()(circuit, symbol_names=[a, b], symbol_values=batch_vals, operators=z0)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
2. Hybrid quantum-classical optimization Now that you've seen the basics, let's use TensorFlow Quantum to construct a hybrid quantum-classical neural net. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the 0 or 1 state, overcoming a simul...
# Parameters that the classical NN will feed values into. control_params = sympy.symbols('theta_1 theta_2 theta_3') # Create the parameterized circuit. qubit = cirq.GridQubit(0, 0) model_circuit = cirq.Circuit( cirq.rz(control_params[0])(qubit), cirq.ry(control_params[1])(qubit), cirq.rx(control_params[2])...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
2.2 The controller Now define the controller network:
# The classical neural network layers. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ])
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit. The controller is randomly initialized, so these outputs are not yet useful.
controller(tf.constant([[0.0],[1.0]])).numpy()
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
2.3 Connect the controller to the circuit Use tfq to connect the controller to the controlled circuit, as a single keras.Model. See the Keras Functional API guide for more about this style of model definition. First define the inputs to the model:
# This input is the simulated miscalibration that the model will learn to correct. circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` dtype=tf.string, name='circuits_input') # Commands wil...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Next, apply operations to those inputs to define the computation.
dense_2 = controller(commands_input) # TFQ layer for classically controlled circuits. expectation_layer = tfq.layers.ControlledPQC(model_circuit, # Observe Z operators = cirq.Z(qubit)) expectation = expectation_layer([circuits_in...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Now package this computation as a tf.keras.Model:
# The full Keras model is built from our layers. model = tf.keras.Model(inputs=[circuits_input, commands_input], outputs=expectation)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
The network architecture is indicated by the plot of the model below. Compare this model plot to the architecture diagram to verify correctness. Note: May require a system install of the graphviz package.
tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
This model takes two inputs: The commands for the controller, and the input-circuit whose output the controller is attempting to correct. 2.4 The dataset The model attempts to output the correct measurement value of $\hat{Z}$ for each command. The commands and correct values are defined below.
# The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired Z expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]], dtype=np.float32)
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
This is not the entire training dataset for this task. Each datapoint in the dataset also needs an input circuit. 2.4 Input circuit definition The input-circuit below defines the random miscalibration the model will learn to correct.
random_rotations = np.random.uniform(0, 2 * np.pi, 3) noisy_preparation = cirq.Circuit( cirq.rx(random_rotations[0])(qubit), cirq.ry(random_rotations[1])(qubit), cirq.rz(random_rotations[2])(qubit) ) datapoint_circuits = tfq.convert_to_tensor([ noisy_preparation ] * 2) # Make two copies of this circuit
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
There are two copies of the circuit, one for each datapoint.
datapoint_circuits.shape
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
2.5 Training With the inputs defined you can test-run the tfq model.
model([datapoint_circuits, commands]).numpy()
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Now run a standard training process to adjust these values towards the expected_outputs.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() model.compile(optimizer=optimizer, loss=loss) history = model.fit(x=[datapoint_circuits, commands], y=expected_outputs, epochs=30, verbose=0) plt.plot(history.h...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
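The fit call above is ordinary gradient-based minimization of the mean-squared error; the quantum layer simply sits inside the differentiable graph. As a toy illustration of the same loop in pure NumPy (a hypothetical scalar model w·x, not the notebook's model):

```python
import numpy as np

# Fit y = w * x by gradient descent on MSE, mirroring what
# model.fit does at a much larger scale.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x                      # target slope is 2
w, lr = 0.0, 0.05
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)   # d(MSE)/dw
    w -= lr * grad
print(w)  # converges toward 2.0
```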
From this plot you can see that the neural network has learned to overcome the systematic miscalibration. 2.6 Verify outputs Now use the trained model to correct the qubit calibration errors. With Cirq:
def check_error(command_values, desired_values): """Based on the value in `command_value` see how well you could prepare the full circuit to have `desired_value` when taking expectation w.r.t. Z.""" params_to_prepare_output = controller(command_values).numpy() full_circuit = noisy_preparation + model_circuit ...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
The value of the loss function during training provides a rough idea of how well the model is learning. The lower the loss, the closer the expectation values in the above cell are to desired_values. If you aren't as concerned with the parameter values, you can always check the outputs from above using tfq:
model([datapoint_circuits, commands])
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
3 Learning to prepare eigenstates of different operators The choice of the $\pm \hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. You could have just as easily wanted 1 to correspond to the $+ \hat{Z}$ eigenstate and 0 to correspond to the $-\hat{X}$ eigenstate. One way to accomplish this is by specifying a ...
# Define inputs. commands_input = tf.keras.layers.Input(shape=(1), dtype=tf.dtypes.float32, name='commands_input') circuits_input = tf.keras.Input(shape=(), # The circuit-tensor has dtype `tf.string` ...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Here is the controller network:
# Define classical NN. controller = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='elu'), tf.keras.layers.Dense(3) ])
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Combine the circuit and the controller into a single keras.Model using tfq:
dense_2 = controller(commands_input) # Since you aren't using a PQC or ControlledPQC you must append # your model circuit onto the datapoint circuit tensor manually. full_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit) expectation_output = tfq.layers.Expectation()(full_circuit, ...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
3.2 The dataset Now you will also include the operators you wish to measure for each datapoint you supply for model_circuit:
# The operators to measure, for each command. operator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]]) # The command input values to the classical NN. commands = np.array([[0], [1]], dtype=np.float32) # The desired expectation value at output of quantum circuit. expected_outputs = np.array([[1], [-1]...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
3.3 Training Now that you have your new inputs and outputs you can train once again using keras.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05) loss = tf.keras.losses.MeanSquaredError() two_axis_control_model.compile(optimizer=optimizer, loss=loss) history = two_axis_control_model.fit( x=[datapoint_circuits, commands, operator_data], y=expected_outputs, epochs=30, verbose=1) plt.plot(h...
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
The loss function has dropped to zero. The controller is available as a stand-alone model. Call the controller, and check its response to each command signal. It would take some work to correctly compare these outputs to the contents of random_rotations.
controller.predict(np.array([0,1]))
docs/tutorials/hello_many_worlds.ipynb
tensorflow/quantum
apache-2.0
Discover and load the data for analysis Using the pystac_client we can search the Planetary Computer's STAC endpoint for items matching our query parameters. We will look for data tiles (1-degree square) that intersect our bounding box.
stac = pystac_client.Client.open("https://planetarycomputer.microsoft.com/api/stac/v1") search = stac.search(bbox=bbox,collections=["cop-dem-glo-30"]) items = list(search.get_items()) print('Number of 1-degree data tiles connected to our region:',len(items))
notebooks/Data_Challenge/DEM.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Next, we'll load the elevation data into an xarray DataArray, calculate the slope between pixels, and then "clip" the data to only the pixels within our region (bounding box). The dataset includes elevation (meters) at lat-lon positions (EPSG:4326) at a spatial separation of 30-meters per pixel.
signed_asset = planetary_computer.sign(items[0].assets["data"]) data_elevation = (xr.open_rasterio(signed_asset.href).squeeze().drop("band"))
notebooks/Data_Challenge/DEM.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
We will create a function to calculate slope (in percent) between pixels. The "dem" parameter is the elevation dataset to use for the slope calculation. The "resolution" parameter is the pixel spatial resolution of the elevation dataset.
from scipy.ndimage import convolve def slope_pct(dem, resolution): # Kernel for rate of elevation change in x-axis. dx_kernel = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) # Kernel for rate of elevation change in y-axis. dy_kernel = np.array([[1, 2...
notebooks/Data_Challenge/DEM.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
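As a cross-check on the slope formula, a plane that rises a fixed amount per pixel has a known slope percent: 100·√((dz/dx)² + (dz/dy)²). A small sketch using np.gradient in place of the convolution kernels above (hypothetical 30 m resolution, synthetic DEM):

```python
import numpy as np

resolution = 30.0                       # metres per pixel (assumed)
# A tilted plane: elevation rises 3 m per pixel along x only,
# so dz/dx = 3/30 = 0.1 and the slope is 10 percent everywhere.
dem = np.tile(3.0 * np.arange(10), (10, 1))
dzdy, dzdx = np.gradient(dem, resolution)
slope = 100.0 * np.sqrt(dzdx**2 + dzdy**2)
print(slope[5, 5])  # 10.0
```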
Display elevation and slope products
Clipped_Data.elevation.plot.imshow(size=8,cmap=plt.cm.terrain,vmin=0.0,vmax=np.max(Clipped_Data.elevation)) plt.gca().set_aspect('equal') plt.title('Terrain Elevation (meters)') plt.xlabel('Longitude') plt.ylabel('Latitude') plt.show() Clipped_Data.slope.plot.imshow(size=8, cmap=plt.cm.nipy_spectral, vmin=0, vmax=50) ...
notebooks/Data_Challenge/DEM.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Save the output data in a GeoTIFF file
filename = "DEM_sample8.tiff" # Set the dimensions of file in pixels height = Clipped_Data.elevation.shape[0] width = Clipped_Data.elevation.shape[1] # Define the Coordinate Reference System (CRS) to be common Lat-Lon coordinates # Define the transformation using our bounding box so the Lat-Lon information is written ...
notebooks/Data_Challenge/DEM.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
How will the participants use this data? The GeoTIFF file will contain the Lat-Lon coordinates of each pixel and will also contain the elevation and slope for each pixel as separate data layers. Since the FrogID data is also Lat-Lon position, it is possible to find the closest pixel using code similar to what is demons...
# This is an example for a specific Lon-Lat location randomly selected within our sample region. values = Clipped_Data.elevation.sel(x=150.71, y=-33.51, method="nearest").values print("This is the elevation in meters for the closest pixel: ", np.round(values,1)) values = Clipped_Data.slope.sel(x=150.71, y=-33.51, m...
notebooks/Data_Challenge/DEM.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Time series analysis Load the data from "Price of Weed".
transactions = pd.read_csv('mj-clean.csv', parse_dates=[5]) transactions.head()
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0
The following function takes a DataFrame of transactions and computes daily averages.
def GroupByDay(transactions, func=np.mean): """Groups transactions by day and computes the daily mean ppg. transactions: DataFrame of transactions returns: DataFrame of daily prices """ grouped = transactions[['date', 'ppg']].groupby('date') daily = grouped.aggregate(func) daily['date'] = ...
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0
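The groupby-then-aggregate pattern inside GroupByDay can be seen on a tiny hand-made frame (hypothetical prices, not the real dataset):

```python
import pandas as pd

transactions = pd.DataFrame({
    'date': pd.to_datetime(['2014-01-01', '2014-01-01', '2014-01-02']),
    'ppg':  [10.0, 12.0, 8.0],
})
# Group by day and take the mean price per gram, as GroupByDay does.
daily = transactions.groupby('date')['ppg'].mean()
print(daily.loc['2014-01-01'])  # 11.0, the mean of 10 and 12
```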
The following function returns a map from quality name to a DataFrame of daily averages.
def GroupByQualityAndDay(transactions): """Divides transactions by quality and computes mean daily price. transactions: DataFrame of transactions returns: map from quality to time series of ppg """ groups = transactions.groupby('quality') dailies = {} for name, group in groups: ...
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0
The following plots the daily average price for each quality.
import matplotlib.pyplot as plt thinkplot.PrePlot(rows=3) for i, (name, daily) in enumerate(dailies.items()): thinkplot.SubPlot(i+1) title = 'Price per gram ($)' if i == 0 else '' thinkplot.Config(ylim=[0, 20], title=title) thinkplot.Scatter(daily.ppg, s=10, label=name) if i == 2: plt.xtic...
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0
We can use statsmodels to run a linear model of price as a function of time.
import statsmodels.formula.api as smf def RunLinearModel(daily): model = smf.ols('ppg ~ years', data=daily) results = model.fit() return model, results
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0
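Under the hood the OLS call solves an ordinary least-squares problem; for a single predictor, np.polyfit recovers the same slope and intercept. A quick sketch on synthetic, noise-free data (hypothetical line ppg = 2·years + 1, not the real series):

```python
import numpy as np

years = np.array([0.0, 1.0, 2.0, 3.0])
ppg = 2.0 * years + 1.0               # exact line, no noise
# Degree-1 polynomial fit = ordinary least squares for one predictor.
slope, intercept = np.polyfit(years, ppg, 1)
print(slope, intercept)  # 2.0 and 1.0
```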
Now let's plot the fitted model with the data.
def PlotFittedValues(model, results, label=''): """Plots original data and fitted values. model: StatsModel model object results: StatsModel results object """ years = model.exog[:,1] values = model.endog thinkplot.Scatter(years, values, s=15, label=label) thinkplot.Plot(years, results....
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0