Dataset schema: markdown (string, 0 to 37k chars), code (string, 1 to 33.3k chars), path (string, 8 to 215 chars), repo_name (string, 6 to 77 chars), license (15 classes).
Generate an outcome variable.
def gen_outcome(otype, mtime0):
    if otype == "full":
        lp = 0.5*mtime0
    elif otype == "no":
        lp = exp
    else:
        lp = exp + mtime0
    mn = np.exp(-lp)
    ytime0 = -mn * np.log(np.random.uniform(size=n))
    ctime = -2 * mn * np.log(np.random.uniform(size=n))
    ystatus = (ctime >= ytime0).a...
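The event and censoring times above are drawn by inverse-transform sampling: if U is uniform on (0, 1), then -scale * log(U) is exponential with mean scale. A minimal self-contained sketch of that mechanism (the name `gen_times` and the scale values are illustrative, not from the notebook):

```python
import numpy as np

def gen_times(scale, n, rng):
    # Inverse-transform sampling: if U ~ Uniform(0, 1),
    # then -scale * log(U) ~ Exponential(mean=scale).
    u = rng.uniform(size=n)
    return -scale * np.log(u)

rng = np.random.default_rng(0)
ytime = gen_times(1.0, 100000, rng)    # event times, mean 1
ctime = gen_times(2.0, 100000, rng)    # censoring times, mean 2
ystatus = (ctime >= ytime).astype(int) # 1 = event observed, 0 = censored
```

With censoring times twice as long on average as event times, roughly two thirds of the events end up observed.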
v0.12.2/examples/notebooks/generated/mediation_survival.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Build a dataframe containing all the relevant variables.
def build_df(ytime, ystatus, mtime0, mtime, mstatus):
    df = pd.DataFrame({"ytime": ytime, "ystatus": ystatus,
                       "mtime": mtime, "mstatus": mstatus, "exp": exp})
    return df
Run the full simulation and analysis, under a particular population structure of mediation.
def run(otype):
    mtime0, mtime, mstatus = gen_mediator()
    ytime, ystatus = gen_outcome(otype, mtime0)
    df = build_df(ytime, ystatus, mtime0, mtime, mstatus)
    outcome_model = sm.PHReg.from_formula("ytime ~ exp + mtime", status="ystatus", data=df)
    mediator_model = sm.PHReg.from_formula("mtime ~ exp", st...
Run the example with full mediation
run("full")
Run the example with partial mediation
run("partial")
Run the example with no mediation
run("no")
Introduction In the first part of the error analysis tutorial we have introduced the binning analysis, an easy and common tool for error estimation. However, we have seen that it failed to deliver an estimate for our second data set. In this tutorial, we will get to know a different method: the autocorrelation analysis...
# Numpy solution
time_series_1_centered = time_series_1 - np.average(time_series_1)
autocov = np.empty(1000)
for j in range(1000):
    autocov[j] = np.dot(time_series_1_centered[:N_SAMPLES - j], time_series_1_centered[j:])
autocov /= N_SAMPLES

fig = plt.figure(figsize=(10, 6))
plt.gca().axhline(0, color="gray", linew...
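The lag loop above can also be written as a single `np.correlate` call in "full" mode; a small self-consistency sketch (the series `x` and its length are made up) showing the two give identical autocovariances:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)
n = len(x)
xc = x - x.mean()

# loop version, as in the tutorial
autocov_loop = np.empty(50)
for j in range(50):
    autocov_loop[j] = np.dot(xc[:n - j], xc[j:])
autocov_loop /= n

# vectorized version: index n-1 of the 'full' output is zero lag,
# index n-1+j is the sum of products at lag j
autocov_corr = np.correlate(xc, xc, mode="full")[n - 1:n - 1 + 50] / n

assert np.allclose(autocov_loop, autocov_corr)
```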
doc/tutorials/error_analysis/error_analysis_part2.ipynb
espressomd/espresso
gpl-3.0
We can see that the auto-covariance function starts at a high value and decreases quickly into a long noisy tail which fluctuates around zero. The high values at short lag times indicate that there are strong correlations at short time scales, as expected. However, even though the tail looks uninteresting, it can bear ...
from scipy.optimize import curve_fit

def exp_fnc(x, a, b):
    return a * np.exp(-x / b)

N_MAX = 1000
j = np.arange(1, N_MAX)
j_log = np.logspace(0, 3, 100)
popt, pcov = curve_fit(exp_fnc, j, autocov[1:N_MAX], p0=[15, 10])

# compute analytical ACF of AR(1) process
AN_SIGMA_1 = np.sqrt(EPS_1 ** 2 / (1 - PHI_1 ** 2)) ...
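As a quick sanity check on the fitting step, `curve_fit` recovers the parameters of a clean exponential decay essentially exactly; the "true" values (15, 10) below are arbitrary and merely echo the initial guess used above:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_fnc(x, a, b):
    return a * np.exp(-x / b)

x = np.arange(1, 100)
y = exp_fnc(x, 15.0, 10.0)                 # noiseless synthetic "ACF"
popt, pcov = curve_fit(exp_fnc, x, y, p0=[10.0, 5.0])
# popt should come back very close to the true parameters (15, 10)
```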
Since the auto-covariance function is very well matched with an exponential, this analysis already gives us a reasonable estimate of the autocorrelation time. Here we have the luxury to have an analytical ACF at hand which describes the statistics of the simple AR(1) process, which generated our simulation data. It is ...
# compute the ACF
acf = autocov / autocov[0]

# integrate the ACF (suffix _v for vectors)
j_max_v = np.arange(1000)
tau_int_v = np.zeros(1000)
for j_max in j_max_v:
    tau_int_v[j_max] = 0.5 + np.sum(acf[1:j_max + 1])

# plot
fig = plt.figure(figsize=(10, 6))
plt.plot(j_max_v[1:], tau_int_v[1:], label="numerical summi...
In this plot, we have the analytical solution at hand, which is a luxury not present in real applications. For the analysis, we therefore need to act as if there was no analytic solution: We see that the integrated autocorrelation time seems to quickly reach a plateau at a $j_\mathrm{max}$ of around 20. Further summati...
C = 5.0

# determine j_max
j_max = 0
while j_max < C * tau_int_v[j_max]:
    j_max += 1

# plot
fig = plt.figure(figsize=(10, 6))
plt.plot(j_max_v[1:], C * tau_int_v[1:])
plt.plot(j_max_v[1:], j_max_v[1:])
plt.plot([j_max], [C * tau_int_v[j_max]], "ro")
plt.xscale("log")
plt.ylim((0, 50))
plt.xlabel(r"sum length $j_\m...
Using this value of $j_\mathrm{max}$, we can calculate the integrated autocorrelation time $\hat{\tau}_{X, \mathrm{int}}$ and estimate the SEM with equation (5).
tau_int = tau_int_v[j_max]
print(f"Integrated autocorrelation time: {tau_int:.2f} time steps\n")

N_eff = N_SAMPLES / (2 * tau_int)
print(f"Original number of samples: {N_SAMPLES}")
print(f"Effective number of samples: {N_eff:.1f}")
print(f"Ratio: {N_eff / N_SAMPLES:.3f}\n")

sem = np.sqrt(autocov[0] / N_eff)
print(f"S...
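The arithmetic behind the printout is compact enough to spell out: the effective sample size is N / (2 * tau_int), and the SEM is the square root of the variance divided by that. A toy calculation with invented numbers:

```python
import numpy as np

N_SAMPLES = 1000
tau_int = 5.0   # integrated autocorrelation time (made-up value)
var_x = 4.0     # autocov[0], the sample variance (made-up value)

N_eff = N_SAMPLES / (2 * tau_int)  # 1000 / 10 = 100 effective samples
sem = np.sqrt(var_x / N_eff)       # sqrt(4 / 100) = 0.2
```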
Let's integrate this system for 100 orbital periods.
sim = setupSimulation()
sim.integrate(100.*2.*np.pi)
ipython_examples/CloseEncounters.ipynb
dchandan/rebound
gpl-3.0
Rebound exits the integration routine normally. We can now explore the final particle orbits:
for o in sim.calculate_orbits():
    print(o)
We see that the orbits of both planets changed significantly and we can already speculate that there was a close encounter. Let's redo the simulation, but this time set the sim.exit_min_distance flag for the simulation. If this flag is set, then REBOUND calculates the minimum distance between all particle pairs each ti...
sim = setupSimulation() # Resets everything
sim.exit_min_distance = 0.15
Noutputs = 1000
times = np.linspace(0,100.*2.*np.pi,Noutputs)
distances = np.zeros(Noutputs)
ps = sim.particles # ps is now an array of pointers. It will update as the simulation runs.
try:
    for i,time in enumerate(times):
        sim.integrate...
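The distance check REBOUND performs is conceptually just a minimum over all pairwise separations at each step; a plain NumPy sketch of that test (the particle positions are invented, the 0.15 threshold mirrors `exit_min_distance` above):

```python
import numpy as np
from itertools import combinations

def min_pair_distance(positions):
    # minimum Euclidean distance over all particle pairs
    return min(np.linalg.norm(p - q) for p, q in combinations(positions, 2))

pos = np.array([[0.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [1.0, 1.0, 0.0]])
d = min_pair_distance(pos)   # 0.1, between the first two particles
encounter = d < 0.15         # would trigger the exit condition
```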
The Encounter exception does not currently tell you which particles had the close encounter, but you can easily search for the pair yourself (see below). Here, we already know which bodies had a close encounter (the two planets), so let's plot their separation.
%matplotlib inline
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10,5))
ax = plt.subplot(111)
ax.set_xlabel("time [orbits]")
ax.set_xlim([0,sim.t/(2.*np.pi)])
ax.set_ylabel("distance")
plt.plot(times/(2.*np.pi), distances);
plt.plot([0.0,12],[0.2,0.2]) # Plot our close encounter criteria;
We did indeed detect the close encounter correctly. We can now search for the two particles that collided and, for this example, merge them. To do that we'll first calculate the new merged planet's coordinates, then remove the two particles that collided from REBOUND and finally add the new particle.
import copy
from itertools import combinations

def mergeParticles(sim):
    # Find two closest particles
    min_d2 = 1e9 # large number
    particles = sim.particles
    for p1, p2 in combinations(particles,2):
        dx = p1.x - p2.x
        dy = p1.y - p2.y
        dz = p1.z - p2.z
        d2 = dx*dx + dy*dy + dz*d...
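The merge step itself reduces to a center-of-mass combination that conserves mass and linear momentum. A standalone one-dimensional sketch with plain floats instead of REBOUND particles (all values illustrative):

```python
def merge(m1, x1, v1, m2, x2, v2):
    """Merge two bodies, conserving mass and linear momentum."""
    m = m1 + m2
    x = (m1 * x1 + m2 * x2) / m   # center of mass
    v = (m1 * v1 + m2 * v2) / m   # momentum-conserving velocity
    return m, x, v

m, x, v = merge(1.0, 0.0, 1.0, 1.0, 2.0, -1.0)
# equal masses: the merged body sits midway (x = 1.0) and is at rest (v = 0.0)
```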
1. The key classes within the api and being "pythonic" There are several essential class types that are used throughout Shyft, and initially, these may cause some challenges -- particularly to seasoned python users. If you are used to working with the datetime module, pandas and numpy, it will be important that you und...
from shyft.time_series import IntVector, DoubleVector
import numpy as np

# works:
iv = IntVector([0, 1, 4, 5])
print(iv)

# won't work:
# iv[2] = 2.2

# see the DoubleVector
dv = DoubleVector([1.0, 3, 4.5, 10.110293])
print(dv)
dv[0] = 2.3
notebooks/api/api-essentials.ipynb
statkraft/shyft-doc
lgpl-3.0
Note, however, that these containers are very basic lists. They don't have methods such as .pop and .index. In general, they are meant to be used as first-class containers into which you pass data before handing it to Shyft. For those familiar with python and numpy, you can think of it similar to the adva...
IV1 = IntVector([int(i) for i in np.arange(1000)])
TODO: Calendar, TimeSeries, TimeAxis, UtcPeriod, TsVector
# once the shyft_path is set correctly, you should be able to import shyft modules
import shyft

# if you have problems here, it may be related to having your LD_LIBRARY_PATH
# pointing to the appropriate libboost_python libraries (.so files)
from shyft import api
from shyft.repository.default_state_repository import...
1. shyft.time_series.TimeSeries The Shyft time_series, shyft.time_series contains a lot of functionality worth exploring. The TimeSeries class provides some tools for adding timeseries, looking at statistics, etc. Below is a quick exploration of some of the possibilities. Users should explore using the source code, tab...
# First, we can also plot the statistical distribution of the
# discharges over the sub-catchments
from shyft.time_series import TsVector,IntVector,TimeAxis,Calendar,time,UtcPeriod

# api.TsVector() is a strongly typed list of time-series that supports time-series vector operations.
discharge_ts = TsVector() # exce...
We'll use some empty data to demonstrate assembling an image.
a = np.zeros(g.expected_data_shape)
g.plot_data_fast(a, axis_units='m');
docs/dssc_geometry.ipynb
European-XFEL/h5tools-py
bsd-3-clause
Let's have a close up look at some pixels in Q1M1. get_pixel_positions() gives us pixel centres. to_distortion_array() gives pixel corners in a slightly different format, suitable for PyFAI. PyFAI requires non-negative x and y coordinates. But we want to plot them along with the centre positions, so we pass allow_negat...
pixel_pos = g.get_pixel_positions()
print("Pixel positions array shape:", pixel_pos.shape,
      "= (modules, slow_scan, fast_scan, x/y/z)")

q1m1_centres = pixel_pos[0]
cx = q1m1_centres[..., 0]
cy = q1m1_centres[..., 1]

distortn = g.to_distortion_array(allow_negative_xy=True)
print("Distortion array shape:", distortn...
Example 2. Retrieving trends
# The Yahoo! Where On Earth ID for the entire world is 1.
# See https://dev.twitter.com/docs/api/1.1/get/trends/place and
# http://developer.yahoo.com/geo/geoplanet/
WORLD_WOE_ID = 1
US_WOE_ID = 23424977
RUS_WOE_ID = 2122265

# Prefix ID with the underscore for query string parameterization.
# Without the underscore, t...
general studies/Mining Twitter.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Example 3. Displaying API responses as pretty-printed JSON
import json

print json.dumps(world_trends, indent=1)
print
print json.dumps(us_trends, indent=1)
print
print json.dumps(rus_trends, indent=1)
Example 4. Computing the intersection of 3 sets of trends
world_trends_set = set([trend['name']
                        for trend in world_trends[0]['trends']])
us_trends_set = set([trend['name']
                     for trend in us_trends[0]['trends']])
rus_trends_set = set([trend['name']
                      for trend in rus_trends[0]['trends']])

common_trends = worl...
Example 5. Collecting search results
# Import unquote to prevent url encoding errors in next_results
from urllib import unquote

# XXX: Set this variable to a trending topic,
# or anything else for that matter. The example query below
# was a trending topic when this content was being developed
# and is used throughout the remainder of this chapter.
q =...
Example 6. Extracting text, screen names, and hashtags from tweets
status_texts = [ status['text']
                 for status in statuses ]

screen_names = [ user_mention['screen_name']
                 for status in statuses
                 for user_mention in status['entities']['user_mentions'] ]

hashtags = [ hashtag['text']
             for status in statuses
             for hashtag...
Example 7. Creating a basic frequency distribution from the words in tweets
from collections import Counter

for item in [words, screen_names, hashtags]:
    c = Counter(item)
    print c.most_common()[:10] # top 10
    print

print json.dumps(screen_names, indent=1)
Example 8. Using prettytable to display tuples in a nice tabular format
from prettytable import PrettyTable

for label, data in (('Word', words),
                    ('Screen Name', screen_names),
                    ('Hashtag', hashtags)):
    pt = PrettyTable(field_names=[label, 'Count'])
    c = Counter(data)
    [ pt.add_row(kv) for kv in c.most_common()[:10] ]
    pt.align[label], ...
Example 9. Calculating lexical diversity for tweets
# A function for computing lexical diversity
def lexical_diversity(tokens):
    return 1.0*len(set(tokens))/len(tokens)

# A function for computing the average number of words per tweet
def average_words(statuses):
    total_words = sum([ len(s.split()) for s in statuses ])
    return 1.0*total_words/len(statuses)

p...
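Lexical diversity is simply the ratio of distinct tokens to total tokens. A quick check on a toy token list (written in Python 3 syntax, unlike the Python 2 code in this chapter):

```python
def lexical_diversity(tokens):
    # fraction of tokens that are distinct
    return 1.0 * len(set(tokens)) / len(tokens)

tokens = ['the', 'cat', 'sat', 'on', 'the', 'mat']
ld = lexical_diversity(tokens)   # 5 distinct words out of 6 tokens, about 0.833
```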
Example 10. Finding the most popular retweets
retweets = [
    # Store out a tuple of these three values ...
    (status['retweet_count'],
     status['retweeted_status']['user']['screen_name'],
     status['text'],
     status['retweeted_status']['id'] )
    # ... for each status ...
    ...
Example 11. Looking up users who have retweeted a status
# Get the original tweet id for a tweet from its retweeted_status node
# and insert it here in place of the sample value that is provided
# from the text of the book
_retweets = twitter_api.statuses.retweets(id=833667112648466436)
print [r['user']['screen_name'] for r in _retweets]
Example 12. Plotting frequencies of words
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt

word_counts = sorted(Counter(words).values(), reverse=True)
plt.loglog(word_counts)
plt.ylabel("Freq")
plt.xlabel("Word Rank")
Example 13. Generating histograms of words, screen names, and hashtags
for label, data in (('Words', words),
                    ('Screen Names', screen_names),
                    ('Hashtags', hashtags)):
    # Build a frequency map for each set of data
    # and plot the values
    c = Counter(data)
    plt.hist(c.values())

    # Add a title and y-label ...
    plt.title(label)
    ...
Example 14. Generating a histogram of retweet counts
# Using underscores while unpacking values in
# a tuple is idiomatic for discarding them
counts = []
for status in statuses:
    counts.append(status['retweet_count'])
#counts = [count for count, _, _ in retweets]

plt.hist(counts)
plt.title("Retweets with 0")
plt.xlabel('Bins (number of times retweeted)')
plt.ylabel(...
Note: This histogram gives you an idea of how many times tweets are retweeted with the x-axis defining partitions for tweets that have been retweeted some number of times and the y-axis telling you how many tweets fell into each bin. For example, a y-axis value of 5 for the "15-20 bin" on the x-axis means that there we...
# Using underscores while unpacking values in
# a tuple is idiomatic for discarding them
counts = []
for status in statuses:
    if status['retweet_count'] > 0:
        counts.append(math.log(status['retweet_count']))

# Taking the log of the *data values* themselves can
# often provide quick and valuable insight int...
Collect the data. Get the names of the 4 fields we have to select.
select = soupe.find_all('select')
select_name = [s.attrs['name'] for s in select]
select_name
HW02-Data_from_the_Web/bachelor_data_analysis.ipynb
christophebertrand/ada-epfl
mit
Get the select fields corresponding to the 4 names found before.
select_field = [soupe.find('select',{'name': name}) for name in select_name]
Get the value corresponding to "Informatique".
option_unite_acad = select_field[0].find_all('option')
#option_unite_acad[[opt.text == 'Informatique' for opt in option_unite_acad]]
option_unite_acad

unite_acad = {opt['value']: opt.text for opt in option_unite_acad if opt.text == 'Informatique'}
unite_acad
Get all the values of the academic period field. In the second select_field, in the option tags, we take all values except the one equal to 'null'. We only keep periods starting in 2007 or later (in case there were older periods).
option = select_field[1].find_all('option')
period_acad = {opt['value']: opt.text for opt in option
               if opt['value'] != 'null' and int(opt.text.split('-')[0]) >= 2007}
period_acad
Get all the values of the pedagogic period field corresponding to the bachelor semesters. In the 3rd select_field, we take all values whose label contains 'Bachelor'. Since we need to find the first and last record of a student, we only consider the 1st, 5th and 6th semesters. It is not possible to finish his bachelor...
option = select_field[2].find_all('option')
period_pedago = {opt['value']: opt.text for opt in option
                 if 'Bachelor' in opt.text and ('1' in opt.text or '5' in opt.text or '6' in opt.text) }
period_pedago

option = select_field[3].find_all('option')
hiverEte = {opt['value']: opt.text for opt in option if opt['value'] !...
Collect the data. Create a function that parses one request and returns a DataFrame.
def parseRequest(u_a, p_a, p_p, h_e):
    # Send request
    url = 'http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?ww_x_GPS=-1&ww_i_reportModel=133685247&ww_i_reportModelXsl=133685270&ww_x_UNITE_ACAD='+u_a[0]+'&ww_x_PERIODE_ACAD='+p_a[0]+'&ww_x_PERIODE_PEDAGO='+p_p[0]+'&ww_x_HIVERETE='+ h_e
    r = requests.g...
We iterate over all the parameters. We decided to skip 'Type de semestre' (HIVERETE) since it is redundant information: an odd semester is always in Autumn and an even one is always in Spring.
list_df = []
for u_a in unite_acad.items():
    for p_a in period_acad.items():
        for p_p in period_pedago.items():
            print('Request for: ',u_a[1], p_a[1], p_p[1])
            list_df.append(parseRequest(u_a,p_a, p_p, 'null'))
Student = pd.concat(list_df, ignore_index=True)
Student ...
How many years did it take each student to go from the first to the sixth semester? As said before, here we consider students who appear in semester 1 (beginning) and in semester 6 or 5 (in case they did the bachelor in 3.5 or 4.5 years).
Student.index = Student.No_Sciper + Student.Semester.astype(str) + Student.Year_start.astype(str)
Student.index.is_unique
Show the total number of students who completed at least one semester.
len(Student.No_Sciper.unique())
Eliminate students who didn't finish their studies. We group by sciper number (which we know is unique for each student). This returns a sciper with a dataframe containing all the entries for one student. We keep people who appear in semesters 1, 5 and 6 => those are the people who graduated in informatique. We drop all oth...
def computeTotalYears(df):
    start = df.Year_start.min()
    end = df.Year_stop.max()
    end_semester = df[df.Year_stop == end].Semester
    if (end_semester == '6').any():
        return (int(end) - int(start))
    else:
        return (int(end) - int(start) - 0.5)

Student_copy = Student.copy()
Student_copy.i...
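The duration logic can be exercised on a tiny synthetic table: per sciper, subtract the earliest start year from the latest stop year, minus half a year when the last recorded semester is the 5th. Column names follow the notebook; the data below is invented:

```python
import pandas as pd

df = pd.DataFrame({
    'No_Sciper':  ['111', '111', '111', '222', '222'],
    'Semester':   ['1', '5', '6', '1', '5'],
    'Year_start': ['2007', '2009', '2009', '2008', '2010'],
    'Year_stop':  ['2008', '2010', '2010', '2009', '2011'],
})

def total_years(g):
    start = g.Year_start.min()
    end = g.Year_stop.max()
    # did the student's last recorded year include the 6th semester?
    finished_in_6 = (g[g.Year_stop == end].Semester == '6').any()
    years = int(end) - int(start)
    return years if finished_in_6 else years - 0.5

years = df.groupby('No_Sciper').apply(total_years)
# student 111 finished in semester 6 (3 years); 222 stopped at semester 5 (2.5 years)
```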
We don't consider people who didn't complete the first year in Computer Science, since we can't know when they began their first year.
Only_5_6.count()
Number of people who completed the bachelor in computer science.
Bachelor.count()
Number of people who attempted at least the first year or the last one.
len(grouped)
People who tried the first year but never finished the bachelor.
len(grouped) - len(Bachelor) - len(Only_5_6)
Compute the average time (in years) to complete the bachelor. We choose to output the result in years since it is more meaningful for humans than months; to get the number of months we just need to multiply by 12. In total:
len(Bachelor)
average = Bachelor.Years.sum()/len(Bachelor)
average
Bachelor.Years.max()
Bachelor.Years.min()
Bachelor.Years.hist(bins = 10, range=[3, 8])
Female
Female = Bachelor[Bachelor.Civilité == 'Madame']
len(Female)
averageFemale = Female.Years.sum()/len(Female)
averageFemale
Female.Years.hist(bins = 10, range=[3, 8])
Male
Male = Bachelor[Bachelor.Civilité == 'Monsieur']
len(Male)
average = Male.Years.sum()/len(Male)
average
Male.Years.hist(bins = 10, range=[3, 8])
Test the results
import scipy.stats as stats
We want to see whether the difference in average years between female and male students is statistically significant at the 95% level. We use Welch's t-test (which does not assume equal population variances): it measures whether the average value differs significantly across samples.
stats.ttest_ind(a = Female.Years, b= Male.Years, equal_var=False)
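As a cross-check, Welch's statistic t = (mean1 - mean2) / sqrt(s1^2/n1 + s2^2/n2) can be computed by hand and compared with scipy's result (the two sample arrays below are invented):

```python
import numpy as np
from scipy import stats

a = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 4.0])
b = np.array([3.5, 4.0, 4.0, 4.5, 5.0, 3.5, 4.5])

# Welch's t statistic: difference of means over its standard error,
# with per-sample variances (ddof=1) -- no pooling
se = np.sqrt(a.var(ddof=1)/len(a) + b.var(ddof=1)/len(b))
t_manual = (a.mean() - b.mean()) / se

t_scipy, p = stats.ttest_ind(a, b, equal_var=False)
assert np.isclose(t_manual, t_scipy)
```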
Parameters
# length of impulse response
N = 10

# switch for choosing different impulse responses --> you may add more options if you like to
switch = 2

if switch == 1:
    h = np.ones(N)
elif switch == 2:
    a = 0.5
    h = a**( - np.arange( 0, N ) )

# padding zeros
h = np.hstack( [h, np.zeros_like( h ) ] )
sigNT/systems/frequency_response.ipynb
kit-cel/wt
gpl-2.0
Getting Frequency Response by Applying FFT
# frequency response by FFT
H_fft = np.fft.fft( np.hstack( [ h, np.zeros( 9 * len( h ) ) ] ) )

# frequency domain out of FFT parameters
delta_Omega = 2 * np.pi / len( H_fft )
Omega = np.arange( -np.pi, np.pi, delta_Omega )
Getting Frequency Response as Response to Harmonics
# coarse quantization of the frequency regime for the filtering in order to reduce computational load
N_coarse = 100
delta_Omega_coarse = 2 * np.pi / N_coarse
Omega_coarse = np.arange( -np.pi, np.pi, delta_Omega_coarse )

# getting values of frequency response by filtering
H_response = np.zeros_like( Omega_coarse, dtype ...
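The harmonic-response method relies on complex exponentials being eigenfunctions of LTI systems: after the transient of length len(h) - 1 samples, the output equals H(Omega) times the input. A compact sketch (the impulse response and probe frequency are made up) verifying this against the DTFT sum:

```python
import numpy as np

h = 0.5 ** np.arange(5)          # example FIR impulse response
Omega = 0.7                      # probe frequency (rad/sample)

n = np.arange(100)
x = np.exp(1j * Omega * n)       # complex harmonic input
y = np.convolve(h, x)[:len(n)]   # filter output

# after the transient, y[n] = H(Omega) * x[n]
H_measured = y[50] / x[50]
H_dtft = np.sum(h * np.exp(-1j * Omega * np.arange(len(h))))

assert np.allclose(H_measured, H_dtft)
```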
Plotting
plt.figure()
plt.plot( Omega, np.abs( np.fft.fftshift( H_fft ) ), label= '$|H_{FFT}(\\Omega)|$' )
plt.plot( Omega_coarse, np.abs( H_response ), label= '$|H_{resp.}(\\Omega)|$')
plt.grid( True )
plt.xlabel('$\\Omega$')
plt.legend( loc='upper right')

plt.figure()
plt.plot( Omega, np.angle( np.fft.fftshift( H_fft ) )...
Load a pretrained GMM Model and import training data ('data_path')
# Load test data
temp_table_name = 'tweets'
test_data_path = 'hdfs:///path/to/test/data/*'
g = GMM(sc, sqlCtx, {'fields':set(['user.location', 'text']), 'json_path':'/local/path/to/twitter_format.json'})

# Train Model [This takes a while so make sure to save it]
#g.train('hdfs:///path/to/training/data')
#g.save('/loca...
notebooks/Explore_GMM.ipynb
vivek8943/soft-boiled
apache-2.0
Evaluate the performance of this model on a set of test data
g.test(test_data_path)
Pull a set of test data to use for interactive exploration
# Create a temporary table in this context which allows us to explore interactively
all_tweets = sqlCtx.parquetFile(test_data_path)
all_tweets.cache()
all_tweets.registerTempTable(temp_table_name)

# NOTE: This where clause filters out US geo coordinates
#where_clause = "lang = 'en' and geo.coordinates is not null and ...
Helper functions
def print_tweet(tweet):
    print
    print 'Text:', tweet.text
    print 'User Specified Location:', tweet.user.location
    print 'Location:', tweet.geo.coordinates

# Temporary block of code until the new gmm models are run
####TODO REMOVE THIS when Re-Run!!!!!!!
from sklearn import mixture

def combine_gmms(gmms):
    ...
Look at probability distribution of a few tweets
plot_row(local_tweets[0])
plot_row(local_tweets[6])

# Print all the tweets we've pulled into the local context
for i, t in enumerate(local_tweets):
    print i, t.text

# Compute local array of actual error and min training error
min_errors = []
actual_errors = []
skipped = 0
for i in range(len(local_tweets)):
    tr...
Compare prediction error [measured] to error in training data X-axis [km]: The error of the word with the minimum error in the training set. Error in the training set is defined as the median distance between the most likely point and every occurrence of that word in the training data Y-axis [km]: The distance between ...
plt.figure()
#plt.plot(np.log(min_errors), np.log(actual_errors), '.')
plt.plot(min_errors, actual_errors, '.')
plt.axis([0,3000,0,3000])
#plt.axis([0,np.log(3000),0,np.log(3000)])
#print min(actual_errors)

from scipy.stats import pearsonr
print pearsonr(min_errors, actual_errors)
print pearsonr(np.log(min_errors), np...
Same plot as above but this time containing N percent of the probability mass
percent_of_mass_to_include = 0.8
tweet = local_tweets[85]
(est_location, min_error, error_distance, combined_gmm) = get_gmm_info(tweet)
print_tweet(tweet)
print 'Estimated Location:', est_location
print 'Error (km):', error_distance
plot_gmm(combined_gmm, true_ll=tweet.geo.coordinates, percent=percent_of_mass_to_includ...
Find the probability mass for a given bounding box. Function using KDE which approximates the area better than a simple mesh grid. Otherwise we found that the mesh grid could often under-sample the probability for especially 'peaky' distributions (such as NYC).
import statsmodels.sandbox.distributions.extras as ext
#documented at http://statsmodels.sourceforge.net/devel/_modules/statsmodels/sandbox/distributions/extras.html#mvnormcdf

def prob_mass(gmm_model, upper_bound, lower_bound):
    total_prob = 0
    for i in range(0, len(gmm_model.weights_)):
        val = ext.mvnorm...
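The same box probability can be sketched with scipy.stats.multivariate_normal instead: per mixture component, the rectangle probability follows by inclusion-exclusion over the CDF at the box corners, weighted by the component weight. A self-contained 2-D example with a single invented component:

```python
import numpy as np
from scipy.stats import multivariate_normal

def box_prob(weights, means, covs, lower, upper):
    """P(lower <= X <= upper) for a 2-D Gaussian mixture, via CDF inclusion-exclusion."""
    total = 0.0
    for w, mu, cov in zip(weights, means, covs):
        mvn = multivariate_normal(mean=mu, cov=cov)
        F = lambda x, y: mvn.cdf([x, y])
        # rectangle probability from the CDF at the four corners
        total += w * (F(upper[0], upper[1]) - F(lower[0], upper[1])
                      - F(upper[0], lower[1]) + F(lower[0], lower[1]))
    return total

# a single standard-normal component: nearly all mass lies in [-10, 10]^2
p = box_prob([1.0], [np.zeros(2)], [np.eye(2)], lower=(-10, -10), upper=(10, 10))
```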
Loading a user
import bandicoot as bc

U = bc.read_csv('ego', 'data/', 'data/antennas.csv')
demo/demo.ipynb
yvesalexandre/bandicoot
mit
Visualization. Export and serve an interactive visualization using bc.visualization.run(U), or export only using bc.visualization.export(U, 'my-viz-path').
import os
viz_path = os.path.dirname(os.path.realpath(__name__)) + '/viz'
bc.visualization.export(U, viz_path)

from IPython.display import IFrame
IFrame("/files/viz/index.html", "100%", 700)
Individual and spatial indicators Using bandicoot, compute aggregated indicators from bc.individual and bc.spatial:
bc.individual.percent_initiated_conversations(U)
bc.spatial.number_of_antennas(U)
bc.spatial.radius_of_gyration(U)
Let's play with indicators. The signature of the active_days indicator is: bc.individual.active_days(user, groupby='week', interaction='callandtext', summary='default', split_week=False, split_day=False, filter_empty=True, datatype=None) What does that mean? <hr /> The ‘groupby’ keyword <br /> <div class="alert...
bc.individual.active_days(U)
The groupby keyword controls the aggregation: groupby='week' to divide by week (by default), groupby='month' to divide by month, groupby=None to aggregate all values.
bc.individual.active_days(U, groupby='week')
bc.individual.active_days(U, groupby='month')
bc.individual.active_days(U, groupby=None)
The ‘summary’ keyword. Some indicators such as active_days return one number. Others, such as duration_of_calls, return a distribution. The summary keyword can take three values: summary='default' to return mean and standard deviation, summary='extended' for the second type of indicators, to return mean, sem, median, ...
bc.individual.call_duration(U)
bc.individual.call_duration(U, summary='extended')
bc.individual.call_duration(U, summary=None)
Splitting days and weeks. split_week divides records into 'all week', 'weekday', and 'weekend'. split_day divides records into 'all day', 'day', and 'night'.
bc.individual.active_days(U, split_week=True, split_day=True)
Exporting indicators. The function bc.utils.all automatically computes all indicators for a single user. You can use the same keywords to group by week/month/the whole time range, or to return extended statistics.
features = bc.utils.all(U, groupby=None)
features
Exporting in CSV and JSON. bandicoot supports exports in CSV and JSON formats. Both to_csv and to_json functions require either a single feature dictionary, or a list of dictionaries (for multiple users).
bc.to_csv(features, 'demo_export_user.csv')
bc.to_json(features, 'demo_export_user.json')

!head demo_export_user.csv
!head -n 15 demo_export_user.json
Extending bandicoot. You can easily develop your own indicator using the @grouping decorator. You only need to write a function that takes a list of records as input and returns an integer or a list of integers (for a distribution). The @grouping decorator wraps the function and calls it for each group of weeks.
from bandicoot.helper.group import grouping

@grouping(interaction='call')
def shortest_call(records):
    in_durations = (r.call_duration for r in records)
    return min(in_durations)

shortest_call(U)
shortest_call(U, split_day=True)
1. Summarize data by country
# 0. Count number of posters from each state
# Calculate mean poster popularity
states = df['Country'].unique()
dict_state_counts = {'Country':states,'count':np.zeros(len(states)),'popularity':np.zeros(len(states))}
for i, s in enumerate(states):
    dict_state_counts['count'][i] = int(sum(df['Country']==s))
    dict_s...
sfn/.ipynb_checkpoints/Poster viewer distribution by state-checkpoint.ipynb
srcole/qwm
mit
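The per-country loop above can also be expressed with a pandas groupby. This is a minimal sketch with made-up data; the column names 'Country' and 'viewers' are assumptions standing in for the notebook's actual columns.

```python
import pandas as pd

# Hypothetical miniature of the poster dataset: one row per poster,
# with its country and a viewer count.
df = pd.DataFrame({'Country': ['US', 'US', 'CA', 'CA', 'CA'],
                   'viewers': [3, 1, 2, 0, 4]})

# Count posters and average popularity per country in one groupby
df_counts = df.groupby('Country').agg(count=('viewers', 'size'),
                                      popularity=('viewers', 'mean'))
print(df_counts)
```

The named-aggregation form keeps the summary table self-describing, instead of filling preallocated arrays inside a loop.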
2. Poster popularity vs. prevalence Across states in the United States, we found a positive correlation between the number of posters from a state and the popularity of those posters. We arguably see this again across countries, at a trend level of significance (one-tailed p-value = 0.06)
print(sp.stats.spearmanr(np.log10(df_counts['count']),df_counts['popularity'])) plt.figure(figsize=(3,3)) plt.semilogx(df_counts['count'],df_counts['popularity'],'k.') plt.xlabel('Number of posters\nin the state') plt.ylabel('Average number of viewers per poster') plt.ylim((-.1,3.6)) plt.xlim((.9,1000))
sfn/.ipynb_checkpoints/Poster viewer distribution by state-checkpoint.ipynb
srcole/qwm
mit
3. Permutation tests: difference in popularity across countries In this code, we test whether the relative popularity / unpopularity observed for any country is outside what is expected by chance. Here, the most popular and least popular countries are defined by a nonparametric statistical test between the number of viewers at...
# Simulate randomized data Nperm = 100 N_posters = len(df) rand_statepop = np.zeros((Nperm,len(states)),dtype=np.ndarray) rand_statepopmean = np.zeros((Nperm,len(states))) for i in range(Nperm): # Random permutation of posters, organized by state randperm_viewers = np.random.permutation(df[key_N].values) fo...
sfn/.ipynb_checkpoints/Poster viewer distribution by state-checkpoint.ipynb
srcole/qwm
mit
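The permutation logic described above can be sketched in a self-contained form: shuffle the viewer counts across posters, recompute a group's mean each time, and compare the observed mean against that null distribution. The data here are synthetic and the group labels hypothetical; only the shuffling scheme mirrors the notebook's approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 5 posters from group 'A', 15 from group 'B'
groups = np.array(['A'] * 5 + ['B'] * 15)
viewers = np.concatenate([rng.poisson(5, 5),    # group A viewer counts
                          rng.poisson(2, 15)])  # group B viewer counts

observed = viewers[groups == 'A'].mean()

# Build the null distribution by permuting viewer counts across posters
n_perm = 10000
null_means = np.empty(n_perm)
for i in range(n_perm):
    shuffled = rng.permutation(viewers)
    null_means[i] = shuffled[groups == 'A'].mean()

# One-tailed p-value: fraction of permutations at least as extreme
p_value = (null_means >= observed).mean()
print(p_value)
```

Permuting the raw viewer counts (rather than resampling them) preserves the overall distribution of viewers while breaking any association with country.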
How well are we predicting?
affair_mod.pred_table()
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
The coefficients of the discrete choice model do not tell us much. What we're after is marginal effects.
mfx = affair_mod.get_margeff() print(mfx.summary()) respondent1000 = dta.iloc[1000] print(respondent1000) resp = dict(zip(range(1,9), respondent1000[["occupation", "educ", "occupation_husb", "rate_marriage", "age", "yrs_married", ...
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
predict expects a DataFrame since patsy is used to select columns.
respondent1000 = dta.iloc[[1000]] affair_mod.predict(respondent1000) affair_mod.fittedvalues[1000] affair_mod.model.cdf(affair_mod.fittedvalues[1000])
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
The "correct" model here is likely the Tobit model. We have a work-in-progress branch "tobit-model" on GitHub, if anyone is interested in censored regression models. Exercise: Logit vs Probit
fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) support = np.linspace(-6, 6, 1000) ax.plot(support, stats.logistic.cdf(support), 'r-', label='Logistic') ax.plot(support, stats.norm.cdf(support), label='Probit') ax.legend(); fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) support = np.linspace(-6,...
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Compare the estimates of the Logit Fair model above to a Probit model. Does the prediction table look better? Much difference in marginal effects? Generalized Linear Model Example
print(sm.datasets.star98.SOURCE) print(sm.datasets.star98.DESCRLONG) print(sm.datasets.star98.NOTE) dta = sm.datasets.star98.load_pandas().data print(dta.columns) print(dta[['NABOVE', 'NBELOW', 'LOWINC', 'PERASIAN', 'PERBLACK', 'PERHISP', 'PERMINTE']].head(10)) print(dta[['AVYRSEXP', 'AVSALK', 'PERSPENK', 'PTRATIO...
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Aside: Binomial distribution Toss a six-sided die 5 times, what's the probability of exactly 2 fours?
stats.binom(5, 1./6).pmf(2) from scipy.special import comb comb(5,2) * (1/6.)**2 * (5/6.)**3 from statsmodels.formula.api import glm glm_mod = glm(formula, dta, family=sm.families.Binomial()).fit() print(glm_mod.summary())
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
The number of trials
glm_mod.model.data.orig_endog.sum(1) glm_mod.fittedvalues * glm_mod.model.data.orig_endog.sum(1)
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
First differences: We hold all explanatory variables constant at their means and manipulate the percentage of low income households to assess its impact on the response variables:
exog = glm_mod.model.data.orig_exog # get the dataframe means25 = exog.mean() print(means25) means25['LOWINC'] = exog['LOWINC'].quantile(.25) print(means25) means75 = exog.mean() means75['LOWINC'] = exog['LOWINC'].quantile(.75) print(means75)
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Again, predict expects a DataFrame since patsy is used to select columns.
resp25 = glm_mod.predict(pd.DataFrame(means25).T) resp75 = glm_mod.predict(pd.DataFrame(means75).T) diff = resp75 - resp25
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
The interquartile first difference for the percentage of low income households in a school district is:
print("%2.4f%%" % (diff[0]*100)) nobs = glm_mod.nobs y = glm_mod.model.endog yhat = glm_mod.mu from statsmodels.graphics.api import abline_plot fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111, ylabel='Observed Values', xlabel='Fitted Values') ax.scatter(yhat, y) y_vs_yhat = sm.OLS(y, sm.add_constant(yhat, p...
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Plot fitted values vs Pearson residuals Pearson residuals are defined to be $$\frac{y - \mu}{\sqrt{\operatorname{var}(\mu)}}$$ where var is typically determined by the family. E.g., binomial variance is $np(1 - p)$
fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111, title='Residual Dependence Plot', xlabel='Fitted Values', ylabel='Pearson Residuals') ax.scatter(yhat, stats.zscore(glm_mod.resid_pearson)) ax.axis('tight') ax.plot([0.0, 1.0],[0.0, 0.0], 'k-');
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
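The Pearson residual formula can be checked by hand. This is a sketch with made-up numbers, not values from the fitted model: for a binomial family with n trials and success probability p, the mean is n*p and the variance n*p*(1-p).

```python
import numpy as np

# Hypothetical binomial observation: 50 trials, fitted probability 0.3
n_trials = 50
p_hat = 0.3
mu = n_trials * p_hat                    # fitted mean count = 15
var = n_trials * p_hat * (1 - p_hat)     # binomial variance = 10.5

y = 20                                   # observed success count
pearson_resid = (y - mu) / np.sqrt(var)  # (y - mu) / sqrt(var(mu))
print(round(pearson_resid, 3))
```

An observed count of 20 against a fitted mean of 15 lands about 1.5 Pearson-standardized units above the fit.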
Histogram of standardized deviance residuals with Kernel Density Estimate overlaid The definition of the deviance residuals depends on the family. For the Binomial distribution this is $$r_{dev} = \operatorname{sign}\left(Y-\mu\right)\sqrt{2n\left(Y\log\frac{Y}{\mu}+(1-Y)\log\frac{1-Y}{1-\mu}\right)}$$ They can be used to detect ill-fitting...
resid = glm_mod.resid_deviance resid_std = stats.zscore(resid) kde_resid = sm.nonparametric.KDEUnivariate(resid_std) kde_resid.fit() fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111, title="Standardized Deviance Residuals") ax.hist(resid_std, bins=25, density=True); ax.plot(kde_resid.support, kde_resid.densit...
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
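The binomial deviance-residual formula can likewise be evaluated directly. This is a sketch with hypothetical proportions, not a reimplementation of statsmodels' resid_deviance: Y is the observed success proportion, mu the fitted proportion, and n the number of trials.

```python
import numpy as np

def binom_deviance_resid(Y, mu, n):
    # sign(Y - mu) * sqrt(2n * (Y log(Y/mu) + (1-Y) log((1-Y)/(1-mu))))
    dev = 2 * n * (Y * np.log(Y / mu) + (1 - Y) * np.log((1 - Y) / (1 - mu)))
    return np.sign(Y - mu) * np.sqrt(dev)

# Hypothetical case: observed proportion 0.4, fitted 0.3, 50 trials
r = binom_deviance_resid(Y=0.4, mu=0.3, n=50)
print(round(r, 3))
```

The sign factor keeps the residual positive when the observation exceeds the fit, matching the direction of an ordinary residual.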
QQ-plot of deviance residuals
fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) fig = sm.graphics.qqplot(resid, line='r', ax=ax)
v0.12.2/examples/notebooks/generated/discrete_choice_example.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Import the Move, Recorder and Player
# Import everything you need for recording, playing, saving, and loading Moves # Move: object used to represent a movement # MoveRecorder: object used to record a Move # MovePlayer: object used to play (and re-play) a Move from pypot.primitive.move import Move, MoveRecorder, MovePlayer
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
Create a Recorder for the robot Poppy
record_frequency = 50.0 # This means that a new position will be recorded 50 times per second. recorded_motors = [poppy.m4, poppy.m5, poppy.m6] # We will record the position of the 3 last motors of the Ergo # You can also use alias for the recorded_motors # e.g. recorder = MoveRecorder(poppy, record_frequency, poppy.t...
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
Start the recording First, turn the recorded motors compliant, so you can freely move them:
for m in recorded_motors: m.compliant = True
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
Start the recording when you are ready!
recorder.start()
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
Stop the recording Stop it when you are done demonstrating the movement.
recorder.stop()
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
Turn the compliance back off.
for m in recorded_motors: m.compliant = False
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0
Get the recorded Move and store it on disk Save the recorded move to a text file named 'mymove.json'.
recorded_move = recorder.move with open('mymove.json', 'w') as f: recorded_move.save(f)
samples/notebooks/Record, Save, and Play Moves on a Poppy Creature.ipynb
poppy-project/pypot
gpl-3.0