Questions

Could this be a CWL compiler? Will it take a root document and return the whole structure? Can I find the dockerRequirement anywhere in the doc? Can I find the dockerRequirement using the schema?

1. CWL Docker compiler: what does that mean? Abstractly, that it would read an input document, look for all Docker requirements and hints, pull the images, and then write a shell script to reload everything.

2. Can it take a root document and return the whole structure?
workflow = parse('/Users/dcl9/Code/python/mmap-cwl/mmap.cwl')
cwl-freezing.ipynb
Duke-GCB/cwl-freezer
mit
Yes, that works
# This function will find dockerImageId anywhere in the tree
def find_key(d, key, path=[]):
    if isinstance(d, list):
        for i, v in enumerate(d):
            for f in find_key(v, key, path + [str(i)]):
                yield f
    elif isinstance(d, dict):
        if key in d:
            pathstring = '/'.join(path + [key])
            yield pathstring
        for k, v in d.items():
            for f in find_key(v, key, path + [k]):
                yield f

# Could adapt to find class: DockerRequirement instead
for x in find_key(workflow, 'dockerImageId'):
    print x, dpath.util.get(workflow, x)

dpath.util.get(workflow, 'steps/0/run/steps/0/run/hints/0')
extract docker image names
def image_names(workflow):
    image_ids = []
    for x in find_key(workflow, 'dockerImageId'):
        image_id = dpath.util.get(workflow, x)
        if image_id not in image_ids:
            image_ids.append(image_id)
    return image_ids

image_names(workflow)

import docker

def docker_hashes(image_ids):
    for name in image_ids:
        print name

docker_hashes(image_names(workflow))
Docker IO Query docker for the sha of the docker image id
%%sh
eval $(docker-machine env default)

import docker_io
images = get_image_metadata(client, 'dukegcb/xgenovo')
for img in images:
    write_image(client, img, '/tmp/images')
Now we import some modules we use and add PyPhysim to the Python path.
import sys
sys.path.append("/home/darlan/cvs_files/pyphysim")

# xxxxxxxxxx Import Statements xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
from pyphysim.simulations.core import SimulationRunner, SimulationParameters, SimulationResults, Result
from pyphysim.comm import modulators, channels
from pyphysim.util.conversion import dB2Linear
from pyphysim.util import misc
# from pyphysim.ia import ia
import numpy as np
from pprint import pprint
from matplotlib import pyplot
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
apps/ia/IA Results 2x2(1).ipynb
darcamo/pyphysim
gpl-2.0
Now we set the transmit parameters and load the simulation results from the file corresponding to those transmit parameters.
# xxxxx Parameters xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# params = SimulationParameters.load_from_config_file('ia_config_file.txt')
K = 3
Nr = 2
Nt = 2
Ns = 1
M = 4
modulator = "PSK"
# max_iterations = np.r_[5:121:5]
max_iterations_string = "[5_(5)_120]"
# misc.replace_dict_values("{max_iterations}", {"max_iterations": max_iterations})
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# xxxxx Results base name xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
base_name = ("results_{M}-{modulator}_{Nr}x{Nt}_({Ns})_MaxIter_{max_iterations}"
             .format(M=M, modulator=modulator, Nr=Nr, Nt=Nt, Ns=Ns,
                     max_iterations=max_iterations_string))
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
alt_min_results_2x2_1 = SimulationResults.load_from_file(
    'ia_alt_min_{0}.pickle'.format(base_name))
max_sinrn_results_2x2_1 = SimulationResults.load_from_file(
    "ia_max_sinr_{0}_['random'].pickle".format(base_name))
mmse_CF_init_results_2x2_1 = SimulationResults.load_from_file(
    "ia_mmse_{0}_['random'].pickle".format(base_name))
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Let's define helper methods to get the mean number of IA iterations from a simulation results object.
# Helper function to get the number of repetitions for a given set of transmit parameters
def get_num_runned_reps(sim_results_object, fixed_params=dict()):
    all_runned_reps = np.array(sim_results_object.runned_reps)
    indexes = sim_results_object.params.get_pack_indexes(fixed_params)
    return all_runned_reps[indexes]

# Helper function to get the mean number of IA iterations for a given set of transmit parameters
def get_num_mean_ia_iterations(sim_results_object, fixed_params=dict()):
    return sim_results_object.get_result_values_list('ia_runned_iterations', fixed_params)
Get the SNR values from the simulation parameters object.
SNR_alt_min = np.array(alt_min_results_2x2_1.params['SNR'])
SNR_max_SINR = np.array(max_sinrn_results_2x2_1.params['SNR'])
# SNR_min_leakage = np.array(min_leakage_results.params['SNR'])
SNR_mmse = np.array(mmse_CF_init_results_2x2_1.params['SNR'])
Define a function that we can call to plot the BER. This function will plot the BER over all SNR values for the three IA algorithms, given the desired "max_iterations" parameter value.
def plot_ber(alt_min_results, max_sinrn_results, mmse_results, max_iterations, ax=None):
    # Alt. Min. Algorithm
    ber_alt_min = alt_min_results.get_result_values_list(
        'ber', fixed_params={'max_iterations': max_iterations})
    ber_CF_alt_min = alt_min_results.get_result_values_confidence_intervals(
        'ber', P=95, fixed_params={'max_iterations': max_iterations})
    ber_errors_alt_min = np.abs([i[1] - i[0] for i in ber_CF_alt_min])

    # Max SINR Algorithm
    ber_max_sinr = max_sinrn_results.get_result_values_list(
        'ber', fixed_params={'max_iterations': max_iterations})
    ber_CF_max_sinr = max_sinrn_results.get_result_values_confidence_intervals(
        'ber', P=95, fixed_params={'max_iterations': max_iterations})
    ber_errors_max_sinr = np.abs([i[1] - i[0] for i in ber_CF_max_sinr])

    # MMSE Algorithm
    ber_mmse = mmse_results.get_result_values_list(
        'ber', fixed_params={'max_iterations': max_iterations})
    ber_CF_mmse = mmse_results.get_result_values_confidence_intervals(
        'ber', P=95, fixed_params={'max_iterations': max_iterations})
    ber_errors_mmse = np.abs([i[1] - i[0] for i in ber_CF_mmse])

    if ax is None:
        fig, ax = pyplot.subplots(nrows=1, ncols=1)

    ax.errorbar(SNR_alt_min, ber_alt_min, ber_errors_alt_min,
                fmt='-r*', elinewidth=2.0, label='Alt. Min.')
    ax.errorbar(SNR_max_SINR, ber_max_sinr, ber_errors_max_sinr,
                fmt='-g*', elinewidth=2.0, label='Max SINR')
    ax.errorbar(SNR_mmse, ber_mmse, ber_errors_mmse,
                fmt='-m*', elinewidth=2.0, label='MMSE')

    ax.set_xlabel('SNR')
    ax.set_ylabel('BER')
    title = ('BER for Different Algorithms ({max_iterations} Max Iterations)\n'
             'K={K}, Nr={Nr}, Nt={Nt}, Ns={Ns}, {M}-{modulator}').replace(
                 "{max_iterations}", str(max_iterations))
    ax.set_title(title.format(**alt_min_results.params.parameters))
    ax.set_yscale('log')
    leg = ax.legend(fancybox=True, shadow=True, loc='lower left',
                    bbox_to_anchor=(0.01, 0.01), ncol=4)
    ax.grid(True, which='both', axis='both')

    # Let's plot the mean number of IA iterations on a second y axis
    ax2 = ax.twinx()
    mean_alt_min_ia_iterations = get_num_mean_ia_iterations(
        alt_min_results, {'max_iterations': max_iterations})
    mean_max_sinrn_ia_iterations = get_num_mean_ia_iterations(
        max_sinrn_results, {'max_iterations': max_iterations})
    mean_mmse_ia_iterations = get_num_mean_ia_iterations(
        mmse_results, {'max_iterations': max_iterations})
    ax2.plot(SNR_alt_min, mean_alt_min_ia_iterations, '--r*')
    ax2.plot(SNR_max_SINR, mean_max_sinrn_ia_iterations, '--g*')
    ax2.plot(SNR_mmse, mean_mmse_ia_iterations, '--m*')

    # Horizontal line with the max allowed IA iterations
    ax2.hlines(max_iterations, SNR_alt_min[0], SNR_alt_min[-1], linestyles='dashed')
    ax2.set_ylim(0, max_iterations * 1.1)
    ax2.set_ylabel('IA Mean Iterations')

    # Set the X axis limits
    ax.set_xlim(SNR_alt_min[0], SNR_alt_min[-1])
    # Set the Y axis limits
    ax.set_ylim(1e-6, 1)
Plot the BER

We can create 2x2 grids of plots and call the plot_ber function to plot in each subplot.
fig, ax = pyplot.subplots(2, 2, figsize=(20, 15))
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 5, ax[0, 0])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 10, ax[0, 1])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 15, ax[1, 0])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 20, ax[1, 1])

fig, ax = pyplot.subplots(2, 2, figsize=(20, 15))
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 25, ax[0, 0])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 30, ax[0, 1])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 35, ax[1, 0])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 40, ax[1, 1])

fig, ax = pyplot.subplots(2, 2, figsize=(20, 15))
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 45, ax[0, 0])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 50, ax[0, 1])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 55, ax[1, 0])
plot_ber(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 60, ax[1, 1])
Plot the Capacity
def plot_capacity(alt_min_results, max_sinrn_results, mmse_results, max_iterations, ax=None):
    # xxxxx Plot Sum Capacity (all) xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    sum_capacity_alt_min = alt_min_results.get_result_values_list(
        'sum_capacity', fixed_params={'max_iterations': max_iterations})
    sum_capacity_CF_alt_min = alt_min_results.get_result_values_confidence_intervals(
        'sum_capacity', P=95, fixed_params={'max_iterations': max_iterations})
    sum_capacity_errors_alt_min = np.abs([i[1] - i[0] for i in sum_capacity_CF_alt_min])

    # sum_capacity_closed_form = closed_form_results.get_result_values_list(
    #     'sum_capacity', fixed_params={'max_iterations': max_iterations})
    # sum_capacity_CF_closed_form = closed_form_results.get_result_values_confidence_intervals(
    #     'sum_capacity', P=95, fixed_params={'max_iterations': max_iterations})
    # sum_capacity_errors_closed_form = np.abs([i[1] - i[0] for i in sum_capacity_CF_closed_form])

    sum_capacity_max_sinr = max_sinrn_results.get_result_values_list(
        'sum_capacity', fixed_params={'max_iterations': max_iterations})
    sum_capacity_CF_max_sinr = max_sinrn_results.get_result_values_confidence_intervals(
        'sum_capacity', P=95, fixed_params={'max_iterations': max_iterations})
    sum_capacity_errors_max_sinr = np.abs([i[1] - i[0] for i in sum_capacity_CF_max_sinr])

    # sum_capacity_min_leakage = min_leakage_results.get_result_values_list('sum_capacity')
    # sum_capacity_CF_min_leakage = min_leakage_results.get_result_values_confidence_intervals('sum_capacity', P=95)
    # sum_capacity_errors_min_leakage = np.abs([i[1] - i[0] for i in sum_capacity_CF_min_leakage])

    sum_capacity_mmse = mmse_results.get_result_values_list(
        'sum_capacity', fixed_params={'max_iterations': max_iterations})
    sum_capacity_CF_mmse = mmse_results.get_result_values_confidence_intervals(
        'sum_capacity', P=95, fixed_params={'max_iterations': max_iterations})
    sum_capacity_errors_mmse = np.abs([i[1] - i[0] for i in sum_capacity_CF_mmse])

    if ax is None:
        fig, ax = pyplot.subplots(nrows=1, ncols=1)

    # xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    ax.errorbar(SNR_alt_min, sum_capacity_alt_min, sum_capacity_errors_alt_min,
                fmt='-r*', elinewidth=2.0, label='Alt. Min.')
    # ax.errorbar(SNR_closed_form, sum_capacity_closed_form, sum_capacity_errors_closed_form,
    #             fmt='-b*', elinewidth=2.0, label='Closed Form')
    ax.errorbar(SNR_max_SINR, sum_capacity_max_sinr, sum_capacity_errors_max_sinr,
                fmt='-g*', elinewidth=2.0, label='Max SINR')
    # ax.errorbar(SNR, sum_capacity_min_leakage, sum_capacity_errors_min_leakage,
    #             fmt='-k*', elinewidth=2.0, label='Min Leakage')
    ax.errorbar(SNR_mmse, sum_capacity_mmse, sum_capacity_errors_mmse,
                fmt='-m*', elinewidth=2.0, label='MMSE')
    # xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    ax.set_xlabel('SNR')
    ax.set_ylabel('Sum Capacity')
    title = ('Sum Capacity for Different Algorithms ({max_iterations} Max Iterations)\n'
             'K={K}, Nr={Nr}, Nt={Nt}, Ns={Ns}, {M}-{modulator}').replace(
                 "{max_iterations}", str(max_iterations))
    ax.set_title(title.format(**alt_min_results.params.parameters))
    # leg = ax.legend(fancybox=True, shadow=True, loc=2)
    leg = ax.legend(fancybox=True, shadow=True, loc='lower right',
                    bbox_to_anchor=(0.99, 0.01), ncol=4)
    ax.grid(True, which='both', axis='both')

    # Let's plot the mean number of IA iterations on a second y axis
    ax2 = ax.twinx()
    mean_alt_min_ia_iterations = get_num_mean_ia_iterations(
        alt_min_results, {'max_iterations': max_iterations})
    mean_max_sinrn_ia_iterations = get_num_mean_ia_iterations(
        max_sinrn_results, {'max_iterations': max_iterations})
    mean_mmse_ia_iterations = get_num_mean_ia_iterations(
        mmse_results, {'max_iterations': max_iterations})
    ax2.plot(SNR_alt_min, mean_alt_min_ia_iterations, '--r*')
    ax2.plot(SNR_max_SINR, mean_max_sinrn_ia_iterations, '--g*')
    ax2.plot(SNR_mmse, mean_mmse_ia_iterations, '--m*')

    # Horizontal line with the max allowed IA iterations
    ax2.hlines(max_iterations, SNR_alt_min[0], SNR_alt_min[-1], linestyles='dashed')
    ax2.set_ylim(0, max_iterations * 1.1)
    ax2.set_ylabel('IA Mean Iterations')

    # Set the X axis limits
    ax.set_xlim(SNR_alt_min[0], SNR_alt_min[-1])
    # Set the Y axis limits
    # ax.set_ylim(1e-6, 1)

fig, ax = pyplot.subplots(2, 2, figsize=(20, 15))
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 5, ax[0, 0])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 10, ax[0, 1])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 15, ax[1, 0])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 20, ax[1, 1])

fig, ax = pyplot.subplots(2, 2, figsize=(20, 15))
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 25, ax[0, 0])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 30, ax[0, 1])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 35, ax[1, 0])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 40, ax[1, 1])

fig, ax = pyplot.subplots(2, 2, figsize=(20, 15))
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 45, ax[0, 0])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 50, ax[0, 1])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 55, ax[1, 0])
plot_capacity(alt_min_results_2x2_1, max_sinrn_results_2x2_1, mmse_CF_init_results_2x2_1, 60, ax[1, 1])
Analytic methods

If we know the parameters of the sampling distribution, we can compute confidence intervals and p-values analytically, which is computationally faster than resampling.
import scipy.stats

def EvalNormalCdfInverse(p, mu=0, sigma=1):
    return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)
code/chap14ex.ipynb
AllenDowney/ThinkStats2
gpl-3.0
Here's the confidence interval for the estimated mean.
EvalNormalCdfInverse(0.05, mu=90, sigma=2.5)
EvalNormalCdfInverse(0.95, mu=90, sigma=2.5)
normal.py provides a Normal class that encapsulates what we know about arithmetic operations on normal distributions.
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/normal.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/hypothesis.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/first.py")

from normal import Normal

dist = Normal(90, 7.5**2)
dist
We can use it to compute the sampling distribution of the mean.
dist_xbar = dist.Sum(9) / 9
dist_xbar.sigma
And then compute a confidence interval.
dist_xbar.Percentile(5), dist_xbar.Percentile(95)
Central Limit Theorem

If you add up independent variates from a distribution with finite mean and variance, the sum converges on a normal distribution. The following function generates samples with different sizes from an exponential distribution.
def MakeExpoSamples(beta=2.0, iters=1000):
    """Generates samples from an exponential distribution.

    beta: parameter
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(np.random.exponential(beta, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples
This function generates normal probability plots for samples with various sizes.
def NormalPlotSamples(samples, plot=1, ylabel=""):
    """Makes normal probability plots for samples.

    samples: list of (n, sample) pairs
    plot: index of the first subplot
    ylabel: string
    """
    for n, sample in samples:
        thinkplot.SubPlot(plot)
        thinkstats2.NormalProbabilityPlot(sample)
        thinkplot.Config(
            title="n=%d" % n,
            legend=False,
            xticks=[],
            yticks=[],
            xlabel="random normal variate",
            ylabel=ylabel,
        )
        plot += 1
The following plot shows how the sum of exponential variates converges to normal as sample size increases.
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeExpoSamples()
NormalPlotSamples(samples, plot=1, ylabel="sum of expo values")
The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.
def MakeLognormalSamples(mu=1.0, sigma=1.0, iters=1000):
    """Generates samples from a lognormal distribution.

    mu: parameter
    sigma: parameter
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(np.random.lognormal(mu, sigma, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples

thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeLognormalSamples()
NormalPlotSamples(samples, ylabel="sum of lognormal values")
The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.
def MakeParetoSamples(alpha=1.0, iters=1000):
    """Generates samples from a Pareto distribution.

    alpha: parameter
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(np.random.pareto(alpha, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples

thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeParetoSamples()
NormalPlotSamples(samples, ylabel="sum of Pareto values")
If the random variates are correlated, that also violates the CLT, so the sums don't generally converge. To generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.
import random

def GenerateCorrelated(rho, n):
    """Generates a sequence of correlated values from a standard normal dist.

    rho: coefficient of correlation
    n: length of sequence

    returns: iterator
    """
    x = random.gauss(0, 1)
    yield x

    sigma = np.sqrt(1 - rho**2)
    for _ in range(n - 1):
        x = random.gauss(x * rho, sigma)
        yield x

def GenerateExpoCorrelated(rho, n):
    """Generates a sequence of correlated values from an exponential dist.

    rho: coefficient of correlation
    n: length of sequence

    returns: NumPy array
    """
    normal = list(GenerateCorrelated(rho, n))
    uniform = scipy.stats.norm.cdf(normal)
    expo = scipy.stats.expon.ppf(uniform)
    return expo

def MakeCorrelatedSamples(rho=0.9, iters=1000):
    """Generates samples from a correlated exponential distribution.

    rho: correlation
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(GenerateExpoCorrelated(rho, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples

thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeCorrelatedSamples()
NormalPlotSamples(samples, ylabel="sum of correlated exponential values")
Difference in means

Let's use analytic methods to compute a CI and p-value for an observed difference in means. The distribution of pregnancy length is not normal, but it has finite mean and variance, so the sum (or mean) of a few thousand samples is very close to normal.
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz")

import first

live, firsts, others = first.MakeFrames()
delta = firsts.prglngth.mean() - others.prglngth.mean()
delta
The following function computes the sampling distribution of the mean for a set of values and a given sample size.
def SamplingDistMean(data, n):
    """Computes the sampling distribution of the mean.

    data: sequence of values representing the population
    n: sample size

    returns: Normal object
    """
    mean, var = data.mean(), data.var()
    dist = Normal(mean, var)
    return dist.Sum(n) / n
Here are the sampling distributions for the means of the two groups under the null hypothesis.
dist1 = SamplingDistMean(live.prglngth, len(firsts))
dist2 = SamplingDistMean(live.prglngth, len(others))
And the sampling distribution for the difference in means.
dist_diff = dist1 - dist2
dist_diff
Under the null hypothesis, here's the chance of exceeding the observed difference.
1 - dist_diff.Prob(delta)
And the chance of falling below the negated difference.
dist_diff.Prob(-delta)
The sum of these probabilities is the two-sided p-value.

Testing a correlation

Under the null hypothesis (that there is no correlation), the sampling distribution of the observed correlation (suitably transformed) is a "Student t" distribution.
def StudentCdf(n):
    """Computes the CDF of correlations from uncorrelated variables.

    n: sample size

    returns: Cdf
    """
    ts = np.linspace(-3, 3, 101)
    ps = scipy.stats.t.cdf(ts, df=n - 2)
    rs = ts / np.sqrt(n - 2 + ts**2)
    return thinkstats2.Cdf(rs, ps)
The following is a HypothesisTest that uses permutation to estimate the sampling distribution of a correlation.
import hypothesis

class CorrelationPermute(hypothesis.CorrelationPermute):
    """Tests correlations by permutation."""

    def TestStatistic(self, data):
        """Computes the test statistic.

        data: tuple of xs and ys
        """
        xs, ys = data
        return np.corrcoef(xs, ys)[0][1]
Now we can estimate the sampling distribution by permutation and compare it to the Student t distribution.
def ResampleCorrelations(live):
    """Tests the correlation between birth weight and mother's age.

    live: DataFrame for live births

    returns: sample size, observed correlation, CDF of resampled correlations
    """
    live2 = live.dropna(subset=["agepreg", "totalwgt_lb"])
    data = live2.agepreg.values, live2.totalwgt_lb.values
    ht = CorrelationPermute(data)
    p_value = ht.PValue()
    return len(live2), ht.actual, ht.test_cdf

n, r, cdf = ResampleCorrelations(live)

model = StudentCdf(n)
thinkplot.Plot(model.xs, model.ps, color="gray", alpha=0.5, label="Student t")
thinkplot.Cdf(cdf, label="sample")
thinkplot.Config(xlabel="correlation", ylabel="CDF", legend=True, loc="lower right")
That confirms the analytic result. Now we can use the CDF of the Student t distribution to compute a p-value.
t = r * np.sqrt((n - 2) / (1 - r**2))
p_value = 1 - scipy.stats.t.cdf(t, df=n - 2)
print(r, p_value)
Chi-squared test The reason the chi-squared statistic is useful is that we can compute its distribution under the null hypothesis analytically.
def ChiSquaredCdf(n):
    """Discrete approximation of the chi-squared CDF with df=n-1.

    n: sample size

    returns: Cdf
    """
    xs = np.linspace(0, 25, 101)
    ps = scipy.stats.chi2.cdf(xs, df=n - 1)
    return thinkstats2.Cdf(xs, ps)
Again, we can confirm the analytic result by comparing values generated by simulation with the analytic distribution.
data = [8, 9, 19, 5, 8, 11]
dt = hypothesis.DiceChiTest(data)
p_value = dt.PValue(iters=1000)
n, chi2, cdf = len(data), dt.actual, dt.test_cdf

model = ChiSquaredCdf(n)
thinkplot.Plot(model.xs, model.ps, color="gray", alpha=0.3, label="chi squared")
thinkplot.Cdf(cdf, label="sample")
thinkplot.Config(xlabel="chi-squared statistic", ylabel="CDF", loc="lower right")
And then we can use the analytic distribution to compute p-values.
p_value = 1 - scipy.stats.chi2.cdf(chi2, df=n - 1)
print(chi2, p_value)
Note the structure of the list: it is a list of four-element tuples. You can check that the file loaded correctly in the following cell:
ultimo_tweet = tweets[-1]
print('id =>', ultimo_tweet[0])
print('fecha =>', ultimo_tweet[1])
print('autor =>', ultimo_tweet[2])
print('texto =>', ultimo_tweet[3])
notebooks-py3/superbowl.ipynb
vitojph/kschool-nlp
gpl-3.0
Down to work

From here you can run different kinds of analysis. Add as many cells as you need to try, for example:

- computing statistics over the collection: number of messages, message length, presence of hashtags and emojis, etc.
- number of user mentions, mention frequency, author frequency
- computing statistics about users: mentions, messages per user, etc.
- computing statistics about hashtags
- computing statistics about the URLs present in the messages
- computing statistics about the emojis and emoticons in the messages
- automatically extracting the named entities that appear in the messages and their frequency
- processing the messages to extract and analyze opinions: computing the subjectivity and polarity of the messages
- extracting the named entities that stir the most passion, who are the most loved and the most hated, according to message polarity
- checking whether the polarity of some entity changes radically as the game progresses
- anything else you can think of :-P
from textblob import TextBlob

for tweet in tweets:
    try:
        t = TextBlob(tweet[3])  # in Python 2: t = TextBlob(tweet[3].decode('utf-8'))
        if t.sentiment.polarity < -0.5:
            print(tweet[3], '-->', t.sentiment)
    except IndexError:
        pass

for tweet in tweets:
    try:
        t = TextBlob(tweet[3])  # in Python 2: t = TextBlob(tweet[3].decode('utf-8'))
        print(" ".join(t.noun_phrases))
    except IndexError:
        pass

for tweet in tweets[:20]:
    try:
        t = TextBlob(tweet[3])  # in Python 2: t = TextBlob(tweet[3].decode('utf-8'))
        print(t.translate(to='es'))
    except IndexError:
        pass
Basic save and read

To save the full system, use:
save_folder_full = '/tmp/testmrio/full'
io.save_all(path=save_folder_full)
doc/source/notebooks/load_save_export.ipynb
konstantinstadler/pymrio
gpl-3.0
To read again from that folder do:
io_read = pymrio.load_all(path=save_folder_full)
The file I/O activities are stored in the included metadata history field:
io_read.meta
Storage format

Internally, pymrio stores data in csv format, with the 'economic core' data in the root and each satellite account in a subfolder. Metadata, as well as a file describing the data format ('file_parameters.json'), are included in each folder.
import os
os.listdir(save_folder_full)
The file format for storing the MRIO data can be switched to a binary pickle format with:
save_folder_bin = '/tmp/testmrio/binary'
io.save_all(path=save_folder_bin, table_format='pkl')
os.listdir(save_folder_bin)
This can be used to reduce the storage space required on disk for large MRIO databases.

Archiving MRIO databases

To archive an MRIO system after saving, use pymrio.archive:
mrio_arc = '/tmp/testmrio/archive.zip'

# Remove a potentially existing archive from before
try:
    os.remove(mrio_arc)
except FileNotFoundError:
    pass

pymrio.archive(source=save_folder_full, archive=mrio_arc)
Data can be read directly from such an archive by:
tt = pymrio.load_all(mrio_arc)
Currently data cannot be saved directly into a zip archive. It is, however, possible to remove the source files after archiving:
tmp_save = '/tmp/testmrio/tmp'

# Remove a potentially existing archive from before
try:
    os.remove(mrio_arc)
except FileNotFoundError:
    pass

io.save_all(tmp_save)
print("Directories before archiving: {}".format(os.listdir('/tmp/testmrio')))
pymrio.archive(source=tmp_save, archive=mrio_arc, remove_source=True)
print("Directories after archiving: {}".format(os.listdir('/tmp/testmrio')))
Several MRIO databases can be stored in the same archive:
# Remove a potentially existing archive from before
try:
    os.remove(mrio_arc)
except FileNotFoundError:
    pass

tmp_save = '/tmp/testmrio/tmp'

io.save_all(tmp_save)
pymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version1/', remove_source=True)

io2 = io.copy()
del io2.emissions
io2.save_all(tmp_save)
pymrio.archive(source=tmp_save, archive=mrio_arc, path_in_arc='version2/', remove_source=True)
When loading from an archive which includes multiple MRIO databases, specify one with the parameter 'path_in_arc':
io1_load = pymrio.load_all(mrio_arc, path_in_arc='version1/')
io2_load = pymrio.load_all(mrio_arc, path_in_arc='version2/')

print("Extensions of the loaded io1 {ver1} and of io2: {ver2}".format(
    ver1=sorted(io1_load.get_extensions()),
    ver2=sorted(io2_load.get_extensions())))
The pymrio.load function can be used directly to load only a specific satellite account of an MRIO database from a zip archive:
emissions = pymrio.load(mrio_arc, path_in_arc='version1/emissions')
print(emissions)
The archive function is a wrapper around the python zipfile module. There are, however, some differences to the defaults chosen in the original:

- In contrast to zipfile.write, pymrio.archive raises an error if the data (path + filename) are identical in the zip archive. Background: the zip standard allows files with the same name and path to be stored side by side in a zip file. This becomes an issue when unpacking these files, as they overwrite each other upon extraction.
- The default for the parameter 'compression' is set to ZIP_DEFLATED. This is different from the zipfile default (ZIP_STORED), which would not give any compression. See the zipfile docs for further information.

Depending on the value given for the parameter 'compression', additional modules might be necessary (e.g. zlib for ZIP_DEFLATED). Further information on this can also be found in the zipfile python docs.

Storing or exporting a specific table or extension

Each extension of the MRIO system can be stored separately with:
save_folder_em= '/tmp/testmrio/emissions' io.emissions.save(path=save_folder_em)
doc/source/notebooks/load_save_export.ipynb
konstantinstadler/pymrio
gpl-3.0
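The duplicate-name behaviour that pymrio.archive guards against can be seen with the standard library alone. A minimal sketch (the file names are made up for illustration):

```python
import os
import tempfile
import warnings
import zipfile

# Plain zipfile happily stores two members with an identical path --
# this is exactly what pymrio.archive turns into an error.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, 'data.txt')
with open(src, 'w') as f:
    f.write('version 1')

arc = os.path.join(tmpdir, 'test.zip')
with warnings.catch_warnings():
    warnings.simplefilter('ignore')  # zipfile only warns about the duplicate name
    with zipfile.ZipFile(arc, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(src, arcname='data.txt')
        zf.write(src, arcname='data.txt')  # same path stored a second time

with zipfile.ZipFile(arc) as zf:
    names = zf.namelist()
print(names)  # two entries with the same name; only one survives extraction
```

Upon extraction the second member overwrites the first, which is why storing duplicates silently is dangerous.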
This can then be loaded again as separate satellite account:
emissions = pymrio.load(save_folder_em)
emissions
emissions.D_cba
doc/source/notebooks/load_save_export.ipynb
konstantinstadler/pymrio
gpl-3.0
As all data in pymrio is stored as pandas DataFrames, the full pandas stack for exporting tables is available. For example, to export a table as an Excel sheet use:
io.emissions.D_cba.to_excel('/tmp/testmrio/emission_footprints.xlsx')
doc/source/notebooks/load_save_export.ipynb
konstantinstadler/pymrio
gpl-3.0
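Other pandas exporters work the same way. A small sketch with a hypothetical DataFrame standing in for a pymrio table (to_csv shown here; to_excel behaves analogously but needs an Excel engine such as openpyxl installed):

```python
from io import StringIO

import pandas as pd

# A tiny stand-in table (made-up numbers, not real pymrio output)
footprints = pd.DataFrame(
    {'reg1': [1.5, 0.3], 'reg2': [2.0, 0.7]},
    index=['emission_type1', 'emission_type2'])

# Round-trip through CSV: any pandas I/O pair works the same way
buf = StringIO()
footprints.to_csv(buf)
buf.seek(0)
restored = pd.read_csv(buf, index_col=0)
print(restored.equals(footprints))
```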
Task 1 (a). Maximum irradiance
Since we are considering the laser beam of a pointer emitting in the visible range, we will use the time it takes to close the eyelid as the exposure time. With this exposure time, estimate from the graph the maximum irradiance the eye can receive. Write down the exposure time used and the corresponding irradiance value.
Exposure time (blink) = s
Maximum permissible irradiance = W/cm$^2$
Task 1 (b). Maximum power
We will assume that the beam reaching our eye is collimated, with a size equal to that of our pupil. Using that size, calculate the maximum power our eye can receive without causing damage. Write down the pupil size considered, the intermediate calculations and the final power result (in mW)
Pupil diameter or radius = mm
Intermediate calculations
Maximum permissible power = mW
Task 2. Choosing the laser pointer
Search the internet for information on a high-power visible laser pointer. Verify that this laser pointer can cause eye damage (taking into account the result of Task 1 (b))
Write here the technical specifications of that laser
power
wavelength
price
other characteristics
web page
http://www.ucm.es
Task 3. Choosing the interference filter
We will search the internet for a commercial interference filter that removes the risk of eye damage for the selected laser pointer. It will be a filter that blocks the wavelength of the laser pointer.
Task 3 (a). Searching for information on the interference filter
We will use the information available from the company Semrock ( http://www.semrock.com/filters.aspx )
Select a suitable filter on this web page. Click on each filter (on the transmittance curve, on the Part Number, or on Show Product Detail) to obtain more information.
Write here the most relevant characteristics of the selected filter:
transmittance T or optical density OD
wavelength range
price
web page of the selected filter (change the following address)
http://www.semrock.com/FilterDetails.aspx?id=LP02-224R-25
Task 3 (b). Verifying the filter
Using the transmittance (T) value at the wavelength of the laser pointer, check that this filter will prevent the risk of injury. To do so we will use the transmittance data of the selected filter that appear on the Semrock web page. To load these data into our notebook, follow these steps:
Click on ASCII Data on the web page of the selected filter, which is found in the legend of the figure (on the right).
Copy the address of the web page that opens (this page shows the experimental transmittance data)
Paste that address into the following code cell, after filename = (Note: make sure the address stays between the quotes)
The following code cell plots the filter transmittance on a logarithmic scale as a function of wavelength (in nm).
####
# Parameters to modify. START
####
filename = "http://www.semrock.com/_ProductData/Spectra/NF01-229_244_DesignSpectrum.txt"
# Parameters to modify. END
####

%pylab inline
data = genfromtxt(filename, dtype=float, skip_header=4)  # Load the data
longitud_de_onda = data[:, 0]
transmitancia = data[:, 1]
print("Datos cargados OK")

import plotly
py = plotly.plotly('ofii', 'i6jc6xsecb')
data = [{'x': longitud_de_onda, 'y': transmitancia}]
layout = {'title': 'Transmitancia Filtro Escogido', 'yaxis': {'type': 'log'}}
py.iplot(data, layout=layout, fileopt='overwrite')
Trabajo Filtro Interferencial/.ipynb_checkpoints/TrabajoFiltros-checkpoint.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
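As a sanity check for Task 3 (b), the power behind the filter is simply T·P. A sketch with assumed example numbers (hypothetical: a 200 mW pointer and a filter with optical density OD = 6 at the laser wavelength; replace them with your own values):

```python
# Hypothetical example values -- substitute your own pointer and filter data
P_in_mW = 200.0  # laser pointer power (assumed)
OD = 6.0         # filter optical density at the laser wavelength (assumed)

T = 10 ** (-OD)          # transmittance from optical density: T = 10**(-OD)
P_out_mW = T * P_in_mW   # power reaching the eye after the filter

print(P_out_mW)  # about 2e-4 mW, far below a ~1 mW permissible level
```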
Then, read the input tables from the datasets directory
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'

# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'

# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')

A.head()
B.head()
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
Block Using the Sorted Neighborhood Blocker
Once the tables are read, we can do blocking using the sorted neighborhood blocker. With the sorted neighborhood blocker, you can only block between two tables to produce a candidate set of tuple pairs.
Block Tables to Produce a Candidate Set of Tuple Pairs
# Instantiate sorted neighborhood blocker object
sn = em.SortedNeighborhoodBlocker()
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
For the given two tables, we apply sorted neighborhood blocking on birth_year. That is, the tuples of both tables are sorted by birth_year and only tuple pairs that fall within a sliding window over the sorted order become candidates.
# Use block_tables to apply blocking over two input tables.
C1 = sn.block_tables(A, B,
                     l_block_attr='birth_year', r_block_attr='birth_year',
                     l_output_attrs=['name', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     window_size=3)

# Display the candidate set of tuple pairs
C1.head()
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
Note that the tuple pairs in the candidate set have birth_year values that are close to each other in the sorted order. The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in the block_tables command (the key columns are included by default). Specifically, the list of attributes mentioned in l_output_attrs are picked from table A and the list of attributes mentioned in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on the l_output_prefix and r_output_prefix parameter values mentioned in the block_tables command.
# Show the metadata of C1
em.show_properties(C1)
id(A), id(B)
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
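The core sorted neighborhood idea can be sketched in plain Python (a simplified illustration, not the py_entitymatching implementation): concatenate both tables, sort by the blocking key, slide a window over the sorted order, and emit the cross-table pairs inside each window.

```python
def sorted_neighborhood(left, right, key, window_size):
    """Toy sorted neighborhood blocker over two lists of dicts."""
    # Tag each record with its table of origin, then sort everything by the key
    tagged = [('L', r) for r in left] + [('R', r) for r in right]
    tagged.sort(key=lambda t: t[1][key])

    pairs = set()
    for i, (side_i, rec_i) in enumerate(tagged):
        # Compare only with records inside the sliding window
        for side_j, rec_j in tagged[i + 1:i + window_size]:
            if side_i != side_j:  # keep only cross-table pairs
                l, r = (rec_i, rec_j) if side_i == 'L' else (rec_j, rec_i)
                pairs.add((l['ID'], r['ID']))
    return pairs

# Made-up toy records for illustration
A_toy = [{'ID': 'a1', 'birth_year': 1984}, {'ID': 'a2', 'birth_year': 1990}]
B_toy = [{'ID': 'b1', 'birth_year': 1985}, {'ID': 'b2', 'birth_year': 2001}]
cands = sorted_neighborhood(A_toy, B_toy, 'birth_year', window_size=2)
print(sorted(cands))
```

Records with nearby birth_year values end up adjacent after sorting, so only those pairs are emitted; distant pairs such as (a1, b2) never enter the window.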
Note that the metadata of C1 includes a key, foreign keys to the left and right tables (i.e. A and B) and pointers to the left and right tables.
Handling Missing Values
If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set the allow_missing parameter to True.
import numpy as np

# Introduce some missing values
A1 = em.read_csv_metadata(path_A, key='ID')
A1.loc[0, 'zipcode'] = np.nan
A1.loc[0, 'birth_year'] = np.nan
A1

# Use block_tables to apply blocking over two input tables.
C2 = sn.block_tables(A1, B,
                     l_block_attr='zipcode', r_block_attr='zipcode',
                     l_output_attrs=['name', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     allow_missing=True)  # setting allow_missing parameter to True

len(C1), len(C2)
C2
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
The candidate set C2 includes all possible tuple pairs with missing values.
Window Size
A tunable parameter of the Sorted Neighborhood Blocker is the window size. To repeat the blocking above with a larger window, pass a different window_size argument. Note that the result contains more pairs than C1.
C3 = sn.block_tables(A, B,
                     l_block_attr='birth_year', r_block_attr='birth_year',
                     l_output_attrs=['name', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     window_size=5)
len(C1)
len(C3)
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
Stable Sort Order
One final challenge for the Sorted Neighborhood Blocker is making the sort order stable. If the column being sorted on contains runs of identical keys longer than the window size, then different runs may produce different results. To guarantee the same results for every run, make the sorting column unique. One way to do so is to append the id of the tuple onto the end of the sorting column. Here is an example.
A["birth_year_plus_id"] = A["birth_year"].map(str) + '-' + A["ID"].map(str)
B["birth_year_plus_id"] = B["birth_year"].map(str) + '-' + B["ID"].map(str)

C3 = sn.block_tables(A, B,
                     l_block_attr='birth_year_plus_id', r_block_attr='birth_year_plus_id',
                     l_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],
                     r_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'],
                     l_output_prefix='l_', r_output_prefix='r_',
                     window_size=5)
C3.head()
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
anhaidgroup/py_entitymatching
bsd-3-clause
<h2>Exercise</h2> <ul><li>Write a comparable assignment for each of the data structures listed above</li></ul>
# tuple
a = ("a", 1)
# check
type(a)
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
<h2>Program control</h2> <p>Program control: conditional branching</p> <p>With if you can branch the program flow depending on the truth value of conditions. E.g.:</p>
a = True
b = False
if a == True:
    print("a is true")
else:
    print("a is not true")
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
With <b>for</b> you can loop over anything iterable. E.g.:
# chr(x) returns the character with the unicode value x
for c in range(80, 90):
    print(chr(c), end=" ")
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
<h2>Exercises</h2> <ul><li>Print all letters from A to z whose Unicode code is an even number.</li>
# 'A' is 65 and 'z' is 122, so iterate up to 123
for c in range(65, 123):
    if c % 2 == 0:
        print(chr(c))
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
<ul><li>Count how often the letter "a" occurs in the following sentence: "Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften."</li></ul>
s = "Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften."
count = 0
for i in s:
    if i == "a":
        count += 1
print(count)
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
<h2>Functions</h2> <p>Functions serve to modularize the program and reduce complexity. They enable the reuse of program code and simpler debugging.
# this function divides 2 numbers:
def div(a, b):
    return a / b

# test
div(6, 2)
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
<h2>Exercises</h2> <p>Write a function that counts the number of vowels in a string.</p>
a = "Hallo"

def count_vowels(s):
    result = 0
    for i in s:
        if i in "AEIOUaeiou":
            result += 1
    return result

count_vowels(a)

s = "hallo"
for i in s:
    print(i)
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
<h2>Reading and writing files</h2> <p>With open(file, mode='r', encoding=None) you can write or read files. </p> <p><b>modes:</b> <br/> "r" - read (default)<br/> "w" - write. Deletes existing contents<br/> "a" - append. Appends new contents.<br/> "t" - text (default) <br/> "b" - binary. <br/> "x" - exclusive. Opens write access to a file. Raises an error if the file already exists.</p> <p> <b>encoding</b> "utf-8"<br/> "ascii"<br/> "cp1252"<br/> "iso-8859-1"<br/></p>
import re

words = []
with open("goethe.txt", "r", encoding="utf-8") as fin:
    for line in fin:
        words.extend(re.findall(r"\w+", line))
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
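The write/read cycle can be sketched end to end; here a temporary file is used so nothing on disk is overwritten (file name and contents are made up):

```python
import os
import tempfile

# Write a small text, then read it back line by line
path = os.path.join(tempfile.mkdtemp(), "beispiel.txt")
with open(path, "w", encoding="utf-8") as fout:
    fout.write("erste Zeile\nzweite Zeile\n")

with open(path, "r", encoding="utf-8") as fin:
    lines = [line.rstrip("\n") for line in fin]

print(lines)  # ['erste Zeile', 'zweite Zeile']
```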
<h2>Exercise</h2> <p>Write this text into a file named "goethe.txt" (utf-8):<br/> <code> Johann Wolfgang von Goethe (* 28. August 1749 in Frankfurt am Main; † 22. März 1832 in Weimar), geadelt 1782, gilt als einer der bedeutendsten Repräsentanten deutschsprachiger Dichtung. Goethes literarische Produktion umfasst Lyrik, Dramen, erzählende Werke (in Vers und Prosa), autobiografische, kunst- und literaturtheoretische sowie naturwissenschaftliche Schriften. Daneben ist sein umfangreicher Briefwechsel von literarischer Bedeutung. Goethe war Vorbereiter und wichtigster Vertreter des Sturm und Drang. Sein Roman Die Leiden des jungen Werthers machte ihn in Europa berühmt. Gemeinsam mit Schiller, Herder und Wieland verkörpert er die Weimarer Klassik. Im Alter wurde er auch im Ausland als Repräsentant des geistigen Deutschlands angesehen. Am Hof von Weimar bekleidete er als Freund und Minister des Herzogs Carl August politische und administrative Ämter und leitete ein Vierteljahrhundert das Hoftheater. Im Deutschen Kaiserreich wurde er „zum Kronzeugen der nationalen Identität der Deutschen“[1] und als solcher für den deutschen Nationalismus vereinnahmt. Es setzte damit eine Verehrung nicht nur des Werks, sondern auch der Persönlichkeit des Dichters ein, dessen Lebensführung als vorbildlich empfunden wurde. Bis heute zählen Gedichte, Dramen und Romane von ihm zu den Meisterwerken der Weltliteratur.</code></p> <h2>Regular expressions</h2> <ul> <li>Character classes<br/>e.g. '.' (here and in the following without '') = any character</li> <li>Quantifiers<br/>e.g. '+' = 1 or arbitrarily many of the preceding character; 'ab+' matches 'ab', 'abb', 'abbbbb', but not 'abab'</li> <li>Positions<br/>e.g. '^' at the beginning of the line</li> <li>Miscellaneous<br/>groups (x), or '|', non-greedy: '?', escape character '\'</li> </ul> <p>Example. Task: Find all uppercase letters in a string s.</p>
import re

s = "Dies ist ein Beispiel."
re.findall(r"[A-ZÄÖÜ]", s)
re.findall(r"\w+", s)
Python_2_1.ipynb
fotis007/python_intermediate
gpl-3.0
Intro
Welcome! In this section you'll learn about the Sampler-class. Instances of Sampler can be used for flexible sampling of multivariate distributions. To begin with, Sampler gives rise to several building-block classes such as
- NumpySampler, or NS
- ScipySampler, or SS

What's more, Sampler incorporates a set of operations on Sampler-instances, among which are
- "|" for building a mixture of two samplers: s = s1 | s2
- "&" for setting a mixture-weight of a sampler: s = 0.6 & s1 | 0.4 & s2
- "truncate" for truncating the support of the underlying sampler's distribution: s.truncate(high=[1.0, 1.5])
- ..all arithmetic operations: s = s1 + s2 or s = s1 + 0.5

These operations can be used for combining building-block samplers into complex multivariate samplers, just like that:
from batchflow import NumpySampler as NS

# truncated normal and uniform
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)
ns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)
       .apply(lambda m: m.astype(int)) / 4 + (0, 3))

# a mixture of all four
ns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4

# take a look at the heatmap of our sampler:
h = np.histogramdd(ns.sample(int(1e6)), bins=100, density=True)
plt.imshow(h[0])
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
Building Samplers
1. Numpy, Scipy, TensorFlow - Samplers
To build a NumpySampler (NS) you need to specify the name of a distribution from numpy.random (or its alias) and the number of independent dimensions:
from batchflow import NumpySampler as NS

ns = NS('n', dim=2)
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
take a look at a sample generated by our sampler:
smp = ns.sample(size=200)
plt.scatter(*np.transpose(smp))
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
The same goes for ScipySampler based on scipy.stats-distributions, or SS ("mvn" stands for multivariate-normal):
from batchflow import ScipySampler as SS

# note also that you can pass the same params as in
# scipy.stats.multivariate_normal, such as `mean` and `cov`
ss = SS('mvn', mean=[0, 0], cov=[[2, 1], [1, 2]])
smp = ss.sample(2000)
plt.scatter(*np.transpose(smp))
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
2. HistoSampler as an estimate of a distribution generating a cloud of points
HistoSampler, or HS, can be used for building samplers whose underlying distribution is given by a histogram. You can either pass an np.histogram output into the initialization of HS
from batchflow import HistoSampler as HS

histo = np.histogramdd(ss.sample(1000000))
hs = HS(histo)
plt.scatter(*np.transpose(hs.sample(150)))
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
...or you can specify empty bins and estimate their weights using the method HS.update and a cloud of points:
hs = HS(edges=2 * [np.linspace(-4, 4)])
hs.update(ss.sample(1000000))
plt.imshow(hs.bins, interpolation='bilinear')
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
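Under the hood, sampling from a histogram amounts to picking a bin with probability proportional to its count and then drawing a uniform point inside that bin. A stdlib-only sketch of that idea (made-up bin counts, not the HistoSampler implementation):

```python
import random

def sample_from_histogram(counts, edges, size, rng=random.Random(0)):
    """Draw `size` 1-D samples from a histogram given bin counts and edges."""
    # pick bins weighted by their counts, then a uniform point inside each bin
    bins = rng.choices(range(len(counts)), weights=counts, k=size)
    return [rng.uniform(edges[b], edges[b + 1]) for b in bins]

counts = [1, 8, 1]               # the middle bin is 8x more likely
edges = [0.0, 1.0, 2.0, 3.0]
samples = sample_from_histogram(counts, edges, size=1000)
inside_middle = sum(1.0 <= x < 2.0 for x in samples)
print(inside_middle / len(samples))  # close to 0.8
```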
3. Algebra of Samplers; operations on Samplers
Sampler-instances support arithmetic operations (+, *, -, ...). Arithmetic works on either
* a (Sampler, Sampler) pair
* a (Sampler, array-like) pair
# blur using "+"
u = NS('u', dim=2)
noise = NS('n', dim=2)
blurred = u + noise * 0.2  # decrease the magnitude of the noise
both = blurred | u + (2, 2)
plt.imshow(np.histogramdd(both.sample(1000000), bins=100)[0])
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
You may also want to truncate a sampler's distribution so that sampled points belong to a specific region. The common use-case is to sample normal points inside a box ...or, inside a ring:
n = NS('n', dim=2).truncate(3, 0.3, expr=lambda m: np.sum(m**2, axis=1))
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
Not infrequently you need to obtain a "normal" sample in integers. For this you can use the Sampler.apply method:
n = (4 * NS('n', dim=2)).apply(lambda m: m.astype(int)).truncate([6, 6], [-6, -6])
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
Note that Sampler.apply-method allows you to add an arbitrary transformation to a sampler. For instance, Box-Muller transform:
bm = lambda vec2: np.sqrt(-2 * np.log(vec2[:, 0:1])) * np.concatenate(
    [np.cos(2 * np.pi * vec2[:, 1:2]),
     np.sin(2 * np.pi * vec2[:, 1:2])], axis=1)

n = NS('u', dim=2).apply(bm)
plt.imshow(np.histogramdd(n.sample(1000000), bins=100)[0])
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
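The Box-Muller transform itself can be checked without any sampler machinery. A quick stdlib sketch: with enough draws, the outputs should have mean close to 0 and variance close to 1.

```python
import math
import random

rng = random.Random(42)

def box_muller(u1, u2):
    """Map two uniform(0,1) draws to two independent standard normals."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

draws = []
for _ in range(50000):
    z1, z2 = box_muller(rng.random(), rng.random())
    draws.extend([z1, z2])

mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # close to 0 and 1
```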
Another useful thing is coordinate stacking ("&" stands for multiplication of distribution functions):
# initialize one-dimensional normal and uniform samplers
n, u = NS('n'), SS('u')
# stack them together
s = n & u
s.sample(3)
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
4. All together
ns1 = NS('n', dim=2).truncate(2.0, 0.8, lambda m: np.sum(np.abs(m), axis=1)) + 4
ns2 = 2 * NS('u', dim=2).truncate(1, expr=lambda m: np.sum(m, axis=1)) - (1, 1)
ns3 = NS('n', dim=2).truncate(1.5, expr=lambda m: np.sum(np.square(m), axis=1)) + (4, 0)
ns4 = ((NS('n', dim=2).truncate(2.5, expr=lambda m: np.sum(np.square(m), axis=1)) * 4)
       .apply(lambda m: m.astype(int)) / 4 + (0, 3))

ns = 0.4 & ns1 | 0.2 & ns2 | 0.39 & ns3 | 0.01 & ns4

plt.imshow(np.histogramdd(ns.sample(int(1e6)), bins=100, density=True)[0])
examples/tutorials/07_sampler.ipynb
analysiscenter/dataset
apache-2.0
1a) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
#What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
#mothers day in 2010 (fell on May 9)
import requests

response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2010-05-09/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')
data = response.json()

book_result = data['results']
#print(book_result)
print("The hardcover Fiction NYT best-sellers on mothers day in 2010 are:")
for i in book_result:
    #print(i['book_details'])
    for item in i['book_details']:
        print("-", item['title'])
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
1b) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
#mothers day in 2009
response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')
data = response.json()
#print(data)

print("The hardcover Fiction NYT best-sellers on mothers day in 2009 are:")
book_result = data['results']
#print(book_result)
for i in book_result:
    #print(i['book_details'])
    for item in i['book_details']:
        print("-", item['title'])
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
1c) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
#fathers day in 2010
response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2010-06-20/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')
data = response.json()
#print(data)

print("The hardcover Fiction NYT best-sellers on fathers day in 2010 are:")
book_result = data['results']
#print(book_result)
for i in book_result:
    #print(i['book_details'])
    for item in i['book_details']:
        print("-", item['title'])
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
1d) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
#fathers day in 2009
response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-06-21/hardcover-fiction.json?api-key=3880684abea14d86b6280c6dbd80a793')
data = response.json()
#print(data)

book_result = data['results']
print("The hardcover Fiction NYT best-sellers on fathers day in 2009 are:")
#print(book_result)
for i in book_result:
    #print(i['book_details'])
    for item in i['book_details']:
        print("-", item['title'])
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
2a) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
#What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?date=2009-06-06&api-key=3880684abea14d86b6280c6dbd80a793')
data = response.json()
#print(data)

#What are all the different book categories the NYT ranked in June 6, 2009
book_result = data['results']
print("The following are the different book categories the NYT ranked in June 6, 2009:")
#print(book_result)
for i in book_result:
    print("-", i['display_name'])
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
2b) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
#What are all the different book categories the NYT ranked in June 6, 2015?
response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?date=2015-06-06&api-key=3880684abea14d86b6280c6dbd80a793')
data = response.json()
#print(data)

book_result = data['results']
print("The following are the different book categories the NYT ranked in June 6, 2015:")
#print(book_result)
for i in book_result:
    print("-", i['display_name'])
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
3) Finding the Total Occurrences of Muammar Gaddafi
Muammar Gaddafi's name can be transliterated many, many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names? Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
Gadafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')
Gadafi_data = Gadafi_response.json()
Gadafi_data_result = Gadafi_data['response']['meta']['hits']
print("Gadafi appears", Gadafi_data_result, "times")

Gaddafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')
Gaddafi_data = Gaddafi_response.json()
Gaddafi_data_result = Gaddafi_data['response']['meta']['hits']
print("Gaddafi appears", Gaddafi_data_result, "times")

Kadafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')
Kadafi_data = Kadafi_response.json()
Kadafi_data_result = Kadafi_data['response']['meta']['hits']
print("Kadafi appears", Kadafi_data_result, "times")

Qaddafi_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=3880684abea14d86b6280c6dbd80a793')
Qaddafi_data = Qaddafi_response.json()
Qaddafi_data_result = Qaddafi_data['response']['meta']['hits']
print("Qaddafi appears", Qaddafi_data_result, "times")
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
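One way to avoid hand-writing each query string is to build the URLs with urllib.parse. A sketch (the 'YOUR-KEY' api-key value is a placeholder, not a real key; no request is sent here):

```python
from urllib.parse import urlencode

BASE = 'https://api.nytimes.com/svc/search/v2/articlesearch.json'

def search_url(query, api_key, **extra):
    """Build an Article Search URL from keyword parameters."""
    params = {'q': query, 'api-key': api_key}
    params.update(extra)  # e.g. fq, begin_date, end_date
    return BASE + '?' + urlencode(params)

for name in ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']:
    print(search_url(name, api_key='YOUR-KEY', fq='Libya'))
```

urlencode also takes care of percent-encoding, so quoted phrases and spaces in queries come out correctly.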
4a) Hipster
What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
# testing it for count
hipster_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19951231&sort=oldest&api-key=3880684abea14d86b6280c6dbd80a793')
hipster_data = hipster_response.json()
hipster_data_result = hipster_data['response']['meta']['hits']
print("hipster appears", hipster_data_result, "times")
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
4b) Hipster
What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
#print(hipster_data)
#print(hipster_data.keys())
hipster_resp = hipster_data['response']
hipster_resp = hipster_data['response']['docs']
for item in hipster_resp:
    print(item['headline']['main'], item['pub_date'])

#print("The word, hipster, appears for the first time in the following article: ", hipster_resp)
#hipster_para = hipster_data['response']['docs'][0]['lead_paragraph']
#print("The word, hipster, appears for the first time in the following paragraph:")
#print("-------------------------------------------------------------------------")
#print(hipster_para)

# TA-COMMENT: (-0.5) Missing the first paragraph of the first story
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
5) Gay Marriage
How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-1999, 2000-2009, and 2010-present? Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
gay50S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date=19500101&end_date=19591231&api-key=3880684abea14d86b6280c6dbd80a793')
gay50S_data = gay50S_response.json()
gay50S_data_result = gay50S_data['response']['meta']['hits']
print("Gay Marriage appears", gay50S_data_result, "times in the period 1950-1959")

gay60S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date=19600101&end_date=19691231&api-key=3880684abea14d86b6280c6dbd80a793')
gay60S_data = gay60S_response.json()
gay60S_data_result = gay60S_data['response']['meta']['hits']
print("Gay Marriage appears", gay60S_data_result, "times in the period 1960-1969")

# TA-COMMENT: Is there a way to do this programmatically without repeating yourself?

#1950
gay50S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19500101&end_date=19591231&api-key=3880684abea14d86b6280c6dbd80a793')
gay50S_data = gay50S_response.json()
gay50S_data_result = gay50S_data['response']['meta']['hits']
print("Gay Marriage appears", gay50S_data_result, "times in the period 1950-1959")

#1960
gay60S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19600101&end_date=19691231&api-key=3880684abea14d86b6280c6dbd80a793')
gay60S_data = gay60S_response.json()
gay60S_data_result = gay60S_data['response']['meta']['hits']
print("Gay Marriage appears", gay60S_data_result, "times in the period 1960-1969")

#1970
gay70S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19700101&end_date=19781231&api-key=3880684abea14d86b6280c6dbd80a793')
gay70S_data = gay70S_response.json()
gay70S_data_result = gay70S_data['response']['meta']['hits']
print("Gay Marriage appears", gay70S_data_result, "times in the period 1970-1978")

#1980
gay80S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19800101&end_date=19891231&api-key=3880684abea14d86b6280c6dbd80a793')
gay80S_data = gay80S_response.json()
gay80S_data_result = gay80S_data['response']['meta']['hits']
print("Gay Marriage appears", gay80S_data_result, "times in the period 1980-1989")

#1990
gay90S_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=19900101&end_date=19991231&api-key=3880684abea14d86b6280c6dbd80a793')
gay90S_data = gay90S_response.json()
gay90S_data_result = gay90S_data['response']['meta']['hits']
print("Gay Marriage appears", gay90S_data_result, "times in the period 1990-1999")

#2000
gay00s_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20000101&end_date=20091231&api-key=3880684abea14d86b6280c6dbd80a793')
gay00s_data = gay00s_response.json()
gay00s_data_result = gay00s_data['response']['meta']['hits']
print("Gay Marriage appears", gay00s_data_result, "times in the period 2000-2009")

# 2010
gm_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=%22gay%20marriage%22&begin_date=20100101&api-key=3880684abea14d86b6280c6dbd80a793')
gm_data = gm_response.json()
gm_data_result = gm_data['response']['meta']['hits']
print("Gay Marriage appears", gm_data_result, "times from the year 2010")
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
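Following the TA's hint, the per-decade requests can be generated from a list of (begin, end) pairs instead of being written out by hand. A sketch of just the date logic (the requests call itself is left out so the loop can stand alone):

```python
# (begin, end) year pairs, converted to the API's YYYYMMDD date format
decades = [(1950, 1959), (1960, 1969), (1970, 1978), (1980, 1989),
           (1990, 1999), (2000, 2009)]

queries = []
for start, end in decades:
    params = {'q': '"gay marriage"',
              'begin_date': '%d0101' % start,
              'end_date': '%d1231' % end}
    queries.append(params)

for q in queries:
    print(q['begin_date'], '-', q['end_date'])
```

Each params dict can then be passed to requests.get via its params argument, replacing the six hand-written URLs with one loop.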
6) What section talks about motorcycles the most?
Tip: You'll be using facets
motor_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&api-key=3880684abea14d86b6280c6dbd80a793')
motor_data = motor_response.json()
#motor_data_result = motor_data['response']['meta']['hits']
#print("Motorcycles appear", motor_data_result, "times")

motor_info = motor_data['response']['facets']['section_name']['terms']
print("This input gives all the sections and counts:", motor_info)
#for i in motor_info:
#    print(i)

print("Therefore, motorcycles appear in the", motor_info[0]['term'], "section the most")
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
7) Critics' Picks
How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60? Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
# last 20 movies
movie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=3880684abea14d86b6280c6dbd80a793')
movie_data = movie_response.json()

count_20 = 0
for review in movie_data['results']:
    if review['critics_pick']:
        count_20 = count_20 + 1
print("Out of the last 20 movies,", count_20, "were critics' picks")

# movies 21-40
movie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=20&api-key=3880684abea14d86b6280c6dbd80a793')
movie_data = movie_response.json()

count_40 = 0
for review in movie_data['results']:
    if review['critics_pick']:
        count_40 = count_40 + 1
last_forty = count_20 + count_40
print("Out of the last 40 movies,", last_forty, "were critics' picks")

# movies 41-60
movie_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=40&api-key=3880684abea14d86b6280c6dbd80a793')
movie_data = movie_response.json()

count_60 = 0
for review in movie_data['results']:
    if review['critics_pick']:
        count_60 = count_60 + 1
last_sixty = last_forty + count_60
print("Out of the last 60 movies,", last_sixty, "were critics' picks")
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
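The tip above — combine the pages into one list and slice it — can be sketched without hitting the API at all. `fake_page` below is invented data (every third review is a pick), so the counts it prints are illustrative, not real NYT numbers.

```python
# Sketch of the list-combining approach: fetch pages of 20 reviews,
# concatenate them, then count critics' picks over growing slices.
def fake_page(offset):
    # made-up data: every third review overall is a critics' pick
    return [{'critics_pick': 1 if (offset + i) % 3 == 0 else 0}
            for i in range(20)]

reviews = []
for offset in (0, 20, 40):
    # with the real API this would be requests.get(url).json()['results']
    reviews = reviews + fake_page(offset)

for n in (20, 40, 60):
    picks = sum(1 for r in reviews[:n] if r['critics_pick'])
    print("Out of the last", n, "movies,", picks, "were critics' picks")
```

One combined list means the 1-20, 1-40 and 1-60 answers are just slices, with no duplicated request code.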
8) Critic with the Most Reviews Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
from collections import Counter

last_forty = []
for offset in (0, 20):
    url = "https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=a39223b33e0e46fd82dbddcc4972ff91&offset=" + str(offset)
    page = requests.get(url).json()
    last_forty = last_forty + page['results']
print(len(last_forty))

bylines = [review['byline'] for review in last_forty]
print(bylines)

# max(bylines) would only return the alphabetically last name;
# Counter.most_common finds the critic who actually appears most often
top_critic, n_reviews = Counter(bylines).most_common(1)[0]
print("The critic with the most reviews is", top_critic, "with", n_reviews, "reviews")
homework05/Homework05_NYTimes-radhika_graded.ipynb
radhikapc/foundation-homework
mit
Below we import the packages we will use regularly for working with data: pandas, numpy, and matplotlib. Since this is a Python program, imported packages remain available until the end of the notebook.
import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib
intro/sesion0.ipynb
dsevilla/bdge
mit
The following makes plots render inline. For small figures you can instead use interactive figures that support zooming, with %matplotlib nbagg.
%matplotlib inline matplotlib.style.use('ggplot')
intro/sesion0.ipynb
dsevilla/bdge
mit
Numpy Numpy is one of the most widely used Python libraries, offering a simple interface for efficient operations on numbers, arrays, and matrices. Numpy will often serve as support whenever data retrieved from a database needs local processing, or as preparation for plotting. The next cell embeds an introductory video; online tutorials are also available: Tutorial.
from IPython.display import YouTubeVideo YouTubeVideo('o8fmjaW9a0A') # Yes, it can also embed youtube videos.
intro/sesion0.ipynb
dsevilla/bdge
mit
Numpy can generate and process data arrays very efficiently. Some examples follow:
a = np.array([4, 5, 6])
print(a.shape)
print(a[0])
a[0] = 9
print(a)
np.arange(10)
np.arange(1, 20)
intro/sesion0.ipynb
dsevilla/bdge
mit
Multidimensional arrays as well:
a = np.zeros((2, 2))
print(a)
a.ndim
a.dtype
b = np.random.random((2, 2))
print(b)
a = np.random.random((2, 2))
print(a)
intro/sesion0.ipynb
dsevilla/bdge
mit
Functions can be applied over an entire array or matrix, and the result is an array of the same shape with the operator applied elementwise. This is similar to the map operation found in several programming languages (including Python):
print(a >= .5)
intro/sesion0.ipynb
dsevilla/bdge
mit
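The same elementwise behaviour holds for numpy's mathematical functions (ufuncs) and for arithmetic operators, not just comparisons. A minimal sketch with a fresh example array:

```python
import numpy as np

a = np.array([1.0, 4.0, 9.0])
print(np.sqrt(a))   # elementwise square root: [1. 2. 3.]
print(a * 2 + 1)    # arithmetic is also elementwise: [ 3.  9. 19.]
```

Because the loop happens inside numpy's compiled code, this is both shorter and much faster than mapping a Python function over a list.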
You can also filter the elements of an array or matrix that satisfy a condition. For that, the indexing operator ([]) is used with a boolean expression.
print(a[a >= .5])
intro/sesion0.ipynb
dsevilla/bdge
mit
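Boolean masks are useful for more than selection: the same indexing expression can appear on the left-hand side of an assignment to modify matching elements in place. A small sketch with a fixed 1-D array (the random 2x2 matrix above would work the same way):

```python
import numpy as np

a = np.array([0.2, 0.7, 0.4, 0.9])
print(a[a >= .5])   # keep only elements satisfying the condition: [0.7 0.9]

b = a.copy()
b[b < .5] = 0.0     # masks can also be used to assign in place
print(b)            # [0.  0.7 0.  0.9]
```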