neuro-data-science/neuro_data_science
python/modeling/connectivity.ipynb
gpl-3.0
import numpy as np import scipy.io as si import networkx as nx import matplotlib.pyplot as plt import bct import sys sys.path.append('../src/') import opencourse.bassett_funcs as bf plt.rcParams['image.cmap'] = 'viridis' plt.rcParams['image.interpolation'] = 'nearest' %matplotlib inline """ Explanation: This code is meant to be read and executed in tandem with the PDF file tutorial_connectivity located in the "course_materials" folder End of explanation """ path_to_data = '../../data/matrices_connectivity.mat' graphs_raw = si.loadmat(path_to_data) graphs_raw = graphs_raw['matrices'] graph_raw = graphs_raw[..., 0] """ Explanation: The dataset This dataset consists of edge information between nodes of brain activity. These nodes might be individual units, such as neurons, or they might be abstract units such as collections of voxels in fMRI. In addition, we have a snapshot of connectivity across multiple instances of time. In this way, we can investigate clusters of nodes and how they change over time. First, we'll import data and extract one graph End of explanation """ fig, ax = plt.subplots() ax.imshow(graph_raw, interpolation='nearest'); """ Explanation: Here's what it looks like... 
End of explanation """ clustering_coef = bct.clustering_coef_wu(graph_raw) """ Explanation: Task 1 - Characterize local network structure in neural data Calculate clustering coefficient for each node End of explanation """ fig, ax = plt.subplots() ax.hist(clustering_coef) print('mean clustering coefficient: %f' % clustering_coef.mean()) """ Explanation: Plot the distribution + mean End of explanation """ tri = np.where(np.triu(graph_raw) == 0) tri = np.array(tri).T tri_rnd = np.random.permutation(tri) # Randomize matrix graph_raw_r = graph_raw.copy() for (ii, jj), (ii_rnd, jj_rnd) in zip(tri, tri_rnd): graph_raw_r[ii, jj] = graph_raw[ii_rnd, jj_rnd] graph_raw_r[jj, ii] = graph_raw[ii_rnd, jj_rnd] # Clustering coefficient for random matrix clustering_coef_r = bct.clustering_coef_wu(graph_raw_r) # Compare the structure of neural + random matrix fig, axs = plt.subplots(1, 2) axs[0].imshow(graph_raw) axs[0].set_title('Neural') axs[1].imshow(graph_raw_r) axs[1].set_title('Random'); # Compare the clustering coefficients for the two fig, ax = plt.subplots() h1 = ax.hist(clustering_coef, normed=True) h2 = ax.hist(clustering_coef_r, histtype='step', normed=True, color='r', lw=3) ax.legend(['neural', 'random']); ax.set_xlabel('clustering coefficient') """ Explanation: Create a random matrix for comparison End of explanation """ # It is an iterative algorithm, so the random seed affects initialization clusters, q_stat = bct.community_louvain(graph_raw, seed=np.random.randint(10000)) # Repeat this a bunch of times to see how it differs over iterations all_clusters = np.zeros([100, len(graph_raw)], dtype=int) for i in range(100): clusters, q_stat = bct.community_louvain(graph_raw, seed=np.random.randint(10000)) all_clusters[i] = clusters # Visualize cluster identity for each iteration fig, ax = plt.subplots() ax.imshow(all_clusters.T, cmap='viridis') ax.set_title('Neural Clustering') ax.set_xlabel('Iterations') ax.set_ylabel('Nodes'); """ Explanation: As you can see, random 
data shows clustering coefficients that are much more variable than the data we record in the brain. Let's take this clustering one step further by exploring communities of nodes in the brain. Task 2 - Community detection methods to identify modules in neural data We can use the Louvain algorithm to find "communities" of nodes. End of explanation """ # Repeat this a bunch of times to see how it differs over iterations all_clusters_r = np.zeros([100, len(graph_raw_r)], dtype=int) for ii in range(100): clusters, q_stat = bct.community_louvain(graph_raw_r, seed=np.random.randint(10000)) all_clusters_r[ii] = clusters # Visualize cluster identity for each iteration fig, ax = plt.subplots() ax.imshow(all_clusters_r.T, cmap='viridis') ax.set_title('Random Clustering') ax.set_xlabel('Iterations') ax.set_ylabel('Nodes') """ Explanation: Compare this for a random graph: End of explanation """ # Now sort these clustered nodes based on their clustering ind_sorted, ci_sorted = bf.order_partition(graph_raw, all_clusters[0]) X, Y, _ = bf.find_grid_communities(ci_sorted) # Plot the original weights, and then sorted by clusters fig, axs = plt.subplots(1, 2, figsize=(10, 5)) axs[0].imshow(graph_raw) axs[1].imshow(graph_raw[ind_sorted, :][:, ind_sorted]) # Plot diagonals to show each cluster axs[1].plot(X, Y, color='k') plt.autoscale(tight=True) """ Explanation: Now, we'll sort the connectivity matrix so that nodes in the same cluster are near one another. We'll draw out the clusters on the heatmap for one example partition. We'll do this for the neural data before and after sorting. End of explanation """ fig, axs = plt.subplots(3, 3, figsize=(9, 9)) for i in range(3): for j in range(3): axs[i,j].imshow(graphs_raw[:,:,i*3 + j]) plt.autoscale(tight=True) """ Explanation: Task 3 - Module change over time Let's investigate each connectivity matrix across time. Do you notice any changes? 
End of explanation """ partition_time, q_time = bf.genlouvain_over_time(graphs_raw, omega=1.0) """ Explanation: We can investigate the extent to which communities persist (or dissolve) over time by calculating the same clustering, but including the time dimension in the analysis. For this, we've expanded the genlouvain algorithm below: End of explanation """ fig, ax = plt.subplots() ax.imshow(partition_time, aspect='auto') ax.set_xlabel('Time') ax.set_ylabel('Node') ax.set_title('Community Assignment'); """ Explanation: Below we'll show the community assignment for each node as a function of time. End of explanation """
KEHANG/AutoFragmentModeling
ipython/1. frag_mech_generation/.ipynb_checkpoints/generate_fragment_mechanism-checkpoint.ipynb
mit
import os from tqdm import tqdm from rmgpy import settings from rmgpy.data.rmg import RMGDatabase from rmgpy.kinetics import KineticsData from rmgpy.rmg.model import getFamilyLibraryObject from rmgpy.data.kinetics.family import TemplateReaction from rmgpy.data.kinetics.depository import DepositoryReaction from rmgpy.data.kinetics.common import find_degenerate_reactions from rmgpy.chemkin import saveChemkinFile, saveSpeciesDictionary import afm import afm.fragment import afm.reaction """ Explanation: Generation Flow of Fragment Mechanism Steps: load text fragment mechanism (text based: mech and smiles) create fragments and fragment reactions (from smiles, check isomorphic duplicate, add reaction_repr for fragment reaction) get thermo and kinetics Input: text fragment mechanism and smiles dict Output: chemkin file for fragment mechanism IMPORTANT: USE RMG-Py frag_kinetics_gen_new branch End of explanation """ def read_frag_mech(frag_mech_path): reaction_string_dict = {} current_family = '' with open(frag_mech_path) as f_in: for line in f_in: if line.startswith('#') and ':' in line: _, current_family = [token.strip() for token in line.split(':')] elif line.strip() and not line.startswith('#'): reaction_string = line.strip() if current_family not in reaction_string_dict: reaction_string_dict[current_family] = [reaction_string] else: reaction_string_dict[current_family].append(reaction_string) return reaction_string_dict def parse_reaction_string(reaction_string): reactant_side, product_side = [token.strip() for token in reaction_string.split('==')] reactant_strings = [token.strip() for token in reactant_side.split('+')] product_strings = [token.strip() for token in product_side.split('+')] return reactant_strings, product_strings """ Explanation: 0. 
helper methods End of explanation """ job_name = 'two-sided' afm_base = os.path.dirname(afm.__path__[0]) working_dir = os.path.join(afm_base, 'examples', 'pdd_chemistry', job_name) # load RMG database to create reactions database = RMGDatabase() database.load( path = settings['database.directory'], thermoLibraries = ['primaryThermoLibrary'], # can add others if necessary kineticsFamilies = 'all', reactionLibraries = [], kineticsDepositories = '' ) thermodb = database.thermo # Add training reactions for family in database.kinetics.families.values(): family.addKineticsRulesFromTrainingSet(thermoDatabase=thermodb) # average up all the kinetics rules for family in database.kinetics.families.values(): family.fillKineticsRulesByAveragingUp() # load fragment from smiles-like string fragment_smiles_filepath = os.path.join(working_dir, 'fragment_smiles.txt') fragments = [] with open(fragment_smiles_filepath) as f_in: for line in f_in: if line.strip() and not line.startswith('#') and ':' in line: label, smiles = [token.strip() for token in line.split(":")] frag = afm.fragment.Fragment(label=label).from_SMILES_like_string(smiles) frag.assign_representative_species() frag.species_repr.label = label for prev_frag in fragments: if frag.isIsomorphic(prev_frag): raise Exception('Isomorphic duplicate found: {0} and {1}'.format(label, prev_frag.label)) fragments.append(frag) # construct label-key fragment dictionary fragment_dict = {} for frag0 in fragments: if frag0.label not in fragment_dict: fragment_dict[frag0.label] = frag0 else: raise Exception('Fragment with duplicated labels found: {0}'.format(frag0.label)) # put aromatic isomer in front of species.molecule # 'cause that's the isomer we want to react for frag in fragments: species = frag.species_repr species.generateResonanceIsomers() for mol in species.molecule: if mol.isAromatic(): species.molecule = [mol] break # load fragment mech in text fragment_mech_filepath = os.path.join(working_dir, 'frag_mech.txt') 
reaction_string_dict = read_frag_mech(fragment_mech_filepath) # generate reactions fragment_rxns = [] for family_label in reaction_string_dict: # parse reaction strings print "Processing {0}...".format(family_label) for reaction_string in tqdm(reaction_string_dict[family_label]): reactant_strings, product_strings = parse_reaction_string(reaction_string) reactants = [fragment_dict[reactant_string].species_repr for reactant_string in reactant_strings] products = [fragment_dict[product_string].species_repr.molecule[0] for product_string in product_strings] for idx, reactant in enumerate(reactants): for mol in reactant.molecule: mol.props['label'] = reactant_strings[idx] for idx, product in enumerate(products): product.props['label'] = product_strings[idx] # this script requires reactants to be a list of Species objects # products to be a list of Molecule objects. # returned rxns have reactants and products in Species type new_rxns = database.kinetics.generate_reactions_from_families(reactants=reactants, products=products, only_families=[family_label], resonance=True) if len(new_rxns) != 1: print reaction_string + family_label raise Exception('Non-unique reaction is generated with {0}'.format(reaction_string)) # create fragment reactions rxn = new_rxns[0] fragrxn = afm.reaction.FragmentReaction(index=-1, reversible=True, family=rxn.family, reaction_repr=rxn) fragment_rxns.append(fragrxn) """ Explanation: 1. 
load text-format fragment mech End of explanation """ from rmgpy.data.rmg import getDB from rmgpy.thermo.thermoengine import processThermoData from rmgpy.thermo import NASA import rmgpy.constants as constants import math thermodb = getDB('thermo') # calculate thermo for each species for fragrxn in tqdm(fragment_rxns): rxn0 = fragrxn.reaction_repr for spe in rxn0.reactants + rxn0.products: thermo0 = thermodb.getThermoData(spe) if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']: thermo0.S298.value_si += constants.R * math.log(2) spe.thermo = processThermoData(spe, thermo0, NASA) family = getFamilyLibraryObject(rxn0.family) # Get the kinetics for the reaction kinetics, source, entry, isForward = family.getKinetics(rxn0, \ templateLabels=rxn0.template, degeneracy=rxn0.degeneracy, \ estimator='rate rules', returnAllKinetics=False) rxn0.kinetics = kinetics if not isForward: rxn0.reactants, rxn0.products = rxn0.products, rxn0.reactants rxn0.pairs = [(p,r) for r,p in rxn0.pairs] # convert KineticsData to Arrhenius forms if isinstance(rxn0.kinetics, KineticsData): rxn0.kinetics = rxn0.kinetics.toArrhenius() # correct barrier heights of estimated kinetics if isinstance(rxn0,TemplateReaction) or isinstance(rxn0,DepositoryReaction): # i.e. not LibraryReaction rxn0.fixBarrierHeight() # also converts ArrheniusEP to Arrhenius. fragrxts = [fragment_dict[rxt.label] for rxt in rxn0.reactants] fragprds = [fragment_dict[prd.label] for prd in rxn0.products] fragpairs = [(fragment_dict[p0.label],fragment_dict[p1.label]) for p0,p1 in rxn0.pairs] fragrxn.reactants=fragrxts fragrxn.products=fragprds fragrxn.pairs=fragpairs fragrxn.kinetics=rxn0.kinetics """ Explanation: 2. 
get thermo and kinetics End of explanation """ for frag in fragments: spe = frag.species_repr thermo0 = thermodb.getThermoData(spe) if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']: thermo0.S298.value_si += constants.R * math.log(2) spe.thermo = processThermoData(spe, thermo0, NASA) if spe.label in ['RCCCCR', 'LCCCCR', 'LCCCCL']: print spe.label print spe.getFreeEnergy(670)/4184 """ Explanation: 2.1 correct entropy for certain fragments End of explanation """ for fragrxn in tqdm(fragment_rxns): rxn0 = fragrxn.reaction_repr if rxn0.family in ['R_Recombination', 'H_Abstraction', 'R_Addition_MultipleBond']: for spe in rxn0.reactants + rxn0.products: if spe.label in ['RCC*CCR', 'LCC*CCR', 'LCC*CCL']: rxn0.kinetics.changeRate(4) fragrxn.kinetics=rxn0.kinetics """ Explanation: 2.2 correct kinetics for reactions with certain fragments End of explanation """ species_list = [] for frag in fragments: species = frag.species_repr species_list.append(species) len(fragments) reaction_list = [] for fragrxn in fragment_rxns: rxn = fragrxn.reaction_repr reaction_list.append(rxn) len(reaction_list) # dump chemkin files chemkin_path = os.path.join(working_dir, 'chem_annotated.inp') dictionaryPath = os.path.join(working_dir, 'species_dictionary.txt') saveChemkinFile(chemkin_path, species_list, reaction_list) saveSpeciesDictionary(dictionaryPath, species_list) """ Explanation: 3. 
save in chemkin format End of explanation """ def update_atom_count(tokens, parts, R_count): # remove R_count*2 C and R_count*5 H string = '' if R_count == 0: return 'G'.join(parts) else: H_count = int(tokens[2].split('C')[0]) H_count_update = H_count - 5*R_count C_count = int(tokens[3]) C_count_update = C_count - 2*R_count tokens = tokens[:2] + [str(H_count_update)+'C'] + [C_count_update] # Line 1 string += '{0:<16} '.format(tokens[0]) string += '{0!s:<2}{1:>3d}'.format('H', H_count_update) string += '{0!s:<2}{1:>3d}'.format('C', C_count_update) string += ' ' * (4 - 2) string += 'G' + parts[1] return string corrected_chemkin_path = os.path.join(working_dir, 'chem_annotated.inp') output_string = '' with open(chemkin_path) as f_in: readThermo = False for line in f_in: if line.startswith('THERM ALL'): readThermo = True if not readThermo: output_string += line continue if line.startswith('!'): output_string += line continue if 'G' in line and '1' in line: parts = [part for part in line.split('G')] tokens = [token.strip() for token in parts[0].split()] species_label = tokens[0] R_count = species_label.count('R') L_count = species_label.count('L') updated_line = update_atom_count(tokens, parts, R_count+L_count) output_string += updated_line else: output_string += line with open(corrected_chemkin_path, 'w') as f_out: f_out.write(output_string) """ Explanation: 4. correct atom count in chemkin End of explanation """
dtamayo/rebound
ipython_examples/TransitTimingVariations.ipynb
gpl-3.0
import rebound import numpy as np """ Explanation: Calculating Transit Timing Variations (TTV) with REBOUND The following code finds the transit times in a two planet system. The transit times of the inner planet are not exactly periodic, due to planet-planet interactions. First, let's import the REBOUND and numpy packages. End of explanation """ sim = rebound.Simulation() sim.add(m=1) sim.add(m=1e-5, a=1,e=0.1,omega=0.25) sim.add(m=1e-5, a=1.757) sim.move_to_com() """ Explanation: Let's set up a coplanar two planet system. End of explanation """ N=174 transittimes = np.zeros(N) p = sim.particles i = 0 while i<N: y_old = p[1].y - p[0].y # (Thanks to David Martin for pointing out a bug in this line!) t_old = sim.t sim.integrate(sim.t+0.5) # check for transits every 0.5 time units. Note that 0.5 is shorter than one orbit t_new = sim.t if y_old*(p[1].y-p[0].y)<0. and p[1].x-p[0].x>0.: # sign changed (y_old*y<0), planet in front of star (x>0) while t_new-t_old>1e-7: # bisect until prec of 1e-5 reached if y_old*(p[1].y-p[0].y)<0.: t_new = sim.t else: t_old = sim.t sim.integrate( (t_new+t_old)/2.) transittimes[i] = sim.t i += 1 sim.integrate(sim.t+0.05) # integrate 0.05 to be past the transit """ Explanation: We're now going to integrate the system forward in time. We assume the observer of the system is in the direction of the positive x-axis. We want to meassure the time when the inner planet transits. In this geometry, this happens when the y coordinate of the planet changes sign. Whenever we detect a change in sign between two steps, we try to find the transit time, which must lie somewhere within the last step, by bisection. End of explanation """ A = np.vstack([np.ones(N), range(N)]).T c, m = np.linalg.lstsq(A, transittimes)[0] """ Explanation: Next, we do a linear least square fit to remove the linear trend from the transit times, thus leaving us with the transit time variations. 
End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure(figsize=(10,5)) ax = plt.subplot(111) ax.set_xlim([0,N]) ax.set_xlabel("Transit number") ax.set_ylabel("TTV [hours]") plt.scatter(range(N), (transittimes-m*np.array(range(N))-c)*(24.*365./2./np.pi)); """ Explanation: Finally, let us plot the TTVs. End of explanation """
martinjrobins/hobo
examples/sampling/nested-rejection-sampling.ipynb
bsd-3-clause
import pints import pints.toy as toy import numpy as np import matplotlib.pyplot as plt # Load a forward model model = toy.LogisticModel() # Create some toy data r = 0.015 k = 500 real_parameters = [r, k] times = np.linspace(0, 1000, 100) signal_values = model.simulate(real_parameters, times) # Add independent Gaussian noise sigma = 10 observed_values = signal_values + pints.noise.independent(sigma, signal_values.shape) # Plot plt.plot(times, signal_values, label='signal') plt.plot(times, observed_values, label='observed') plt.xlabel('Time') plt.ylabel('Values') plt.legend() plt.show() """ Explanation: Nested rejection sampling This example demonstrates how to use nested rejection sampling [1] to sample from the posterior distribution for a logistic model fitted to model-simulated data. Nested sampling is the craziest way to calculate an integral that you'll ever come across, which has found widespread application in physics. The idea is based upon repeatedly restricting the prior density to ever-smaller regions of parameter space defined by likelihood thresholds. These repeated partitions form a sort of Matryoshka doll of spaces, where the later surfaces are "nested" within the earlier ones. The space between the Matryoshka volumes constitutes "shells", whose volume can itself be approximated. By summing the volumes of these shells, the marginal likelihood can be calculated. It's bonkers, but it works. It works especially well for multimodal distributions, where traditional methods of calculating the marginal likelihood fail. As a very useful by-product of nested sampling, posterior samples can be produced by importance sampling. [1] "Nested Sampling for General Bayesian Computation", John Skilling, Bayesian Analysis (2006) https://projecteuclid.org/download/pdf_1/euclid.ba/1340370944. First create fake data. 
End of explanation """ # Create an object with links to the model and time series problem = pints.SingleOutputProblem(model, times, observed_values) # Create a log-likelihood function (adds an extra parameter!) log_likelihood = pints.GaussianLogLikelihood(problem) # Create a uniform prior over both the parameters and the new noise variable log_prior = pints.UniformLogPrior( [0.01, 400, sigma * 0.5], [0.02, 600, sigma * 1.5]) # Create a nested ellipsoidal rejectection sampler sampler = pints.NestedController(log_likelihood, log_prior, method=pints.NestedRejectionSampler) # Set number of iterations sampler.set_iterations(3000) # Set the number of posterior samples to generate sampler.set_n_posterior_samples(300) """ Explanation: Create the nested sampler that will be used to sample from the posterior. End of explanation """ samples = sampler.run() print('Done!') """ Explanation: Run the sampler! End of explanation """ # Plot output import pints.plot pints.plot.histogram([samples], ref_parameters=[r, k, sigma]) plt.show() vTheta = samples[0] pints.plot.pairwise(samples, kde=True) plt.show() """ Explanation: Plot posterior samples versus true parameter values (dashed lines) End of explanation """ pints.plot.series(samples[:100], problem) plt.show() """ Explanation: Plot posterior predictive simulations versus the observed data End of explanation """ print('marginal log-likelihood = ' + str(sampler.marginal_log_likelihood()) + ' ± ' + str(sampler.marginal_log_likelihood_standard_deviation())) """ Explanation: Marginal likelihood estimate Nested sampling calculates the denominator of Bayes' rule through applying the trapezium rule to the integral, $$Z = \int_{0}^{1} \mathcal{L}(X) dX,$$ where $X$ is the prior probability mass. 
End of explanation """ v_log_likelihood = sampler.log_likelihood_vector() v_log_likelihood = v_log_likelihood[:-sampler._sampler.n_active_points()] X = sampler.prior_space() X = X[:-1] plt.plot(X, v_log_likelihood) plt.xlabel('prior volume enclosed by X(L) > L') plt.ylabel('log likelihood') plt.show() """ Explanation: With PINTS we can access the segments of the discretised integral, meaning we can plot the function being integrated. End of explanation """ m_active = sampler.active_points() m_inactive = sampler.inactive_points() f, axarr = plt.subplots(1,3,figsize=(15,6)) axarr[0].scatter(m_inactive[:,0],m_inactive[:,1]) axarr[0].scatter(m_active[:,0],m_active[:,1],alpha=0.1) axarr[0].set_xlim([0.008,0.022]) axarr[0].set_xlabel('r') axarr[0].set_ylabel('k') axarr[1].scatter(m_inactive[:,0],m_inactive[:,2]) axarr[1].scatter(m_active[:,0],m_active[:,2],alpha=0.1) axarr[1].set_xlim([0.008,0.022]) axarr[1].set_xlabel('r') axarr[1].set_ylabel('sigma') axarr[2].scatter(m_inactive[:,1],m_inactive[:,2]) axarr[2].scatter(m_active[:,1],m_active[:,2],alpha=0.1) axarr[2].set_xlabel('k') axarr[2].set_ylabel('sigma') plt.show() """ Explanation: Examine active and inactive points at end of sampling run At each step of the nested sampling algorithm, the point with the lowest likelihood is discarded (and inactivated) and a new active point is drawn from the prior, with the restriction of that its likelihood exceeds the discarded one. The likelihood of the inactived point essentially defines the height of a segment of the discretised integral for $Z$. Its width is approximately given by $w_i = X_{i-1}-X_{i+1}$, where $X_i = \text{exp}(-i / N)$ and $N$ is the number of active particles and $i$ is the iteration. PINTS keeps track of active and inactive points at the end of the nested sampling run. The active points (orange) are concentrated in a region of high likelihood, whose likelihood always exceeds the discarded inactive points (blue). 
End of explanation """ samples_new = sampler.sample_from_posterior(1000) pints.plot.pairwise(samples_new, kde=True) plt.show() """ Explanation: Sample some other posterior samples from recent run In nested sampling, we can apply importance sampling to the inactivated points to generate posterior samples. In this case, the weight of each inactive point is given by $w_i \mathcal{L}_i$, where $\mathcal{L}_i$ is its likelihood. Since we use importance sampling, we can always generate an alternative set of posterior samples by re-applying this method. End of explanation """
jdnz/qml-rg
Tutorials/Python_Introduction.ipynb
gpl-3.0
from __future__ import print_function, division """ Explanation: 1. Introduction Perhaps instead of telling you how to write a loop or a conditional in Python, it might be a better option to put Python in context, tell a bit about how programming languages are designed, and why certain trade-offs are chosen. A programming language is something you can learn on your own once you understand why it works the way it does. 2. Compilers, interpreters, JIT compilation A compiler takes a piece of code written in a high-level language and translates it to binary machine code that the CPU can run. Compilation is a complex process that looks at the entire code, checks syntax, does optimizations, links to other binaries, and spits out an executable or some other form of binary code such as a dynamic library. Interpreted languages parse the code line by line, and thus they only translate something to a machine-executable format one command at a time. This means that you can have an interactive shell and you can type in commands one by one, and see the result immediately. If you make an error, your previous variables and computations are not lost: the interpreter keeps track of them and you can still access them. In contrast, unless you have some mechanism in your compiled code to save interim calculations, an error will terminate the program, and its full memory space is liberated and control is returned to the operating system. For interactive work, an interpreter is much more suitable. This explains why scientific languages like R, Mathematica, and MATLAB work in this fashion. On the other hand, since there are no optimizations whatsoever, they tend to be sluggish. So for numerical calculations, compiled languages are better: Fortran, C, or newer ones that were designed with safety and concurrency in mind, such as Go and Rust. 
A newer paradigm does just-in-time (JIT) compilation: you get an interactive shell, but everything you enter is actually compiled quickly, and then run. That is, a JIT system combines the best of two worlds. Most modern languages are either written with JIT in mind, such as Scala and Julia, or were adapted to be used in this fashion. Apart from these paradigms, there are abominations like Java: it is both compiled and slow, running on a perfectly horrific level of abstraction called the Java Virtual Machine. MATLAB is a multiparadigm language that is designed to maximize user frustration, although it is primarily interpreted. The following table gives a few examples of each paradigm in approximate temporal order. | Compiled | Interpreted | JIT | Horror | | ------------- |---------------------------------| ----------- |--------------| | Fortran (1957)| Lisp (1958) | | | | | BASIC (1964) | | | | C (1972) | S (1976) | | | | C++ (1983) | Perl (1987), Mathematica (1988) | | MATLAB (1984)| | Haskell (1990)| R (1993) | | Java (1995) | | Go (2009) | | Scala (2004)| | | Rust (2010) | | Julia (2012)| | 3. So what is Python? Python is a language specification born in 1991. As it is the case with many languages (Fortran, C, C++, Haskell), the language specification and its actual implementations are independently developed, although the development is correlated. What you normally call Python, and this is the Python that ships with your operating system or with Anaconda, is actually the reference implementation of the language, which is formally called CPython. This reference implementation is a Python interpreter written in the C language. The Python language was the first language that was designed with humans in mind: code was meant to be easy to read by humans. This was a response to write-only languages that introduce tricky syntax that is difficult to decipher. Both Mathematica and MATLAB are guilty of being write-only languages, and so are the latest standards of C++. 
Here is a priceless Mathematica one-liner: ArrayPlot@Log[BinCounts[Through@{Re,Im}@#&/@NestList[(5#c+Re@#^6-2.7)#+c^5&,.1+.2I,9^7],a={-1,1,0.001},a]+1] You clearly don't need syntax highlighting for this. No matter how hard you try, it would be difficult to write something as convoluted as this in Python. Python also wants to have exactly one obvious way to do something, which was anything but true for a similar scripting language called Perl that many of us refuse to admit we ever used. By some clever design decisions, it is extremely easy to call low-level code from Python, and this makes it the best glue language: you can call C, Fortran, Julia, Lisp, and whatever else from Python with ease. Try that from Mathematica. The default CPython implementation is an interpreter, and therefore it comes with a shell (the funny screen where you type stuff in). This shell, however, is not any good by today's standards. IPython was conceived to have a good shell for Python. In principle, IPython can use any Python interpreter on the back (CPython, Pypy, and others). Jupyter provides a notebook interface based on IPython that allows you to practice literate programming, that is, mixing code, text, mathematical formulas, and images in the same environment, making it attractive for scientists. Mathematica's notebook interface is far more advanced than that of Jupyter, but development is rapid, and the functionality keeps expanding. Both IPython and Jupyter were conceived for Python, but now they work with many other languages. Due to its ease of use and glue language nature, Python became massively popular among programmers. They developed thousands of packages for Python, everything from controlling robots to running websites. It was never designed to be a language for scientific computing. Yet, it became the de facto next-best alternative to MATLAB after the unification of the various numerical libraries under the umbrella of numpy and SciPy. 
With the development of SymPy, it acquired properties similar in functionality to Mathematica. With Pandas, it takes on R as the choice for statistical modelling. With TensorFlow, it is overtaking distributed frameworks like Hadoop and Spark in large-scale machine learning. We can keep listing packages, but you get the idea. The package ecosystem gives Python users superpowers. The reference implementation, CPython, by virtue of being an interpreter, is slow, but it is not the only implementation of the language. Pypy is a JIT implementation of the language, started in 2007. It is up to 20-40x faster on pure Python code. The problem is that its foreign language interface is incompatible with CPython, so the glue language nature is gone, and many important Python packages do not work with it. Cython is an extension of Python that generates C code that in turn can be compiled for speed. As a user of Python, you probably don't want to deal with this directly, but it is nevertheless an option if you want speed. To put this together, we can extend the table above: | Compiled | Interpreted | JIT | Horror | | ------------- |------------------ | -----------|-------------| | | CPython (0.9, 1991)| | | | | CPython (1.0, 1994)| | | | | CPython (2.0, 2000)| | Jython (2001)| |Cython (2007) | CPython (3.0, 2008)|Pypy (2.7, 2007)| | | | CPython (3.6, 2016)|Pypy (3.2, 2014), Pyston (2014)| | 3.1 Global interpreter lock Python was originally conceived in 1991: until the second half of the 2000s, consumer-grade CPUs were single core. Thus Python was not designed to be easy to parallelize. To understand what goes on here, we have to understand what "running parallel" means. Conceptually the simplest case is when you have several computers: each one accesses its own memory space and communicates via the network. This is called distributed memory model. For the next level, we have to understand what a process is. 
The operating system that you run, let that be Android, macOS, Linux, and even Windows on a good day, ensures that when you run a program, it has its own, protected memory space. It cannot access the memory space allocated to a different program, and other programs cannot access its own allocated memory space. In fact, the operating system itself cannot access the memory space of any of the running programs: it can terminate them and free the memory, but it cannot access the content of the memory (in principle). A thing that runs with its allocated, protected memory space is called a process. Multiprocessing means running several processes at the same time. If the processes run on several cores of a multicore processor working on the same calculation, you end up with a scheme similar to the distributed memory model: the processes must communicate with one another if they want to exchange data. It does not happen through the network; instead, the operating system's help must be invoked. This is a shared memory model with isolated memory spaces. Going between multiprocessing and distributed memory processing is straightforward, at least from the user's perspective. Multithreading means that one single process uses several CPU cores. It means that each thread can access an arbitrary piece of data belonging to the process. Now imagine you have some variable a and two threads want to increase its value by 1. First, thread 1 reads it, learns that the value is 5, and wants to write back 6. The second thread reads out 5 as well, and writes back 6. So the final value is 6, instead of 7. This is called a race condition. To get around it, a thread can declare a lock: no other thread can access that part of the code until the lock is released. If the thread that declared the lock waits for another lock to be released, a deadlock can occur: this is an infinite cycle from which there is no exit. 
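To see the lock mechanism in action, here is a minimal sketch using Python's built-in threading module (the variable names are made up for the example):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # declare the lock: no other thread may enter here
            counter += 1  # the read-modify-write is now done in one piece

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000, never less
```

Without the with lock: line, the two read-modify-write sequences can interleave exactly as described above, and the final count may come out short.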
Python allows you to have multiprocessing, but multithreading is implicitly forbidden. To avoid race conditions and deadlocks, the interpreter maintains a global lock on every variable: this is called the global interpreter lock (GIL). Multiprocessing is inherently less efficient, so there is increasing pressure to remove the 26-year-old GIL. Pypy introduced an experimental software transactional memory that replaces the GIL. It is an inefficient implementation and it is more of a proof of concept, but it works. Cython allows you to release the GIL and write multithreaded code in C, if that is your thing. There are also plans that upcoming releases of CPython would slowly phase out the GIL in favour of a software transactional memory, but it will take decades.
3.2 Python 2 versus 3
Python 3 is the present and future of the Python language. It is actively developed, whereas Python 2 only receives security updates, and its end-of-life has been declared several times (although it refuses to die). Python 3 is a more elegant and consistent language, which is also faster than older versions, at least starting from version 3.5. Yet, there are still some libraries out there that do not work with Python 3. Since the release of Python 3.5 in 2015, most people recommend Python 3. Anaconda changed to recommending Python 3 in January 2017. The transition between Python 2 and 3 is a tale of how to do it wrong. Most people never asked for Python 3, and for the first seven years of Python 3, the changes were mainly under the hood. Perhaps the most important change was the proper handling of UTF characters, which sounds abstract to a scientist, until you learn that you can type Greek characters in mathematical formulas if you use Python 3. In any case, the two differences every Python-using scientist should be aware of are related to printing and integer division. 
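In Python 3 you can check the division behaviour directly:

```python
print(2 / 3)   # true division in Python 3: 0.666...
print(2 // 3)  # integer (floor) division: 0
print(type(2 / 3), type(2 // 3))
```

In Python 2, the first line would also have printed 0, silently.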
If you start your code with this line, you ensure that your code will work in both versions identically:
End of explanation
"""

l = []
for i in range(10):
    l.append(i)
print(l)
"""
Explanation: Printing had a weird implementation in Python 2 that was rectified, and now printing is a function like every other. This means that you must use brackets when you print something. Then in Python 2, there were two ways of doing integer division: 2 / 3 and 2 // 3 both gave zero. In Python 3, the former triggers a type upgrade to floats. If you import division from future, you get the same behaviour in Python 2.
3.4 Don't know how to code?
Completely new to Python? A good start for any programming language is a Jupyter kernel if the language has one. Jupyter was originally designed for Python, so naturally it has a matching kernel. Why Jupyter? It is a uniform interface for many languages (Python, Julia, R, Scala, Haskell, even bloody MATLAB has a Jupyter kernel), so you can play with a new language in a familiar, interpreter-oriented environment. If you never coded in your life, it is also a good start, as you get instant feedback on your initial steps in what essentially is a tab in your browser. If you are coming from MATLAB, or you have advanced beyond the skills of writing a few dozen lines of code in Python, I recommend using Spyder. It is an awesome integrated environment for doing scientific work in Python: it includes instant access to documentation, variable inspection, code navigation, an IPython console, plus cool tools for writing beautiful and efficient code. For tutorials, check out the Learning tab in Anaconda Navigator. Videos and other tutorials are available in great numbers.
4. 
Where to find code and how (don't reinvent the wheel, round 1)
The fundamental difference between a computer scientist and an arbitrary other scientist is that the former will first try to find other people's code to achieve a task, whereas the latter type is suspicious of alien influence and will try to code up everything from scratch. Find a balance. Here we are not talking about packages: we are talking about snippets of code. The chances are slim that you want to do something in Python that N+1 humans did not do before. Two and a half places to look for code:
The obvious internet search will point you to the exact solution on Stack Overflow.
Code search engines are junk, so for even half-trivial queries that include idiomatic use of a programming language, they will not turn up much. This is when you can turn to GitHub's Advanced Search. It will not let you search directly for code, but you can restrict your search by language, and look at relevant commits and issues. You have a good chance of finding what you want.
GitHub has a thing called gist. These are short snippets (1-150 lines) of code under git control. The gist search engine is awesome for finding good code.
Exercise 1. Find three different ways of iterating over a dictionary and printing out each key-value pair. Explain the design principle of one obvious way of doing something through this example. If you do not know what a dictionary is, that is even better.
5. Why am I committing a crime against humanity by using MATLAB?
Hate speech follows:
Licence fee: MathWorks is the second-biggest enemy of science after academic publishers. You need a pricey licence on every computer where you want to use it. Considering that the language has not seen much development since 1984, it does not seem like a great deal. They, however, ensure that subsequent releases break something, so open source replacement efforts like Octave will never be able to catch up.
Package management does not exist. 
Maintenance: maintaining a toolbox is a major pain since the language forces you to have a very large number of files.
Slow: raw MATLAB code is on par with Python in terms of inefficiency. It can be fast, but only when the operations you use actually translate to low-level linear algebra operations.
MEX: this system was designed to interact with C code. In reality, it only ensures that you tear your hair out if you try to use it.
The interface is not decoupled correctly: you cannot use the editor while running code in the interpreter. Seriously? In 2017?
Namespace mangling: imported functions override older ones. There is no other option. You either overwrite, or you do not use a toolbox.
Write-only language: this one can be argued. With an excessive use of parentheses, MATLAB code can be pretty hard to parse, but allegedly some humans have mastered it.
6. Package management (don't reinvent the wheel, round 2)
Once you go beyond the basic hurdles of Python, you definitely want to use packages. Many of them are extremely well written, efficient, and elegant, although most of the others are complete junk. Package management in Python used to be terrible, but nowadays it is simply bad (this is already a step up from MATLAB or Mathematica). So where does the difficulty stem from? From compilation. Since Python interacts so well with compiled languages, it is the most natural thing to do to bypass the GIL with C or Cython code for some quick calculations, and then get everything back to Python. The problem is that we have to deal with three major operating systems and at least three compiler chain families. Python allows the distribution of pre-compiled packages through a system called wheels, which works okay if the developers have access to all the platforms. Anaconda itself is essentially a package management system for Python, shipping precompiled binaries that are supposed to work together well. 
So, assuming you have Anaconda, and you know which package you want to install, try this first:
conda install whatever_package
If the package is not in the Anaconda ecosystem, you can use the standard Python Package Index (PyPI) through the ultra-universal pip command:
pip install whatever_package
If you do not have Anaconda or you use some shared computer, change this to pip install whatever_package --user. This will install the package locally to your home folder. Depending on your operating system, several things can happen.
Windows: if there are no binaries in Anaconda or on PyPI, good luck. Compilation is notoriously difficult to get right on Windows, both for package developers and for users.
macOS: if there are no binaries in Anaconda or on PyPI, start scratching your head. There are two paths to follow: (i) the code will compile with Apple's purposefully maimed Clang variant. In this case, if you have Xcode installed, things will work with a high chance of success. The downside: Apple hates you. They keep removing support for compiling multithreaded code from Clang. (ii) Install the uncontaminated GNU Compiler Collection (gcc) with brew. You still have a high chance of making it work. The problems begin if the compilation requires many dependent libraries to be present, which may or may not be supported by brew.
Linux: there are no binaries by design. The compiler chain is probably already there. The pain comes from getting the development headers of all necessary libraries, not to mention the right version of the libraries. Ubuntu tends to have outdated libraries.
Exercise 2. Install the conic optimization library Picos. In Anaconda, proceed in two steps: install cvxopt with conda, and then Picos from PyPI. If you are not using Anaconda, a pip install will be just fine.
7. Idiomatic Python
7.1 Tricks with lists
Python has few syntactic candies, precisely because it wants to keep code readable. 
One thing you can do, though, is defining lists in a functional programming way that will be familiar to Mathematica users. This is the crappy way of filling a list with values:
End of explanation
"""

l = [i**2 for i in range(10)]
print(l)
"""
Explanation: This is more Pythonesque:
End of explanation
"""

print(sum([i for i in range(10)]))
print(sum(i for i in range(10)))
"""
Explanation: What you have inside the square brackets is a generator expression. Sometimes you do not need the list, only its values. In such cases, it suffices to use the generator expression. The following two lines of code achieve the same thing:
End of explanation
"""

[i for i in range(10) if i % 2 == 0]
"""
Explanation: Which one is more efficient? Why? You can also use conditionals in the generator expressions. For instance, this is a cheap way to get even numbers:
End of explanation
"""

for _ in range(10):
    print("Vomit")
"""
Explanation: Exercise 3. List all odd square numbers below 1000.
7.2 PEP8
And on the seventh day, God created PEP8. Python Enhancement Proposal (PEP) is a series of ideas and good practices for writing nice Python code and evolving the language. PEP8 is the set of policies that tells you what makes Python syntax pretty (meaning it is easy to read for any other Python programmer). In an ideal world, everybody should follow it. Start programming in Python by keeping good practices in mind. As a starter, Python uses indentation and indentation alone to tell the hierarchy of code. Use EXACTLY four space characters as indentation, always. If somebody tells you to use one tab, butcher the devil on the spot. Bad:
End of explanation
"""

for _ in range(10):
    print("OMG, the code generating this is so prettily indented")
"""
Explanation: Good:
End of explanation
"""

print([1,2,3,4]) # Ugly crap
print([1, 2, 3, 4]) # My god, this is so much easier to read!
"""
Explanation: The code is more readable if it is a bit leafy. 
For this reason, leave a space after every comma just as you would do in natural languages:
End of explanation
"""

for i in range(2,5):
    print(i)
for j in range( -10,0, 1):
    print(j )
"""
Explanation: Spyder has tools for helping you keep to PEP8, but it is not so straightforward in Jupyter, unfortunately.
Exercise 4. Clean up this horrific mess:
End of explanation
"""

t = (2, 3, 4)
print(t)
print(type(t))
"""
Explanation: 7.3 Tuples, swap
Tuples are like lists, but with a fixed number of entries. Technically, this is a tuple:
End of explanation
"""

very_interesting_list = [i**2-1 for i in range(10) if i % 2 != 0]
for i, e in enumerate(very_interesting_list):
    print(i, e)
"""
Explanation: You would, however, seldom use it in this form, because you would just use a list. Tuples come in handy in certain scenarios, like enumerating a list:
End of explanation
"""

another_interesting_list = [i**2+1 for i in range(10) if i % 2 == 0]
for i, j in zip(very_interesting_list, another_interesting_list):
    print(i, j)
"""
Explanation: Here enumerate returns a tuple with the running index and the matching entry of the list. You can also zip several lists and create a stream of tuples:
End of explanation
"""

a, b, c = 1, 2, 3
print(a, b, c)
"""
Explanation: You can use tuple-like assignment to initialize multiple variables:
End of explanation
"""

a, b = b, a
print(a, b)
"""
Explanation: This syntax in turn gives you the most elegant way of swapping the value of two variables:
End of explanation
"""

l = [i for i in range(10)]
print(l)
print(l[2:5])
print(l[2:])
print(l[:-1])
l[-2]
"""
Explanation: 7.4 Indexing
You saw that you can use in, zip, and enumerate to iterate over lists. You can also use slicing on one-dimensional lists:
End of explanation
"""

import numpy as np
a = np.array([[(i+1)*(j+1) for j in range(5)] for i in range(3)])
print(a)
print(a[:, 0])
print(a[0, :])
"""
Explanation: Note that the upper index is not inclusive (the same as in range). 
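Slices also take an optional step, which the examples above do not show; a quick sketch:

```python
l = list(range(10))
print(l[::2])    # every second element: [0, 2, 4, 6, 8]
print(l[1:8:3])  # from 1 up to (not including) 8, in steps of 3: [1, 4, 7]
print(l[::-1])   # a reversed copy of the whole list
```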
The index -1 refers to the last item, -2 to the second last, and so on. Python lists are zero-indexed. Unfortunately, you cannot do convenient double indexing on multidimensional lists. For this, you need numpy. End of explanation """ import sympy as sp import numpy as np from sympy.interactive import printing printing.init_printing(use_latex='mathjax') print(np.sqrt(2)) sp.sqrt(2) """ Explanation: Exercise 5. Get the bottom-right 2x2 submatrix of a. 8. Types Python will hide the pain of working with types: you don't have to declare the type of any variable. But this does not mean they don't have a type. The type gets assigned automatically via an internal type inference mechanism. To demonstrate this, we import the main numerical and symbolic packages, along with an option to pretty-print symbolic operations. End of explanation """ print(type(np.sqrt(2))) print(type(sp.sqrt(2))) """ Explanation: The types tell you why these two look different: End of explanation """ a = [0. for _ in range(5)] b = np.zeros(5) print(a) print(b) print(type(a)) print(type(b)) """ Explanation: The symbolic representation is, in principle, infinite precision, whereas the numerical representation uses 64 bits. As we said above, you can do some things with numpy arrays that you cannot do with lists. Their types can be checked: End of explanation """ print(type(list(b))) print(type(np.array(a))) """ Explanation: There are many differences between numpy arrays and lists. The most important ones are that lists can expand, but arrays cannot, and lists can contain any object, whereas numpy arrays can only contain things of the same type. Type conversion is (usually) easy: End of explanation """ from sympy import sqrt from numpy import sqrt sqrt(2) """ Explanation: This is where the trouble begins: End of explanation """ b = np.zeros(3) b[0] = sp.pi b[1] = sqrt(2) b[2] = 1/3 print(b) """ Explanation: Because of this, never import everything from a package: from numpy import * is forbidden. 
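You can verify the shadowing for yourself: after the second import, the name sqrt silently refers to the numpy version, and exactness is lost:

```python
from sympy import sqrt
print(sqrt(2) ** 2 == 2)  # symbolic: True, exactly
from numpy import sqrt
print(sqrt(2) ** 2 == 2)  # floating point: False, due to rounding
```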
Exercise 6. What would you do to keep everything at infinite precision to ensure the correctness of a computational proof? This does not seem to be working: End of explanation """ sp.sqrt """ Explanation: 9. Read the fine documentation (and write it) Python packages and individual functions typically come with documentation. Documentation is often hosted on ReadTheDocs. For individual functions, you can get the matching documentation as you type. Just press Shift+Tab on a function: End of explanation """ def multiply(a, b): """Multiply two numbers together. :param a: The first number to be multiplied. :type a: float. :param b: The second number to be multiplied. :type b: float. :returns: the multiplication of the two numbers. """ return a*b """ Explanation: In Spyder, Ctrl+I will bring up the documentation of the function. This documentation is called docstring, and it is extremely easy to write, and you should do it yourself if you write a function. It is epsilon effort and it will take you a second to write it. Here is an example: End of explanation """ multiply """ Explanation: Now you can press Shift+Tab to see the above documentation: End of explanation """
deculler/DataScienceTableDemos
Clicks.ipynb
bsd-2-clause
clicks = Table.read_table("http://stat.columbia.edu/~rachel/datasets/nyt1.csv") clicks """ Explanation: This workbook shows a example derived from the EDA exercise in Chapter 2 of Doing Data Science, by o'Neil abd Schutt End of explanation """ age_upper_bounds = [18, 25, 35, 45, 55, 65] def age_range(n): if n == 0: return '0' lower = 1 for upper in age_upper_bounds: if lower <= n < upper: return str(lower) + '-' + str(upper-1) lower = upper return str(lower) + '+' # a little test np.unique([age_range(n) for n in range(100)]) clicks["Age Range"] = clicks.apply(age_range, 'Age') clicks["Person"] = 1 clicks """ Explanation: Well. Half a million rows. That would be painful in excel. Add a column of 1's, so that a sum will count people. End of explanation """ clicks_by_age = clicks.group('Age Range', sum) clicks_by_age clicks_by_age.select(['Age Range', 'Clicks sum', 'Impressions sum', 'Person sum']).barh('Age Range') """ Explanation: Now we can group the table by Age Range and count how many clicks come from each range. 
End of explanation """ clicks_by_age['Gender Mix'] = clicks_by_age['Gender sum'] / clicks_by_age['Person sum'] clicks_by_age["CTR"] = clicks_by_age['Clicks sum'] / clicks_by_age['Impressions sum'] clicks_by_age.select(['Age Range', 'Person sum', 'Gender Mix', 'CTR']) # Format some columns as percent with limited precision clicks_by_age.set_format('Gender Mix', PercentFormatter(1)) clicks_by_age.set_format('CTR', PercentFormatter(2)) clicks_by_age """ Explanation: Now we can do some other interesting summaries of these categories End of explanation """ impressed = clicks.where(clicks['Age'] > 0).where('Impressions') impressed # Impressions by age and gender impressed.pivot(rows='Gender', columns='Age Range', values='Impressions', collect=sum) impressed.pivot("Age Range", "Gender", "Clicks",sum) impressed.pivot_hist('Age Range','Impressions') distributions = impressed.pivot_bin('Age Range','Impressions') distributions impressed['Gen'] = [['Male','Female'][i] for i in impressed['Gender']] impressed """ Explanation: We might want to do the click rate calculation a little more carefully. We don't care about clicks where there are zero impressions or missing age/gender information. So let's filter those out of our data set. End of explanation """ # How does gender and clicks vary with age? gi = impressed.group('Age Range', np.mean).select(['Age Range', 'Gender mean', 'Clicks mean']) gi.set_format(['Gender mean', 'Clicks mean'], PercentFormatter) gi """ Explanation: Group returns a new table. If we wanted to specify the formats on columns of this table, assign it to a name. End of explanation """
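Under the hood, group with sum is just split-apply-combine. A hypothetical plain-Python version of the per-age-range click-through-rate computation, with made-up toy rows standing in for the real half-million-row table, looks like this:

```python
rows = [  # (age_range, clicks, impressions): toy stand-in for the real table
    ('18-24', 1, 5), ('18-24', 0, 3), ('25-34', 2, 10), ('25-34', 1, 4),
]
totals = {}
for age, n_clicks, n_impressions in rows:
    c, i = totals.get(age, (0, 0))
    totals[age] = (c + n_clicks, i + n_impressions)
ctr = {age: c / i for age, (c, i) in totals.items()}
print(ctr)  # {'18-24': 0.125, '25-34': 0.214...}
```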
SHDShim/pytheos
examples/6_p_scale_test_Dorogokupets2007_Au.ipynb
apache-2.0
%config InlineBackend.figure_format = 'retina' """ Explanation: For high dpi displays. End of explanation """ import matplotlib.pyplot as plt import numpy as np from uncertainties import unumpy as unp import pytheos as eos """ Explanation: 0. General note This example compares pressure calculated from pytheos and original publication for the gold scale by Dorogokupets 2007. 1. Global setup End of explanation """ eta = np.linspace(1., 0.65, 8) print(eta) dorogokupets2007_au = eos.gold.Dorogokupets2007() help(dorogokupets2007_au) dorogokupets2007_au.print_equations() dorogokupets2007_au.print_equations() dorogokupets2007_au.print_parameters() v0 = 67.84742110765599 dorogokupets2007_au.three_r v = v0 * (eta) temp = 2500. p = dorogokupets2007_au.cal_p(v, temp * np.ones_like(v)) """ Explanation: 3. Compare End of explanation """ print('for T = ', temp) for eta_i, p_i in zip(eta, p): print("{0: .3f} {1: .2f} ".format(eta_i, p_i)) v = dorogokupets2007_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6) print(1.-(v/v0)) """ Explanation: <img src='./tables/Dorogokupets2007_Au.png'> End of explanation """
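pytheos hides the equation-of-state algebra behind cal_p. For intuition, here is a generic third-order Birch-Murnaghan cold-compression curve. Note that this is not the Vinet-based Dorogokupets 2007 form used above, and the parameters below are purely illustrative, not the fitted gold values:

```python
import numpy as np

def bm3_pressure(v, v0, k0, k0p):
    """Third-order Birch-Murnaghan pressure, in the units of k0."""
    f = (v0 / v) ** (1.0 / 3.0)  # linear compression ratio
    return 1.5 * k0 * (f**7 - f**5) * (1.0 + 0.75 * (k0p - 4.0) * (f**2 - 1.0))

# illustrative numbers only: V in A^3, K0 in GPa
print(bm3_pressure(v=60.0, v0=67.85, k0=167.0, k0p=6.0))
```

At v = v0 the pressure is zero, and it grows monotonically as the volume shrinks, which is the sanity check any EOS implementation should pass.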
DJCordhose/ai
notebooks/rl/berater-v4.ipynb
mit
# !pip install git+https://github.com/openai/baselines >/dev/null # !pip install gym >/dev/null import numpy import gym from gym.utils import seeding from gym import spaces def state_name_to_int(state): state_name_map = { 'S': 0, 'A': 1, 'B': 2, 'C': 3, } return state_name_map[state] def int_to_state_name(state_as_int): state_map = { 0: 'S', 1: 'A', 2: 'B', 3: 'C' } return state_map[state_as_int] class BeraterEnv(gym.Env): """ The Berater Problem Actions: There are 3 discrete deterministic actions: - 0: First Direction - 1: Second Direction - 2: Third Direction / Go home """ metadata = {'render.modes': ['ansi']} showStep = False showDone = True envEpisodeModulo = 100 def __init__(self): self.map = { 'S': [('A', 100), ('B', 400), ('C', 200 )], 'A': [('B', 250), ('C', 400), ('S', 100 )], 'B': [('A', 250), ('C', 250), ('S', 400 )], 'C': [('A', 400), ('B', 250), ('S', 200 )] } self.action_space = spaces.Discrete(3) self.observation_space = spaces.Box(low=numpy.array([0,-1000,-1000,-1000,-1000,-1000,-1000]), high=numpy.array([3,1000,1000,1000,1000,1000,1000]), dtype=numpy.float32) self.reward_range = (-1, 1) self.totalReward = 0 self.stepCount = 0 self.isDone = False self.envReward = 0 self.envEpisodeCount = 0 self.envStepCount = 0 self.reset() self.optimum = self.calculate_customers_reward() def seed(self, seed=None): self.np_random, seed = seeding.np_random(seed) return [seed] def step(self, actionArg): paths = self.map[self.state] action = actionArg destination, cost = paths[action] lastState = self.state lastObState = state_name_to_int(lastState) customerReward = self.customer_reward[destination] info = {"from": self.state, "to": destination} self.state = destination reward = (-cost + self.customer_reward[destination]) / self.optimum self.customer_visited(destination) done = destination == 'S' and self.all_customers_visited() stateAsInt = state_name_to_int(self.state) self.totalReward += reward self.stepCount += 1 self.envReward += reward self.envStepCount += 1 if 
self.showStep: print( "Episode: " + ("%4.0f " % self.envEpisodeCount) + " Step: " + ("%4.0f " % self.stepCount) + #lastState + ':' + str(lastObState) + ' --' + str(action) + '-> ' + self.state + ':' + str(stateAsInt) + lastState + ' --' + str(action) + '-> ' + self.state + ' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) + ' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum) ) if done and not self.isDone: self.envEpisodeCount += 1 if BeraterEnv.showDone: episodes = BeraterEnv.envEpisodeModulo if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0): episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo print( "Done: " + ("episodes=%6.0f " % self.envEpisodeCount) + ("avgSteps=%6.2f " % (self.envStepCount/episodes)) + ("avgTotalReward=% 3.2f" % (self.envReward/episodes) ) ) if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0: self.envReward = 0 self.envStepCount = 0 self.isDone = done observation = self.getObservation(stateAsInt) return observation, reward, done, info def getObservation(self, position): result = numpy.array([ position, self.getEdgeObservation('S','A'), self.getEdgeObservation('S','B'), self.getEdgeObservation('S','C'), self.getEdgeObservation('A','B'), self.getEdgeObservation('A','C'), self.getEdgeObservation('B','C'), ], dtype=numpy.float32) return result def getEdgeObservation(self, source, target): reward = self.customer_reward[target] cost = self.getCost(source,target) result = reward - cost return result def getCost(self, source, target): paths = self.map[source] targetIndex=state_name_to_int(target) for destination, cost in paths: if destination == target: result = cost break return result def customer_visited(self, customer): self.customer_reward[customer] = 0 def all_customers_visited(self): return self.calculate_customers_reward() == 0 def calculate_customers_reward(self): sum = 0 for value in self.customer_reward.values(): sum += value 
return sum def reset(self): self.totalReward = 0 self.stepCount = 0 self.isDone = False reward_per_customer = 1000 self.customer_reward = { 'S': 0, 'A': reward_per_customer, 'B': reward_per_customer, 'C': reward_per_customer, } self.state = 'S' return self.getObservation(state_name_to_int(self.state)) """ Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/rl/berater-v4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Berater Environment v4 Changes from v3 clean up plot performance switched back to ppo2 Next Steps create a complete customer graph including costs of travel non existing connection has hightst penalty per episode set certain rewards to 0 to simulate different customers per consultant make sure things generalize well Links Visualizing progress: https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb Installation (required for colab) End of explanation """ BeraterEnv.showStep = True BeraterEnv.showDone = True env = BeraterEnv() print(env) observation = env.reset() print(observation) for t in range(1000): action = env.action_space.sample() observation, reward, done, info = env.step(action) if done: print("Episode finished after {} timesteps".format(t+1)) break env.close() print(observation) """ Explanation: Try out Environment End of explanation """ !rm -r logs !mkdir logs !mkdir logs/berater # https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py # log_dir = logger.get_dir() log_dir = '/content/logs/berater/' import gym from baselines import deepq from baselines import bench from baselines import logger from baselines.common.vec_env.dummy_vec_env import DummyVecEnv from baselines.common.vec_env.vec_monitor import VecMonitor from baselines.ppo2 import ppo2 BeraterEnv.showStep = False BeraterEnv.showDone = False env = BeraterEnv() wrapped_env = DummyVecEnv([lambda: BeraterEnv()]) monitored_env = 
VecMonitor(wrapped_env, log_dir) model = ppo2.learn(network='mlp', env=monitored_env, total_timesteps=50000) # monitored_env = bench.Monitor(env, log_dir) # https://en.wikipedia.org/wiki/Q-learning#Influence_of_variables # %time model = deepq.learn(\ # monitored_env,\ # seed=42,\ # network='mlp',\ # lr=1e-3,\ # gamma=0.99,\ # total_timesteps=30000,\ # buffer_size=50000,\ # exploration_fraction=0.5,\ # exploration_final_eps=0.02,\ # print_freq=1000) model.save('berater-ppo-v4.pkl') monitored_env.close() """ Explanation: Train model 0.73 would be perfect total reward End of explanation """ !ls -l $log_dir from baselines.common import plot_util as pu results = pu.load_results(log_dir) import matplotlib.pyplot as plt import numpy as np r = results[0] # plt.ylim(-1, 1) # plt.plot(np.cumsum(r.monitor.l), r.monitor.r) plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100)) """ Explanation: Visualizing Results https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb End of explanation """ import numpy as np observation = env.reset() state = np.zeros((1, 2*128)) dones = np.zeros((1)) BeraterEnv.showStep = True BeraterEnv.showDone = False for t in range(1000): actions, _, state, _ = model.step(observation, S=state, M=dones) observation, reward, done, info = env.step(actions[0]) if done: print("Episode finished after {} timesteps".format(t+1)) break env.close() """ Explanation: Enjoy model End of explanation """
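As a sanity check on the 0.73 figure quoted above, the optimal tour can be brute-forced directly from the costs and customer rewards defined in BeraterEnv (re-stated here for the sketch):

```python
from itertools import permutations

# edge costs and customer rewards copied from the BeraterEnv map above
cost = {('S', 'A'): 100, ('S', 'B'): 400, ('S', 'C'): 200,
        ('A', 'B'): 250, ('A', 'C'): 400, ('B', 'C'): 250}
cost.update({(b, a): c for (a, b), c in cost.items()})  # symmetric edges
reward = {'A': 1000, 'B': 1000, 'C': 1000}
optimum = sum(reward.values())

best = max(
    sum(-cost[(a, b)] + reward.get(b, 0) for a, b in zip(('S',) + p, p + ('S',)))
    for p in permutations('ABC')
) / optimum
print(round(best, 2))  # -> 0.73, the S-A-B-C-S tour
```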
tensorflow/docs-l10n
site/ja/guide/upgrade.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""

import tensorflow as tf

print(tf.__version__)
"""
Explanation: Automatically upgrade code to TensorFlow 2
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/guide/upgrade"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/upgrade.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a> </td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/upgrade.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td> <td> <a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/upgrade.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td> </table>
TensorFlow 2.0 includes many API changes, such as reordered arguments, renamed symbols, and changed default values for parameters. Performing all of these modifications by hand is tedious and error-prone. To streamline the changes, and to make your transition to TF 2.0 as seamless as possible, the TensorFlow team has created the tf_upgrade_v2 utility to help transition legacy code to the new API.
Note: tf_upgrade_v2 is installed automatically with TensorFlow 1.13 and later (including all TF 2.0 builds).
Typical usage is like this:
<pre class="devsite-terminal devsite-click-to-copy prettyprint lang-bsh">tf_upgrade_v2 \<br> --intree 
my_project/ \<br> --outtree my_project_v2/ \<br> --reportfile report.txt</pre> これにより、既存の TensorFlow 1.x Python スクリプトが TensorFlow 2.0 に変換され、アップグレード処理が高速化します。 この変換スクリプトは可能な限り自動化を行いますが、スクリプトでは実行できない構文やスタイルの変更もあります。 互換性モジュール いくつかの API シンボルは単に文字列置換を使用するだけではアップグレードできません。コードを確実に TensorFlow 2.0 に対応させるため、アップグレードスクリプトには compat.v1 モジュールが含まれています。このモジュールは tf.foo のような TF 1.x のシンボルを同等の tf.compat.v1.foo 参照に置換します。この互換性モジュールは優れていますが、置換箇所を手作業で見直し、それらを tf.compat.v1 名前空間ではなく tf.* 名前空間の新しい API に早急に移行することをお勧めします。 TensorFlow 2.x で廃止されているモジュール(tf.flags や tf.contrib など)があるため、一部の変更は compat.v1 に切り替えても対応できません。このようなコードをアップグレードするには、追加のライブラリ(absl.flags など)を使用するか、tensorflow/addons にあるパッケージに切り替える必要があるかもしれません。 推奨アップグレード手順 このガイドの残りの部分では、アップグレードスクリプトの使用方法を説明します。アップグレードスクリプトは簡単に使用できますが、次の手順の一環として使用することを強く推奨します。 単体テスト: アップグレード対象のコードにカバレッジ率が適度な単体テストスイートを確実に用意します。このコードは Python で記述されているため、さまざまなミスから保護されることはありません。また、すべての依存物が TensorFlow 2.0 との互換性を確保できるようにアップグレード済みであることを確認してください。 TensorFlow 1.14 のインストール: TensorFlow を最新の TensorFlow 1.x バージョン(1.14 以上)にアップグレードします。このバージョンには tf.compat.v2 に最終的な TensorFlow 2.0 API が含まれています。 1.14 でテスト: この時点で単体テストに合格することを確認します。単体テストはアップグレード中に何度も実行することになるため、安全な状態で開始することが重要です。 アップグレードスクリプトの実行: テストを含むソースツリー全体で tf_upgrade_v2 を実行します。これにより、TensorFlow 2.0 で利用できるシンボルのみを使用する形式にコードがアップグレードされます。廃止されたシンボルは tf.compat.v1 でアクセスできます。このようなシンボルは最終的には手動での対応が必要ですが、すぐに対応する必要はありません。 変換後のテストを TensorFlow 1.14 で実行: コードは引き続き TensorFlow 1.14 で正常に動作するはずです。もう一度単体テストを実行してください。テストで何らかのエラーが発生する場合は、アップグレードスクリプトにバグがあります。その場合はお知らせください。 アップグレードレポートの警告とエラーを確認: このスクリプトは再確認が必要な変換や、必要な手動対応を説明するレポートファイルを書き出します。たとえば、残っているすべての contrib インスタンスを手動で削除する必要がある場合などです。RFC で詳細を確認してください。 TensorFlow 2.0 のインストール: この時点で TensorFlow 2.0 に切り替えても安全です。 v1.disable_v2_behavior でのテスト: テストの main 関数で v1.disable_v2_behavior() を使用してテストをもう一度実行すると、1.14 で実行した場合と同じ結果になるはずです。 V2の動作を有効化: テストが v2 API を使用して動作するようになったため、v2 の動作をオンにすることを検討し始めることができます。コードの記述方法によっては、若干の変更が必要になる場合があります。詳細については、移行ガイドを参照してください。 アップグレードスクリプトの使用 セットアップ 始める前に、TensorlFlow 2.0 
がインストールされていることを確認してください。 End of explanation """ !git clone --branch r1.13.0 --depth 1 https://github.com/tensorflow/models """ Explanation: テスト対象のコードがある tensorflow/models git リポジトリをクローンします。 End of explanation """ !tf_upgrade_v2 -h """ Explanation: ヘルプを読む スクリプトは TensorFlow と共にインストールされています。組み込みのヘルプは次のとおりです。 End of explanation """ !head -n 65 models/samples/cookbook/regression/custom_regression.py | tail -n 10 """ Explanation: TF1 のコード例 単純な TensorFlow 1.0 のスクリプトは次のとおりです。 End of explanation """ !(cd models/samples/cookbook/regression &amp;&amp; python custom_regression.py) """ Explanation: TensorFlow 2.0 がインストールされている状態では動作しません。 End of explanation """ !tf_upgrade_v2 \ --infile models/samples/cookbook/regression/custom_regression.py \ --outfile /tmp/custom_regression_v2.py """ Explanation: 単一ファイル アップグレードスクリプトは単体の Python ファイルに対して実行できます。 End of explanation """ # upgrade the .py files and copy all the other files to the outtree !tf_upgrade_v2 \ --intree models/samples/cookbook/regression/ \ --outtree regression_v2/ \ --reportfile tree_report.txt """ Explanation: コードの修正策が見つからない場合、スクリプトはエラーを出力します。 ディレクトリツリー この単純な例を含む一般的なプロジェクトでは、複数のファイルが使用されています。通常はパッケージ全体をアップグレードするため、スクリプトをディレクトリツリーに対して実行することもできます。 End of explanation """ !(cd regression_v2 && python custom_regression.py 2>&1) | tail """ Explanation: dataset.make_one_shot_iterator 関数に関して警告が 1 つ表示されていることに注意してください。 これで、スクリプトが TensorFlow 2.0 で動作するようになりました。 tf.compat.v1 モジュールのため、変換後のスクリプトは TensorFlow 1.14 でも実行されることに注意してください。 End of explanation """ !head -n 20 tree_report.txt """ Explanation: 詳細レポート このスクリプトは、詳細な変更のリストも報告します。この例では安全でない可能性のある変換が 1 つ検出され、ファイルの先頭で警告が表示されています。 End of explanation """ %%writefile dropout.py import tensorflow as tf d = tf.nn.dropout(tf.range(10), 0.2) z = tf.zeros_like(d, optimize=False) !tf_upgrade_v2 \ --infile dropout.py \ --outfile dropout_v2.py \ --reportfile dropout_report.txt > /dev/null !cat dropout_report.txt """ Explanation: 再度 Dataset.make_one_shot_iterator function に関して警告が 1 
つ表示されていることに注意してください。 その他の場合、重要な変更の根拠が出力されます。 End of explanation """ !cat dropout_v2.py """ Explanation: 変更されたファイルの内容は次のとおりです。スクリプトがどのように引数名を追加し、移動および名前変更された引数を処理しているかに注目してください。 End of explanation """ !tf_upgrade_v2 \ --intree models/research/deeplab \ --outtree deeplab_v2 \ --reportfile deeplab_report.txt > /dev/null """ Explanation: 大規模なプロジェクトでは、若干のエラーが発生する可能性があります。たとえば、deeplab モデルを変換します。 End of explanation """ !ls deeplab_v2 """ Explanation: 次のような出力ファイルが生成されました。 End of explanation """ !cat deeplab_report.txt | grep -i models/research/deeplab | grep -i error | head -n 3 """ Explanation: しかし、エラーが発生していました。レポートは、実行前に修正する必要があるものを正確に把握するのに役立ちます。最初の 3 つのエラーは次のとおりです。 End of explanation """ !cat dropout.py !tf_upgrade_v2 --mode SAFETY --infile dropout.py --outfile dropout_v2_safe.py > /dev/null !cat dropout_v2_safe.py """ Explanation: "Safety" モード この変換スクリプトには tensorflow.compat.v1 モジュールを使用するようにインポートを変更するだけの侵襲性の低い SAFETY モードもあります。 End of explanation """
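What SAFETY mode does — rewriting only the import lines and leaving the rest of the code untouched — can be illustrated with a small, hypothetical Python sketch. This is not the actual tf_upgrade_v2 implementation; the `rewrite_imports` helper and its regular expression are illustrative only:

```python
import re

def rewrite_imports(source):
    """Rewrite bare TensorFlow imports to the tf.compat.v1 module.

    Only import lines are touched; everything else is left as-is,
    which is the spirit of SAFETY mode.
    """
    pattern = re.compile(r'^import tensorflow as tf$', flags=re.MULTILINE)
    return pattern.sub('import tensorflow.compat.v1 as tf', source)

original = 'import tensorflow as tf\nd = tf.nn.dropout(tf.range(10), 0.2)\n'
print(rewrite_imports(original))
```

Because only the import statement changes, any `tf.foo` call in the body now resolves through the compatibility module, without the script having to understand the body at all.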
FRBs/FRB
docs/nb/DM_Halos and DM_IGM.ipynb
bsd-3-clause
# imports from importlib import reload import numpy as np from scipy.interpolate import InterpolatedUnivariateSpline as IUS from astropy import units as u from frb.halos.models import ModifiedNFW from frb.halos import models as frb_halos from frb.halos import hmf as frb_hmf from frb.dm import igm as frb_igm from frb.figures import utils as ff_utils from matplotlib import pyplot as plt plt.rcParams['font.size'] = 17 """ Explanation: DM_Halos and DM_IGM Splitting $\langle DM_{cosmic}\rangle$ into its constituents. End of explanation """ help(frb_igm.f_diffuse) # Define redshifts zvals = np.linspace(0, 8) # Get <n_e> f_diffuse, rho_diffuse = frb_igm.f_diffuse(zvals, return_rho = True) # Plot fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7)) fig.tight_layout() ax1 = axs[0] ax1.plot(zvals, f_diffuse, lw=2) ax1.set_ylabel(r'$\langle f_{diffuse, cosmic}\rangle$') ax2 = axs[1] ax2.plot(zvals, rho_diffuse.to('Msun*Mpc**-3'), lw=2) ax2.set_yscale("log") ax2.set_xlabel('z') ax2.set_ylabel(r'$\langle \rho_{diffuse, cosmic}\rangle$ $M_\odot~Mpc^{-3}$') plt.show() """ Explanation: $\langle \rho_{diffuse, cosmic}\rangle$ Use f_diffuse to calculate the average mass fraction of diffuse gas and diffuse gas density (physical). Math described in DM_cosmic.ipynb. 
End of explanation """ help(frb_igm.ne_cosmic) # Define redshifts zvals = np.linspace(0, 8) # Get <n_e> avg_ne = frb_igm.ne_cosmic(zvals) # Visualize fig = plt.figure(figsize = (10, 6)) plt.plot(zvals, avg_ne, label=r'$\langle n_{e, cosmic}\rangle$', lw=2) plt.yscale("log") plt.legend(loc = "upper left") plt.xlabel('z') plt.ylabel(r'$\langle n_{e, cosmic}\rangle$ [$cm^{-3}$]') plt.show() """ Explanation: $\langle n_{e,cosmic}\rangle$ End of explanation """ help(frb_igm.average_DM) DM_cosmic, zvals = frb_igm.average_DM(8, cumul=True) # Visualize fig = plt.figure(figsize = (10, 6)) plt.plot(zvals, DM_cosmic, lw=2) plt.xlabel('z') plt.ylabel(r'$\langle DM_{cosmic}\rangle$ $pc~cm^{-3}$') plt.show() """ Explanation: $\langle DM_{cosmic}\rangle$ See DM_cosmic.ipynb for details regarding its computation. End of explanation """ help(frb_igm.average_DMhalos) # evaluation frb_igm.average_DMhalos(0.1) # get cumulative DM_halos dm, zvals = frb_igm.average_DMhalos(0.1, cumul = True) dm zvals fhot_array = [0.2, 0.5, 0.75] rmax_array = [0.5, 1.0 , 2.0] # <DM_halos> for different f_hot fig, axs = plt.subplots(2,1, sharex=True, figsize = (8,7)) fig.tight_layout() ax1 = axs[0] for f_hot in fhot_array: DM_halos, zeval = frb_igm.average_DMhalos(3, f_hot = f_hot, cumul=True) ax1.plot(zeval, DM_halos, label="{:0.1f}".format(f_hot)) ax1.legend(title="f_hot") ax1.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$') # <DM_halos> for different rmax ax2 = axs[1] for rmax in rmax_array: DM_halos, zeval = frb_igm.average_DMhalos(3, rmax = rmax, cumul = True) ax2.plot(zeval, DM_halos, label="{:0.1f}".format(rmax)) ax2.legend(title="rmax") ax2.set_xlabel('z') ax2.set_ylabel(r'$\langle DM_{halos}\rangle$ $pc~cm^{-3}$') plt.show() # Limits of calculation frb_igm.average_DMhalos(3.1) # Failure above redshift 5 frb_igm.average_DMhalos(5.1) help(frb_igm.average_DMIGM) # Sanity check. 
<DM_cosmic> - (<DM_halos> + <DM_IGM>) = 0
dm, zvals = frb_igm.average_DM(0.1, cumul=True)
dm_halos, _ = frb_igm.average_DMhalos(0.1, cumul=True)
dm_igm, _ = frb_igm.average_DMIGM(0.1, cumul=True)
plt.plot(zvals, dm - dm_halos - dm_igm)
plt.ylabel(r"DM $pc~cm^{-3}$")
plt.xlabel("z")
plt.show()
"""
Explanation: $\langle DM_{halos}\rangle$ and $\langle DM_{IGM}\rangle$
The fraction of free electrons present in halos should be equal to the fraction of diffuse gas in halos, assuming the ionization state of the individual species is only dependent on redshift (and not gas density as well).
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{diffuse,halos}}{\rho_{diffuse,cosmic}}\\
& = \frac{\rho_{b, halos}f_{hot}}{\rho_{b, cosmic}f_{diffuse, cosmic}}\\
\end{aligned}
$$
Here $\rho_b$ refers to baryon density. $f_{hot}$ refers to the fraction of baryons in halos that is in the hot phase ($\sim10^7$ K). The remaining baryons are either in the neutral phase or in dense objects like stars. Assuming halos have the same baryon mass fraction as the universal average ($\Omega_b/\Omega_M$),
$$
\begin{aligned}
\frac{\langle n_{e, halos}\rangle}{\langle n_{e, cosmic}\rangle} & = \frac{\rho_{m, halos}f_{hot}}{\rho_{m, cosmic}f_{diffuse, cosmic}}\\
& = \frac{f_{halos} f_{hot}}{f_{diffuse, cosmic}}\\
\end{aligned}
$$
$f_{halos}$ can be computed as a function of redshift by integrating the halo mass function (HMF) times mass over some mass range and dividing it by the density of matter in the universe. This allows us to compute a line of sight integral of $\langle n_{e, halos} \rangle$ to get $\langle DM_{halos}\rangle$. $\langle DM_{IGM}\rangle$ is then obtained by subtracting this from $\langle DM_{cosmic}\rangle$.
Apart from $f_{hot}$ being an obvious free parameter, we also allow variation in the radial extent of halos. This is encoded in the parameter $r_{max}$, which is the radial extent of halos in units of $r_{200}$. 
Setting $r_{max}>1$ (for all halos; currently it is mass independent) smoothly extends the NFW profile and the modified profile of the encased diffuse baryons. End of explanation
"""
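The ratio derived above reduces to simple arithmetic once the three fractions are fixed. As a toy numeric sketch — the fraction values below are illustrative assumptions, not outputs of the frb package:

```python
def halo_electron_fraction(f_halos, f_hot, f_diffuse):
    """<n_e,halos> / <n_e,cosmic> = f_halos * f_hot / f_diffuse."""
    return f_halos * f_hot / f_diffuse

# Illustrative (made-up) values: 30% of matter in halos, 75% of halo
# baryons in the hot phase, 90% of cosmic baryons in the diffuse phase.
ratio = halo_electron_fraction(0.3, 0.75, 0.9)
print(ratio)  # 0.25
```

With these toy numbers, a quarter of the cosmic free electrons would sit in halos, so $\langle DM_{halos}\rangle$ would be a quarter of $\langle DM_{cosmic}\rangle$ at fixed redshift.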
supergis/git_notebook
geospatial/geojson/pygeojson.ipynb
gpl-3.0
from pprint import pprint
"""
Explanation: Python support library for GeoJSON. openthings@163.com, 2016-04.
IETF standard project: https://github.com/geojson
PyPI library: https://pypi.python.org/pypi/geojson
* Other supporting libraries include: GeoPandas, Shapely, GDAL, GIScript
End of explanation
"""
from geojson import Point
Point((-115.81, 37.24)) # doctest: +ELLIPSIS
"""
Explanation: Installation
python-geojson is compatible with Python 2.6, 2.7, 3.2, 3.3, and 3.4. It is listed on PyPi as ‘geojson’. The recommended way to install is via pip:
pip install geojson
GeoJSON Objects
This library implements all the GeoJSON Objects described in The GeoJSON Format Specification.
Point
End of explanation
"""
from geojson import MultiPoint
MultiPoint([(-155.52, 19.61), (-156.22, 20.74), (-157.97, 21.46)]) # doctest: +ELLIPSIS
#{"coordinates": [[-155.5..., 19.6...], [-156.2..., 20.7...], [-157.9..., 21.4...]], "type": "MultiPoint"}
"""
Explanation: Visualize the result of the example above here.
General information about Point can be found in Section 2.1.2 and Appendix A: Point within The GeoJSON Format Specification.
MultiPoint
End of explanation
"""
from geojson import LineString
lstring = LineString([(8.919, 44.4074), (8.923, 44.4075)]) # doctest: +ELLIPSIS
#{"coordinates": [[8.91..., 44.407...], [8.92..., 44.407...]], "type": "LineString"}
pprint(lstring)
"""
Explanation: Visualize the result of the example above here. 
General information about LineString can be found in Section 2.1.4 and Appendix A: LineString within The GeoJSON Format Specification. MultiLineString End of explanation """ from geojson import Polygon # no hole within polygon polya = Polygon([[(2.38, 57.322), (23.194, -20.28), (-120.43, 19.15), (2.38, 57.322)]]) # doctest: +ELLIPSIS #{"coordinates": [[[2.3..., 57.32...], [23.19..., -20.2...], [-120.4..., 19.1...]]], "type": "Polygon"} pprint(polya) # hole within polygon polyb = Polygon([ [(2.38, 57.322), (23.194, -20.28), (-120.43, 19.15), (2.38, 57.322)], [(-5.21, 23.51), (15.21, -10.81), (-20.51, 1.51), (-5.21, 23.51)] ]) # doctest: +ELLIPSIS #{"coordinates": [[[2.3..., 57.32...], [23.19..., -20.2...], [-120.4..., 19.1...]], #[[-5.2..., 23.5...], [15.2..., -10.8...], [-20.5..., 1.5...], [-5.2..., 23.5...]]], "type": "Polygon"} pprint(polyb) """ Explanation: Visualize the result of the example above here. General information about MultiLineString can be found in Section 2.1.5 and Appendix A: MultiLineString within The GeoJSON Format Specification. Polygon End of explanation """ from geojson import MultiPolygon mp = MultiPolygon([ ([(3.78, 9.28), (-130.91, 1.52), (35.12, 72.234), (3.78, 9.28)],), ([(23.18, -34.29), (-1.31, -4.61), (3.41, 77.91), (23.18, -34.29)],) ]) # doctest: +ELLIPSIS #{"coordinates": [[[[3.7..., 9.2...], [-130.9..., 1.5...], [35.1..., 72.23...]]], #[[[23.1..., -34.2...], [-1.3..., #-4.6...], [3.4..., 77.9...]]]], "type": "MultiPolygon"} pprint(mp) """ Explanation: Visualize the results of the example above here. General information about Polygon can be found in Section 2.1.6 and Appendix A: Polygon within The GeoJSON Format Specification. 
MultiPolygon End of explanation """ from geojson import GeometryCollection, Point, LineString my_point = Point((23.532, -63.12)) my_line = LineString([(-152.62, 51.21), (5.21, 10.69)]) gc = GeometryCollection([my_point, my_line]) # doctest: +ELLIPSIS #{"geometries": [{"coordinates": [23.53..., -63.1...], "type": "Point"}, #{"coordinates": [[-152.6..., 51.2...], [5.2..., 10.6...]], "type": "LineString"}], "type": "GeometryCollection"} pprint(gc) """ Explanation: Visualize the result of the example above here. General information about MultiPolygon can be found in Section 2.1.7 and Appendix A: MultiPolygon within The GeoJSON Format Specification. GeometryCollection End of explanation """ from geojson import Feature, Point my_point = Point((-3.68, 40.41)) f1 = Feature(geometry=my_point) # doctest: +ELLIPSIS #{"geometry": {"coordinates": [-3.68..., 40.4...], "type": "Point"}, "properties": {}, "type": "Feature"} pprint(f1) f2 = Feature(geometry=my_point, properties={"country": "Spain"}) # doctest: +ELLIPSIS #{"geometry": {"coordinates": [-3.68..., 40.4...], "type": "Point"}, "properties": {"country": "Spain"}, #"type": "Feature"} pprint(f2) f3 = Feature(geometry=my_point, id=27) # doctest: +ELLIPSIS #{"geometry": {"coordinates": [-3.68..., 40.4...], "type": "Point"}, "id": 27, "properties": {}, "type": "Feature"} pprint(f3) """ Explanation: Visualize the result of the example above here. General information about GeometryCollection can be found in Section 2.1.8 and Appendix A: GeometryCollection within The GeoJSON Format Specification. 
Feature End of explanation """ from geojson import Feature, Point, FeatureCollection my_feature = Feature(geometry=Point((1.6432, -19.123))) my_other_feature = Feature(geometry=Point((-80.234, -22.532))) fc = FeatureCollection([my_feature, my_other_feature]) # doctest: +ELLIPSIS #{"features": [{"geometry": {"coordinates": [1.643..., -19.12...], "type": "Point"}, "properties": {}, "type": #"Feature"}, {"geometry": {"coordinates": [-80.23..., -22.53...], "type": "Point"}, "properties": {}, "type": #"Feature"}], "type": "FeatureCollection"} pprint(fc) """ Explanation: Visualize the results of the examples above here. General information about Feature can be found in Section 2.2 within The GeoJSON Format Specification. FeatureCollection End of explanation """ import geojson my_point = geojson.Point((43.24, -1.532)) pprint(my_point) # doctest: +ELLIPSIS #{"coordinates": [43.2..., -1.53...], "type": "Point"} dump = geojson.dumps(my_point, sort_keys=True) pprint(dump) # doctest: +ELLIPSIS #'{"coordinates": [43.2..., -1.53...], "type": "Point"}' gj = geojson.loads(dump) # doctest: +ELLIPSIS #{"coordinates": [43.2..., -1.53...], "type": "Point"} pprint(gj) """ Explanation: Visualize the result of the example above here. General information about FeatureCollection can be found in Section 2.3 within The GeoJSON Format Specification. GeoJSON encoding/decoding All of the GeoJSON Objects implemented in this library can be encoded and decoded into raw GeoJSON with the geojson.dump, geojson.dumps, geojson.load, and geojson.loads functions. 
End of explanation """ import geojson class MyPoint(): def __init__(self, x, y): self.x = x self.y = y @property def __geo_interface__(self): return {'type': 'Point', 'coordinates': (self.x, self.y)} point_instance = MyPoint(52.235, -19.234) geojson.dumps(point_instance, sort_keys=True) # doctest: +ELLIPSIS #'{"coordinates": [52.23..., -19.23...], "type": "Point"}' """ Explanation: Custom classes This encoding/decoding functionality shown in the previous can be extended to custom classes using the interface described by the geo_interface Specification. End of explanation """ import geojson my_line = LineString([(-152.62, 51.21), (5.21, 10.69)]) my_feature = geojson.Feature(geometry=my_line) list(geojson.utils.coords(my_feature)) # doctest: +ELLIPSIS #[(-152.62..., 51.21...), (5.21..., 10.69...)] """ Explanation: Helpful utilities coords geojson.utils.coords yields all coordinate tuples from a geometry or feature object. End of explanation """ import geojson new_point = geojson.utils.map_coords(lambda x: x/2, geojson.Point((-115.81, 37.24))) geojson.dumps(new_point, sort_keys=True) # doctest: +ELLIPSIS #'{"coordinates": [-57.905..., 18.62...], "type": "Point"}' """ Explanation: map_coords geojson.utils.map_coords maps a function over all coordinate tuples and returns a geometry of the same type. Useful for translating a geometry in space or flipping coordinate order. End of explanation """ import geojson validation = geojson.is_valid(geojson.Point((-3.68,40.41,25.14))) print(validation['valid']) #'no' print(validation['message']) #'the "coordinates" member must be a single position' """ Explanation: validation geojson.is_valid provides validation of GeoJSON objects. End of explanation """ import geojson geojson.utils.generate_random("LineString") # doctest: +ELLIPSIS #{"coordinates": [...], "type": "LineString"} """ Explanation: generate_random geojson.utils.generate_random yields a geometry type with random data End of explanation """
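The geo_interface protocol shown above is just a dictionary contract, so the encoding step can be sketched without python-geojson at all, using only the standard json module. The `to_geojson` helper below is hypothetical, not part of the geojson package:

```python
import json

class MyPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @property
    def __geo_interface__(self):
        return {'type': 'Point', 'coordinates': (self.x, self.y)}

def to_geojson(obj):
    # Any object exposing __geo_interface__ can be serialized this way.
    return json.dumps(obj.__geo_interface__, sort_keys=True)

print(to_geojson(MyPoint(52.235, -19.234)))
# {"coordinates": [52.235, -19.234], "type": "Point"}
```

This is essentially what geojson.dumps does when handed a custom class: it asks the object for its geo interface dictionary and serializes that as JSON.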
authman/DAT210x
Module5/Module5 - Lab9.ipynb
mit
import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D matplotlib.style.use('ggplot') # Look Pretty """ Explanation: DAT210x - Programming with Python for DS Module5- Lab9 End of explanation """ def drawLine(model, X_test, y_test, title, R2): fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(X_test, y_test, c='g', marker='o') ax.plot(X_test, model.predict(X_test), color='orange', linewidth=1, alpha=0.7) title += " R2: " + str(R2) ax.set_title(title) print(title) print("Intercept(s): ", model.intercept_) plt.show() def drawPlane(model, X_test, y_test, title, R2): # This convenience method will take care of plotting your # test observations, comparing them to the regression plane, # and displaying the R2 coefficient fig = plt.figure() ax = Axes3D(fig) ax.set_zlabel('prediction') # You might have passed in a DataFrame, a Series (slice), # an NDArray, or a Python List... so let's keep it simple: X_test = np.array(X_test) col1 = X_test[:,0] col2 = X_test[:,1] # Set up a Grid. 
We could have predicted on the actual
    # col1, col2 values directly; but that would have generated
    # a mesh with WAY too fine a grid, which would have detracted
    # from the visualization
    x_min, x_max = col1.min(), col1.max()
    y_min, y_max = col2.min(), col2.max()
    x = np.arange(x_min, x_max, (x_max-x_min) / 10)
    y = np.arange(y_min, y_max, (y_max-y_min) / 10)
    x, y = np.meshgrid(x, y)

    # Predict based on possible input values that span the domain
    # of the x and y inputs:
    z = model.predict( np.c_[x.ravel(), y.ravel()] )
    z = z.reshape(x.shape)

    ax.scatter(col1, col2, y_test, c='g', marker='o')
    ax.plot_wireframe(x, y, z, color='orange', alpha=0.7)

    title += " R2: " + str(R2)
    ax.set_title(title)
    print(title)
    print("Intercept(s): ", model.intercept_)

    plt.show()
"""
Explanation: A Convenience Function
This convenience method will take care of plotting your test observations, comparing them to the regression line, and displaying the R2 coefficient
End of explanation
"""
# .. your code here ..
"""
Explanation: The Assignment
Let's get started! First, as is your habit, inspect your dataset in a text editor, or spreadsheet application. The first thing you should notice is that the first column is both unique (the name of each college), as well as unlabeled. This is a HINT that it must be the index column. If you do not indicate to Pandas that you already have an index column, it'll create one for you, which would be undesirable since you already have one.
Review the .read_csv() documentation and discern how to load up a dataframe while indicating which existing column is to be taken as an index. Then, load up the College dataset into a variable called X:
End of explanation
"""
X.Private = X.Private.map({'Yes':1, 'No':0})
"""
Explanation: This line isn't necessary for your purposes; but we'd just like to show you an additional way to encode features directly. The .map() method is like .apply(), but instead of taking in a lambda / function, you simply provide a mapping of keys:values. 
If you decide to embark on the "Data Scientist Challenge", this line of code will save you the trouble of converting it through other means:
End of explanation
"""
# .. your code here ..
"""
Explanation: Create your linear regression model here and store it in a variable called model. Don't actually train or do anything else with it yet:
End of explanation
"""
# .. your code here ..
"""
Explanation: The first relationship we're interested in is the number of accepted students, as a function of the amount charged for room and board. Using indexing, create two slices (series). One will just store the room and board column, the other will store the accepted students column. Then use train_test_split to cut your data up into X_train, X_test, y_train, y_test, with a test_size of 30% and a random_state of 7.
End of explanation
"""
# .. your code here ..
"""
Explanation: Fit and score your model appropriately. Store the score in the score variable.
End of explanation
"""
drawLine(model, X_test, y_test, "Accept(Room&Board)", score)
"""
Explanation: We'll take it from here, buddy:
End of explanation
"""
# .. your code here ..
drawLine(model, X_test, y_test, "Accept(Enroll)", score)
"""
Explanation: Duplicate the process above; this time, model the number of accepted students, as a function of the number of enrolled students per college.
End of explanation
"""
# .. your code here ..
drawLine(model, X_test, y_test, "Accept(F.Undergrad)", score)
"""
Explanation: Duplicate the process above; this time, model the number of accepted students, as a function of the number of full-time undergraduate students per college.
End of explanation
"""
# .. your code here ..
drawPlane(model, X_test, y_test, "Accept(Room&Board,Enroll)", score)
"""
Explanation: Duplicate the process above (almost). This time it's going to be a bit more complicated. 
Instead of modeling one feature as a function of another, you will attempt to do multivariate linear regression to model one feature as a function of TWO other features. Model the number of accepted students as a function of the amount charged for room and board and the number of enrolled students. To do this, instead of creating a regular slice for a single-feature input, simply create a slice that contains both columns you wish to use as inputs. Your training labels will remain a single slice. End of explanation """
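To see what the fit-and-score steps in this lab actually compute for a single feature, here is a from-scratch sketch of ordinary least squares and the R² coefficient using only the standard library — a stand-in to build intuition for the scikit-learn calls, not the lab's intended solution:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def r2_score(xs, ys, slope, intercept):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # exactly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept, r2_score(xs, ys, slope, intercept))  # 2.0 1.0 1.0
```

A perfect linear relationship gives R² of 1.0; the real College data will score lower, which is exactly what drawLine reports in its title.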
gregcaporaso/sketchbook
2015.07.12-species-classifiers.ipynb
bsd-3-clause
%pylab inline from __future__ import division import numpy as np import pandas as pd import skbio import qiime_default_reference """ Explanation: In this recipe, we're going to build taxonomic classifiers for amplicon sequencing. We'll do this for 16S using some scikit-learn classifiers. End of explanation """ ### ## UPDATE THIS CELL TO USE THE DEFAULT REFERENCE AGAIN!! ### unaligned_ref_fp = qiime_default_reference.get_reference_sequences() aligned_ref_fp = "/Users/caporaso/data/gg_13_8_otus/rep_set_aligned/97_otus.fasta" #qiime_default_reference.get_template_alignment() tax_ref_fp = "/Users/caporaso/data/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt" #qiime_default_reference.get_reference_taxonomy() """ Explanation: We're going to work with the qiime-default-reference so we have easy access to some sequences. For reasons we'll look at below, we're going to load the unaligned reference sequences (which are 97% OTUs) and the aligned reference sequences (which are 85% OTUs). If you want to adapt this recipe to train and test a classifier on other files, just set the variable names below to the file paths that you'd like to use for training. End of explanation """ fwd_primer = skbio.DNA("GTGCCAGCMGCCGCGGTAA", {'id':'fwd-primer'}) rev_primer = skbio.DNA("GGACTACHVGGGTWTCTAAT", {'id':'rev-primer'}).reverse_complement() """ Explanation: Several recent studies of amplicon taxonomic assignment methods (Mizrahi-Man et al. 2013, Werner et al. 2012) have suggested that training Naive Bayes taxonomic classifiers against only the region of a sequence that was amplified, rather than a full length sequence, will give better taxonomic assignment results. So, lets start by slicing our reference sequences by finding some commonly used 16S primers so we train only on the fragment of the 16S that we would amplify in an amplicon survey. We'll define the forward and reverse primers as skbio.DNA objects. 
The primers that we're using here are pulled from Supplementary File 1 of Caporaso et al. 2012. Note that we're reverse complementing the reverse primer when we load it here so that it's in the same orientation as our reference sequences.
End of explanation
"""
def seq_to_regex(seq):
    """ Convert a sequence to a regular expression """
    result = []
    sequence_class = seq.__class__
    for base in str(seq):
        if base in sequence_class.degenerate_chars:
            result.append('[{0}]'.format(
                ''.join(sequence_class.degenerate_map[base])))
        else:
            result.append(base)
    return ''.join(result)
"""
Explanation: The typical way to approach the problem of finding the boundaries of a short sequence in a longer sequence would be to use pairwise alignment. But, we're going to try a different approach here since pairwise alignment is inherently slow (it scales quadratically). Because these are sequencing primers, they're designed to be unique (so there shouldn't be multiple matches of a primer to a sequence), and they're designed to match as many sequences as possible. So let's try using regular expressions to match our sequencing primers in the reference database. Regular expression matching scales linearly, so it is much faster to apply to many sequences. First, we'll define a function to generate a regular expression from a Sequence object. This functionality will be in scikit-bio's next official release (it was recently added as part of issue #1005).
End of explanation
"""
regex = '({0}.*{1})'.format(seq_to_regex(fwd_primer),
                            seq_to_regex(rev_primer))
regex
"""
Explanation: We can then apply this to define a regular expression that will match our forward primer, the following sequence, and then the reverse primer. We can then use the resulting matches to find the region of our sequences that is bound by our forward and reverse primer. 
End of explanation
"""
seq_count = 0
match_count = 0
for seq in skbio.io.read(unaligned_ref_fp, format='fasta', constructor=skbio.DNA):
    seq_count += 1
    for match in seq.find_with_regex(regex):
        match_count += 1
match_percentage = (match_count / seq_count) * 100
print('{0} of {1} ({2:.2f}%) sequences have exact matches to the regular expression.'.format(match_count, seq_count, match_percentage))
"""
Explanation: Next, let's apply this to all of our unaligned sequences and find out how many reference sequences our pattern matches.
End of explanation
"""
starts = []
stops = []
for seq in skbio.io.read(aligned_ref_fp, format='fasta', constructor=skbio.DNA):
    for match in seq.find_with_regex(regex, ignore=seq.gaps()):
        starts.append(match.start)
        stops.append(match.stop)
"""
Explanation: So we're matching only about 80% of our reference sequences with this pattern. The implication for this application is that we'd only know how to slice 80% of our sequences, and as a result, we'd only have 80% of our sequences to train on. In addition to this being a problem because we want to train on as many sequences as possible, it's very likely that certain taxonomic groups are left out altogether. So, using regular expressions this way won't work. However... this is exactly what multiple sequence alignments are good for. If we could match our primers against aligned reference sequences, then finding matches in 80% of our sequences would give us an idea of how to slice all of our sequences, since the purpose of a multiple sequence alignment is to normalize the position numbers across all of the sequences in a sequence collection. The problem is that the gaps in the alignment would make it harder to match our regular expression, as gaps would show up that disrupt our matches. 
We can get around this using the ignore parameter to DNA.find_with_regex, which takes a boolean vector (a fancy name for an array or list of boolean values) indicating positions that should be ignore in the regular expression match. Let's try applying our regular expression to the aligned reference sequences and keeping track of where each match starts and stops. End of explanation """ pd.Series(starts).describe() pd.Series(stops).describe() locus = slice(int(np.median(starts)), int(np.median(stops))) locus subset_fraction = 1.0 kmer_counts = [] seq_ids = [] for seq in skbio.io.read(aligned_ref_fp, format='fasta', constructor=skbio.DNA): if np.random.random() > subset_fraction: continue seq_ids.append(seq.metadata['id']) sliced_seq = seq[locus].degap() kmer_counts.append(sliced_seq.kmer_frequencies(8)) from sklearn.feature_extraction import DictVectorizer X = DictVectorizer().fit_transform(kmer_counts) taxonomy_level = 7 # id_to_taxon = {} with open(tax_ref_fp) as f: for line in f: id_, taxon = line.strip().split('\t') id_to_taxon[id_] = '; '.join(taxon.split('; ')[:taxonomy_level]) y = [id_to_taxon[seq_id] for seq_id in seq_ids] from sklearn.feature_selection import SelectPercentile X = SelectPercentile().fit_transform(X, y) from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) %matplotlib inline import matplotlib.pyplot as plt def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues): plt.figure() plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() plt.ylabel('Known taxonomy') plt.xlabel('Predicted taxonomy') plt.tight_layout() plt.show() from sklearn.svm import SVC y_pred = SVC(C=10, kernel='rbf', degree=3, gamma=0.001).fit(X_train, y_train).predict(X_test) from sklearn.metrics import confusion_matrix, f1_score cm = confusion_matrix(y_test, y_pred) cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis] plot_confusion_matrix(cm_normalized, 
title='Normalized confusion matrix') print("F-score: %1.3f" % f1_score(y_test, y_pred, average='micro')) from sklearn.naive_bayes import MultinomialNB y_pred = MultinomialNB().fit(X_train, y_train).predict(X_test) cm = confusion_matrix(y_test, y_pred) cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis] plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix') print("F-score: %1.3f" % f1_score(y_test, y_pred, average='micro')) """ Explanation: If we now look at the distribution of the start and stop positions of each regular expression match, we see that each distribution is narrowly focused around certain positions. We can use those to define the region that we want to slice from our reference alignment, and then remove the gaps from all sequences to train our classifiers. End of explanation """
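The degenerate-base expansion that seq_to_regex performs can be reproduced without scikit-bio. Below is a self-contained sketch with a hand-written, partial IUPAC map — the mapping table and the character ordering inside the classes are illustrative, not necessarily identical to scikit-bio's:

```python
import re

# Partial IUPAC degenerate-base map, just enough for the primers above.
DEGENERATE = {'M': 'AC', 'H': 'ACT', 'V': 'ACG', 'W': 'AT',
              'R': 'AG', 'Y': 'CT', 'N': 'ACGT'}

def seq_to_regex(seq):
    """Expand degenerate bases into regex character classes."""
    return ''.join('[{0}]'.format(DEGENERATE[b]) if b in DEGENERATE else b
                   for b in seq)

fwd = seq_to_regex('GTGCCAGCMGCCGCGGTAA')
print(fwd)  # GTGCCAGC[AC]GCCGCGGTAA
print(bool(re.fullmatch(fwd, 'GTGCCAGCAGCCGCGGTAA')))  # True
```

Each degenerate base becomes a one-character alternation, which is why a single pattern can match many distinct 16S sequences in linear time.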
tensorflow/docs-l10n
site/en-snapshot/lite/performance/quantization_debugger.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ # Quantization debugger is available from TensorFlow 2.7.0 !pip uninstall -y tensorflow !pip install tf-nightly !pip install tensorflow_datasets --upgrade # imagenet_v2 needs latest checksum import matplotlib.pyplot as plt import numpy as np import pandas as pd import tensorflow as tf import tensorflow_datasets as tfds import tensorflow_hub as hub #@title Boilerplates and helpers MODEL_URI = 'https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5' def process_image(data): data['image'] = tf.image.resize(data['image'], (224, 224)) / 255.0 return data # Representative dataset def representative_dataset(dataset): def _data_gen(): for data in dataset.batch(1): yield [data['image']] return _data_gen def eval_tflite(tflite_model, dataset): """Evaluates tensorflow lite classification model with the given dataset.""" interpreter = tf.lite.Interpreter(model_content=tflite_model) interpreter.allocate_tensors() input_idx = interpreter.get_input_details()[0]['index'] output_idx = interpreter.get_output_details()[0]['index'] results = [] for data in representative_dataset(dataset)(): interpreter.set_tensor(input_idx, data[0]) interpreter.invoke() results.append(interpreter.get_tensor(output_idx).flatten()) results = np.array(results) gt_labels = np.array(list(dataset.map(lambda data: data['label'] + 1))) accuracy = ( 
np.sum(np.argsort(results, axis=1)[:, -5:] == gt_labels.reshape(-1, 1)) / gt_labels.size) print(f'Top-5 accuracy (quantized): {accuracy * 100:.2f}%') model = tf.keras.Sequential([ tf.keras.layers.Input(shape=(224, 224, 3), batch_size=1), hub.KerasLayer(MODEL_URI) ]) model.compile( loss='sparse_categorical_crossentropy', metrics='sparse_top_k_categorical_accuracy') model.build([1, 224, 224, 3]) # Prepare dataset with 100 examples ds = tfds.load('imagenet_v2', split='test[:1%]') ds = ds.map(process_image) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.representative_dataset = representative_dataset(ds) converter.optimizations = [tf.lite.Optimize.DEFAULT] quantized_model = converter.convert() test_ds = ds.map(lambda data: (data['image'], data['label'] + 1)).batch(16) loss, acc = model.evaluate(test_ds) print(f'Top-5 accuracy (float): {acc * 100:.2f}%') eval_tflite(quantized_model, ds) """ Explanation: Inspecting Quantization Errors with Quantization Debugger <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/quantization_debugger"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/quantization_debugger.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a 
href="https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table>

Although full-integer quantization provides improved model size and latency, the quantized model won't always work as expected. It's usually expected for the model quality (e.g. accuracy, mAP, WER) to be slightly lower than that of the original float model. However, there are cases where the model quality can go below your expectation or generate completely wrong results.

When this problem happens, it's tricky and painful to spot the root cause of the quantization error, and it's even more difficult to fix it. To assist this model inspection process, the quantization debugger can be used to identify problematic layers, and selective quantization can leave those problematic layers in float so that the model accuracy can be recovered at the cost of reduced benefit from quantization.

Note: This API is experimental, and there might be breaking changes in the API in the course of improvements.

Quantization Debugger

The quantization debugger makes it possible to do quantization quality metric analysis on an existing model. It can automate the process of running a model with a debug dataset and collecting quantization quality metrics for each tensor.

Note: The quantization debugger and selective quantization currently only work for full-integer quantization with int8 activations.

Prerequisites

If you already have a pipeline to quantize a model, you have all the necessary pieces to run the quantization debugger!

Model to quantize
Representative dataset

In addition to the model and data, you will need to use a data processing framework (e.g. pandas, Google Sheets) to analyze the exported results.

Setup

This section prepares the libraries, the MobileNet v3 model, and a test dataset of 100 images.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset(ds)

# my_debug_dataset should have the same format as my_representative_dataset
debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter, debug_dataset=representative_dataset(ds))
"""
Explanation: We can see that the original model has a much higher top-5 accuracy for our small dataset, while the quantized model has a significant accuracy loss.

Step 1. Debugger preparation

The easiest way to use the quantization debugger is to provide the tf.lite.TFLiteConverter that you have been using to quantize the model.
End of explanation
"""
debugger.run()
"""
Explanation: Step 2. Running the debugger and getting the results

When you call QuantizationDebugger.run(), the debugger will log differences between float tensors and quantized tensors for the same op location, and process them with the given metrics.
End of explanation
"""
RESULTS_FILE = '/tmp/debugger_results.csv'
with open(RESULTS_FILE, 'w') as f:
  debugger.layer_statistics_dump(f)

!head /tmp/debugger_results.csv
"""
Explanation: The processed metrics can be accessed with QuantizationDebugger.layer_statistics, or can be dumped to a text file in CSV format with QuantizationDebugger.layer_statistics_dump().
End of explanation
"""
layer_stats = pd.read_csv(RESULTS_FILE)
layer_stats.head()
"""
Explanation: For each row in the dump, the op name and index come first, followed by quantization parameters and error metrics (including user-defined error metrics, if any). The resulting CSV file can be used to pick problematic layers with large quantization error metrics.

With pandas or other data processing libraries, we can inspect detailed per-layer error metrics.
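For instance, to surface the worst layers quickly, you can sort the dump by one of the error columns. A minimal, self-contained sketch — the frame below is a toy stand-in reproducing just two of the real CSV's columns with made-up values; in practice you would sort the `layer_stats` frame loaded above:

```python
import pandas as pd

# Toy stand-in for the debugger's layer_stats frame; only two of the real
# CSV's columns are reproduced here, with made-up values.
layer_stats = pd.DataFrame({
    'op_name': ['CONV_2D', 'DEPTHWISE_CONV_2D', 'MEAN'],
    'mean_squared_error': [0.01, 0.5, 0.2],
})

# Sort so the layers with the largest quantization error come first.
worst = layer_stats.sort_values('mean_squared_error', ascending=False)
print(worst['op_name'].tolist())  # ['DEPTHWISE_CONV_2D', 'MEAN', 'CONV_2D']
```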
End of explanation
"""
layer_stats['range'] = 255.0 * layer_stats['scale']
layer_stats['rmse/scale'] = layer_stats.apply(
    lambda row: np.sqrt(row['mean_squared_error']) / row['scale'], axis=1)
layer_stats[['op_name', 'range', 'rmse/scale']].head()

plt.figure(figsize=(15, 5))
ax1 = plt.subplot(121)
ax1.bar(np.arange(len(layer_stats)), layer_stats['range'])
ax1.set_ylabel('range')
ax2 = plt.subplot(122)
ax2.bar(np.arange(len(layer_stats)), layer_stats['rmse/scale'])
ax2.set_ylabel('rmse/scale')
plt.show()
"""
Explanation: Step 3. Data analysis

There are various ways to analyze the results. First, let's add some useful metrics derived from the debugger's outputs. (scale means the quantization scale factor for each tensor.)

Range (255.0 * scale)
RMSE / scale (sqrt(mean_squared_error) / scale)

The RMSE / scale is close to 1 / sqrt(12) (~ 0.289) when the quantized distribution is similar to the original float distribution, indicating a well-quantized model. The larger the value, the more likely it is that the layer is not being quantized well.
End of explanation
"""
layer_stats[layer_stats['rmse/scale'] > 0.7][[
    'op_name', 'range', 'rmse/scale', 'tensor_name'
]]
"""
Explanation: There are many layers with wide ranges, and some layers that have high RMSE/scale values. Let's get the layers with high error metrics.
End of explanation
"""
suspected_layers = list(
    layer_stats[layer_stats['rmse/scale'] > 0.7]['tensor_name'])
"""
Explanation: With these layers, you can try selective quantization to see if not quantizing those layers improves model quality.
End of explanation
"""
suspected_layers.extend(list(layer_stats[:5]['tensor_name']))
"""
Explanation: In addition to these, skipping quantization for the first few layers also helps improve the quantized model's quality.
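As a side note on the 1/sqrt(12) rule of thumb above: rounding error is roughly uniform within one quantization step, and a uniform distribution of width `scale` has RMSE `scale/sqrt(12)`. This can be checked numerically with a small, self-contained simulation on synthetic data (not the debugger's output):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100_000)  # a "well-behaved" float tensor
scale = 2.0 / 255.0                       # int8-style quantization scale

# Fake-quantize: round to the nearest int8 level and map back to float.
q = np.clip(np.round(x / scale), -128, 127) * scale
rmse_over_scale = np.sqrt(np.mean((x - q) ** 2)) / scale
print(rmse_over_scale)  # close to 1/sqrt(12) ~ 0.289
```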
End of explanation """ debug_options = tf.lite.experimental.QuantizationDebugOptions( denylisted_nodes=suspected_layers) debugger = tf.lite.experimental.QuantizationDebugger( converter=converter, debug_dataset=representative_dataset(ds), debug_options=debug_options) selective_quantized_model = debugger.get_nondebug_quantized_model() eval_tflite(selective_quantized_model, ds) """ Explanation: Selective Quantization Selective quantization skips quantization for some nodes, so that the calculation can happen in the original floating-point domain. When correct layers are skipped, we can expect some model quality recovery at the cost of increased latency and model size. However, if you're planning to run quantized models on integer-only accelerators (e.g. Hexagon DSP, EdgeTPU), selective quantization would cause fragmentation of the model and would result in slower inference latency mainly caused by data transfer cost between CPU and those accelerators. To prevent this, you can consider running quantization aware training to keep all the layers in integer while preserving the model accuracy. Quantization debugger's option accepts denylisted_nodes and denylisted_ops options for skipping quantization for specific layers, or all instances of specific ops. Using suspected_layers we prepared from the previous step, we can use quantization debugger to get a selectively quantized model. End of explanation """ debug_options = tf.lite.experimental.QuantizationDebugOptions( denylisted_ops=['MEAN']) debugger = tf.lite.experimental.QuantizationDebugger( converter=converter, debug_dataset=representative_dataset(ds), debug_options=debug_options) selective_quantized_model = debugger.get_nondebug_quantized_model() eval_tflite(selective_quantized_model, ds) """ Explanation: The accuracy is still lower compared to the original float model, but we have notable improvement from the whole quantized model by skipping quantization for ~10 layers out of 111 layers. 
You can also try not quantizing all ops of the same class. For example, to skip quantization for all mean ops, you can pass MEAN to denylisted_ops.
End of explanation
"""
debug_options = tf.lite.experimental.QuantizationDebugOptions(
    layer_debug_metrics={
        'mean_abs_error': (lambda diff: np.mean(np.abs(diff)))
    },
    layer_direct_compare_metrics={
        'correlation':
            lambda f, q, s, zp: (np.corrcoef(f.flatten(),
                                             (q.flatten() - zp) / s)[0, 1])
    },
    model_debug_metrics={
        'argmax_accuracy': (lambda f, q: np.mean(np.argmax(f) == np.argmax(q)))
    })

debugger = tf.lite.experimental.QuantizationDebugger(
    converter=converter,
    debug_dataset=representative_dataset(ds),
    debug_options=debug_options)
debugger.run()

CUSTOM_RESULTS_FILE = '/tmp/debugger_results.csv'
with open(CUSTOM_RESULTS_FILE, 'w') as f:
  debugger.layer_statistics_dump(f)

custom_layer_stats = pd.read_csv(CUSTOM_RESULTS_FILE)
custom_layer_stats[['op_name', 'mean_abs_error', 'correlation']].tail()
"""
Explanation: With these techniques, we are able to improve the quantized MobileNet V3 model accuracy. Next we'll explore advanced techniques to improve the model accuracy even more.

Advanced usages

With the following features, you can further customize your debugging pipeline.

Custom metrics

By default, the quantization debugger emits five metrics for each float-quant difference: tensor size, standard deviation, mean error, max absolute error, and mean squared error. You can add more custom metrics by passing them to the options. For each metric, the result should be a single float value, and the resulting metric will be an average of the metrics from all examples.

layer_debug_metrics: calculates a metric based on the diff between the float op's and the quantized op's outputs.
layer_direct_compare_metrics: rather than getting the diff only, this calculates a metric based on the raw float and quantized tensors and their quantization parameters (scale, zero point).
model_debug_metrics: only used when float_model_(path|content) is passed to the debugger.
In addition to the op-level metrics, the final layer output is compared to the reference output from the original float model.
End of explanation
"""
debugger.model_statistics
"""
Explanation: The result of model_debug_metrics can be seen separately in debugger.model_statistics.
End of explanation
"""
from tensorflow.lite.python import convert
"""
Explanation: Using the (internal) mlir_quantize API to access in-depth features

Note: Some features in the following section, TFLiteConverter._experimental_calibrate_only and converter.mlir_quantize, are experimental internal APIs, and subject to change in a non-backward compatible way.
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = representative_dataset(ds)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter._experimental_calibrate_only = True
calibrated_model = converter.convert()

# Note that enable_numeric_verify and enable_whole_model_verify are set.
quantized_model = convert.mlir_quantize(
    calibrated_model,
    enable_numeric_verify=True,
    enable_whole_model_verify=True)
debugger = tf.lite.experimental.QuantizationDebugger(
    quant_debug_model_content=quantized_model,
    debug_dataset=representative_dataset(ds))
"""
Explanation: Whole model verify mode

The default behavior for debug model generation is per-layer verify. In this mode, the input for each float and quantized op pair comes from the same source (the previous quantized op). Another mode is whole-model verify, where the float and quantized models are separated. This mode is useful for observing how the error propagates down the model. To enable it, pass enable_whole_model_verify=True to convert.mlir_quantize while generating the debug model manually.
End of explanation """ selective_quantized_model = convert.mlir_quantize( calibrated_model, denylisted_nodes=suspected_layers) eval_tflite(selective_quantized_model, ds) """ Explanation: Selective quantization from an already calibrated model You can directly call convert.mlir_quantize to get the selective quantized model from already calibrated model. This would be particularly useful when you want to calibrate the model once, and experiment with various denylist combinations. End of explanation """
kubeflow/pipelines
components/google-cloud/google_cloud_pipeline_components/experimental/dataflow/python_job/DataflowPythonJobOp_sample.ipynb
apache-2.0
!pip3 install -U google-cloud-pipeline-components -q
"""
Explanation: Vertex Pipelines: Dataflow Python Job OP

Overview

This notebook shows how to use the DataflowPythonJobOp to create a Python Dataflow Job component. DataflowPythonJobOp creates a pipeline component that prepares data by submitting an Apache Beam job (authored in Python) to Cloud Dataflow for execution. The Python Beam code is run with the Cloud Dataflow Runner; see the Google Cloud Dataflow Runner documentation to learn more.

For more details on the DataflowPythonJobOp interface, please see the API doc.

Install required packages
End of explanation
"""
PROJECT_ID = 'YOUR_PROJECT_ID'
LOCATION = "us-central1"
PIPELINE_ROOT = 'gs://YOUR_BUCKET_NAME' # No ending slash

# Dataflow sample parameters
PIPELINE_NAME = 'dataflow-pipeline-sample'
OUTPUT_FILE = '{}/wc/wordcount.out'.format(PIPELINE_ROOT)
"""
Explanation: Before you begin

Set your Project ID, Location, Pipeline Root, and a few parameters required for the Dataflow sample.
End of explanation
"""
from google_cloud_pipeline_components.experimental.dataflow import DataflowPythonJobOp
from google_cloud_pipeline_components.experimental.wait_gcp_resources import WaitGcpResourcesOp
"""
Explanation: Import libraries
End of explanation
"""
import kfp.dsl as dsl
import json

@dsl.pipeline(
    name=PIPELINE_NAME,
    description='Dataflow launch python pipeline'
)
def pipeline(
    python_file_path:str = 'gs://ml-pipeline-playground/samples/dataflow/wc/wc.py',
    project_id:str = PROJECT_ID,
    location:str = LOCATION,
    staging_dir:str = PIPELINE_ROOT,
    requirements_file_path:str = 'gs://ml-pipeline-playground/samples/dataflow/wc/requirements.txt',
):
    dataflow_python_op = DataflowPythonJobOp(
        project=project_id,
        location=location,
        python_module_path=python_file_path,
        temp_location = staging_dir,
        requirements_file_path = requirements_file_path,
        args = ['--output', OUTPUT_FILE],
    )

    dataflow_wait_op = WaitGcpResourcesOp(
        gcp_resources = dataflow_python_op.outputs["gcp_resources"])
"""
Explanation: Create
a pipeline using DataflowPythonJobOp and WaitGcpResourcesOp

In this section we create a pipeline using the DataflowPythonJobOp and the Apache Beam WordCount example. Then we use WaitGcpResourcesOp to poll the resource status and wait for the job to finish. To use the WaitGcpResourcesOp component, first create the DataflowPythonJobOp component, which outputs a JSON-formatted gcp_resources proto, then pass it to the wait component.
End of explanation
"""
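For reference, the `gcp_resources` output that the wait component polls is a small JSON document naming the launched Dataflow job. A self-contained sketch of consuming such a payload — the project, location, and job id below are made up for illustration, and the exact field layout should be checked against the component's docs:

```python
import json

# Hypothetical gcp_resources payload, shaped like the JSON proto that
# DataflowPythonJobOp emits; the project, location, and job id are made up.
gcp_resources = json.dumps({
    "resources": [{
        "resourceType": "DataflowJob",
        "resourceUri": ("https://dataflow.googleapis.com/v1b3/projects/"
                        "my-project/locations/us-central1/jobs/"
                        "2021-01-01_00_00_00-1234567890"),
    }]
})

parsed = json.loads(gcp_resources)
job_uri = parsed["resources"][0]["resourceUri"]
job_id = job_uri.rsplit("/", 1)[-1]  # last path segment is the Dataflow job id
print(job_id)
```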
chinskiy/KDD-99
exploratory_analysis.ipynb
mit
%matplotlib inline #%matplotlib notebook import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as mpatches import constants data_10_percent = 'kddcup.data_10_percent' data_full = 'kddcup.data' data = pd.read_csv(data_10_percent, names=constants.names) # Remove Traffic features computed using a two-second time window data.drop(constants.traffic_features, inplace=True, axis=1) data.head() data.describe() """ Explanation: KDD Cup 1999 http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html Feature list Table 1: Basic features of individual TCP connections. |feature name | description | type | |-------------|-------------|------| |duration |length (number of seconds) of the connection | continuous | |protocol_type |type of the protocol, e.g. tcp, udp, etc. | discrete | |service | network service on the destination, e.g., http, telnet, etc. | discrete | |src_bytes | number of data bytes from source to destination | continuous | |dst_bytes | number of data bytes from destination to source | continuous | |flag | normal or error status of the connection | discrete | |land | 1 if connection is from/to the same host/port; 0 otherwise | discrete | |wrong_fragment | number of ''wrong'' fragments | continuous | |urgent | number of urgent packets | continuous | Table 2: Content features within a connection suggested by domain knowledge. 
| feature name | description | type | |---------------|-------------|------| | hot | number of ''hot'' indicators | continuous | | num_failed_logins | number of failed login attempts | continuous | | logged_in | 1 if successfully logged in; 0 otherwise | discrete | | num_compromised | number of ''compromised'' conditions | continuous | | root_shell | 1 if root shell is obtained; 0 otherwise | discrete | | su_attempted | 1 if ''su root'' command attempted; 0 otherwise | discrete | | num_root | number of ''root'' accesses | continuous | | num_file_creations | number of file creation operations | continuous | | num_shells | number of shell prompts | continuous | | num_access_files | number of operations on access control files | continuous | | num_outbound_cmds | number of outbound commands in an ftp session | continuous | | is_hot_login | 1 if the login belongs to the ''hot'' list; 0 otherwise | discrete | | is_guest_login | 1 if the login is a ''guest''login; 0 otherwise | discrete | End of explanation """ from sklearn import preprocessing le_dicts = {} for categorical_name in constants.categorical_names: le = preprocessing.LabelEncoder() le.fit(data[categorical_name]) le_dicts[categorical_name] = dict(zip(le.transform(le.classes_), le.classes_)) print(categorical_name, ':', le_dicts[categorical_name]) data[categorical_name + '_num'] = le.fit_transform(data[categorical_name]) """ Explanation: Categorical features to numeric labels End of explanation """ data['label'].value_counts() """ Explanation: Discrete feature analysis End of explanation """ data['label_binary_num'] = data.label.apply(lambda label: 0 if label == 'normal.' 
else 1) data['label_binary_num'].value_counts() data['label_four'] = data.label.apply(lambda label: constants.label_to_four_attack_class[label]) data['label_four_num'] = data.label_four.apply(lambda label: constants.five_classes_to_num[label]) pd.value_counts(data['label_four'], sort=True).plot.bar() #all data pd.value_counts(data['protocol_type'], sort=True).plot.bar() #all data according to label_binary pd.pivot_table(data[['protocol_type_num', 'label_binary_num']].assign(count=1), index=['label_binary_num'], columns=['protocol_type_num'], aggfunc='count').plot(kind='bar', color=constants.my_colors) handles = [mpatches.Patch(label=le_dicts['protocol_type'][i], color=constants.my_colors[i]) for i in sorted(le_dicts['protocol_type'])] plt.legend(handles=handles, loc=2) plt.show() pd.pivot_table(data[['protocol_type', 'label_binary_num']].assign(count=1), index=['label_binary_num'], columns=['protocol_type'], aggfunc='count') data['service'].value_counts()[:10] #all data by service pd.value_counts(data['service'], sort=True).mask(lambda x: x < 200)\ .dropna()\ .plot(kind='bar', logy=True, figsize=(20, 5)) fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(15, 10)) pd.pivot_table(data[['service_num', 'service', 'label_binary_num']].assign(count=1), index=['service'], columns=['label_binary_num'], aggfunc='count')['count'][0].mask(lambda x: x < 200)\ .dropna().sort_values(ascending=False).plot(kind='bar', logy=True, ax=axes[0]) pd.pivot_table(data[['service_num', 'service', 'label_binary_num']].assign(count=1), index=['service'], columns=['label_binary_num'], aggfunc='count')['count'][1].mask(lambda x: x < 200)\ .dropna().sort_values(ascending=False).plot(kind='bar', logy=True, ax=axes[1]) # without NA in any column pd.pivot_table(data[['service', 'label_binary_num']].assign(count=1), index=['service'], columns=['label_binary_num'], aggfunc='count').sort_values(('count', 0), ascending=False).dropna() pd.value_counts(data['flag'], sort=True).plot(kind='bar', 
logy=True, figsize=(15, 5)) #flag according to label_binary pd.pivot_table(data[['flag_num', 'label_binary_num']].assign(count=1), index=['label_binary_num'], columns=['flag_num'], aggfunc='count').plot(kind='bar', color=constants.my_colors, logy=True, legend=False, figsize=(15, 5)) handles = [mpatches.Patch(label=le_dicts['flag'][i], color=constants.my_colors[i]) for i in sorted(le_dicts['flag'])] plt.legend(handles=handles) plt.show() pd.pivot_table(data[['flag', 'label_binary_num']].assign(count=1), index=['label_binary_num'], columns=['flag'], aggfunc='count') """ Explanation: data normal = 0, attack = 1 End of explanation """ # Corr with binary label data.drop(constants.categorical_names + constants.label_names, axis=1).corrwith(data.label_binary_num).sort_values() # Corr with 5 labels data.drop(constants.categorical_names + constants.label_names, axis=1).corrwith(data.label_four_num).sort_values() # corr heatmap # last 2 are label_binary_num and label_four_num that's why it so hot plt.figure(figsize=(7,7)) plt.matshow(data.drop(constants.categorical_names + \ ['label', 'label_four'] + \ constants.names_without_changes, axis=1).corr(), fignum=1) for i, elem in enumerate(data.drop(constants.categorical_names + \ ['label', 'label_four'] + \ constants.names_without_changes, axis=1).columns.tolist()): print(i, elem) """ Explanation: Correlation End of explanation """ from sklearn.ensemble import ExtraTreesClassifier forest = ExtraTreesClassifier(n_estimators=500, random_state=42) data_test = data.drop(constants.categorical_names +\ constants.label_names +\ constants.names_without_changes, axis=1) #for 2 labels forest.fit(data_test, data['label_binary_num']) importances = forest.feature_importances_ indices = np.argsort(importances)[::-1] plt.figure(figsize=(15, 4)) plt.bar(range(data_test.shape[1]), importances[indices], color="r", align="center") plt.xticks(range(data_test.shape[1]), indices) plt.xlim([-1, data_test.shape[1]]) plt.show() #for 5 labels 
forest.fit(data_test, data['label_four_num']) importances = forest.feature_importances_ indices = np.argsort(importances)[::-1] plt.figure(figsize=(15, 4)) plt.bar(range(data_test.shape[1]), importances[indices], color="r", align="center") plt.xticks(range(data_test.shape[1]), indices) plt.xlim([-1, data_test.shape[1]]) plt.show() for i, elem in enumerate(data_test.columns.tolist()): print(i, elem) """ Explanation: Feature importance End of explanation """
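The importances above can also drive automatic feature selection, for example with scikit-learn's SelectFromModel, which keeps features whose importance exceeds a threshold (the mean importance by default). A self-contained sketch on synthetic data — in the notebook you would pass `data_test` and the label column instead:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in: 5 features, only the first two carry signal.
rng = np.random.RandomState(42)
X = rng.rand(300, 5)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

forest = ExtraTreesClassifier(n_estimators=100, random_state=42).fit(X, y)
# Keep only the features whose importance is above the mean importance.
selector = SelectFromModel(forest, prefit=True)
X_selected = selector.transform(X)
print(X_selected.shape)  # fewer columns than X
```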
jswoboda/SimISR
ExampleNotebooks/SingleBeamExample.ipynb
mit
%matplotlib inline
import matplotlib.pyplot as plt
import os,inspect
from SimISR import Path
import scipy as sp
from SimISR.utilFunctions import readconfigfile,makeconfigfile
from SimISR.IonoContainer import IonoContainer,MakeTestIonoclass
from SimISR.runsim import main as runsim
from SimISR.analysisplots import analysisdump
import seaborn as sns
"""
Explanation: Single Beam

This notebook will run the ISR simulator with a set of data created from a function that makes test data. The results along with error bars are plotted below.
End of explanation
"""
# set the number of pulses
npulses = 2000

curloc = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
testpath = os.path.join(os.path.split(curloc)[0],'Testdata','Notebookexample1')
if not os.path.isdir(testpath):
    os.mkdir(testpath)
defaultpath = os.path.join(os.path.split(curloc)[0],'Test')
defcon = os.path.join(defaultpath,'statsbase.ini')

(sensdict,simparams) = readconfigfile(defcon)
tint = simparams['IPP']*npulses
ratio1 = tint/simparams['Tint']
simparams['Tint']=ratio1 * simparams['Tint']
simparams['Fitinter'] = ratio1 * simparams['Fitinter']
simparams['TimeLim'] = tint
simparams['startfile']='startfile.h5'
makeconfigfile(os.path.join(testpath,'stats.ini'),simparams['Beamlist'],sensdict['Name'],simparams)
"""
Explanation: Set up Config Files

Setting up the configuration file and the directory needed to run the simulation. The simulator assumes that for each simulation there is a dedicated directory to save out data from the different processing stages. The simulator also assumes that there is a configuration file, which is created in the following cell using a default one that comes with the code base. The only parameter the user should have to set is the number of pulses.
End of explanation
"""
finalpath = os.path.join(testpath,'Origparams')
if not os.path.isdir(finalpath):
    os.mkdir(finalpath)
z = (50.+sp.arange(120)*5.)
nz = len(z)
coords = sp.column_stack((sp.zeros((nz,2)),z))

Icont1=MakeTestIonoclass(testv=False,testtemp=True,N_0=1e11,z_0=250.0,H_0=50.0,coords=coords,times =sp.array([[0,1e6]]))
Icontstart = MakeTestIonoclass(testv=False,testtemp=False,N_0=1e11,z_0=250.0,H_0=50.0,coords=coords,times =sp.array([[0,1e6]]))

finalfile = os.path.join(finalpath,'0 stats.h5')
Icont1.saveh5(finalfile)
Icontstart.saveh5(os.path.join(testpath,'startfile.h5'))
"""
Explanation: Make Input Data

This section will create a set of input parameters that can be used to create ISR data. It uses the function MakeTestIonoclass, which will create a set of plasma parameters that varies with altitude depending on the function inputs. This data is put into an IonoContainer class, which is used as a container class to move data between the radarData class, the fitter class, and the plotting modules. It has a standard format, so any radar data or plasma parameters for the simulator can be saved in it. A start file is also made, which holds the starting parameter values used in the fitter. The starting points for the fitter use a nearest neighbor in space to what is found in the start file.
End of explanation
"""
functlist = ['spectrums','radardata','fitting']
config = os.path.join(testpath,'stats.ini')
runsim(functlist,testpath,config,True)
"""
Explanation: Run Simulation

The simulation is run through the submodule runsim and its main function, renamed here as runsim. This function will call all of the necessary classes and functions to run the simulator. It will save out the data based on an internal set of file names. This function must get a configuration file and a list of functionalities it is to perform. Below, the runsim function will create spectra from the plasma parameters, create radar data, and then fit it.
End of explanation """ sns.set_style("whitegrid") sns.set_context("notebook") fig1,axmat =plt.subplots(1,3,figsize = (16,7),sharey=True) axvec = axmat.flatten() fittedfile = os.path.join(testpath,'Fitted','fitteddata.h5') fitiono = IonoContainer.readh5(fittedfile) paramlist = ['Ne','Te','Ti'] indlist =[sp.argwhere(ip==fitiono.Param_Names)[0][0] for ip in paramlist] n_indlist =[sp.argwhere(('n'+ip)==fitiono.Param_Names)[0][0] for ip in paramlist] altin =Icont1.Cart_Coords[:,2] altfit = fitiono.Cart_Coords[:,2] in_ind=[[1,0],[1,1],[0,1]] pbounds = [[1e10,1.2e11],[200.,3000.],[200.,2500.],[-100.,100.]] for i,iax in enumerate(axvec): iinind = in_ind[i] ifitind = indlist[i] n_ifitind = n_indlist[i] #plot input indata = Icont1.Param_List[:,0,iinind[0],iinind[1]] iax.plot(indata,altin) #plot fitted data fitdata = fitiono.Param_List[:,0,ifitind] fit_error = fitiono.Param_List[:,0,n_ifitind] ploth=iax.plot(fitdata,altfit)[0] iax.set_xlim(pbounds[i]) iax.errorbar(fitdata,altfit,xerr=fit_error,fmt='-o',color=ploth.get_color()) iax.set_title(paramlist[i]) """ Explanation: Plotting The data is plotted along with error bars derived from the fitter. End of explanation """
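Beyond the visual check, fit quality can be quantified by interpolating the fitted profile back onto the input altitude grid and computing a relative error. A self-contained sketch with toy profiles — in the notebook, the real profiles would come from Icont1 (input) and fitiono (fitted), each defined on its own altitude grid:

```python
import numpy as np

# Toy stand-in profiles on two altitude grids (km): a Gaussian-ish Ne layer.
alt_in = np.linspace(50.0, 645.0, 120)
ne_in = 1e11 * np.exp(-0.5 * ((alt_in - 250.0) / 50.0) ** 2) + 1e9
alt_fit = np.linspace(100.0, 600.0, 40)
ne_fit = np.interp(alt_fit, alt_in, ne_in) * 1.05  # pretend a 5%-biased fit

# Interpolate the fit back onto the input grid; compare where grids overlap.
ne_fit_on_in = np.interp(alt_in, alt_fit, ne_fit)
mask = (alt_in >= alt_fit[0]) & (alt_in <= alt_fit[-1])
pct_err = 100.0 * np.abs(ne_fit_on_in[mask] - ne_in[mask]) / ne_in[mask]
print('median |error| %.1f%%' % np.median(pct_err))
```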
rjenc29/numerical
tensorflow/5_word2vec.ipynb
mit
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
"""
Explanation: Deep Learning Assignment 5

The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.
End of explanation
"""
url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
  if not os.path.exists(filename):
    filename, _ = urlretrieve(url + filename, filename)
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print('Found and verified %s' % filename)
  else:
    print(statinfo.st_size)
    raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

filename = maybe_download('text8.zip', 31344016)
"""
Explanation: Download the data from the source website if necessary.
End of explanation
"""
def read_data(filename):
  """Extract the first file enclosed in a zip file as a list of words"""
  with zipfile.ZipFile(filename) as f:
    data = tf.compat.as_str(f.read(f.namelist()[0])).split()
  return data

words = read_data(filename)
print('Data size %d' % len(words))
"""
Explanation: Read the data into a list of words.
End of explanation """ vocabulary_size = 50000 def build_dataset(words): count = [['UNK', -1]] count.extend(collections.Counter(words).most_common(vocabulary_size - 1)) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: if word in dictionary: index = dictionary[word] else: index = 0 # dictionary['UNK'] unk_count = unk_count + 1 data.append(index) count[0][1] = unk_count reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reverse_dictionary data, count, dictionary, reverse_dictionary = build_dataset(words) print('Most common words (+UNK)', count[:5]) print('Sample data', data[:10]) del words # Hint to reduce memory. """ Explanation: Build the dictionary and replace rare words with UNK token. End of explanation """ print(data[:10]) print(count[:10]) """ Explanation: Let's display the internal variables to better understand their structure: End of explanation """ data_index = 0 def generate_batch(batch_size, num_skips, skip_window): global data_index assert batch_size % num_skips == 0 assert num_skips <= 2 * skip_window batch = np.ndarray(shape=(batch_size), dtype=np.int32) labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32) span = 2 * skip_window + 1 # [ skip_window target skip_window ] buffer = collections.deque(maxlen=span) for _ in range(span): buffer.append(data[data_index]) data_index = (data_index + 1) % len(data) for i in range(batch_size // num_skips): target = skip_window # target label at the center of the buffer targets_to_avoid = [ skip_window ] for j in range(num_skips): while target in targets_to_avoid: target = random.randint(0, span - 1) targets_to_avoid.append(target) batch[i * num_skips + j] = buffer[skip_window] labels[i * num_skips + j, 0] = buffer[target] buffer.append(data[data_index]) data_index = (data_index + 1) % len(data) return batch, labels print('data:', [reverse_dictionary[di] for di in data[:32]]) for 
num_skips, skip_window in [(2, 1), (4, 2)]:
    data_index = 0
    batch, labels = generate_batch(batch_size=16, num_skips=num_skips, skip_window=skip_window)
    print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
    print('    batch:', [reverse_dictionary[bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(16)])

for num_skips, skip_window in [(2, 1), (4, 2)]:
    data_index = 1
    batch, labels = generate_batch(batch_size=16, num_skips=num_skips, skip_window=skip_window)
    print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
    print('    batch:', [reverse_dictionary[bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(16)])
"""
Explanation: Function to generate a training batch for the skip-gram model.
End of explanation
"""
print(batch)
print(labels)
"""
Explanation: Note: each label is a word drawn at random from the skip window surrounding the corresponding batch word. It is not obvious from the output above, but all the data are stored as indices, not as the words themselves.
End of explanation
"""
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default():

  # Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size]) train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1]) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # Variables. embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) softmax_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) softmax_biases = tf.Variable(tf.zeros([vocabulary_size])) # Model. # Look up embeddings for inputs. embed = tf.nn.embedding_lookup(embeddings, train_dataset) # Compute the softmax loss, using a sample of the negative labels each time. loss = tf.reduce_mean( tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed, labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size)) # Optimizer. # Note: The optimizer will optimize the softmax_weights AND the embeddings. # This is because the embeddings are defined as a variable quantity and the # optimizer's `minimize` method will by default modify all variable quantities # that contribute to the tensor it is passed. # See docs on `tf.train.Optimizer.minimize()` for more details. optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss) # Compute the similarity between minibatch examples and all embeddings. 
# We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True)) normalized_embeddings = embeddings / norm valid_embeddings = tf.nn.embedding_lookup( normalized_embeddings, valid_dataset) similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings)) num_steps = 100001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print('Initialized') average_loss = 0 for step in range(num_steps): batch_data, batch_labels = generate_batch( batch_size, num_skips, skip_window) feed_dict = {train_dataset : batch_data, train_labels : batch_labels} _, l = session.run([optimizer, loss], feed_dict=feed_dict) average_loss += l if step % 2000 == 0: if step > 0: average_loss = average_loss / 2000 # The average loss is an estimate of the loss over the last 2000 batches. print('Average loss at step %d: %f' % (step, average_loss)) average_loss = 0 # note that this is expensive (~20% slowdown if computed every 500 steps) if step % 10000 == 0: sim = similarity.eval() for i in range(valid_size): valid_word = reverse_dictionary[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = reverse_dictionary[nearest[k]] log = '%s %s,' % (log, close_word) print(log) final_embeddings = normalized_embeddings.eval() """ Explanation: Train a skip-gram model. 
End of explanation """ print(final_embeddings[0]) """ Explanation: This is what an embedding looks like: End of explanation """ print(np.sum(np.square(final_embeddings[0]))) num_points = 400 tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000) two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :]) def plot(embeddings, labels): assert embeddings.shape[0] >= len(labels), 'More labels than embeddings' pylab.figure(figsize=(15,15)) # in inches for i, label in enumerate(labels): x, y = embeddings[i,:] pylab.scatter(x, y) pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom') pylab.show() words = [reverse_dictionary[i] for i in range(1, num_points+1)] plot(two_d_embeddings, words) """ Explanation: All the values are abstract, there is practical meaning of the them. Moreover, the final embeddings are normalized as you can see here: End of explanation """
empet/Math
Plotly-interactive-visualization-of-complex-valued-functions.ipynb
bsd-3-clause
import numpy as np
import numpy.ma as ma
from numpy import pi
import matplotlib.pyplot as plt
import matplotlib.colors

def hsv_colorscale(S=1, V=1):
    if S < 0 or S > 1 or V < 0 or V > 1:
        raise ValueError('Parameters S (saturation), V (value, brightness) must be in [0,1]')
    argument = np.array([-pi, -5*pi/6, -2*pi/3, -3*pi/6, -pi/3, -pi/6, 0,
                         pi/6, pi/3, 3*pi/6, 2*pi/3, 5*pi/6, pi])
    H = argument/(2*np.pi)+1
    H = np.mod(H,1)
    Sat = S*np.ones_like(H)
    Val = V*np.ones_like(H)
    HSV = np.dstack((H, Sat, Val))
    RGB = matplotlib.colors.hsv_to_rgb(HSV)
    colormap = (255* np.squeeze(RGB)).astype(int)
    #Define and return the Plotly hsv colorscale adapted to polar coordinates for complex valued functions
    step_size = 1 / (colormap.shape[0]-1)
    return [[round(k*step_size, 4), f'rgb{tuple(c)}'] for k, c in enumerate(colormap)]

pl_hsv = hsv_colorscale()
pl_hsv
"""
Explanation: Plotly interactive visualization of complex valued functions
In this Jupyter Notebook we generate, via Plotly, an interactive plot of a complex valued function, $f:D\subset\mathbb{C}\to\mathbb{C}$. A complex function is visualized using a version of the domain coloring method. Compared with other types of domain coloring, the Plotly interactive plot is much more informative. It displays, for each point $z$ in a rectangular region of the complex plane, not only the hsv color associated to $\arg(f(z))$ (the argument of $f(z)$), but also the values $\arg(f(z))$ and $\log(|f(z)|)$ (the log modulus).
First we define a Plotly hsv (hue, saturation, value) colorscale adapted to the range of the numpy functions np.angle and np.arctan2. We plot a Heatmap over a rectangular region in the complex plane, colored via this colorscale according to the values of $\arg(f(z))$. Over the Heatmap are plotted a few contour lines of the log modulus $\log(|f(z)|)$. 
End of explanation """ def evaluate_function(func, re=(-1,1), im=(-1,1), N=100): # func is the complex function to be ploted #re, im are the interval ends on the real and imaginary axes, defining the rectangular region in the complex plane #N gives the number of points in an interval of length 1 l = re[1]-re[0] h = im[1]-im[0] resL = int(N*l) #horizontal resolution resH = int(N*h) #vertical resolution X = np.linspace(re[0], re[1], resL) Y = np.linspace(im[0], im[1], resH) x, y = np.meshgrid(X,Y) z = x+1j*y w = func(z) argument = np.angle(w) modulus = np.absolute(w) log_modulus = ma.log(modulus) return X,Y, argument, log_modulus def get_levels(fmodul, nr=10): #define the levels for contour plot of the modulus |f(z)| #fmodul is the log modulus of f(z) computed on meshgrid #nr= the number of contour lines mv = np.nanmin(fmodul) Mv = np.nanmax(fmodul) size = (Mv-mv)/float(nr) return [mv+k*size for k in range(nr+1)] import plotly.graph_objects as go """ Explanation: The following two functions compute data needed for visualization: End of explanation """ def plotly_contour_lines(contour_data): #contour_data is a matplotlib.contour.QuadContourSet object returned by plt.contour contours=contour_data.allsegs # if len(contours)==0: raise ValueError('Something wrong hapend in computing contour lines') #contours is a list of lists; if contour is a list in contours, its elements are arrays. 
#Each array defines a segment of contour line at some level xl = []# list of x coordinates of points on a contour line yl = []# y #lists of coordinates for contour lines consisting in one point: xp = []# yp = [] for k,contour in enumerate(contours): L = len(contour) if L!= 0: # sometimes the list of points at the level np.max(modulus) is empty for ar in contour: if ar.shape[0] == 1: xp += [ar[0,0], None] yp += [ar[0,1], None] else: xl += ar[:,0].tolist() yl += ar[:,1].tolist() xl.append(None) yl.append(None) lines = go.Scatter(x=xl, y=yl, mode='lines', name='modulus', line=dict(width=1, color='#a5bab7', shape='spline', smoothing=1), hoverinfo='skip' ) if len(xp) == 0: return lines else: points = go.Scatter(x=xp, y=yp, mode='markers', marker=dict(size=4, color='#a5bab7'), hoverinfo='none') return lines, points """ Explanation: We extract the contour lines from an attribute of the matplotlib.contour.QuadContourSet object returned by the matplotlib.pyplot.contour. The function defined in the next cell retrieves the points on the contour lines segments, and defines the corresponding Plotly traces: End of explanation """ def set_plot_layout(title, width=500, height=500): return go.Layout(title_text=title, title_x=0.5, width=width, height=height, showlegend=False, xaxis_title='Re(z)', yaxis_title='Im(z)') """ Explanation: Set the layout of the plot: End of explanation """ def text_to_display(x, y, argum, modulus): m, n = argum.shape return [['z='+'{:.2f}'.format(x[j])+'+' + '{:.2f}'.format(y[i])+' j'+'<br>arg(f(z))='+'{:.2f}'.format(argum[i][j])+\ '<br>log(modulus(f(z)))='+'{:.2f}'.format(modulus[i][j]) for j in range(n)] for i in range(m)] """ Explanation: Define a function that associates to each point $z$ in a meshgrid, the strings containing the values to be displayed when hovering the mouse over that point: End of explanation """ def plotly_plot(f, re=(-1,1), im=(-1,1), N=50, nr=10, title='', width=500, height=500, **kwargs): x, y, argument, 
log_modulus = evaluate_function(f, re=re, im=im, N=N)
    levels = get_levels(log_modulus, nr=nr)
    plt.figure(figsize=(0.05,0.05))
    plt.axis('off')
    cp = plt.contour(x, y, log_modulus, levels=levels)
    cl = plotly_contour_lines(cp)
    text = text_to_display(x, y, argument, log_modulus.data)
    tickvals = [-np.pi, -2*np.pi/3, -np.pi/3, 0, np.pi/3, 2*np.pi/3, np.pi]
    #define the above values as strings with pi-unicode
    ticktext = ['-\u03c0', '-2\u03c0/3', '-\u03c0/3', '0', '\u03c0/3', '2\u03c0/3', '\u03c0']
    dom = go.Heatmap(x=x, y=y, z=argument,
                     colorscale=pl_hsv,
                     text=text,
                     hoverinfo='text',
                     colorbar=dict(thickness=20,
                                   tickvals=tickvals,
                                   ticktext=ticktext,
                                   title='arg(f(z))'))
    # plotly_contour_lines returns either a single Scatter trace or a (lines, points) tuple
    if isinstance(cl, tuple):
        data = [dom, cl[0], cl[1]]
    else:
        data = [dom, cl]
    layout = set_plot_layout(title=title, width=width, height=height)
    fig = go.Figure(data=data, layout=layout)
    return fig
"""
Explanation: Finally, the function plotly_plot calls all above defined functions in order to generate the phase plot of a given complex function:
End of explanation
"""
fig = plotly_plot(lambda z: np.sin(z)/(1-np.cos(z**3)), re=(-2,2), im=(-2,2), nr=22,
                  title='$f(z)=\\sin(z)/(1-\\cos(z^3))$');
fig.update_layout(xaxis_range=[-2,2], yaxis_range=[-2,2])
fig.show()

from IPython.display import IFrame
url = "https://chart-studio.plotly.com/~empet/13957"
IFrame(url, width=700, height=700)

from IPython.core.display import HTML
def css_styling():
    styles = open("./custom.css", "r").read()
    return HTML(styles)
css_styling()
"""
Explanation: As an example we take the function $f(z)=\sin(z)/(1-\cos(z^3))$:
End of explanation
"""
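The grid-evaluation step can be sanity-checked without any plotting: for $f(z)=e^z$ the argument of $f(z)$ equals $\mathrm{Im}(z)$ (no wrapping occurs while $\mathrm{Im}(z)\in(-\pi,\pi]$) and the log modulus equals $\mathrm{Re}(z)$. The helper below is a simplified, self-contained stand-in for evaluate_function, with illustrative names:

```python
import numpy as np

def eval_on_grid(func, re=(-1, 1), im=(-1, 1), n=5):
    # Sample the rectangle re x im and evaluate func on the complex meshgrid.
    x = np.linspace(re[0], re[1], n)
    y = np.linspace(im[0], im[1], n)
    z = x[None, :] + 1j * y[:, None]   # z[i, j] = x[j] + 1j*y[i]
    w = func(z)
    return x, y, np.angle(w), np.log(np.abs(w))

x, y, arg, logmod = eval_on_grid(np.exp)
# For exp: arg(e^z) = Im(z) and log|e^z| = Re(z).
print(np.allclose(arg, np.broadcast_to(y[:, None], arg.shape)))
print(np.allclose(logmod, np.broadcast_to(x[None, :], logmod.shape)))
# The same hue mapping as in hsv_colorscale sends arg from (-pi, pi] into [0, 1).
hue = np.mod(arg / (2 * np.pi) + 1, 1)
print(hue.min() >= 0 and hue.max() < 1)
```

This checks that np.angle and np.log(np.abs(...)) recover exactly the two quantities the interactive plot displays on hover.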
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/nicam16-7s/seaice.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-7s', 'seaice') """ Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: MIROC Source ID: NICAM16-7S Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:40 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. 
Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. 
Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3. 
Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Metrics Used
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.5. Variables
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --&gt; Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Additional Parameters
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. 
minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. 
Description
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')

# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Properties
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Budget
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
For each conserved property, specify, as a comma separated list, the output variables which close the related budgets. For example: Conserved property, variable1, variable2, variable3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. 
Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.4. 
Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 10.2. 
Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')

# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Other
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but assume a distribution and compute fluxes accordingly.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Grid --&gt; Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')

# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.3. Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. 
Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.5. 
Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14.7. Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. 
Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. 
Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. 
Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. 
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 22.3. Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation """
Source: johnhw/summerschool2016, classifying_audio_streams/audio_1.ipynb (MIT license)
import numpy as np import sklearn.datasets, sklearn.linear_model, sklearn.neighbors import matplotlib.pyplot as plt #import seaborn as sns import sys, os, time import scipy.io.wavfile, scipy.signal %matplotlib inline import matplotlib as mpl from IPython.core.display import HTML mpl.rcParams['figure.figsize'] = (18.0, 10.0) import pandas as pd from jslog import js_key_update # This code logs keystrokes IN THIS JUPYTER NOTEBOOK WINDOW ONLY (not any other activity) # (don't type your passwords in this notebook!) # Log file is ../jupyter_keylog.csv %%javascript function push_key(e,t,n){var o=keys.push([e,t,n]);o>500&&(kernel.execute("js_key_update(["+keys+"])"),keys=[])}var keys=[],tstart=window.performance.now(),last_down_t=0,key_states={},kernel=IPython.notebook.kernel;document.onkeydown=function(e){var t=window.performance.now()-tstart;key_states[e.which]=[t,last_down_t],last_down_t=t},document.onkeyup=function(e){var t=window.performance.now()-tstart,n=key_states[e.which];if(void 0!=n){var o=n[0],s=n[1];if(0!=s){var a=t-o,r=o-s;push_key(e.which,a,r),delete n[e.which]}}}; """ Explanation: Supervised Classification of Audio Streams: Part I End of explanation """ ### Perceptron demo ### Load the classic Iris dataset. This has petal and sepal measurements ### of three species of irises. The iris species can be classified ### from these measurements. Here, we use only the first two ### measurment dimensions (just to make it plot nicely) iris = sklearn.datasets.load_iris() # we just choose a 2D slice of this data iris_2d = iris.data[:,0:2] ## Parallel co-ordinates plot for feature,target in zip(iris.data, iris.target): color = plt.get_cmap('jet')(target*100) plt.plot(feature, c=color); plt.xticks([0,1,2,3]) plt.xlabel("Feature") plt.ylabel("Activation") plt.title("Parallel coordinates plot of the Iris data") # plot the 2D data ## Problems with "viridis"? 
Use "jet" instead plt.scatter(iris_2d[:,0], iris_2d[:,1], c=iris.target, cmap='viridis', s=80) plt.title("Original iris data") def perceptron(data, targets, title): plt.figure() plt.scatter(data[:,0], data[:,1], c=targets, cmap='viridis', s=80) plt.title("Binary classes (%s)" % title) # find a separating plane per = sklearn.linear_model.Perceptron( n_iter=5, eta0=1) per.fit(data, targets, [-1,1]) # plot the original data plt.figure() # predict output value across the space res = 150 # we generate a set of points covering the x,y space xm, ym = np.meshgrid(np.linspace(0,8,res), np.linspace(0,5,res)) # then predict the perceptron output value at each position zm = per.predict(np.c_[xm.ravel(), ym.ravel()]) zm = zm.reshape(xm.shape) # and plot it plt.contourf(xm,ym,zm, cmap='viridis', alpha=0.5) plt.scatter(data[:,0], data[:,1], c=targets, cmap='viridis', s=80) plt.title("Decision boundary (%s)" % title) # make binary targets (either class 0 or other class) binary_1 = np.where(iris.target==0, -1, 1) # this one is separable binary_2 = np.where(iris.target==1, -1, 1) # this one is *not* linearly separable perceptron(iris_2d, binary_1, "separable") perceptron(iris_2d, binary_2, "not separable") """ Explanation: Topic goal We will explore how to classify audio streams to make a touch sensor from a microphone, using supervised machine learning approaches. This introduces classification as a way of building controls from sensors, how to evaluate performance meaningfully, and the issues that are encountered in turning time series like audio into usable inputs. 
Outline In the next two hours, we will: [Part I] * <a href="#touch"> Discuss acoustic touch sensing </a> * <a href="#ml"> Quickly review machine learning, and supervised classification </a> * <a href="#evaluation"> Examine how to evaluate a classifier </a> * <a href="#features"> Discuss feature transforms for audio </a> <a href="#practical"> Practical: build a simple binary classifier to discriminate two scratching sounds. </a> [Part II] * <a href="audio_2.ipynb#overfitting"> Discuss overfitting and how to avoid it </a> * <a href="audio_2.ipynb#block"> See how to split test and training data for time series. </a> * <a href="audio_2.ipynb#ensembling"> Discuss ensembling techniques </a> * <a href="audio_2.ipynb#adv_features"> Look at more advanced audio features </a> <a href="audio_2.ipynb#challenge"> Challenge: build the best acoustic touch classifier for Stane-like acoustic touch sensing waveforms. </a> <a id="touch"> </a> Motivation This topic is essentially the operations behind the Stane project Paper Video This used 3D printed textures on mobile devices. Scratching the fingernail across the surface generates distinctive sounds, which are propagated through the case and picked up by a piezo microphone. <img src="imgs/stane_real.png" width="400px"> <img src="imgs/shell.png" width="400px"> <img src="imgs/disc.png" width="400px"> Different regions have different textures, and thus the area being rubbed can be determined by analysing the audio signal. <img src="imgs/piezo.png" width="400px"> What is machine learning? <a id="ml"> </a> Machine learning can be summarised as making predictions from data. Machine learning focuses on trying to estimate the value of unknown variables given some data which might predict them.
Machine learning algorithms generally have a training phase, where input data is used to update the parameters of a function that tries to model the data, and a testing phase, where data unseen by the learning algorithm is used to evaluate how well the model is able to do prediction. Supervised learning Supervised learning involves learning a relationship between attribute variables and target variables; in other words learning a function which maps input measurements to target values. This can be in the context of making discrete decisions (is this image a car or not?) or learning continuous relationships (how loud will this aircraft wing be if I make the shape like this?). Some mathematical notation We consider datasets which consist of a series of measurements. We learn from a training set of data. Each measurement is called a sample or datapoint, and each measurement type is called a feature. If we have $n$ samples and $d$ features, we form a matrix $X$ of size $n \times d$, which has $n$ rows of $d$ measurements. $d$ is the dimension of the measurements. $n$ is the sample size. Each row of $X$ is called a feature vector. For example, we might have 200 images of digits, each of which is a sequence of $8\times8=64$ measurements of brightness, giving us a $200 \times 64$ dataset. The rows of image values are the features. In a supervised learning situation, we will also have a vector of targets $Y$. These will be the output values we assign to each of the training feature vectors; one target per row of the training features. We want to learn a function $$y^\prime = f(x^\prime)$$ which works for a value $x^\prime$ that we have not seen before; i.e. we want to be able to predict a value $y^\prime$ based on a model ($f(x)$) that we learned from data. If we are doing classification, these targets will be categorical labels e.g. [0,1,2,3]. Simple learning -- the perceptron Let's look at a quick example of a simple supervised learning task -- binary classification. 
That is, we take some training features $X$ and some binary indicator labels $Y$ and learn a function $f(x)$ which has two possible outputs: [0,1] (or equivalently [-1,1]). A very simple such classifier is the perceptron, which attempts to find a linear weighting of the inputs such that the sign of the output matches the class label. In 2D this would be drawing a line between the two classes; in 3D, a plane; and so on. The function that will be learned is linear (i.e. is just a weighting of the inputs). Since there is only one output variable, the weights form a 1D vector, denoted $w$. There are $d$ weights, one for each feature. The function to be learned is of the form: $$f(x) = \text{sgn}(w^Tx),$$ where $\text{sgn}$ is the sign function and $w^T$ is the weight vector transposed so as to form the weighted sum. We can get some insight into what the perceptron is doing by plotting the decision boundary in the feature space. This shows us which parts of the space the classifier indicates are +1 and which are -1. We do this just by evaluating $f(x)$ over a grid of points. Limitations The perceptron is very simple, but can only learn functions that divide the feature space with a hyperplane. If the datapoints to be classified have classes that cannot be separated this way in the feature space, the perceptron cannot learn the function.
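As a concrete sketch of the update rule behind this, the classic perceptron algorithm nudges $w$ towards each misclassified sample. This is purely illustrative: the toy data and the function names below are invented for the example, and only NumPy is assumed (the sklearn `Perceptron` used in the code above does the real work for us).

```python
import numpy as np

def train_perceptron(X, y, n_iter=20, eta=1.0):
    """Classic perceptron learning rule: for each misclassified sample,
    nudge the weights towards it. Labels y must be in {-1, +1}."""
    # augment with a constant 1 so the bias is just another weight
    Xa = np.c_[X, np.ones(len(X))]
    w = np.zeros(Xa.shape[1])
    for _ in range(n_iter):
        for xi, yi in zip(Xa, y):
            if np.sign(xi @ w) != yi:    # misclassified?
                w += eta * yi * xi       # move the boundary towards xi
    return w

def predict(w, X):
    Xa = np.c_[X, np.ones(len(X))]
    return np.sign(Xa @ w)

# tiny linearly separable example
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]])
y = np.array([-1, -1, 1, 1])
w = train_perceptron(X, y)
print(predict(w, X))  # -> [-1. -1.  1.  1.], matching y
```

Note that the convergence guarantee for this rule only holds for linearly separable data, which is exactly the limitation discussed above.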
End of explanation """ # This time we will use some real audio data # We load the "Sonar" data set [https://archive.ics.uci.edu/ml/datasets/Connectionist+Bench+(Sonar,+Mines+vs.+Rocks)] # This is a set of 60 sonar measurements, from bouncing sonar waves # off either rocks ("R") or mines ("M") # Each of the 60 measurements represents a frequency band # (created by emitting a frequency-swept chirp sound and recording the response) # # The classification problem is to tell the mines apart from the rocks import sklearn.svm, sklearn.preprocessing, sklearn.cross_validation, sklearn.metrics # submodules not imported above sonar_data = pd.read_csv("data/sonar.all-data") # separate features sonar_features = np.array(sonar_data)[:,0:60].astype(np.float64) # we use label_binarize to convert "M" and "R" labels into {0,1} # the [:,0] just flattens the resulting matrix into a vector sonar_labels = sklearn.preprocessing.label_binarize(np.array(sonar_data)[:,60], classes=['M', 'R'])[:,0] plt.plot(sonar_features[sonar_labels==0,:].T, 'r', alpha=0.1) plt.plot(sonar_features[sonar_labels==1,:].T, 'g', alpha=0.1) plt.plot(np.mean(sonar_features[sonar_labels==0,:].T,axis=1), 'r', label="Mine") plt.plot(np.mean(sonar_features[sonar_labels==1,:].T,axis=1), 'g', label="Rock") plt.legend() plt.xlabel("Feature (Frequency band)") plt.ylabel("Activation") plt.title("Parallel coordinates plot of Sonar data") # split into a train and test section, holding out 30% (0.3) of the data for testing sonar_train_features, sonar_test_features, sonar_train_labels, sonar_test_labels = sklearn.cross_validation.train_test_split( sonar_features, sonar_labels, test_size=0.3, random_state=0) # fit an SVM svm = sklearn.svm.SVC(C=200, gamma=.02) svm.fit(sonar_train_features, sonar_train_labels) sonar_predicted_labels = svm.predict(sonar_test_features) HTML('<h2> <font color="green"> Classification accuracy: %.2f%% </font></h2>' % (100*svm.score(sonar_test_features, sonar_test_labels))) """ Explanation: Evaluating without deceiving yourself <a id="evaluation"> </a> We need to be able to quantify how well our learning
algorithms perform on predicting unseen data given the model that has been learned. This involves testing on data that was not presented to the learning algorithm during the training phase. This means you must ALWAYS split your data into completely separate training and test sets. Train on the training data to get a model which you test on the test set. NEVER test on data you trained on -- we'll discuss this more after the practical. Classifiers An obvious metric is accuracy, the ratio of correctly classified examples to the total number of examples. End of explanation """ # we can plot the receiver-operator curve: the graph of false positive rate against true positive rate scores = svm.decision_function(sonar_test_features) fpr, tpr, thresholds = sklearn.metrics.roc_curve(sonar_test_labels, scores) plt.plot(fpr,tpr) plt.plot([0,1], [0,1]) plt.plot([0,1], [1,0]) plt.fill_between(fpr, tpr, facecolor='none', hatch='/', alpha=0.2) plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.legend(["ROC", "Chance", "EER line"]) """ Explanation: Why is accuracy not enough? This is an easy-to-interpret but sometimes insufficient metric for performance. One common situation where it fails is where the dataset is not balanced (e.g. there are many more examples for one label than another). If 95% of the dataset are of class 0, and 5% of class 1, predicting 0 regardless of the input has a 95% accuracy rate. Receiver-operator curves A very useful tool for capturing binary classification performance is the receiver-operator curve (ROC curve). This works with any classifier that produces scores as outputs (e.g. continuous values in the range [0,1]). Classifiers that only produce discrete class labels cannot be used to generate a ROC curve. To plot the curve, we iterate through a set of threshold values $\tau_1, \tau_2, \dots$, and plot the true positive rate against the false positive rate we would get if we thresholded the classifier's scores at each $\tau_i$.
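The threshold sweep just described can be written out by hand in a few lines. This is an illustrative sketch on synthetic scores (not the sonar classifier above), and the function name is our own; `sklearn.metrics.roc_curve` does the same job more efficiently by only visiting thresholds where the curve actually changes.

```python
import numpy as np

def roc_points(scores, labels, n_thresholds=101):
    """Sweep a threshold over the score range, recording (FPR, TPR) pairs."""
    taus = np.linspace(scores.min(), scores.max(), n_thresholds)
    pts = []
    for tau in taus:
        pred = scores >= tau
        tpr = np.sum(pred & (labels == 1)) / np.sum(labels == 1)  # true positive rate
        fpr = np.sum(pred & (labels == 0)) / np.sum(labels == 0)  # false positive rate
        pts.append((fpr, tpr))
    return np.array(pts)

# synthetic scores: class 1 tends to score higher than class 0
rng = np.random.RandomState(0)
labels = np.repeat([0, 1], 200)
scores = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 200)])
pts = roc_points(scores, labels)

# AUC by trapezoidal integration (sort by FPR so the x-axis is increasing)
order = np.argsort(pts[:, 0])
auc = np.trapz(pts[order, 1], pts[order, 0])
print("AUC: %.2f" % auc)  # well above 0.5 for these partially separated classes
```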
A classifier with chance performance will have an ROC curve with $y=x$; a very good classifier will have the curve bent up towards the upper-left corner.
AUC
The area under the curve (AUC) of the ROC curve (i.e. the integral of the ROC curve) is a useful summary metric for performance. An AUC of 1.0 indicates perfect classification. An AUC of 0.5 indicates chance performance.
End of explanation """
import ipy_table
# we can print the confusion matrix
confusion_matrix = sklearn.metrics.confusion_matrix(sonar_test_labels, sonar_predicted_labels).astype(np.float64)
# normalise so that sum(row)=1
confusion_matrix = (confusion_matrix.T / np.sum(confusion_matrix, axis=1)).T
# ipy_table.make_table just pretty prints the resulting matrix
# note: entry [i,j] of the sklearn confusion matrix is (true class i, predicted class j)
ipy_table.make_table([["", "Pred. Class 1", "Pred. Class 2"],
                      ["True Class 1", confusion_matrix[0,0], confusion_matrix[0,1]],
                      ["True Class 2", confusion_matrix[1,0], confusion_matrix[1,1]]])
ipy_table.set_cell_style(1, 1, color='lightgray')
ipy_table.set_cell_style(2, 2, color='lightgray')
"""
Explanation: Confusion matrices for multiclass problems
Confusion matrices are effective tools for communicating where classifiers are going wrong for multi-class problems: i.e. which labels are being confused with which? A confusion matrix shows the distribution of predicted labels for each true label as a matrix of size $k \times k$ for $k$ labels.
Perfect classification results in a confusion matrix with a single diagonal of 1s (every test example's predicted label matches its true label). This matrix can reveal classes which are poorly separated in multi-class problems.
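As a sketch of the same row-normalisation for a three-class problem (toy labels, not the sonar data; the counting step is what `sklearn.metrics.confusion_matrix` does for you):

```python
import numpy as np

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
k = 3
cm = np.zeros((k, k))
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1          # entry [i, j] counts (true class i, predicted class j)
# normalise so each row (one true class) sums to 1
cm_norm = cm / cm.sum(axis=1, keepdims=True)
```

Here class 1 is always predicted correctly, while classes 0 and 2 each leak probability mass into other columns.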
End of explanation """
# 1024 samples of a saw tooth
y = np.tile(np.linspace(0,1,32), (32,))
plt.plot(y)
plt.title("Sawtooth")
plt.xlabel("Time")
plt.ylabel("Amplitude")
# compute Fast Fourier Transform
fft = np.fft.fft(y)
# note that the result is **complex**
# we have a phase (angle) and a magnitude (strength)
# for each sinusoidal component
plt.figure()
plt.plot(np.abs(fft))
plt.title("FFT(Sawtooth)")
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
# note that the spectrum is symmetric: only the first half
# of the spectrum carries information
# plot the first half only
plt.figure()
plt.plot(np.abs(fft)[0:len(fft)//2])
plt.title("FFT(Sawtooth) first half only")
plt.xlabel("Frequency")
plt.ylabel("Amplitude")
# phase spectrum; this is very clean in this
# synthetic example, but is normally much more messy
# to interpret
plt.figure()
plt.plot(np.angle(fft)[0:len(fft)//2])
plt.title("FFT(Sawtooth) first half only, Phase spectrum")
plt.xlabel("Frequency")
plt.ylabel("Phase")
"""
Explanation: Processing time series for classification <a id="features"> </a>
Feature vectors
In almost all machine learning contexts, we predict outputs given a fixed length set of features; the input space has a fixed dimension $d$. Each of those features is usually (but not always) continuous-valued.
Sometimes the data fall naturally into this space (e.g. classifying the iris type by 3 physical measurements). In cases such as in audio classification, though, we want to make predictions based on time series; a set of measurements of the same variable made repeatedly over time.
Windowing
One general solution to this time series problem is to use a delay embedding -- a fixed length sequence of previous measurements. For example the measurements $[x_{t=t}, x_{t=t-1}, x_{t=t-2}, \dots, x_{t=t-d}]$ might make up the feature vector.
If we just use $d$ consecutive measurements, this process is known as windowing, because we chop up the data into fixed length windows by "sliding" a time window along the data. Consecutive (but possibly discontiguous or overlapping) windows are almost universally used in audio contexts. For example, we might split a speech stream, recorded at 8kHz, into 160 sample (20ms) windows, and then try to classify each window as to what phoneme that window contains. The idea is that 20ms is enough to distinguish an "aah" sound from a "ssh" sound.
<img src="imgs/contiguous_windows.png">
<img src="imgs/discontiguous_windows.png">
<img src="imgs/overlapping_windows.png">
These windows can overlap, which increases the size of the training set, but excessive overlapping can capture lots of redundant feature examples. This can increase overfitting and training time without improving the classifier performance. Balancing the size of the windows (and thus the feature vector size $d$) and the amount of overlap is a matter of experimentation and domain knowledge.
Feature transforms
Often the "natural" raw form of the data can be difficult to classify. This might be because it has very high dimension, it is very noisy, or the classification boundary just isn't very compatible with your classifier (e.g. the class borders in the original space are highly-nonlinear and you are using a linear classifier).
Feature engineering is the art of finding transforms of the raw data that increase classifier performance. These can often be simple, such as dropping some of the measurements entirely (under the assumption that they are irrelevant), or averaging measurements together (under the assumption that this reduces noise).
Audio transforms
Audio data tends to be very high dimensional -- you might get 4000 to 44100 measurements for a single second of data.
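The windowing scheme described earlier in this section can be sketched in a few lines (the course ships its own `window_data` helper; this hypothetical version just illustrates the idea of fixed-length, possibly overlapping windows):

```python
import numpy as np

def sliding_windows(x, length, step):
    # one row per window; windows overlap whenever step < length
    starts = range(0, len(x) - length + 1, step)
    return np.vstack([x[s:s + length] for s in starts])

w = sliding_windows(np.arange(10), length=4, step=2)
# windows start at samples 0, 2, 4 and 6 -> 4 overlapping windows of 4 samples
```

Each row of `w` is then one feature vector of dimension $d = $ `length`.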
A single audio sample has very little information indeed; it is the longer-term (millisecond to second) properties that have all the interesting information. In an HCI task we probably wouldn't expect there to be hundreds of interaction "events" or state changes in a second; humans can't control things that quickly. So we want transforms that pull out interesting features over time.
The classical feature transform is the Fourier transform, which rewrites a signal varying over time as a sum of sinusoidal (periodic) components. This functions much like our ear works, splitting up audio into frequency bands, each of which has a phase and an amplitude.
End of explanation """
def magnitude_plot(x):
    # zero pad to see the ringing clearly
    x_pad = np.hstack((x, np.zeros(len(x)*4)))
    # plot the log magnitude spectrum (first half only)
    logfft = np.log(np.abs(np.fft.fft(x_pad)))[0:len(x_pad)//2]
    plt.plot(logfft)
    plt.axhline(np.median(logfft), c='g', ls=':', label="Median")
    plt.axhline(np.max(logfft), c='r', ls='--', label="Max")
    plt.ylim(-20,6)

# 512 samples of a sine wave
x = np.linspace(-150,150,512)
y = np.sin(x)
plt.plot(x,y)
plt.title("Unwindowed sine wave")
plt.figure()
# Raw FFT (leakage is present)
magnitude_plot(y)
plt.title("Raw magnitude spectrum")
# Window with the Blackman-Harris function
plt.figure()
window = scipy.signal.blackmanharris(512)
window_y = window * y
plt.plot(x,window)
plt.plot(x,window_y)
plt.title("Windowed sine wave")
plt.figure()
magnitude_plot(window_y)
plt.title("Note that the peak is much 'deeper' and 'sharper'")
"""
Explanation: Windowing/spectral leakage
The Fourier transform and related transforms expect an infinite, periodic signal. In audio classification, we have a short sample of a non-periodic signal (although there may well be periodic components). If we just take the FT of a chunk of a signal, there will be instantaneous "jumps" at the start and end which corrupt the spectrum (this is called spectral leakage).
Any transform which expects periodic or infinite data usually benefits from a window function. Applying a window function to taper off the signal eliminates this problem.
End of explanation """
from window import window_data
# print the window_data API out
help(window_data)

## The skeleton of a solution
# load the wave files and normalise
# ("data/rub_1.wav" and "data/rub_2.wav")
def load_wave(fname):
    # load and return a wave file
    sr, wave = scipy.io.wavfile.read(fname)
    return wave/32768.0

rub_1 = load_wave("data/rub_1.wav")
rub_2 = load_wave("data/rub_2.wav")

rub_1_features = window_data(rub_1, 100)
rub_2_features = window_data(rub_2, 100)
rub_1_labels = np.zeros(len(rub_1_features))
rub_2_labels = np.ones(len(rub_2_features))
rub_features = np.vstack([rub_1_features, rub_2_features])
rub_labels = np.hstack([rub_1_labels, rub_2_labels])
print rub_features.shape, rub_labels.shape

rubfft_features = np.abs(np.fft.fft(rub_features))
rubfft_train_features, rubfft_test_features, rub_train_labels, rub_test_labels = sklearn.cross_validation.train_test_split(
    rubfft_features, rub_labels, test_size=0.3, random_state=0)
rub_train_features, rub_test_features, rub_train_labels, rub_test_labels = sklearn.cross_validation.train_test_split(
    rub_features, rub_labels, test_size=0.3, random_state=0)
print rub_train_features.shape, rub_train_labels.shape

svm = sklearn.svm.SVC(gamma=0.1, C=100)
svm.fit(rub_train_features, rub_train_labels)

# we can plot the receiver-operator curve: the graph of false positive rate against true positive rate
scores = svm.decision_function(rub_test_features)
print scores.shape, rub_test_labels.shape
fpr, tpr, thresholds = sklearn.metrics.roc_curve(rub_test_labels, scores)
plt.plot(fpr,tpr)
plt.plot([0,1], [0,1])
plt.plot([0,1], [1,0])
plt.fill_between(fpr, tpr, facecolor='none', hatch='/', alpha=0.2)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend(["ROC", "Chance", "EER line"])
print sklearn.metrics.auc(fpr, tpr)

# split into windows
# test/train split
# train classifier
# evaluate classifier
"""
Explanation: Practical exercise: a baseline classifier <a id="practical"> </a>
Task: Build a simple audio classifier
You have two audio files: data/rub_1.wav and data/rub_2.wav which are acoustic microphone recordings from a Stane device being rubbed. The wave files are 8kHz, 16 bit, mono, and are 10 seconds long each.
The task is to build and evaluate a binary classifier that distinguishes the two sounds reliably. You should try this once with the raw audio features (i.e. no transform), and then see if you can get better performance using the FFT or other feature transforms.
Use one of these classifiers from sklearn (see the documentation here for details: http://scikit-learn.org/stable/supervised_learning.html#supervised-learning):
* KNeighborsClassifier
* SVC
* LinearDiscriminantAnalysis
* RandomForestClassifier
If you have no strong preference, try the LinearDiscriminantAnalysis classifier, which has no parameters to adjust.
Steps:
1. Get the files loaded and normalised (see below)
1. Set up the windowing of the data (see below for windowing function)
1. Set up a basic classifier
1. Get a ROC curve, AUC and accuracy up
1. Add an FFT magnitude transform
1. Retrain and test
1. Adjust windowing to improve performance
Tips:
Load wavefiles using scipy.io.wavfile.read() (see below).
A function window_data() is provided to window data to make fixed length features. You need to set the parameters! You should probably use a small value for subsample (e.g. 0.1) to make the process go faster, at least to begin with.
You should compute accuracy, AUC and plot a ROC curve (see above for the code to do that).
After trying out the raw data, you should try transforming the data. The np.fft module provides FFT functionality. Be aware that you need to feed real (as in, not complex!)
data to the classifier -- if you take a complex FFT, you need to convert to a magnitude or phase representation (e.g. using np.abs(fft(window)) to get the magnitude spectrum, or np.angle(fft(window)) to get the phase spectrum).
End of explanation """
import window
window.window_data(np.zeros(200), 20, 20)
"""
Explanation: Link to Audio Classification Part II
End of explanation """
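Pulling the pieces of this section together, here is a hedged end-to-end sketch of the window-then-magnitude-spectrum feature pipeline (function and parameter names are illustrative, not the course API):

```python
import numpy as np

def fft_features(signal, win_len, step):
    # overlapping windows, tapered with a Hann window to reduce
    # spectral leakage, then the first half of each magnitude spectrum
    starts = range(0, len(signal) - win_len + 1, step)
    taper = np.hanning(win_len)
    wins = np.vstack([signal[s:s + win_len] * taper for s in starts])
    return np.abs(np.fft.fft(wins, axis=1))[:, :win_len // 2]

# a 50 Hz tone sampled at an assumed 8 kHz, 0.1 s long
sig = np.sin(2 * np.pi * np.arange(800) * 50.0 / 8000.0)
feats = fft_features(sig, win_len=160, step=80)
```

Each row of `feats` is one fixed-length feature vector ready to hand to any of the sklearn classifiers above.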
shengqiu/renthop
xgboost.ipynb
gpl-2.0
import os import sys import operator import numpy as np import pandas as pd from scipy import sparse import xgboost as xgb from sklearn import model_selection, preprocessing, ensemble from sklearn.metrics import log_loss from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer """ Explanation: It seems the current high scoring script is written in R using H2O. So let us do one in python using XGBoost. Thanks to this script for feature engineering ideas. We shall start with importing the necessary modules End of explanation """ def runXGB(train_X, train_y, test_X, test_y=None, feature_names=None, seed_val=0, num_rounds=1000): param = {} param['objective'] = 'multi:softprob' param['eta'] = 0.1 param['max_depth'] = 6 param['silent'] = 1 param['num_class'] = 3 param['eval_metric'] = "mlogloss" param['min_child_weight'] = 1 param['subsample'] = 0.7 param['colsample_bytree'] = 0.7 param['seed'] = seed_val num_rounds = num_rounds plst = list(param.items()) xgtrain = xgb.DMatrix(train_X, label=train_y) if test_y is not None: xgtest = xgb.DMatrix(test_X, label=test_y) watchlist = [ (xgtrain,'train'), (xgtest, 'test') ] model = xgb.train(plst, xgtrain, num_rounds, watchlist, early_stopping_rounds=20) else: xgtest = xgb.DMatrix(test_X) model = xgb.train(plst, xgtrain, num_rounds) pred_test_y = model.predict(xgtest) return pred_test_y, model """ Explanation: Now let us write a custom function to run the xgboost model. End of explanation """ data_path = "../input/" train_file = data_path + "train.json" test_file = data_path + "test.json" train_df = pd.read_json(train_file) test_df = pd.read_json(test_file) print(train_df.shape) print(test_df.shape) """ Explanation: Let us read the train and test files and store it. End of explanation """ features_to_use = ["bathrooms", "bedrooms", "latitude", "longitude", "price"] """ Explanation: We do not need any pre-processing for numerical features and so create a list with those features. 
End of explanation """ # count of photos # train_df["num_photos"] = train_df["photos"].apply(len) test_df["num_photos"] = test_df["photos"].apply(len) # count of "features" # train_df["num_features"] = train_df["features"].apply(len) test_df["num_features"] = test_df["features"].apply(len) # count of words present in description column # train_df["num_description_words"] = train_df["description"].apply(lambda x: len(x.split(" "))) test_df["num_description_words"] = test_df["description"].apply(lambda x: len(x.split(" "))) # convert the created column to datetime object so as to extract more features train_df["created"] = pd.to_datetime(train_df["created"]) test_df["created"] = pd.to_datetime(test_df["created"]) # Let us extract some features like year, month, day, hour from date columns # train_df["created_year"] = train_df["created"].dt.year test_df["created_year"] = test_df["created"].dt.year train_df["created_month"] = train_df["created"].dt.month test_df["created_month"] = test_df["created"].dt.month train_df["created_day"] = train_df["created"].dt.day test_df["created_day"] = test_df["created"].dt.day train_df["created_hour"] = train_df["created"].dt.hour test_df["created_hour"] = test_df["created"].dt.hour # adding all these new features to use list # features_to_use.extend(["num_photos", "num_features", "num_description_words","created_year", "created_month", "created_day", "listing_id", "created_hour"]) """ Explanation: Now let us create some new features from the given features. 
End of explanation """
categorical = ["display_address", "manager_id", "building_id", "street_address"]
for f in categorical:
    if train_df[f].dtype=='object':
        #print(f)
        lbl = preprocessing.LabelEncoder()
        lbl.fit(list(train_df[f].values) + list(test_df[f].values))
        train_df[f] = lbl.transform(list(train_df[f].values))
        test_df[f] = lbl.transform(list(test_df[f].values))
        features_to_use.append(f)
"""
Explanation: We have 4 categorical features in our data
display_address
manager_id
building_id
street_address
So let us label encode these features.
End of explanation """
train_df['features'] = train_df["features"].apply(lambda x: " ".join(["_".join(i.split(" ")) for i in x]))
test_df['features'] = test_df["features"].apply(lambda x: " ".join(["_".join(i.split(" ")) for i in x]))
print(train_df["features"].head())
tfidf = CountVectorizer(stop_words='english', max_features=200)
tr_sparse = tfidf.fit_transform(train_df["features"])
te_sparse = tfidf.transform(test_df["features"])
"""
Explanation: The features column is a list of string values. So we can first combine all the strings together to get a single string and then apply a count vectorizer on top of it.
End of explanation """
train_X = sparse.hstack([train_df[features_to_use], tr_sparse]).tocsr()
test_X = sparse.hstack([test_df[features_to_use], te_sparse]).tocsr()

target_num_map = {'high':0, 'medium':1, 'low':2}
train_y = np.array(train_df['interest_level'].apply(lambda x: target_num_map[x]))
print(train_X.shape, test_X.shape)
"""
Explanation: Now let us stack both the dense and sparse features into a single dataset and also get the target variable.
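The underscore-join trick above keeps each multi-word amenity as a single token for the vectorizer. A minimal sketch on toy rows (illustrative values, not the Renthop data):

```python
from sklearn.feature_extraction.text import CountVectorizer

# toy rows standing in for the 'features' column
rows = [["Hardwood Floors", "Cats Allowed"], ["Doorman"]]
# replace inner spaces with underscores so each amenity stays one token,
# then join each row into a single space-separated string
joined = [" ".join("_".join(f.split(" ")) for f in row) for row in rows]
vec = CountVectorizer()
X = vec.fit_transform(joined)
```

Without the underscores, "Hardwood Floors" would split into two unrelated tokens and collide with other amenities containing "floors".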
End of explanation """
cv_scores = []
kf = model_selection.KFold(n_splits=5, shuffle=True, random_state=2016)
for dev_index, val_index in kf.split(range(train_X.shape[0])):
    dev_X, val_X = train_X[dev_index,:], train_X[val_index,:]
    dev_y, val_y = train_y[dev_index], train_y[val_index]
    preds, model = runXGB(dev_X, dev_y, val_X, val_y)
    cv_scores.append(log_loss(val_y, preds))
    print(cv_scores)
    break
"""
Explanation: Now let us do some cross validation to check the scores. Please run it locally to get the full cv scores; here we break out of the loop after the first fold to save time.
End of explanation """
preds, model = runXGB(train_X, train_y, test_X, num_rounds=400)
out_df = pd.DataFrame(preds)
out_df.columns = ["high", "medium", "low"]
out_df["listing_id"] = test_df.listing_id.values
out_df.to_csv("xgb_starter2.csv", index=False)
"""
Explanation: Now let us build the final model and get the predictions on the test set.
End of explanation """
NORCatUofC/rain
n-year/notebooks/N-Year Storms.ipynb
mit
from __future__ import absolute_import, division, print_function, unicode_literals import pandas as pd from datetime import datetime, timedelta import operator import matplotlib.pyplot as plt from collections import namedtuple %matplotlib notebook # The following code is adopted from Pat's Rolling Rain N-Year Threshold.pynb # Loading in hourly rain data from CSV, parsing the timestamp, and adding it as an index so it's more useful rain_df = pd.read_csv('data/ohare_hourly_20160929.csv') rain_df['datetime'] = pd.to_datetime(rain_df['datetime']) rain_df = rain_df.set_index(pd.DatetimeIndex(rain_df['datetime'])) rain_df = rain_df['19700101':] chi_rain_series = rain_df['HOURLYPrecip'].resample('1H', label='right').max().fillna(0) chi_rain_series.head() # N-Year Storm variables # These define the thresholds laid out by bulletin 70, and transfer mins and days to hours n_year_threshes = pd.read_csv('../../n-year/notebooks/data/n_year_definitions.csv') n_year_threshes = n_year_threshes.set_index('Duration') dur_str_to_hours = { '5-min':5/60.0, '10-min':10/60.0, '15-min':15/60.0, '30-min':0.5, '1-hr':1.0, '2-hr':2.0, '3-hr':3.0, '6-hr':6.0, '12-hr':12.0, '18-hr':18.0, '24-hr':24.0, '48-hr':48.0, '72-hr':72.0, '5-day':5*24.0, '10-day':10*24.0 } n_s = [int(x.replace('-year','')) for x in reversed(list(n_year_threshes.columns.values))] duration_strs = sorted(dur_str_to_hours.items(), key=operator.itemgetter(1), reverse=False) n_year_threshes # Roll through the rain series in intervals at the various durations, and look for periods that exceed the thresholds above. 
# Return a DataFrame
def find_n_year_storms(start_time, end_time, n):
    n_index = n_s.index(n)
    next_n = n_s[n_index-1] if n_index != 0 else None
    storms = []
    for duration_tuple in reversed(duration_strs):
        duration_str = duration_tuple[0]
        low_thresh = n_year_threshes.loc[duration_str, str(n) + '-year']
        high_thresh = n_year_threshes.loc[duration_str, str(next_n) + '-year'] if next_n is not None else None
        duration = int(dur_str_to_hours[duration_str])
        sub_series = chi_rain_series[start_time: end_time]
        rolling = sub_series.rolling(window=int(duration), min_periods=0).sum()
        if high_thresh is not None:
            event_endtimes = rolling[(rolling >= low_thresh) & (rolling < high_thresh)].sort_values(ascending=False)
        else:
            event_endtimes = rolling[(rolling >= low_thresh)].sort_values(ascending=False)
        for index, event_endtime in event_endtimes.iteritems():
            storms.append({'n': n, 'end_time': index, 'inches': event_endtime, 'duration_hrs': duration,
                           'start_time': index - timedelta(hours=duration)})
    return pd.DataFrame(storms)

# Find all of the n-year storms in the whole rainfall dataset
n_year_storms_raw = find_n_year_storms(chi_rain_series.index[0], chi_rain_series.index[-1], 100)
for n in n_s[1:]:
    n_year_storms_raw = n_year_storms_raw.append(find_n_year_storms(chi_rain_series.index[0], chi_rain_series.index[-1], n))
n_year_storms_raw.head()

# Re-order the dataframe to make it clearer
n_year_storms_raw = n_year_storms_raw[['n', 'duration_hrs', 'start_time', 'end_time', 'inches']]
n_year_storms_raw.head()
"""
Explanation: N-Year Storms
N-Year storms are defined by getting x inches of rain in y minutes/hours/days. For Chicago (Northeast Illinois), these thresholds are defined by Bulletin 70, and can be reviewed here.
End of explanation """
unique_storms = pd.DataFrame(n_year_storms_raw[0:1])
unique_storms.head()
# This method takes in a start time and end time, and searches unique_storms to see if a storm with these times
# overlaps with anything already in unique_storms.
# Returns True if it overlaps with an existing storm
def overlaps(start_time, end_time):
    Range = namedtuple('Range', ['start', 'end'])
    range_to_check = Range(start=start_time, end=end_time)
    for index, row in unique_storms.iterrows():
        date_range = Range(start=row['start_time'], end=row['end_time'])
        latest_start = max(range_to_check.start, date_range.start)
        earliest_end = min(range_to_check.end, date_range.end)
        if ((earliest_end - latest_start).days + 1) > 0:
            return True
    return False

s = pd.to_datetime('1987-08-11 01:00:00')
e = pd.to_datetime('1987-08-11 23:59:00')
overlaps(s,e)

# Iterate through n_year_storms_raw and if an overlapping storm does not exist in unique_storms, then add it
for index, storm in n_year_storms_raw.iterrows():
    if not overlaps(storm['start_time'], storm['end_time']):
        unique_storms = unique_storms.append(storm)
unique_storms.head()

# How many of each n-year storm did we see?
unique_storms['n'].value_counts().sort_index()
unique_storms.dtypes
"""
Explanation: Just by looking over the first few rows of the dataframe, we can see that there is a lot of overlap. We should assume that two n-year storms cannot overlap. Create a new dataframe for unique events, in which we take the highest n over a given time period.
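The pairwise test above is the standard interval-overlap check; stripped to its core (with illustrative datetimes, independent of the storm dataframe):

```python
from datetime import datetime

def intervals_overlap(start1, end1, start2, end2):
    # two closed intervals overlap iff each one starts before the other ends
    return max(start1, start2) <= min(end1, end2)

a = intervals_overlap(datetime(2017, 1, 1), datetime(2017, 1, 3),
                      datetime(2017, 1, 2), datetime(2017, 1, 4))  # overlapping
b = intervals_overlap(datetime(2017, 1, 1), datetime(2017, 1, 2),
                      datetime(2017, 1, 3), datetime(2017, 1, 4))  # disjoint
```

The notebook's version adds one day of slack via `.days + 1`, so storms that touch on the same calendar day also count as overlapping.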
End of explanation """ def find_year(timestamp): return timestamp.year unique_storms['year'] = unique_storms['start_time'].apply(find_year) unique_storms.head() unique_storms['year'].value_counts().sort_index().plot(kind='bar', title='N-Year Storms per Year') """ Explanation: Let's take a look at the number of n-year storms per year End of explanation """ ns_by_year = {year: {n: 0 for n in n_s} for year in range(1970, 2017)} for index, event in unique_storms.iterrows(): ns_by_year[event['year']][int(event['n'])] += 1 ns_by_year = pd.DataFrame(ns_by_year).transpose() ns_by_year.head() ns_by_year = ns_by_year[ns_by_year.columns[::-1]] # Reverse column order ns_by_year.columns = [str(n) + '-year' for n in ns_by_year.columns] ns_by_year.plot(kind='bar', stacked=True, title="N-Year Storms by Year") """ Explanation: Let's break up the different n's per year End of explanation """ unique_storms.to_csv('data/n_year_storms_ohare_noaa.csv', index=False) """ Explanation: From the above, it is not so obvious that there are patterns. Further analysis will be done in other notebooks. In order to do so, create a CSV of N-Year storms for further analysis End of explanation """
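The nested-dict counting used to build `ns_by_year` above can also be expressed as a pandas cross-tabulation; a small sketch with made-up storm rows:

```python
import pandas as pd

storms = pd.DataFrame({'year': [2010, 2010, 2011, 2011, 2011],
                       'n':    [1,    5,    1,    1,    10]})
# rows are years, columns are n-year categories, cells are counts
table = pd.crosstab(storms['year'], storms['n'])
```

`table.plot(kind='bar', stacked=True)` would then produce the same kind of stacked chart as the loop-built DataFrame.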
ericfourrier/pandas-patch
examples/Pandas Patch In Action.ipynb
mit
from pandas_patch import *
%psource structure

def get_test_df_complete():
    """ get the full test dataset from the Lending Club open source database,
    the purpose of this function is to be used in a demo ipython notebook """
    import requests
    from zipfile import ZipFile
    from StringIO import StringIO
    zip_to_download = "https://resources.lendingclub.com/LoanStats3b.csv.zip"
    r = requests.get(zip_to_download)
    zipfile = ZipFile(StringIO(r.content))
    file_csv = zipfile.namelist()[0]
    df = pd.read_csv(zipfile.open(file_csv), skiprows=[0],
                     na_values=['n/a', 'N/A', ''],
                     parse_dates=['issue_d', 'last_pymnt_d', 'next_pymnt_d', 'last_credit_pull_d'])
    zipfile.close()
    df = df[:-2]
    nb_row = float(len(df.index))
    df['na_col'] = np.nan
    df['constant_col'] = 'constant'
    df['duplicated_column'] = df.id
    df['many_missing_70'] = np.nan
    df.loc[1:int(0.3*nb_row), 'many_missing_70'] = 1
    df['bad'] = 1
    index_good = df['loan_status'].isin(['Fully Paid', 'Current', 'In Grace Period'])
    df.loc[index_good, 'bad'] = 0
    return df

# ipython tips
# with psource you can see the source code of a function
%psource pandas_patch
?nacount # to get info about the functions and docs
#df = get_test_df_complete() # because no wifi connection
df = get_test_df_complete()
df.columns
"""
Explanation: Pandas Patch In Action
import packages and download test data
End of explanation """
0).Napercentage %timeit df.nacount() timeit df.count_unique() df.count() df.nacount() 1 >= 2 df.int_rate.dtype df.sample_df(pct = 0.10).nrow() df.factors() timeit df.detectkey(pct = 0.05) timeit df.detectkey2() df.nearzerovar() def pandas_to_ndarray_wrap(X, copy=True): """ Converts X to a ndarray and provides a function to help convert back to pandas object. Parameters ---------- X : Series/DataFrame/ndarray copy : Boolean If True, return a copy. Returns ------- Xvals : ndarray If X is a Series/DataFrame, then Xvals = X.values, if ndarray, Xvals = X F : Function F(Xvals) = X """ if copy: X = X.copy() if isinstance(X, pd.Series): return X.values, lambda Z: pd.Series(np.squeeze(Z), index=X.index) elif isinstance(X, pd.DataFrame): return X.values, lambda Z: pd.DataFrame( Z, index=X.index, columns=X.columns) elif isinstance(X, np.ndarray) or isspmatrix(X): return X, lambda Z: Z else: raise ValueError("Unhandled type: %s" % type(X)) pandas_to_ndarray_wrap(df) pandas_to_ndarray_wrap(df)[0] df.size timeit df.duplicated() """ Explanation: Summary of strucuture, data info and cleaning functions End of explanation """
csaladenes/aviation
code/.ipynb_checkpoints/airport_dest_parser-checkpoint.ipynb
mit
L=json.loads(file('../json/L.json','r').read())
M=json.loads(file('../json/M.json','r').read())
N=json.loads(file('../json/N.json','r').read())
import requests
AP={}
for c in M:
    if c not in AP:AP[c]={}
    for i in range(len(L[c])):
        AP[c][N[c][i]]=L[c][i]
"""
Explanation: Load airports of each country
End of explanation """
baseurl='https://www.airportia.com/'
import requests, urllib2

def urlgetter(url):
    s = requests.Session()
    cookiesopen = s.get(url)
    cookies=str(s.cookies)
    fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
    #push token
    opener = urllib2.build_opener()
    for k in fcookies:
        opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
    #read html
    return s.get(url).content
"""
Explanation: record schedules for 2 weeks, then augment count with weekly flight numbers. seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.
parse Departures
End of explanation """
SD={}
SC=json.loads(file('../json/SC2.json','r').read())
#pop out last - if applicable
try: SD.pop(c)
except: pass
for h in range(len(AP.keys())):
    c=AP.keys()[h]
    if c in SC:
        #country not parsed yet
        if c not in SD:
            SD[c]=[]
            print h,c
            airportialinks=AP[c]
            sch={}
            #all airports of country, where there is traffic
            for i in airportialinks:
                if i in SC[c]:
                    print i,
                    if i not in sch:sch[i]={}
                    url=baseurl+airportialinks[i]
                    m=urlgetter(url)
                    for d in range(3,17):
                        #date not parsed yet
                        if d not in sch[i]:
                            # zero-pad the day so the URL reads e.g. ...departures/20170403, not ...departures/2017043
                            url=baseurl+airportialinks[i]+'departures/201704'+str(d).zfill(2)
                            m=urlgetter(url)
                            soup = BeautifulSoup(m, "lxml")
                            #if there are flights at all
                            if len(soup.findAll('table'))>0:
                                sch[i][d]=pd.read_html(m)[0]
                            else: print '--W-',d,
            SD[c]=sch
            print
"""
Explanation: good dates
End of explanation """
cnc=pd.read_excel(cnc_path+'cnc.xlsx').set_index('Name') MDF=pd.DataFrame() for c in SD: sch=SD[c] mdf=pd.DataFrame() for i in sch: for d in sch[i]: df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1) df['From']=i df['Date']=d mdf=pd.concat([mdf,df]) mdf=mdf.replace('Hahn','Frankfurt') mdf=mdf.replace('Hahn HHN','Frankfurt HHN') if len(sch)>0: mdf['City']=[i[:i.rfind(' ')] for i in mdf['To']] mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['To']] cpath=str(cnc.T.loc[c]['ISO2']).lower() if cpath=='nan':cpath='na' file('../countries/'+cpath+"/json/mdf_dest.json",'w').write(json.dumps(mdf.reset_index().to_json())) MDF=pd.concat([MDF,mdf]) print c, dbpath='E:/Dropbox/Public/datarepo/aviation/' #large file db path MDF.reset_index().to_json(dbpath+'json/MDF_dest.json') """ Explanation: Save End of explanation """
do-mpc/do-mpc
documentation/source/mhe_example.ipynb
lgpl-3.0
import numpy as np
from casadi import *

# Add do_mpc to path. This is not necessary if it was installed via pip.
import sys
sys.path.append('../../')

# Import do_mpc package:
import do_mpc
"""
Explanation: Getting started: MHE
Open an interactive online Jupyter Notebook with this content on Binder:
In this Jupyter Notebook we illustrate the application of the do-mpc moving horizon estimation module. Please first follow the general Getting Started guide, as we cover the same example and skip over some previously explained details.
End of explanation """
model_type = 'continuous' # either 'discrete' or 'continuous'
model = do_mpc.model.Model(model_type)
"""
Explanation: Creating the model
First, we need to decide on the model type. For the given example, we are working with a continuous model.
End of explanation """
End of explanation """ # State measurements phi_meas = model.set_meas('phi_1_meas', phi, meas_noise=True) # Input measurements phi_m_set_meas = model.set_meas('phi_m_set_meas', phi_m_set, meas_noise=False) """ Explanation: Model measurements This step is essential for the state estimation task: We must define a measurable output. Typically, this is a subset of states (or a transformation thereof) as well as the inputs. Note that some MHE implementations consider inputs separately. As mentionned above, we need to define for each measurement if additive noise is present. In our case we assume noisy state measurements ($\phi$) but perfect input measurements. End of explanation """ Theta_1 = model.set_variable('parameter', 'Theta_1') Theta_2 = model.set_variable('parameter', 'Theta_2') Theta_3 = model.set_variable('parameter', 'Theta_3') c = np.array([2.697, 2.66, 3.05, 2.86])*1e-3 d = np.array([6.78, 8.01, 8.82])*1e-5 """ Explanation: Model parameters Next we define parameters. The MHE allows to estimate parameters as well as states. Note that not all parameters must be estimated (as shown in the MHE setup below). We can also hardcode parameters (such as the spring constants c). End of explanation """ model.set_rhs('phi', dphi) dphi_next = vertcat( -c[0]/Theta_1*(phi[0]-phi_m[0])-c[1]/Theta_1*(phi[0]-phi[1])-d[0]/Theta_1*dphi[0], -c[1]/Theta_2*(phi[1]-phi[0])-c[2]/Theta_2*(phi[1]-phi[2])-d[1]/Theta_2*dphi[1], -c[2]/Theta_3*(phi[2]-phi[1])-c[3]/Theta_3*(phi[2]-phi_m[1])-d[2]/Theta_3*dphi[2], ) model.set_rhs('dphi', dphi_next, process_noise = False) tau = 1e-2 model.set_rhs('phi_m', 1/tau*(phi_m_set - phi_m)) """ Explanation: Right-hand-side equation Finally, we set the right-hand-side of the model by calling model.set_rhs(var_name, expr) with the var_name from the state variables defined above and an expression in terms of $x, u, z, p$. Note that we can decide whether the inidividual states experience process noise. 
In this example we assume that the system model is perfect. This is the default setting, so we don't need to pass this parameter explicitly.
End of explanation
"""
model.setup()
"""
Explanation: The model setup is completed by calling model.setup():
End of explanation
"""
mhe = do_mpc.estimator.MHE(model, ['Theta_1'])
"""
Explanation: After calling model.setup() we cannot define further variables etc.
Configuring the moving horizon estimator
The first step of configuring the moving horizon estimator is to call the class with a list of all parameters to be estimated. An empty list (default value) means that no parameters are estimated. The list of estimated parameters must be a subset (or all) of the previously defined parameters.
<div class="alert alert-info">
**Note** 
So why did we define ``Theta_2`` and ``Theta_3`` if we do not estimate them?
In many cases we will use the same model for (robust) control and MHE estimation. In that case it is possible to have some external parameters (e.g. weather prediction) that are uncertain but cannot be estimated.
</div>
End of explanation
"""
setup_mhe = {
    't_step': 0.1,
    'n_horizon': 10,
    'store_full_solution': True,
    'meas_from_data': True
}
mhe.set_param(**setup_mhe)
"""
Explanation: MHE parameters:
Next, we pass the MHE parameters. Most importantly, we need to set the time step and the horizon.
We also choose to obtain the measurement from the MHE data object. Alternatively, we are able to set a user-defined measurement function that is called at each timestep and returns the N previous measurements for the estimation step.
End of explanation
"""
P_v = np.diag(np.array([1,1,1]))
P_x = np.eye(8)
P_p = 10*np.eye(1)
mhe.set_default_objective(P_x, P_v, P_p)
"""
Explanation: Objective function
The most important step of the configuration is to define the objective function for the MHE problem:
\begin{align}
\underset{
\begin{array}{c}
\mathbf{x}_{0:N+1}, \mathbf{u}_{0:N}, p,\\
\mathbf{w}_{0:N}, \mathbf{v}_{0:N}
\end{array}
}{\mathrm{min}}
&\frac{1}{2}\|x_0-\tilde{x}_0\|_{P_x}^2+\frac{1}{2}\|p-\tilde{p}\|_{P_p}^2
+\sum_{k=0}^{N-1} \left(\frac{1}{2}\|v_k\|_{P_{v,k}}^2 + \frac{1}{2}\|w_k\|_{P_{w,k}}^2\right),\\
&\left.\begin{aligned}
\mathrm{s.t.}\quad x_{k+1} &= f(x_k,u_k,z_k,p,p_{\text{tv},k}) + w_k,\\
y_k &= h(x_k,u_k,z_k,p,p_{\text{tv},k}) + v_k, \\
&g(x_k,u_k,z_k,p_k,p_{\text{tv},k}) \leq 0
\end{aligned}\right\} k=0,\dots, N
\end{align}
We typically consider the formulation shown above, where the user has to pass the weighting matrices P_x, P_v, P_p and P_w. In our concrete example, we assume a perfect model without process noise and thus P_w is not required.
We set the objective function with the weighting matrices shown below:
End of explanation
"""
p_template_mhe = mhe.get_p_template()
"""
Explanation: Fixed parameters
If the model contains parameters and if we estimate only a subset of these parameters, it is required to pass a function that returns the value of the remaining parameters at each time step.
Furthermore, this function must return a specific structure, which is first obtained by calling: End of explanation """ def p_fun_mhe(t_now): p_template_mhe['Theta_2'] = 2.25e-4 p_template_mhe['Theta_3'] = 2.25e-4 return p_template_mhe """ Explanation: Using this structure, we then formulate the following function for the remaining (not estimated) parameters: End of explanation """ mhe.set_p_fun(p_fun_mhe) """ Explanation: This function is finally passed to the mhe instance: End of explanation """ mhe.bounds['lower','_u', 'phi_m_set'] = -2*np.pi mhe.bounds['upper','_u', 'phi_m_set'] = 2*np.pi mhe.bounds['lower','_p_est', 'Theta_1'] = 1e-5 mhe.bounds['upper','_p_est', 'Theta_1'] = 1e-3 """ Explanation: Bounds The MHE implementation also supports bounds for states, inputs, parameters which can be set as shown below. For the given example, it is especially important to set realistic bounds on the estimated parameter. Otherwise the MHE solution is a poor fit. End of explanation """ mhe.setup() """ Explanation: Setup Similar to the controller, simulator and model, we finalize the MHE configuration by calling: End of explanation """ simulator = do_mpc.simulator.Simulator(model) """ Explanation: Configuring the Simulator In many cases, a developed control approach is first tested on a simulated system. do-mpc responds to this need with the do_mpc.simulator class. The simulator uses state-of-the-art DAE solvers, e.g. Sundials CVODE to solve the DAE equations defined in the supplied do_mpc.model. This will often be the same model as defined for the optimizer but it is also possible to use a more complex model of the same system. In this section we demonstrate how to setup the simulator class for the given example. 
We initialize the class with the previously defined model:
End of explanation
"""
simulator = do_mpc.simulator.Simulator(model)
"""
Explanation: Simulator parameters
Next, we need to parametrize the simulator. Please see the API documentation for simulator.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set t_step. We choose the same value as for the optimizer.
End of explanation
"""
# Instead of supplying a dict with the splat operator (**), as with the optimizer.set_param(),
# we can also use keywords (and call the method multiple times, if necessary):
simulator.set_param(t_step = 0.1)
"""
Explanation: Parameters
In the model we have defined the inertia of the masses as parameters. The simulator is now parametrized to simulate using the "true" values at each timestep. In the most general case, these values can change, which is why we need to supply a function that can be evaluated at each time to obtain the current values. do-mpc requires this function to have a specific return structure which we obtain first by calling:
End of explanation
"""
p_template_sim = simulator.get_p_template()
"""
Explanation: We need to define a function which returns this structure with the desired numerical values. For our simple case:
End of explanation
"""
def p_fun_sim(t_now):
    p_template_sim['Theta_1'] = 2.25e-4
    p_template_sim['Theta_2'] = 2.25e-4
    p_template_sim['Theta_3'] = 2.25e-4
    return p_template_sim
"""
Explanation: This function is now supplied to the simulator in the following way:
End of explanation
"""
simulator.set_p_fun(p_fun_sim)
"""
Explanation: Setup
Finally, we call:
End of explanation
"""
simulator.setup()
"""
Explanation: Creating the loop
While the full loop should also include a controller, we are currently only interested in showcasing the estimator.
We therefore estimate the states for an arbitrary initial condition and some random control inputs (shown below).
End of explanation
"""
x0 = np.pi*np.array([1, 1, -1.5, 1, -5, 5, 0, 0]).reshape(-1,1)
"""
Explanation: To make things more interesting we pass the estimator a perturbed initial state:
End of explanation
"""
x0_mhe = x0*(1+0.5*np.random.randn(8,1))
"""
Explanation: and use the x0 property of the simulator and estimator to set the initial state:
End of explanation
"""
simulator.x0 = x0
mhe.x0_mhe = x0_mhe
mhe.p_est0 = 1e-4
"""
Explanation: It is also advised to create an initial guess for the MHE optimization problem. The simplest way is to base that guess on the initial state, which is done automatically when calling:
End of explanation
"""
mhe.set_initial_guess()
"""
Explanation: Setting up the Graphic
We are again using the do-mpc graphics module. This versatile tool allows us to conveniently configure a user-defined plot based on Matplotlib and visualize the results stored in the mhe.data, simulator.data objects.
We start by importing matplotlib:
End of explanation
"""
import matplotlib.pyplot as plt
import matplotlib as mpl
# Customizing Matplotlib:
mpl.rcParams['font.size'] = 18
mpl.rcParams['lines.linewidth'] = 3
mpl.rcParams['axes.grid'] = True
"""
Explanation: And initializing the graphics module with the data object of interest. In this particular example, we want to visualize both the mhe.data as well as the simulator.data.
End of explanation
"""
mhe_graphics = do_mpc.graphics.Graphics(mhe.data)
sim_graphics = do_mpc.graphics.Graphics(simulator.data)
"""
Explanation: Next, we create a figure and obtain its axis object. Matplotlib offers multiple alternative ways to obtain an axis object, e.g. 
subplots, subplot2grid, or simply gca. We use subplots:
End of explanation
"""
%%capture
# We just want to create the plot and not show it right now. This "inline magic" suppresses the output.
fig, ax = plt.subplots(3, sharex=True, figsize=(16,9))
fig.align_ylabels()

# We create another figure to plot the parameters:
fig_p, ax_p = plt.subplots(1, figsize=(16,4))
"""
Explanation: The most important API element for setting up the graphics module is graphics.add_line, which mimics the API of model.add_variable, except that we also need to pass an axis.
We want to show both the simulator and MHE results on the same axis, which is why we configure both of them identically:
End of explanation
"""
%%capture
for g in [sim_graphics, mhe_graphics]:
    # Plot the angle positions (phi_1, phi_2, phi_2) on the first axis:
    g.add_line(var_type='_x', var_name='phi', axis=ax[0])
    ax[0].set_prop_cycle(None)
    g.add_line(var_type='_x', var_name='dphi', axis=ax[1])
    ax[1].set_prop_cycle(None)
    # Plot the set motor positions (phi_m_1_set, phi_m_2_set) on the second axis:
    g.add_line(var_type='_u', var_name='phi_m_set', axis=ax[2])
    ax[2].set_prop_cycle(None)

    g.add_line(var_type='_p', var_name='Theta_1', axis=ax_p)

ax[0].set_ylabel('angle position [rad]')
ax[1].set_ylabel('angular \n velocity [rad/s]')
ax[2].set_ylabel('motor angle [rad]')
ax[2].set_xlabel('time [s]')
"""
Explanation: Before we show any results, we further configure the graphic by changing the appearance of the simulated lines.
We can obtain line objects from any graphics instance with the result_lines property: End of explanation """ # First element for state phi: sim_graphics.result_lines['_x', 'phi', 0] """ Explanation: We obtain a structure that can be queried conveniently as follows: End of explanation """ for line_i in sim_graphics.result_lines.full: line_i.set_alpha(0.4) line_i.set_linewidth(6) """ Explanation: In this particular case we want to change all result_lines with: End of explanation """ ax[0].legend(sim_graphics.result_lines['_x', 'phi'], '123', title='Sim.', loc='center right') ax[1].legend(mhe_graphics.result_lines['_x', 'phi'], '123', title='MHE', loc='center right') """ Explanation: We furthermore use this property to create a legend: End of explanation """ ax_p.legend(sim_graphics.result_lines['_p', 'Theta_1']+mhe_graphics.result_lines['_p', 'Theta_1'], ['True','Estim.']) """ Explanation: and another legend for the parameter plot: End of explanation """ def random_u(u0): # Hold the current value with 80% chance or switch to new random value. u_next = (0.5-np.random.rand(2,1))*np.pi # New candidate value. switch = np.random.rand() >= 0.8 # switching? 0 or 1. u0 = (1-switch)*u0 + switch*u_next # Old or new value. return u0 """ Explanation: Running the loop We investigate the closed-loop MHE performance by alternating a simulation step (y0=simulator.make_step(u0)) and an estimation step (x0=mhe.make_step(y0)). Since we are lacking the controller which would close the loop (u0=mpc.make_step(x0)), we define a random control input function: End of explanation """ %%capture np.random.seed(999) #make it repeatable u0 = np.zeros((2,1)) for i in range(50): u0 = random_u(u0) # Control input v0 = 0.1*np.random.randn(model.n_v,1) # measurement noise y0 = simulator.make_step(u0, v0=v0) x0 = mhe.make_step(y0) # MHE estimation step """ Explanation: The function holds the current input value with 80% chance or switches to a new random input value. We can now run the loop. 
At each iteration, we perturb our measurements, for a more realistic scenario. This can be done by calling the simulator with a value for the measurement noise, which we defined in the model above. End of explanation """ sim_graphics.plot_results() mhe_graphics.plot_results() # Reset the limits on all axes in graphic to show the data. mhe_graphics.reset_axes() # Mark the time after a full horizon is available to the MHE. ax[0].axvline(1) ax[1].axvline(1) ax[2].axvline(1) # Show the figure: fig """ Explanation: We can visualize the resulting trajectory with the pre-defined graphic: End of explanation """ ax_p.set_ylim(1e-4, 4e-4) ax_p.set_ylabel('mass inertia') ax_p.set_xlabel('time [s]') fig_p """ Explanation: Parameter estimation: End of explanation """ mhe.bounds['lower','_p_est', 'Theta_1'] = -np.inf mhe.bounds['upper','_p_est', 'Theta_1'] = np.inf """ Explanation: MHE Advantages One of the main advantages of moving horizon estimation is the possibility to set bounds for states, inputs and estimated parameters. As mentioned above, this is crucial in the presented example. Let's see how the MHE behaves without realistic bounds for the estimated mass inertia of disc one. We simply reconfigure the bounds: End of explanation """ mhe.setup() """ Explanation: And setup the MHE again. The backend is now recreating the optimization problem, taking into consideration the currently saved bounds. End of explanation """ mhe.reset_history() simulator.reset_history() """ Explanation: We reset the history of the estimator and simulator (to clear their data objects and start "fresh"). End of explanation """ %%capture np.random.seed(999) #make it repeatable u0 = np.zeros((2,1)) for i in range(50): u0 = random_u(u0) # Control input v0 = 0.1*np.random.randn(model.n_v,1) # measurement noise y0 = simulator.make_step(u0, v0=v0) x0 = mhe.make_step(y0) # MHE estimation step """ Explanation: Finally, we run the exact same loop again obtaining new results. 
End of explanation """ sim_graphics.plot_results() mhe_graphics.plot_results() # Reset the limits on all axes in graphic to show the data. mhe_graphics.reset_axes() # Mark the time after a full horizon is available to the MHE. ax[0].axvline(1) ax[1].axvline(1) ax[2].axvline(1) # Show the figure: fig """ Explanation: These results now look quite terrible: End of explanation """ ax_p.set_ylabel('mass inertia') ax_p.set_xlabel('time [s]') fig_p """ Explanation: Clearly, the main problem is a faulty parameter estimation, which is off by orders of magnitude: End of explanation """
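The effect of the parameter bounds can also be illustrated outside of do-mpc with a small standalone sketch (the scalar toy system, noise level, and bound values below are illustrative assumptions, not part of the tutorial's model): for a single parameter, box-constrained least squares simply clips the unconstrained estimate to the bounds, which keeps a noise-driven estimate inside a physically plausible range.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar system y = theta * u + v with few, very noisy samples.
theta_true = 2.25e-4                        # same order of magnitude as the inertias above
u = rng.uniform(-1.0, 1.0, size=5)          # a handful of excitation inputs
y = theta_true * u + 5e-4 * rng.standard_normal(5)  # noise dominates the signal

# Unconstrained least-squares estimate of theta:
theta_ls = (u @ y) / (u @ u)

# For a single parameter with box constraints, the minimizer of
# ||y - theta*u||^2 over [lo, hi] is the unconstrained solution clipped to the box.
lo, hi = 1e-5, 1e-3
theta_bounded = float(np.clip(theta_ls, lo, hi))

print('unconstrained:', theta_ls)
print('bounded:      ', theta_bounded)
```

Depending on the noise realization, the unconstrained estimate can be far off (even negative), while the bounded estimate always stays in the admissible interval — the same mechanism that kept the MHE's inertia estimate reasonable above.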
import numpy as np  # useful for scientific computing in Python
import pandas as pd # primary data structure library
from PIL import Image # converting images into arrays
"""
Explanation: <a href="https://cognitiveclass.ai"><img src = "https://ibm.box.com/shared/static/9gegpsmnsoo25ikkbl4qzlvlyjbgxs5x.png" width = 400> </a>
<h1 align=center><font size = 5>Waffle Charts, Word Clouds, and Regression Plots</font></h1>
Introduction
In this lab, we will learn how to create word clouds and waffle charts. Furthermore, we will start learning about additional visualization libraries that are based on Matplotlib, namely the library seaborn, and we will learn how to create regression plots using the seaborn library.
Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
1. [Exploring Datasets with *p*andas](#0)<br>
2. [Downloading and Prepping Data](#2)<br>
3. [Visualizing Data using Matplotlib](#4) <br>
4. [Waffle Charts](#6) <br>
5. [Word Clouds](#8) <br>
6. [Regression Plots](#10) <br>
</div>
<hr>
Exploring Datasets with pandas and Matplotlib<a id="0"></a>
Toolkits: The course heavily relies on pandas and Numpy for data wrangling, analysis, and visualization. The primary plotting library we will explore in the course is Matplotlib.
Dataset: Immigration to Canada from 1980 to 2013 - International migration flows to and from selected countries - The 2015 revision from United Nation's website
The dataset contains annual data on the flows of international migrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence both for foreigners and nationals. In this lab, we will focus on the Canadian Immigration data.
Downloading and Prepping Data <a id="2"></a> Import Primary Modules: End of explanation """ df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx', sheet_name='Canada by Citizenship', skiprows=range(20), skipfooter=2) print('Data downloaded and read into a dataframe!') """ Explanation: Let's download and import our primary Canadian Immigration dataset using pandas read_excel() method. Normally, before we can do that, we would need to download a module which pandas requires to read in excel files. This module is xlrd. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the xlrd module: !conda install -c anaconda xlrd --yes Download the dataset and read it into a pandas dataframe: End of explanation """ df_can.head() """ Explanation: Let's take a look at the first five items in our dataset End of explanation """ # print the dimensions of the dataframe print(df_can.shape) """ Explanation: Let's find out how many entries there are in our dataset End of explanation """ # clean up the dataset to remove unnecessary columns (eg. REG) df_can.drop(['AREA','REG','DEV','Type','Coverage'], axis = 1, inplace = True) # let's rename the columns so that they make sense df_can.rename (columns = {'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace = True) # for sake of consistency, let's also make all column labels of type string df_can.columns = list(map(str, df_can.columns)) # set the country name as index - useful for quickly looking up countries using .loc method df_can.set_index('Country', inplace = True) # add total column df_can['Total'] = df_can.sum (axis = 1) # years that we will be using in this lesson - useful for plotting later on years = list(map(str, range(1980, 2014))) print ('data dimensions:', df_can.shape) """ Explanation: Clean up data. 
We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to Introduction to Matplotlib and Line Plots and Area Plots, Histograms, and Bar Plots for a detailed description of this preprocessing.
End of explanation
"""
%matplotlib inline

import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches # needed for waffle charts

mpl.style.use('ggplot') # optional: for ggplot-like style

# check for latest version of Matplotlib
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
"""
Explanation: Visualizing Data using Matplotlib<a id="4"></a>
Import matplotlib:
End of explanation
"""
# let's create a new dataframe for these three countries
df_dsn = df_can.loc[['Denmark', 'Norway', 'Sweden'], :]

# let's take a look at our dataframe
df_dsn
"""
Explanation: Waffle Charts <a id="6"></a>
A waffle chart is an interesting visualization that is normally created to display progress toward goals. It is commonly an effective option when you are trying to add interesting visualization features to a visual that consists mainly of cells, such as an Excel dashboard.
Let's revisit the previous case study about Denmark, Norway, and Sweden.
End of explanation
"""
# compute the proportion of each category with respect to the total
total_values = sum(df_dsn['Total'])
category_proportions = [(float(value) / total_values) for value in df_dsn['Total']]

# print out proportions
for i, proportion in enumerate(category_proportions):
    print (df_dsn.index.values[i] + ': ' + str(proportion))
"""
Explanation: Unfortunately, unlike R, waffle charts are not built into any of the Python visualization libraries. Therefore, we will learn how to create them from scratch.
Step 1. The first step in creating a waffle chart is determining the proportion of each category with respect to the total.
End of explanation
"""
width = 40 # width of chart
height = 10 # height of chart

total_num_tiles = width * height # total number of tiles

print ('Total number of tiles is ', total_num_tiles)
"""
Explanation: Step 2. The second step is defining the overall size of the waffle chart.
End of explanation
"""
# compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]

# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
    print (df_dsn.index.values[i] + ': ' + str(tiles))
"""
Explanation: Step 3. The third step is using the proportion of each category to determine its respective number of tiles.
End of explanation
"""
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))

# define indices to loop through waffle chart
category_index = 0
tile_index = 0

# populate the waffle chart
for col in range(width):
    for row in range(height):
        tile_index += 1

        # if the number of tiles populated for the current category is equal to its corresponding allocated tiles...
        if tile_index > sum(tiles_per_category[0:category_index]):
            # ...proceed to the next category
            category_index += 1

        # set the class value to an integer, which increases with class
        waffle_chart[row, col] = category_index

print ('Waffle chart populated!')
"""
Explanation: Based on the calculated proportions, Denmark will occupy 129 tiles of the waffle chart, Norway will occupy 77 tiles, and Sweden will occupy 194 tiles.
Step 4. The fourth step is creating a matrix that resembles the waffle chart and populating it.
End of explanation
"""
waffle_chart
"""
Explanation: Let's take a peek at what the matrix looks like.
End of explanation """ # instantiate a new figure object fig = plt.figure() # use matshow to display the waffle chart colormap = plt.cm.coolwarm plt.matshow(waffle_chart, cmap=colormap) plt.colorbar() """ Explanation: As expected, the matrix consists of three categories and the total number of each category's instances matches the total number of tiles allocated to each category. Step 5. Map the waffle chart matrix into a visual. End of explanation """ # instantiate a new figure object fig = plt.figure() # use matshow to display the waffle chart colormap = plt.cm.coolwarm plt.matshow(waffle_chart, cmap=colormap) plt.colorbar() # get the axis ax = plt.gca() # set minor ticks ax.set_xticks(np.arange(-.5, (width), 1), minor=True) ax.set_yticks(np.arange(-.5, (height), 1), minor=True) # add gridlines based on minor ticks ax.grid(which='minor', color='w', linestyle='-', linewidth=2) plt.xticks([]) plt.yticks([]) """ Explanation: Step 6. Prettify the chart. End of explanation """ # instantiate a new figure object fig = plt.figure() # use matshow to display the waffle chart colormap = plt.cm.coolwarm plt.matshow(waffle_chart, cmap=colormap) plt.colorbar() # get the axis ax = plt.gca() # set minor ticks ax.set_xticks(np.arange(-.5, (width), 1), minor=True) ax.set_yticks(np.arange(-.5, (height), 1), minor=True) # add gridlines based on minor ticks ax.grid(which='minor', color='w', linestyle='-', linewidth=2) plt.xticks([]) plt.yticks([]) # compute cumulative sum of individual categories to match color schemes between chart and legend values_cumsum = np.cumsum(df_dsn['Total']) total_values = values_cumsum[len(values_cumsum) - 1] # create legend legend_handles = [] for i, category in enumerate(df_dsn.index.values): label_str = category + ' (' + str(df_dsn['Total'][i]) + ')' color_val = colormap(float(values_cumsum[i])/total_values) legend_handles.append(mpatches.Patch(color=color_val, label=label_str)) # add legend to chart plt.legend(handles=legend_handles, loc='lower 
center', ncol=len(df_dsn.index.values), bbox_to_anchor=(0., -0.2, 0.95, .1) )
"""
Explanation: Step 7. Create a legend and add it to chart.
End of explanation
"""
def create_waffle_chart(categories, values, height, width, colormap, value_sign=''):

    # compute the proportion of each category with respect to the total
    total_values = sum(values)
    category_proportions = [(float(value) / total_values) for value in values]

    # compute the total number of tiles
    total_num_tiles = width * height # total number of tiles
    print ('Total number of tiles is', total_num_tiles)

    # compute the number of tiles for each category
    tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]

    # print out number of tiles per category
    for i, tiles in enumerate(tiles_per_category):
        print (categories[i] + ': ' + str(tiles))

    # initialize the waffle chart as an empty matrix
    waffle_chart = np.zeros((height, width))

    # define indices to loop through waffle chart
    category_index = 0
    tile_index = 0

    # populate the waffle chart
    for col in range(width):
        for row in range(height):
            tile_index += 1

            # if the number of tiles populated for the current category
            # is equal to its corresponding allocated tiles...
            if tile_index > sum(tiles_per_category[0:category_index]):
                # ...proceed to the next category
                category_index += 1

            # set the class value to an integer, which increases with class
            waffle_chart[row, col] = category_index

    # instantiate a new figure object
    fig = plt.figure()

    # use matshow to display the waffle chart
    colormap = plt.cm.coolwarm
    plt.matshow(waffle_chart, cmap=colormap)
    plt.colorbar()

    # get the axis
    ax = plt.gca()

    # set minor ticks
    ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
    ax.set_yticks(np.arange(-.5, (height), 1), minor=True)

    # add gridlines based on minor ticks
    ax.grid(which='minor', color='w', linestyle='-', linewidth=2)

    plt.xticks([])
    plt.yticks([])

    # compute cumulative sum of individual categories to match color schemes between chart and legend
    values_cumsum = np.cumsum(values)
    total_values = values_cumsum[len(values_cumsum) - 1]

    # create legend
    legend_handles = []
    for i, category in enumerate(categories):
        if value_sign == '%':
            label_str = category + ' (' + str(values[i]) + value_sign + ')'
        else:
            label_str = category + ' (' + value_sign + str(values[i]) + ')'

        color_val = colormap(float(values_cumsum[i])/total_values)
        legend_handles.append(mpatches.Patch(color=color_val, label=label_str))

    # add legend to chart
    plt.legend(
        handles=legend_handles,
        loc='lower center',
        ncol=len(categories),
        bbox_to_anchor=(0., -0.2, 0.95, .1)
    )
"""
Explanation: And there you go! What a good looking delicious waffle chart, don't you think?
Now it would be very inefficient to repeat these seven steps every time we wish to create a waffle chart. So let's combine all seven steps into one function called create_waffle_chart. This function would take the following parameters as input:
categories: Unique categories or classes in dataframe.
values: Values corresponding to categories or classes.
height: Defined height of waffle chart.
width: Defined width of waffle chart.
colormap: Colormap class
value_sign: In order to make our function more generalizable, we will add this parameter to address signs that could be associated with a value such as %, $, and so on. value_sign has a default value of empty string.
End of explanation
"""
width = 40 # width of chart
height = 10 # height of chart

categories = df_dsn.index.values # categories
values = df_dsn['Total'] # corresponding values of categories

colormap = plt.cm.coolwarm # color map class
"""
Explanation: Now to create a waffle chart, all we have to do is call the function create_waffle_chart. Let's define the input parameters:
End of explanation
"""
create_waffle_chart(categories, values, height, width, colormap)
"""
Explanation: And now let's call our function to create a waffle chart.
End of explanation
"""
# install wordcloud
!conda install -c conda-forge wordcloud==1.4.1 --yes

# import package and its set of stopwords
from wordcloud import WordCloud, STOPWORDS

print ('Wordcloud is installed and imported!')
"""
Explanation: There seems to be a new Python package for generating waffle charts called PyWaffle, but it looks like the repository is still being built. But feel free to check it out and play with it.
Word Clouds <a id="8"></a>
Word clouds (also known as text clouds or tag clouds) work in a simple way: the more a specific word appears in a source of textual data (such as a speech, blog post, or database), the bigger and bolder it appears in the word cloud.
Luckily, a Python package already exists for generating word clouds. The package, called word_cloud, was developed by Andreas Mueller. You can learn more about the package by following this link.
Let's use this package to learn how to generate a word cloud for a given text document.
First, let's install the package.
End of explanation
"""
# download file and save as alice_novel.txt
!wget --quiet https://ibm.box.com/shared/static/m54sjtrshpt5su20dzesl5en9xa5vfz1.txt -O alice_novel.txt

# open the file and read it into a variable alice_novel
alice_novel = open('alice_novel.txt', 'r').read()
print ('File downloaded and saved!')
"""
Explanation: Word clouds are commonly used to perform high-level analysis and visualization of text data. Accordingly, let's digress from the immigration dataset and work with an example that involves analyzing text data.
Let's try to analyze a short novel written by Lewis Carroll titled Alice's Adventures in Wonderland. Let's go ahead and download a .txt file of the novel.
End of explanation
"""
stopwords = set(STOPWORDS)
"""
Explanation: Next, let's use the stopwords that we imported from word_cloud. We use the function set to remove any redundant stopwords.
End of explanation
"""
# instantiate a word cloud object
alice_wc = WordCloud(
    background_color='white',
    max_words=2000,
    stopwords=stopwords
)

# generate the word cloud
alice_wc.generate(alice_novel)
"""
Explanation: Create a word cloud object and generate a word cloud. For simplicity, let's generate a word cloud using only the first 2000 words in the novel.
End of explanation
"""
# display the word cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
"""
Explanation: Awesome! Now that the word cloud is created, let's visualize it.
End of explanation
"""
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height

# display the cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
"""
Explanation: Interesting! So in the first 2000 words in the novel, the most common words are Alice, said, little, Queen, and so on. Let's resize the cloud so that we can see the less frequent words a little better.
End of explanation
"""
stopwords.add('said') # add the word said to stopwords

# re-generate the word cloud
alice_wc.generate(alice_novel)

# display the cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height

plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
"""
Explanation: Much better! However, said isn't really an informative word. So let's add it to our stopwords and re-generate the cloud.
End of explanation
"""
# download image
!wget --quiet https://ibm.box.com/shared/static/3mpxgaf6muer6af7t1nvqkw9cqj85ibm.png -O alice_mask.png

# save mask to alice_mask
alice_mask = np.array(Image.open('alice_mask.png'))

print('Image downloaded and saved!')
"""
Explanation: Excellent! This looks really interesting! Another cool thing you can implement with the word_cloud package is superimposing the words onto a mask of any shape. Let's use a mask of Alice and her rabbit. We already created the mask for you, so let's go ahead and download it and call it alice_mask.png.
End of explanation
"""
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height

plt.imshow(alice_mask, cmap=plt.cm.gray, interpolation='bilinear')
plt.axis('off')
plt.show()
"""
Explanation: Let's take a look at what the mask looks like.
End of explanation
"""
# instantiate a word cloud object
alice_wc = WordCloud(background_color='white', max_words=2000, mask=alice_mask, stopwords=stopwords)

# generate the word cloud
alice_wc.generate(alice_novel)

# display the word cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height

plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
"""
Explanation: Shaping the word cloud according to the mask is straightforward using the word_cloud package. For simplicity, we will continue using the first 2000 words in the novel.
End of explanation
"""
df_can.head()
"""
Explanation: Really impressive! 
Unfortunately, our immigration data does not have any text data, but where there is a will there is a way. Let's generate sample text data from our immigration dataset, say text data of 90 words. Let's recall what our data looks like.
End of explanation
"""
total_immigration = df_can['Total'].sum()
total_immigration
"""
Explanation: And what was the total immigration from 1980 to 2013?
End of explanation
"""
max_words = 90
word_string = ''
for country in df_can.index.values:
    # check if country's name is a single-word name
    if len(country.split(' ')) == 1:
        repeat_num_times = int(df_can.loc[country, 'Total']/float(total_immigration)*max_words)
        word_string = word_string + ((country + ' ') * repeat_num_times)

# display the generated text
word_string
"""
Explanation: Using countries with single-word names, let's duplicate each country's name based on how much they contribute to the total immigration.
End of explanation
"""
# create the word cloud
wordcloud = WordCloud(background_color='white').generate(word_string)

print('Word cloud created!')

# display the cloud
fig = plt.figure()
fig.set_figwidth(14)
fig.set_figheight(18)

plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
"""
Explanation: We are not dealing with any stopwords here, so there is no need to pass them when creating the word cloud.
End of explanation
"""
# install seaborn
!pip install seaborn

# import library
import seaborn as sns

print('Seaborn installed and imported!')
"""
Explanation: According to the above word cloud, it looks like the majority of the people who immigrated came from one of the 15 countries that are displayed by the word cloud. One cool visual that you could build is perhaps using the map of Canada as a mask and superimposing the word cloud on top of the map of Canada. That would be an interesting visual to build!
Regression Plots <a id="10"></a>
Seaborn is a Python visualization library based on matplotlib.
It provides a high-level interface for drawing attractive statistical graphics. You can learn more about seaborn by following this link and more about seaborn regression plots by following this link.
In lab Pie Charts, Box Plots, Scatter Plots, and Bubble Plots, we learned how to create a scatter plot and then fit a regression line. It took ~20 lines of code to create the scatter plot along with the regression fit. In this final section, we will explore seaborn and see how efficient it is to create regression lines and fits using this library!
Let's first install seaborn
End of explanation
"""
# we can use the sum() method to get the total population per year
df_tot = pd.DataFrame(df_can[years].sum(axis=0))

# change the years to type float (useful for regression later on)
df_tot.index = map(float,df_tot.index)

# reset the index to put it back in as a column in the df_tot dataframe
df_tot.reset_index(inplace = True)

# rename columns
df_tot.columns = ['year', 'total']

# view the final dataframe
df_tot.head()
"""
Explanation: Create a new dataframe that stores the total number of landed immigrants to Canada per year from 1980 to 2013.
End of explanation
"""
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot)
"""
Explanation: With seaborn, generating a regression plot is as simple as calling the regplot function.
End of explanation
"""
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green')
"""
Explanation: This is not magic; it is seaborn! You can also customize the color of the scatter plot and regression line. Let's change the color to green.
End of explanation
"""
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
"""
Explanation: You can always customize the marker shape, so instead of circular markers, let's use '+'.
End of explanation
"""
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
"""
Explanation: Let's blow up the plot a little bit so that it is more appealing to the eye.
End of explanation
"""
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})

ax.set(xlabel='Year', ylabel='Total Immigration') # add x- and y-labels
ax.set_title('Total Immigration to Canada from 1980 - 2013') # add title
"""
Explanation: And let's increase the size of markers so they match the new size of the figure, and add a title and x- and y-labels.
End of explanation
"""
plt.figure(figsize=(15, 10))

sns.set(font_scale=1.5)

ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
"""
Explanation: And finally increase the font size of the tickmark labels, the title, and the x- and y-labels so they don't feel left out!
End of explanation
"""
plt.figure(figsize=(15, 10))

sns.set(font_scale=1.5)
sns.set_style('ticks') # change background to white background

ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
"""
Explanation: Amazing! A complete scatter plot with a regression fit with only 5 lines of code. Isn't this really amazing? If you are not a big fan of the purple background, you can easily change the style to a plain white background.
End of explanation
"""
plt.figure(figsize=(15, 10))

sns.set(font_scale=1.5)
sns.set_style('whitegrid')

ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
"""
Explanation: Or to a white background with gridlines.
End of explanation
"""
### type your answer here
# one possible solution: build a (year, total) dataframe for the three countries,
# then let seaborn fit and draw the regression line
df_countries = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()
df_total = pd.DataFrame(df_countries.sum(axis=1))
df_total.reset_index(inplace=True)
df_total.columns = ['year', 'total']
df_total['year'] = df_total['year'].astype(int)

plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_total, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration from Denmark, Sweden, and Norway to Canada from 1980 - 2013')
"""
Explanation: Question: Use seaborn to create a scatter plot with a regression line to visualize the total immigration from Denmark, Sweden, and Norway to Canada from 1980 to 2013.
End of explanation
"""
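For intuition about what regplot is drawing, the straight line is an ordinary least-squares fit, and its slope and intercept have a simple closed form. Here is a minimal pure-Python sketch of that computation; the numbers below are made up for illustration and are not taken from the immigration data:

```python
# Closed-form ordinary least squares: the line that a regression plot overlays
xs = [1980.0, 1981.0, 1982.0, 1983.0]  # hypothetical years
ys = [100.0, 110.0, 125.0, 130.0]      # hypothetical totals

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); the intercept pins the line at the means
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(slope)      # 10.5 for these made-up points
print(intercept)
```

With real data you would of course let seaborn do this for you; the sketch only shows what the fitted line means.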
gautam1858/tensorflow
tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ import logging logging.getLogger("tensorflow").setLevel(logging.DEBUG) import tensorflow as tf from tensorflow import keras import numpy as np import pathlib """ Explanation: Post-training integer quantization with int16 activations <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview TensorFlow Lite now supports converting activations to 16-bit integer values and weights 
to 8-bit integer values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. We refer to this mode as the "16x8 quantization mode". This mode can improve accuracy of the quantized model significantly, when activations are sensitive to the quantization, while still achieving almost 3-4x reduction in model size. Moreover, this fully quantized model can be consumed by integer-only hardware accelerators. Some examples of models that benefit from this mode of the post-training quantization include: * super-resolution, * audio signal processing such as noise cancelling and beamforming, * image de-noising, * HDR reconstruction from a single image In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a Tensorflow Lite flatbuffer using this mode. At the end you check the accuracy of the converted model and compare it to the original float32 model. Note that this example demonstrates the usage of this mode and doesn't show benefits over other available quantization techniques in TensorFlow Lite. Build an MNIST model Setup End of explanation """ tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 """ Explanation: Check that the 16x8 quantization mode is available End of explanation """ # Load MNIST dataset mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 to 1. 
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model architecture
model = keras.Sequential([
  keras.layers.InputLayer(input_shape=(28, 28)),
  keras.layers.Reshape(target_shape=(28, 28, 1)),
  keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
  keras.layers.MaxPooling2D(pool_size=(2, 2)),
  keras.layers.Flatten(),
  keras.layers.Dense(10)
])

# Train the digit classification model
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(
  train_images,
  train_labels,
  epochs=1,
  validation_data=(test_images, test_labels)
)
"""
Explanation: Train and export the model
End of explanation
"""
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
"""
Explanation: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now, convert the model using TFLiteConverter into default float32 format:
End of explanation
"""
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
"""
Explanation: Write it out to a .tflite file:
End of explanation
"""
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
"""
Explanation: To instead quantize the model to 16x8 quantization mode, first set the optimizations flag to use default optimizations.
Then specify that 16x8 quantization mode is the required supported operation in the target specification: End of explanation """ mnist_train, _ = tf.keras.datasets.mnist.load_data() images = tf.cast(mnist_train[0], tf.float32) / 255.0 mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1) def representative_data_gen(): for input_value in mnist_ds.take(100): # Model has only one input so each data point has one element. yield [input_value] converter.representative_dataset = representative_data_gen """ Explanation: As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting converter options inference_input(output)_type to tf.int16. Set the calibration data: End of explanation """ tflite_16x8_model = converter.convert() tflite_model_16x8_file = tflite_models_dir/"mnist_model_quant_16x8.tflite" tflite_model_16x8_file.write_bytes(tflite_16x8_model) """ Explanation: Finally, convert the model as usual. Note, by default the converted model will still use float input and outputs for invocation convenience. End of explanation """ !ls -lh {tflite_models_dir} """ Explanation: Note how the resulting file is approximately 1/3 the size. End of explanation """ interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file)) interpreter.allocate_tensors() interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file)) interpreter_16x8.allocate_tensors() """ Explanation: Run the TensorFlow Lite models Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter. 
Load the model into the interpreters End of explanation """ test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] interpreter.set_tensor(input_index, test_image) interpreter.invoke() predictions = interpreter.get_tensor(output_index) import matplotlib.pylab as plt plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter_16x8.get_input_details()[0]["index"] output_index = interpreter_16x8.get_output_details()[0]["index"] interpreter_16x8.set_tensor(input_index, test_image) interpreter_16x8.invoke() predictions = interpreter_16x8.get_tensor(output_index) plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) """ Explanation: Test the models on one image End of explanation """ # A helper function to evaluate the TF Lite model using "test" dataset. def evaluate_model(interpreter): input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Run predictions on every image in the "test" dataset. prediction_digits = [] for test_image in test_images: # Pre-processing: add batch dimension and convert to float32 to match with # the model's input data format. test_image = np.expand_dims(test_image, axis=0).astype(np.float32) interpreter.set_tensor(input_index, test_image) # Run inference. interpreter.invoke() # Post-processing: remove batch dimension and find the digit with highest # probability. 
output = interpreter.tensor(output_index) digit = np.argmax(output()[0]) prediction_digits.append(digit) # Compare prediction results with ground truth labels to calculate accuracy. accurate_count = 0 for index in range(len(prediction_digits)): if prediction_digits[index] == test_labels[index]: accurate_count += 1 accuracy = accurate_count * 1.0 / len(prediction_digits) return accuracy print(evaluate_model(interpreter)) """ Explanation: Evaluate the models End of explanation """ # NOTE: This quantization mode is an experimental post-training mode, # it does not have any optimized kernels implementations or # specialized machine learning hardware accelerators. Therefore, # it could be slower than the float interpreter. print(evaluate_model(interpreter_16x8)) """ Explanation: Repeat the evaluation on the 16x8 quantized model: End of explanation """
pwer21c/pwer21c.github.io
python/pythoncodes/3_preview_while_ifelse_samedi.ipynb
mit
a=5
if a>3:
    print("a is greater than 3.")

a=1
if a>3:
    print("a is greater than 3.")

a=330
b=200
if b > a:
    print("b is greater than a")
elif b==a:
    print("b and a are the same number")
else:
    print("b is less than a")

# Exercise: use a while loop to print the numbers from 1 to 10
i=2
while i<11:
    print(i)
    i=i+2

i=0
while i<11:
    if i!=0:
        print(i)
    i=i+2

i=1
while i<11:
    if i%5==0:
        print(i)
    i=i+1

# Exercise: use a while loop to collect the multiples of 7 (7, 14, 21, ..., up to 100)
fruits=["apple", "banana", "cherry"]
print(fruits)
print(fruits[2])
fruits.remove("banana")
print(fruits)
fruits.append("banana")
print(fruits)

i=1
group7=[]
group5=[]
while i<100:
    if i%7==0:
        group7.append(i)
    elif i%5==0:
        group5.append(i)
    i=i+1
print(group7)
print(group5)

i=1
group3=[]
group6=[]
group9=[]
while i<100:
    if i%3==0:
        group3.append(i)
    if i%6==0:
        group6.append(i)
    if i%9==0:
        group9.append(i)
    i=i+1
print(group3)
print(group6)
print(group9)

print("xyz")

a=33
b=200
if b > a:
    print("b is greater than a")
elif b==a:
    print("b and a are the same number")
else:
    print("b is less than a")
"""
Explanation: If it is raining outside right now, you need to take an umbrella when you go out, right? "If it rains" (in French, "s'il pleut") is a conditional statement. Remember these comparison operators well:
Equals: a == b
Not Equals: a != b
Less than: a < b
Less than or equal to: a <= b
Greater than: a > b
Greater than or equal to: a >= b
Now, shall we try something a little different?
End of explanation
"""
a=330
b=200
if b > a:
    print("b is greater than a")
elif b==a:
    print("b and a are the same number")
else:
    print("b is less than a")
"""
Explanation: Now let's change the value of a to 330.
End of explanation
"""
a=330
b=200
if b > a:
    print("b is greater than a")
elif b==a:
    print("b and a are the same number")
else:
    print("b is less than a")
"""
Explanation: You can see that the result is different. Now look at the code below.
End of explanation
"""
a=330
b=200
if b > a:
print("b is greater than a")
elif b==a:
print("b and a are the same number")
else:
print("b is less than a")
"""
Explanation: You get an error. The lines that come after a : must be indented so that they line up; getting this wrong is called an indentation error, and you will run into it often.
Now, on to the while loop.
End of explanation
"""
## You know what this is: we put 1 into the variable i, so i is 1. But we said variables can change, right? Watch closely
i = 1
# the i < 6 next to while means: keep doing this work as long as i is less than 6
while i < 6:
    print(i)
    i=i+1
"""
Explanation: Well done. But I want to print from 1 to 10. In that case:
End of explanation
"""
## You know what this is: we put 1 into the variable i, so i is 1. But we said variables can change, right?
# Watch closely
i = 1
# the i < 10 next to while means: keep doing this work as long as i is less than 10
while i < 10:
    print(i)
    i=i+1

i = 1
while i <= 10:
    if i%2==0:
        print(i)
    i=i+1
"""
Explanation: Well done. But I want to print from 1 to 10. In that case:
End of explanation
"""
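The multiples-of-7 exercise posed earlier in this notebook was left without a worked answer; here is one possible solution, written in the same style as the cells above:

```python
# Exercise solution: collect the multiples of 7 between 1 and 100 with a while loop
i = 1
multiples_of_7 = []
while i <= 100:
    if i % 7 == 0:
        multiples_of_7.append(i)
    i = i + 1
print(multiples_of_7)  # 7, 14, 21, ..., 98
```

The same pattern (initialize a counter, test a condition, append on a match, increment) solves all of the grouping exercises in this notebook.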
ucsd-ccbb/visJS2jupyter
notebooks/multigraph_example/.ipynb_checkpoints/multigraph_example-checkpoint.ipynb
mit
import matplotlib as mpl
import networkx as nx
import pandas as pd
import random
import visJS2jupyter.visJS_module as visJS_module  # alias so later cells can call visJS_module.* directly
"""
Explanation: Multigraph Network Styling for visJS2jupyter
Authors: Brin Rosenthal (sbrosenthal@ucsd.edu), Mikayla Webster (m1webste@ucsd.edu), Julia Len (jlen@ucsd.edu)
Import packages
End of explanation
"""
G = nx.connected_watts_strogatz_graph(30,5,.2)
G = nx.MultiGraph(G)

edges = G.edges(keys = True) # for multigraphs every edge has to be represented by a three-tuple (source, target, key)
"""
Explanation: We start by creating a randomized, single-edged graph, and convert that to a multigraph
End of explanation
"""
sources = [s for s, t, k in edges]  # unpack the (source, target, key) tuples;
targets = [t for s, t, k in edges]  # zip(*edges)[0] is not subscriptable in Python 3

backward_edges = list(zip(targets, sources)) # demonstrating adding backwards edges
G.add_edges_from(backward_edges)

edges = list(G.edges(data = True))
nodes = list(G.nodes()) # type cast to list in order to make compatible with networkx 1.11 and 2.0
edges = list(G.edges(keys = True)) # for nx 2.0, returns an "EdgeView" object rather than an iterable
"""
Explanation: We duplicate every edge in the graph to make it a true multigraph. Note: NetworkX does not support duplicate edges with opposite directions. NetworkX will flip any backwards edges you try to add to your graph. For example, if your graph currently contains the edges [(0,1), (1,2)] and you add the edge (1,0) to your graph, your graph will now contain edges [(0,1), (0,1), (1,2)]
End of explanation
"""
# add some node attributes to color-code by
degree = dict(G.degree()) # nx 2.0 returns a "DegreeView" object.
Cast to dict to maintain compatibility with nx 1.11 bc = nx.betweenness_centrality(G) nx.set_node_attributes(G, name = 'degree', values = degree) # between networkx 1.11 and 2.0, therefore we must nx.set_node_attributes(G, name = 'betweenness_centrality', values = bc) # explicitly pass our arguments # (not implicitly through position) # add the edge attribute 'weight' to color-code by weights = [] for i in range(len(edges)): weights.append(float(random.randint(1,5))) w_dict = dict(zip(edges, weights)) nx.set_edge_attributes(G, name = 'weight', values = w_dict) # map the betweenness centrality to the node color, using matplotlib spring_r colormap node_to_color = visJS_module.return_node_to_color(G,field_to_map='betweenness_centrality',cmap=mpl.cm.spring_r,alpha = 1, color_max_frac = .9,color_min_frac = .1) # map weight to edge color, using default settings edge_to_color = visJS_module.return_edge_to_color(G,field_to_map='weight') """ Explanation: Multigraph Node and Edge Styling There is no difference between multigraph and single-edged-graph styling. Just map the node and edge attributes to some visual properties, and style the nodes and edges according to these properties (like usual!) 
End of explanation
"""
# set node initial positions using networkx's spring_layout function
pos = nx.spring_layout(G)

nodes_dict = [{"id":n,"color":node_to_color[n],
               "degree":nx.degree(G,n),
               "x":pos[n][0]*1000,
               "y":pos[n][1]*1000} for n in nodes
              ]
node_map = dict(zip(nodes,range(len(nodes))))  # map to indices for source/target in edges
edges_dict = [{"source":node_map[edges[i][0]], "target":node_map[edges[i][1]],
               "color":edge_to_color[(edges[i][0],edges[i][1],edges[i][2])],"title":'test'} # remember (source, target, key)
              for i in range(len(edges))]

# set some network-wide styles
visJS_module.visjs_network(nodes_dict,edges_dict,
                           node_size_multiplier=3,
                           node_size_transform = '',
                           node_color_highlight_border='red',
                           node_color_highlight_background='#D3918B',
                           node_color_hover_border='blue',
                           node_color_hover_background='#8BADD3',
                           node_font_size=25,
                           edge_arrow_to=True,
                           physics_enabled=True,
                           edge_color_highlight='#8A324E',
                           edge_color_hover='#8BADD3',
                           edge_width=3,
                           max_velocity=15,
                           min_velocity=1,
                           edge_smooth_enabled = True)
"""
Explanation: Interactive network
Note that this example is simply the multigraph version of our "Complex Parameters" notebook.
End of explanation
"""
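The color-mapping helpers used above boil down to normalising a scalar attribute into [0, 1] and picking a colour along a gradient. Here is a self-contained sketch of that idea; it is illustrative only and does not reproduce the exact output format of visJS2jupyter's return_node_to_color:

```python
# Map a scalar node attribute to a hex colour by linear interpolation
def value_to_hex(value, vmin, vmax):
    frac = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
    r = int(255 * frac)        # red grows with the attribute ...
    b = int(255 * (1 - frac))  # ... while blue fades out
    return '#%02x00%02x' % (r, b)

degrees = {0: 2, 1: 5, 2: 8}  # hypothetical node degrees
node_to_color = {n: value_to_hex(d, min(degrees.values()), max(degrees.values()))
                 for n, d in degrees.items()}
print(node_to_color)  # {0: '#0000ff', 1: '#7f007f', 2: '#ff0000'}
```

In the notebook itself this normalisation is handled for you, with a full matplotlib colormap in place of the two-channel gradient above.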
sorig/shogun
doc/ipython-notebooks/metric/LMNN.ipynb
bsd-3-clause
import numpy import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') x = numpy.array([[0,0],[-1,0.1],[0.3,-0.05],[0.7,0.3],[-0.2,-0.6],[-0.15,-0.63],[-0.25,0.55],[-0.28,0.67]]) y = numpy.array([0,0,0,0,1,1,2,2]) """ Explanation: Metric Learning with the Shogun Machine Learning Toolbox By Fernando J. Iglesias Garcia (GitHub ID: iglesias) as project report for GSoC 2013 (project details). This notebook illustrates <a href="http://en.wikipedia.org/wiki/Statistical_classification">classification</a> and <a href="http://en.wikipedia.org/wiki/Feature_selection">feature selection</a> using <a href="http://en.wikipedia.org/wiki/Similarity_learning#Metric_learning">metric learning</a> in Shogun. To overcome the limitations of <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">knn</a> with Euclidean distance as the distance measure, <a href="http://en.wikipedia.org/wiki/Large_margin_nearest_neighbor">Large Margin Nearest Neighbour</a>(LMNN) is discussed. This is consolidated by applying LMNN over the metagenomics data set. Building up the intuition to understand LMNN First of all, let us introduce LMNN through a simple example. 
For this purpose, we will be using the following two-dimensional toy data set: End of explanation """ import matplotlib.pyplot as pyplot %matplotlib inline def plot_data(feats,labels,axis,alpha=1.0): # separate features according to their class X0,X1,X2 = feats[labels==0], feats[labels==1], feats[labels==2] # class 0 data axis.plot(X0[:,0], X0[:,1], 'o', color='green', markersize=12, alpha=alpha) # class 1 data axis.plot(X1[:,0], X1[:,1], 'o', color='red', markersize=12, alpha=alpha) # class 2 data axis.plot(X2[:,0], X2[:,1], 'o', color='blue', markersize=12, alpha=alpha) # set axes limits axis.set_xlim(-1.5,1.5) axis.set_ylim(-1.5,1.5) axis.set_aspect('equal') axis.set_xlabel('x') axis.set_ylabel('y') figure,axis = pyplot.subplots(1,1) plot_data(x,y,axis) axis.set_title('Toy data set') pyplot.show() """ Explanation: That is, there are eight feature vectors where each of them belongs to one out of three different classes (identified by either 0, 1, or 2). Let us have a look at this data: End of explanation """ def make_covariance_ellipse(covariance): import matplotlib.patches as patches import scipy.linalg as linalg # the ellipse is centered at (0,0) mean = numpy.array([0,0]) # eigenvalue decomposition of the covariance matrix (w are eigenvalues and v eigenvectors), # keeping only the real part w,v = linalg.eigh(covariance) # normalize the eigenvector corresponding to the largest eigenvalue u = v[0]/linalg.norm(v[0]) # angle in degrees angle = 180.0/numpy.pi*numpy.arctan(u[1]/u[0]) # fill Gaussian ellipse at 2 standard deviation ellipse = patches.Ellipse(mean, 2*w[0]**0.5, 2*w[1]**0.5, 180+angle, color='orange', alpha=0.3) return ellipse # represent the Euclidean distance figure,axis = pyplot.subplots(1,1) plot_data(x,y,axis) ellipse = make_covariance_ellipse(numpy.eye(2)) axis.add_artist(ellipse) axis.set_title('Euclidean distance') pyplot.show() """ Explanation: In the figure above, we can see that two of the classes are represented by two points that are, for 
each of these classes, very close to each other. The third class, however, has four points that are close to each other with respect to the y-axis, but spread along the x-axis. If we were to apply kNN (k-nearest neighbors) in a data set like this, we would expect quite some errors using the standard Euclidean distance. This is due to the fact that the spread of the data is not similar amongst the feature dimensions. The following piece of code plots an ellipse on top of the data set. The ellipse in this case is in fact a circumference that helps to visualize how the Euclidean distance weights both feature dimensions equally.
End of explanation
"""
def make_covariance_ellipse(covariance):
    import matplotlib.patches as patches
    import scipy.linalg as linalg

    # the ellipse is centered at (0,0)
    mean = numpy.array([0,0])
    # eigenvalue decomposition of the covariance matrix (w are eigenvalues and v eigenvectors),
    # keeping only the real part
    w,v = linalg.eigh(covariance)
    # normalize the eigenvector corresponding to the largest eigenvalue
    u = v[0]/linalg.norm(v[0])
    # angle in degrees
    angle = 180.0/numpy.pi*numpy.arctan(u[1]/u[0])
    # fill Gaussian ellipse at 2 standard deviations
    ellipse = patches.Ellipse(mean, 2*w[0]**0.5, 2*w[1]**0.5, 180+angle, color='orange', alpha=0.3)
    return ellipse

# represent the Euclidean distance
figure,axis = pyplot.subplots(1,1)
plot_data(x,y,axis)
ellipse = make_covariance_ellipse(numpy.eye(2))
axis.add_artist(ellipse)
axis.set_title('Euclidean distance')
pyplot.show()
"""
Explanation: A possible workaround to improve the performance of kNN in a data set like this would be to input to the kNN routine a distance measure. For instance, in the example above a good distance measure would give more weight to the y-direction than to the x-direction to account for the large spread along the x-axis. Nonetheless, it would be nicer (and, in fact, much more useful in practice) if this distance could be learnt automatically from the data at hand. Actually, LMNN is based upon this principle: given a number of neighbours k, find the Mahalanobis distance measure which maximizes kNN accuracy (using the given value for k) in a training data set. As we usually do in machine learning, under the assumption that the training data is an accurate enough representation of the underlying process, the distance learnt will not only perform well in the training data, but also have good generalization properties. Now, let us use the LMNN class implemented in Shogun to find the distance and plot its associated ellipse. If everything goes well, we will see that the new ellipse only overlaps with the data points of the green class.
First, we need to wrap the data into Shogun's feature and label objects: End of explanation """ from shogun import LMNN # number of target neighbours per example k = 1 lmnn = LMNN(feats,labels,k) # set an initial transform as a start point of the optimization init_transform = numpy.eye(2) lmnn.put('maxiter', 2000) lmnn.train(init_transform) """ Explanation: Secondly, perform LMNN training: End of explanation """ # get the linear transform from LMNN L = lmnn.get_real_matrix('linear_transform') # square the linear transform to obtain the Mahalanobis distance matrix M = numpy.matrix(numpy.dot(L.T,L)) # represent the distance given by LMNN figure,axis = pyplot.subplots(1,1) plot_data(x,y,axis) ellipse = make_covariance_ellipse(M.I) axis.add_artist(ellipse) axis.set_title('LMNN distance') pyplot.show() """ Explanation: LMNN is an iterative algorithm. The argument given to train represents the initial state of the solution. By default, if no argument is given, then LMNN uses PCA to obtain this initial value. Finally, we retrieve the distance measure learnt by LMNN during training and visualize it together with the data: End of explanation """ # project original data using L lx = numpy.dot(L,x.T) # represent the data in the projected space figure,axis = pyplot.subplots(1,1) plot_data(lx.T,y,axis) plot_data(x,y,axis,0.3) ellipse = make_covariance_ellipse(numpy.eye(2)) axis.add_artist(ellipse) axis.set_title('LMNN\'s linear transform') pyplot.show() """ Explanation: Beyond the main idea LMNN is one of the so-called linear metric learning methods. What this means is that we can understand LMNN's output in two different ways: on the one hand, as a distance measure, this was explained above; on the other hand, as a linear transformation of the input data. Like any other linear transformation, LMNN's output can be written as a matrix, that we will call $L$. 
In other words, if the input data is represented by the matrix $X$, then LMNN can be understood as the data transformation expressed by $X'=L X$. We use the convention that each column is a feature vector; thus, the number of rows of $X$ is equal to the input dimension of the data, and the number of columns is equal to the number of vectors.
So far, so good. But, if the output of the same method can be interpreted in two different ways, then there must be a relation between them! And that is precisely the case! As mentioned above, the ellipses that were plotted in the previous section represent a distance measure. This distance measure can be thought of as a matrix $M$, with the distance between two vectors $\vec{x_i}$ and $\vec{x_j}$ equal to $d(\vec{x_i},\vec{x_j})=(\vec{x_i}-\vec{x_j})^T M (\vec{x_i}-\vec{x_j})$. In general, matrices of this type are known as Mahalanobis matrices. In LMNN, the matrix $M$ is precisely the 'square' of the linear transformation $L$, i.e. $M=L^T L$. Note that a direct consequence of this is that $M$ is guaranteed to be positive semi-definite (PSD), and therefore defines a valid metric.
This distance measure/linear transform duality in LMNN has its own advantages. An important one is that the optimization problem can go back and forth between the $L$ and the $M$ representations, giving rise to a very efficient solution.
Let us now visualize LMNN using the linear transform interpretation. In the following figure we have taken our original toy data, transformed it using $L$ and plotted both the before and after versions of the data together.
End of explanation
"""
# project original data using L
lx = numpy.dot(L,x.T)

# represent the data in the projected space
figure,axis = pyplot.subplots(1,1)
plot_data(lx.T,y,axis)
plot_data(x,y,axis,0.3)
ellipse = make_covariance_ellipse(numpy.eye(2))
axis.add_artist(ellipse)
axis.set_title('LMNN\'s linear transform')
pyplot.show()
"""
Explanation: In the figure above, the transparent points represent the original data and are shown to ease the visualization of the LMNN transformation. Note also that the ellipse plotted is the one corresponding to the common Euclidean distance. This is actually an important consideration: if we think of LMNN as a linear transformation, the distance considered in the projected space is the Euclidean distance, and not any Mahalanobis distance given by $M$.
To sum up, we can think of LMNN as a linear transform of the input space, or as a method to obtain a distance measure to be used in the input space. It is an error to apply both the projection and the learnt Mahalanobis distance.
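The duality described above can be checked numerically: with $M = L^T L$, the Mahalanobis distance between two points equals the squared Euclidean distance between their projections. A pure-Python sketch with a hand-picked $L$ (independent of Shogun and the toy data):

```python
# Check that (x-y)^T (L^T L) (x-y) == ||Lx - Ly||^2 for a small example
L = [[2.0, 0.0],
     [1.0, 3.0]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def mahalanobis_sq(x, y):
    d = [a - b for a, b in zip(x, y)]
    Ld = matvec(L, d)              # L(x - y)
    return sum(c * c for c in Ld)  # equals (x-y)^T L^T L (x-y)

def euclidean_sq_transformed(x, y):
    Lx, Ly = matvec(L, x), matvec(L, y)
    return sum((a - b) ** 2 for a, b in zip(Lx, Ly))

x, y = [1.0, 2.0], [0.0, -1.0]
print(mahalanobis_sq(x, y), euclidean_sq_transformed(x, y))  # both 104.0
```

Either quantity can therefore be used interchangeably, which is exactly why applying both the projection and the learnt Mahalanobis distance would count the transform twice.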
Neighbourhood graphs
An alternative way to visualize the effect of using the distance found by LMNN together with kNN consists of using neighbourhood graphs. Despite the fancy name, these are actually pretty simple. The idea is just to construct a graph in the Euclidean space, where the points in the data set are the nodes of the graph, and a directed edge from one point to another denotes that the destination node is the 1-nearest neighbour of the origin node. Of course, it is also possible to work with neighbourhood graphs where $k \gt 1$. Here we have taken the simplification of $k = 1$ so that the forthcoming plots are not too cluttered.
Let us define a data set for which the Euclidean distance performs considerably badly. In this data set there are several levels or layers in the y-direction. Each layer is populated by points that belong to the same class, spread along the x-direction. The layers are close to each other in pairs, whereas the spread along x is larger. Let us define a function to generate such a data set and have a look at it.
End of explanation
"""
from shogun import KNN, EuclideanDistance, LMNN, features, MulticlassLabels

def plot_neighborhood_graph(x, nn, axis=pyplot, cols=['r', 'b', 'g', 'm', 'k', 'y']):
    for i in range(x.shape[0]):
        xs = [x[i,0], x[nn[1,i], 0]]
        ys = [x[i,1], x[nn[1,i], 1]]
        axis.plot(xs, ys, cols[int(y[i])])

feats = features(x.T)
labels = MulticlassLabels(y)

fig, axes = pyplot.subplots(1, 3, figsize=(15, 10))

# use k = 2 instead of 1 because otherwise the method nearest_neighbors just returns the same
# points as their own 1-nearest neighbours
k = 2

knn = KNN(k, EuclideanDistance(feats, feats), labels)
plot_sandwich_data(x, y, axes[0])
plot_neighborhood_graph(x, knn.nearest_neighbors(), axes[0])
axes[0].set_title('Euclidean neighbourhood in the input space')

lmnn = LMNN(feats, labels, k)

# set a large number of iterations.
# The data set is small so it does not cost a lot, and this way
# we ensure a robust solution
lmnn.put('maxiter', 3000)
lmnn.train()
knn.put('distance', lmnn.get_distance())
plot_sandwich_data(x, y, axes[1])
plot_neighborhood_graph(x, knn.nearest_neighbors(), axes[1])
axes[1].set_title('LMNN neighbourhood in the input space')

# plot features in the transformed space, with the neighbourhood graph computed using the Euclidean distance
L = lmnn.get_real_matrix('linear_transform')
xl = numpy.dot(x, L.T)
feats = features(xl.T)
knn.put('distance', EuclideanDistance(feats, feats))
plot_sandwich_data(xl, y, axes[2])
plot_neighborhood_graph(xl, knn.nearest_neighbors(), axes[2])
axes[2].set_ylim(-3, 2.5)
axes[2].set_title('Euclidean neighbourhood in the transformed space')

[axes[i].set_xlabel('x') for i in range(len(axes))]
[axes[i].set_ylabel('y') for i in range(len(axes))]
[axes[i].set_aspect('equal') for i in range(len(axes))]
pyplot.show()
"""
Explanation: Let the fun begin now! In the following block of code, we create an instance of a kNN classifier and compute the nearest neighbours, first using the Euclidean distance and, afterwards, using the distance computed by LMNN. The data set in the space resulting from the linear transformation given by LMNN is also shown.
End of explanation
"""
from shogun import CSVFile, features, MulticlassLabels

ape_features = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'multiclass/fm_ape_gut.dat')))
ape_labels = MulticlassLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'multiclass/label_ape_gut.dat')))
"""
Explanation: Notice how all the lines that go across the different layers in the left-hand side figure have disappeared in the figure in the middle. Indeed, LMNN did a pretty good job here. The figure on the right-hand side shows the disposition of the points in the transformed space, from which the neighbourhoods in the middle figure should be clear. In any case, this toy example is just an illustration to give an idea of the power of LMNN.
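The neighbourhood-graph construction itself is easy to reproduce outside Shogun. As a standalone sketch (with toy points invented here), a brute-force 1-nearest-neighbour lookup gives, for every point, the target of its single outgoing edge:

```python
import numpy as np

def nearest_neighbour_graph(points):
    """For each point, return the index of its 1-nearest neighbour
    (excluding itself), using the plain Euclidean distance."""
    diffs = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diffs ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbour
    return np.argmin(d2, axis=1)

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.2]])
print(nearest_neighbour_graph(pts))  # -> [1 0 3 2]
```

Swapping the squared-Euclidean matrix `d2` for a Mahalanobis form would give the LMNN version of the same graph.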
In the next section we will see how, after applying a couple of methods for feature normalization (e.g. scaling, whitening), the Euclidean distance is not so sensitive to different feature scales.
Real data sets
Feature selection in metagenomics
Metagenomics is a modern field devoted to the study of the DNA of microorganisms. The data set we have chosen for this section contains information about three different types of apes; in particular, gorillas, chimpanzees, and bonobos. Taking an approach based on metagenomics, the main idea is to study the DNA of the microorganisms (e.g. bacteria) which live inside the body of the apes. Owing to the many chemical reactions produced by these microorganisms, it is not only the DNA of the host itself that is important when studying, for instance, sickness or health, but also the DNA of its microorganism inhabitants.
First of all, let us load the ape data set. This data set contains features taken from the bacteria inhabiting the gut of the apes.
End of explanation
"""
print('Number of examples = %d, number of features = %d.' % (ape_features.get_num_vectors(), ape_features.get_num_features()))
"""
Explanation: It is of course important to have a good insight into the data we are dealing with. For instance, how many examples and different features do we have?
End of explanation
"""
def visualize_tdsne(features, labels):
    from shogun import TDistributedStochasticNeighborEmbedding

    converter = TDistributedStochasticNeighborEmbedding()
    converter.put('target_dim', 2)
    converter.put('perplexity', 25)

    embedding = converter.embed(features)

    import matplotlib.pyplot as pyplot
    %matplotlib inline

    x = embedding.get_real_matrix('feature_matrix')
    y = labels.get_real_vector('labels')

    pyplot.scatter(x[0, y==0], x[1, y==0], color='green')
    pyplot.scatter(x[0, y==1], x[1, y==1], color='red')
    pyplot.scatter(x[0, y==2], x[1, y==2], color='blue')
    pyplot.show()

visualize_tdsne(ape_features, ape_labels)
"""
Explanation: So, 1472 features!
Those are quite a lot of features indeed. In other words, the feature vectors at hand lie in a 1472-dimensional space. We cannot visualize how the feature vectors look in the input feature space. However, in order to gain a little more understanding of the data, we can apply dimension reduction, embed the feature vectors in a two-dimensional space, and plot the vectors in the embedded space. To this end, we are going to use one of the many methods for dimension reduction included in Shogun. In this case, we are using t-distributed stochastic neighbour embedding (or t-SNE). This method is particularly suited to producing low-dimensional embeddings (two or three dimensions) that are straightforward to visualize.
End of explanation
"""
from shogun import KNN, EuclideanDistance
from shogun import StratifiedCrossValidationSplitting, CrossValidation
from shogun import CrossValidationResult, MulticlassAccuracy

# set up the classifier
knn = KNN()
knn.put('k', 3)
knn.put('distance', EuclideanDistance())

# set up 5-fold cross-validation
splitting = StratifiedCrossValidationSplitting(ape_labels, 5)

# evaluation method
evaluator = MulticlassAccuracy()
cross_validation = CrossValidation(knn, ape_features, ape_labels, splitting, evaluator)

# locking is not supported for kNN, deactivate it to avoid an inoffensive warning
cross_validation.put('m_autolock', False)

# number of experiments, the more we do, the less variance in the result
num_runs = 200
cross_validation.put('num_runs', num_runs)

# perform cross-validation and print the result!
result = cross_validation.evaluate()
result = CrossValidationResult.obtain_from_generic(result)
print('kNN mean accuracy in a total of %d runs is %.4f.' % (num_runs, result.get_real('mean')))
"""
Explanation: In the figure above, the green points represent chimpanzees, the red ones bonobos, and the blue points gorillas.
Given the results in the figure, we can quickly draw the conclusion that the three classes of apes are fairly easy to discriminate in this data set, since the classes are more or less well separated in two dimensions. Note that t-SNE uses randomness in the embedding process. Thus, the figure resulting from the experiment in the previous block of code will differ between executions. Feel free to play around and observe the results after different runs! After this, it should be clear that the bonobos form, most of the time, a very compact cluster, whereas the chimpanzee and gorilla clusters are more spread out. Also, there tends to be a chimpanzee (a green point) closer to the gorillas' cluster. This is probably an outlier in the data set.
Even before applying LMNN to the ape gut data set, let us apply kNN classification and study how it performs using the typical Euclidean distance. Furthermore, since this data set is rather small in terms of number of examples, the kNN error may vary considerably (I have observed variation of almost 20% a few times) across different runs. To get a robust estimate of how kNN performs in the data set, we will perform cross-validation using Shogun's framework for evaluation. This will give us a reliable result regarding how well kNN performs in this data set.
End of explanation
"""
from shogun import LMNN
import numpy

# to make training faster, use a portion of the features
fm = ape_features.get_real_matrix('feature_matrix')
ape_features_subset = features(fm[:150, :])

# number of target neighbours in LMNN, here we just use the same value that was used for KNN before
k = 3

lmnn = LMNN(ape_features_subset, ape_labels, k)
lmnn.put('m_diagonal', True)
lmnn.put('maxiter', 1000)
init_transform = numpy.eye(ape_features_subset.get_num_features())
lmnn.train(init_transform)

diagonal = numpy.diag(lmnn.get_real_matrix('linear_transform'))
print('%d out of %d elements are non-zero.'
% (numpy.sum(diagonal != 0), diagonal.size))
"""
Explanation: Finally, we can say that kNN performs actually pretty well in this data set. The average test classification error is less than 2%. This error rate is already low and we should not really expect a significant improvement from applying LMNN. This ought not to be a surprise. Recall that the points in this data set have more than one thousand features and, as we saw before in the dimension reduction experiment, only two dimensions in an embedded space were enough to discern the chimpanzees, gorillas and bonobos arguably well.
Note that we have used stratified splitting for cross-validation. Stratified splitting divides the folds used during cross-validation so that the proportion of the classes in the initial data set is approximately maintained for each of the folds. This is particularly useful in skewed data sets, where the number of examples among classes varies significantly.
Nonetheless, LMNN may still turn out to be very useful in a data set like this one. Making a small modification of the vanilla LMNN algorithm, we can enforce that the linear transform found by LMNN is diagonal. This means that LMNN can be used to weight each of the features and, once the training is performed, read from these weights which features are relevant for applying kNN and which ones are not. This is indeed a form of feature selection. Using Shogun, it is extremely easy to switch to this so-called diagonal mode for LMNN: just call the method set_diagonal(use_diagonal) with use_diagonal set to True.
The following experiment takes about five minutes until it is completed (using Shogun Release, i.e. compiled with optimizations enabled). This is mostly due to the high dimension of the data (1472 features) and the fact that, during training, LMNN has to compute many outer products of feature vectors, a computation whose time complexity is proportional to the square of the number of features.
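The feature-selection reading of this diagonal mode can be sketched with plain NumPy. The weight vector below is made up for illustration (it is not the output of an actual LMNN run): features whose diagonal weight is driven to zero are simply dropped.

```python
import numpy as np

# hypothetical diagonal of a learnt transform: most weights driven to zero
diagonal = np.array([0.0, 1.3, 0.0, 0.7, 0.0])

# toy data matrix with one feature vector per column, as in the notebook
X = np.arange(15, dtype=float).reshape(5, 3)

selected = np.flatnonzero(diagonal)          # indices of the relevant features
X_selected = diagonal[selected, None] * X[selected]

print(selected)          # -> [1 3]
print(X_selected.shape)  # -> (2, 3)
```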
For illustration purposes in this notebook, in the following cell we are just going to use a small subset of all the features so that the training finishes faster.
End of explanation
"""
import matplotlib.pyplot as pyplot
%matplotlib inline

statistics = lmnn.get_statistics()
pyplot.plot(statistics.obj.get())
pyplot.grid(True)
pyplot.xlabel('Number of iterations')
pyplot.ylabel('LMNN objective')
pyplot.show()
"""
Explanation: So only 64 out of the first 150 features are important according to the resulting transform! The rest of them have been given a weight exactly equal to zero, even though all of the features were weighted equally with a value of one at the beginning of the training. In fact, if all the 1472 features were used, only about 158 would have received a non-zero weight. Please, feel free to experiment using all the features!
It is a fair question to ask how we knew that the maximum number of iterations in this experiment should be around 1200 iterations. Well, the truth is that we know this only because we have run this experiment with this same data beforehand, and we know that after this number of iterations the algorithm has converged. This is not very satisfying; ideally, one could completely forget about this parameter, so that LMNN uses as many iterations as it needs until it converges. Nevertheless, this is not practical, for at least two reasons:
1. If you are dealing with many examples or with very high-dimensional feature vectors, you might not want to wait until the algorithm converges, and instead have a look at what LMNN has found before it has completely converged.
2. As with any other algorithm based on gradient descent, the termination criteria can be tricky.
Let us illustrate this further:
End of explanation
"""
from shogun import CSVFile, features, MulticlassLabels

wine_features = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))
wine_labels = MulticlassLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))

assert(wine_features.get_num_vectors() == wine_labels.get_num_labels())
print('%d feature vectors with %d features from %d different classes.' % (wine_features.get_num_vectors(), \
      wine_features.get_num_features(), wine_labels.get_num_classes()))
"""
Explanation: During approximately the first three hundred iterations, there is not much variation in the objective. In other words, the objective curve is pretty much flat. If we are not careful and use termination criteria that are not demanding enough, training could be stopped at this point. This would be wrong and might yield terrible results, as training had clearly not yet converged at that point.
In order to avoid disastrous situations, in Shogun we have implemented LMNN with really demanding criteria for automatic termination of the training process. Nevertheless, it is possible to tune the termination criteria using the methods set_stepsize_threshold and set_obj_threshold. These methods can be used to modify the lower bound required on the step size and on the increment in the objective (relative to its absolute value), respectively, to stop training. Also, it is possible to set a hard upper bound on the number of iterations using set_maxiter, as we have done above. If the internal termination criteria do not fire before the maximum number of iterations is reached, you will receive a warning message similar to the one shown above. This does not necessarily mean that the training went wrong, but in this event it is strongly recommended to have a look at the objective plot, as we have done in the previous block of code.
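The kind of objective-based termination criterion described here can be illustrated generically. The following toy sketch (an illustration of the idea only, not Shogun's actual implementation) stops a descent loop once the decrease of the objective, relative to its magnitude, falls below a threshold, with a hard cap on the number of iterations:

```python
def minimize_with_obj_threshold(objective, step, x0, obj_threshold=1e-9, maxiter=10000):
    """Toy descent loop: stop when the objective decrease, relative to the
    objective magnitude, drops below obj_threshold, or when maxiter is hit."""
    x = x0
    prev = objective(x)
    for it in range(1, maxiter + 1):
        x = step(x)
        cur = objective(x)
        if abs(prev - cur) <= obj_threshold * max(abs(prev), 1.0):
            return x, it
        prev = cur
    return x, maxiter

# minimize f(x) = (x - 3)^2 with fixed-step gradient descent
f = lambda x: (x - 3.0) ** 2
step = lambda x: x - 0.1 * 2.0 * (x - 3.0)
x_opt, n_iters = minimize_with_obj_threshold(f, step, x0=0.0)
```

A threshold that is too loose would fire during an early flat stretch of the objective, which is exactly the failure mode warned about above.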
Multiclass classification
In addition to feature selection, LMNN can of course be used for multiclass classification. I like to think about LMNN in multiclass classification as a way to empower kNN. That is, the idea is basically to apply kNN using the distance found by LMNN $-$ in contrast with using one of the other most common distances, such as the Euclidean one. To this end we will use the wine data set from the UCI Machine Learning repository.
End of explanation
"""
from shogun import KNN, EuclideanDistance
from shogun import StratifiedCrossValidationSplitting, CrossValidation
from shogun import CrossValidationResult, MulticlassAccuracy
import numpy

# kNN classifier
k = 5
knn = KNN()
knn.put('k', k)
knn.put('distance', EuclideanDistance())

splitting = StratifiedCrossValidationSplitting(wine_labels, 5)
evaluator = MulticlassAccuracy()
cross_validation = CrossValidation(knn, wine_features, wine_labels, splitting, evaluator)
cross_validation.put('m_autolock', False)
num_runs = 200
cross_validation.put('num_runs', num_runs)

result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

euclidean_means = numpy.zeros(3)
euclidean_means[0] = result.get_real('mean')

print('kNN accuracy with the Euclidean distance %.4f.' % result.get_real('mean'))
"""
Explanation: First, let us evaluate the performance of kNN in this data set using the same cross-validation setting used in the previous section:
End of explanation
"""
from shogun import LMNN

# train LMNN
lmnn = LMNN(wine_features, wine_labels, k)
lmnn.put('maxiter', 1500)
lmnn.train()

# evaluate kNN using the distance learnt by LMNN
knn.set_distance(lmnn.get_distance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

lmnn_means = numpy.zeros(3)
lmnn_means[0] = result.get_real('mean')

print('kNN accuracy with the distance obtained by LMNN %.4f.'
% result.get_real('mean'))
"""
Explanation: Secondly, we will use LMNN to find a distance measure and use it with kNN:
End of explanation
"""
print('minima = ' + str(numpy.min(wine_features, axis=1)))
print('maxima = ' + str(numpy.max(wine_features, axis=1)))
"""
Explanation: The warning is fine in this case: we have made sure that the objective variation was really small after 1500 iterations. In any case, do not hesitate to check it yourself by studying the objective plot, as shown in the previous section.
As the results point out, LMNN really helps here to achieve better classification performance. However, this comparison is not entirely fair, since the Euclidean distance is very sensitive to the scaling that different feature dimensions may have, whereas LMNN can adjust to this during training. Let us have a closer look at this fact. Next, we are going to retrieve the feature matrix and see what the maxima and minima are for every dimension.
End of explanation
"""
from shogun import RescaleFeatures

# preprocess features so that all of them vary within [0,1]
preprocessor = RescaleFeatures()
preprocessor.init(wine_features)
wine_features.add_preprocessor(preprocessor)
wine_features.apply_preprocessor()

# sanity check
assert(numpy.min(wine_features) >= 0.0 and numpy.max(wine_features) <= 1.0)

# perform kNN classification after the feature rescaling
knn.put('distance', EuclideanDistance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())
euclidean_means[1] = result.get_real('mean')
print('kNN accuracy with the Euclidean distance after feature rescaling %.4f.' % result.get_real('mean'))

# train LMNN on the new features and classify with kNN
lmnn.train()
knn.put('distance', lmnn.get_distance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())
lmnn_means[1] = result.get_real('mean')
print('kNN accuracy with the distance obtained by LMNN after feature rescaling %.4f.'
% result.get_real('mean'))
"""
Explanation: Examine the second and the last dimensions, for instance. The second dimension has values ranging from 0.74 to 5.8, while the values of the last dimension range from 278 to 1680. This causes the Euclidean distance to work especially poorly in this data set. You can see this by considering that the total distance between two points will almost certainly just take into account the contributions of the dimensions with the largest range.
In order to produce a fairer comparison, we will rescale the data so that all the feature dimensions are within the interval [0,1]. Luckily, there is a preprocessor class in Shogun that makes this straightforward.
End of explanation
"""
import scipy.linalg as linalg

# shorthand for the feature matrix -- this makes a copy of the feature matrix
data = wine_features.get_real_matrix('feature_matrix')

# remove mean
data = data.T
data -= numpy.mean(data, axis=0)

# compute the square root of the covariance matrix and its inverse
M = linalg.sqrtm(numpy.cov(data.T))
# keep only the real part, although the imaginary part that pops up in the sqrtm operation should be equal to zero
N = linalg.inv(M).real

# apply whitening transform
white_data = numpy.dot(N, data.T)
wine_white_features = features(white_data)
"""
Explanation: Another preprocessing technique that can be applied to the data is called whitening. Whitening, which is explained in a Wikipedia article, transforms the covariance matrix of the data into the identity matrix.
End of explanation
"""
import matplotlib.pyplot as pyplot
%matplotlib inline

fig, axarr = pyplot.subplots(1, 2)
axarr[0].matshow(numpy.cov(wine_features))
axarr[1].matshow(numpy.cov(wine_white_features))
pyplot.show()
"""
Explanation: The covariance matrices before and after the transformation can be compared to see that the covariance really becomes the identity matrix.
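The effect of whitening can also be checked directly with plain NumPy on synthetic data. This self-contained sketch mirrors the transformation above: after applying the inverse square root of the covariance, the sample covariance becomes the identity.

```python
import numpy as np
import scipy.linalg as linalg

rng = np.random.default_rng(42)
# synthetic correlated data: 2 features, 1000 samples (one sample per column)
A = np.array([[2.0, 0.0], [1.2, 0.5]])
data = A @ rng.normal(size=(2, 1000))
data -= data.mean(axis=1, keepdims=True)

M = linalg.sqrtm(np.cov(data)).real  # square root of the covariance
N = linalg.inv(M)                    # whitening transform
white = N @ data

cov_white = np.cov(white)
assert np.allclose(cov_white, np.eye(2), atol=1e-6)
```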
End of explanation """ wine_features = wine_white_features # perform kNN classification after whitening knn.set_distance(EuclideanDistance()) result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate()) euclidean_means[2] = result.get_real('mean') print('kNN accuracy with the Euclidean distance after whitening %.4f.' % result.get_real('mean')) # train kNN in the new features and classify with kNN lmnn.train() knn.put('distance', lmnn.get_distance()) result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate()) lmnn_means[2] = result.get_real('mean') print('kNN accuracy with the distance obtained by LMNN after whitening %.4f.' % result.get_real('mean')) """ Explanation: Finally, we evaluate again the performance obtained with kNN using the Euclidean distance and the distance found by LMNN using the whitened features. End of explanation """ assert(euclidean_means.shape[0] == lmnn_means.shape[0]) N = euclidean_means.shape[0] # the x locations for the groups ind = 0.5*numpy.arange(N) # bar width width = 0.15 figure, axes = pyplot.subplots() figure.set_size_inches(6, 5) euclidean_rects = axes.bar(ind, euclidean_means, width, color='y') lmnn_rects = axes.bar(ind+width, lmnn_means, width, color='r') # attach information to chart axes.set_ylabel('Accuracies') axes.set_ylim(top=1.4) axes.set_title('kNN accuracy by distance and feature preprocessing') axes.set_xticks(ind+width) axes.set_xticklabels(('Raw', 'Rescaling', 'Whitening')) axes.legend(( euclidean_rects[0], lmnn_rects[0]), ('Euclidean', 'LMNN'), loc='upper right') def autolabel(rects): # attach text labels to bars for rect in rects: height = rect.get_height() axes.text(rect.get_x()+rect.get_width()/2., 1.05*height, '%.3f' % height, ha='center', va='bottom') autolabel(euclidean_rects) autolabel(lmnn_rects) pyplot.show() """ Explanation: As it can be seen, it did not really help to whiten the features in this data set with respect to only applying feature rescaling; the accuracy 
was already rather large after rescaling. In any case, it is good to know that this transformation exists, as it can become useful with other data sets, or before applying other machine learning algorithms. Let us summarize the results obtained in this section with a bar chart grouping the accuracy results by distance (Euclidean or the one found by LMNN), and feature preprocessing: End of explanation """
satishgoda/learning
web/jquery_slide.ipynb
mit
from IPython.display import HTML

%%writefile jquery_slide_toggle.html
<script>
$(document).ready(function(){
    $("#flip").click(function(){
        $("#panel").slideToggle("fast");
    });
});
</script>
"""
Explanation: Sliding in jQuery
https://www.w3schools.com/jquery/jquery_slide.asp
https://www.w3schools.com/jquery/tryit.asp?filename=tryjquery_slide_toggle
End of explanation
"""
%%writefile jquery_slide_toggle.html -a
<style>
#panel, #flip {
    padding: 5px;
    text-align: center;
    background-color: #e5eecc;
    border: solid 1px #c3c3c3;
}
#panel {
    padding: 50px;
    display: none;
}
</style>
%%writefile jquery_slide_toggle.html -a
<div id="flip">Click to slide the panel down or up</div>
<div id="panel">Hello world!</div>
HTML('./jquery_slide_toggle.html')
"""
Explanation: When the document is ready (loaded and the DOM constructed), jQuery will register the click callback on the elements with the #flip id.
End of explanation
"""
%%html
<script>
$(document).ready(function(){
    $("#flip0").click(function(){
        $("#panel0").slideToggle("fast");
    });
});
</script>
<style>
#panel0, #flip0 {
    padding: 5px;
    text-align: center;
    background-color: #e5eecc;
    border: solid 1px #c3c3c3;
}
#panel0 {
    padding: 50px;
    display: none;
}
</style>
<div id="flip0">Click to slide the panel down or up</div>
<div id="panel0">Hello world!</div>
"""
Explanation: Note I was first typing the entire html code in one cell and executing it. But later I needed to make notes, so I came up with the %%writefile cell magic way to organize the html code.
End of explanation
"""
sot/aca_stats
fit_acq_model-2018-11-dev/fit_acq_model-2018-11-binned-poly-binom.ipynb
bsd-3-clause
import sys
import os
from itertools import count
from pathlib import Path

sys.path.insert(0, str(Path(os.environ['HOME'], 'git', 'skanb', 'pea-test-set')))
import utils as asvt_utils

import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table, vstack
from astropy.time import Time
import tables
from scipy import stats
from scipy.interpolate import CubicSpline

from Chandra.Time import DateTime
from chandra_aca.star_probs import get_box_delta

%matplotlib inline

SKA = Path(os.environ['SKA'])
"""
Explanation: Fit binned poly-tccd acquisition probability model in 2018-11
This is a DEVELOPMENT model.
This is an intermediate model which collects the probabilities within narrow magnitude bins and fits a quadratic polynomial model to the data as a function of CCD temperature. The fit and plot of polynomial coefficients in each mag bin are used as starting values in the fit_acq_model-2018-11-poly-spline-tccd notebook.
End of explanation
"""
# Make a map of AGASC_ID to AGASC 1.7 MAG_ACA.  The acq_stats.h5 file has whatever MAG_ACA
# was in place at the time of planning the loads.
with tables.open_file(str(SKA / 'data' / 'agasc' / 'miniagasc_1p7.h5'), 'r') as h5:
    agasc_mag_aca = h5.root.data.col('MAG_ACA')
    agasc_id = h5.root.data.col('AGASC_ID')
    has_color3 = h5.root.data.col('RSV3') != 0
    red_star = np.isclose(h5.root.data.col('COLOR1'), 1.5)
    mag_aca_err = h5.root.data.col('MAG_ACA_ERR') / 100
    red_mag_err = red_star & ~has_color3  # MAG_ACA, MAG_ACA_ERR is potentially inaccurate

agasc1p7_idx = {id: idx for id, idx in zip(agasc_id, count())}

agasc1p7 = Table([agasc_mag_aca, mag_aca_err, red_mag_err],
                 names=['mag_aca', 'mag_aca_err', 'red_mag_err'],
                 copy=False)

acq_file = str(SKA / 'data' / 'acq_stats' / 'acq_stats.h5')
with tables.open_file(str(acq_file), 'r') as h5:
    cols = h5.root.data.cols
    names = {'tstart': 'guide_tstart',
             'obsid': 'obsid',
             'obc_id': 'acqid',
             'halfwidth': 'halfw',
             'warm_pix': 'n100_warm_frac',
             'mag_aca': 'mag_aca',
             'mag_obs': 'mean_trak_mag',
             'known_bad': 'known_bad',
             'color': 'color1',
             'img_func': 'img_func',
             'ion_rad': 'ion_rad',
             'sat_pix': 'sat_pix',
             'agasc_id': 'agasc_id',
             't_ccd': 'ccd_temp',
             'slot': 'slot'}
    acqs = Table([getattr(cols, h5_name)[:] for h5_name in names.values()],
                 names=list(names.keys()))

year_q0 = 1999.0 + 31. / 365.25  # Jan 31 approximately
acqs['year'] = Time(acqs['tstart'], format='cxcsec').decimalyear.astype('f4')
acqs['quarter'] = (np.trunc((acqs['year'] - year_q0) * 4)).astype('f4')

# Create 'fail' column, rewriting history as if the OBC always
# ignores the MS flag in ID'ing acq stars.
#
# UPDATE: is ion_rad being ignored on-board?
# (Not as of 2018-11)
#
obc_id = acqs['obc_id']
obc_id_no_ms = (acqs['img_func'] == 'star') & ~acqs['sat_pix'] & ~acqs['ion_rad']
acqs['fail'] = np.where(obc_id | obc_id_no_ms, 0.0, 1.0)

acqs['mag_aca'] = [agasc1p7['mag_aca'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]
acqs['red_mag_err'] = [agasc1p7['red_mag_err'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]
acqs['mag_aca_err'] = [agasc1p7['mag_aca_err'][agasc1p7_idx[agasc_id]] for agasc_id in acqs['agasc_id']]
acqs['asvt'] = False

# Filter for year and mag (previously used data through 2007:001)
#
# UPDATE this to be between 4 to 5 years from time of recalibration.
#
year_min = 2014.5
year_max = DateTime('2018-10-30').frac_year
ok = ((acqs['year'] > year_min) & (acqs['year'] < year_max) &
      (acqs['mag_aca'] > 7.0) & (acqs['mag_aca'] < 11) &
      (~np.isclose(acqs['color'], 0.7)))

# Filter known bad obsids
print('Filtering known bad obsids, start len = {}'.format(np.count_nonzero(ok)))
bad_obsids = [
    # Venus
    2411,2414,6395,7306,7307,7308,7309,7311,7312,7313,7314,7315,7317,7318,7406,583,
    7310,9741,9742,9743,9744,9745,9746,9747,9749,9752,9753,9748,7316,15292,16499,
    16500,16501,16503,16504,16505,16506,16502,
]
for badid in bad_obsids:
    ok = ok & (acqs['obsid'] != badid)
print('Filtering known bad obsids, end len = {}'.format(np.count_nonzero(ok)))
"""
Explanation: Get acq stats data and clean
End of explanation
"""
peas = Table.read('pea_analysis_results_2018_299_CCD_temp_performance.csv', format='ascii.csv')
peas = asvt_utils.flatten_pea_test_data(peas)

# Fuzz mag and T_ccd by a bit for plotting and fitting.
fpeas = Table([peas['star_mag'], peas['ccd_temp'], peas['search_box_hw']], names=['mag_aca', 't_ccd', 'halfwidth']) fpeas['year'] = np.random.uniform(2019.0, 2019.5, size=len(peas)) fpeas['color'] = 1.0 fpeas['quarter'] = (np.trunc((fpeas['year'] - year_q0) * 4)).astype('f4') fpeas['fail'] = 1.0 - peas['search_success'] fpeas['asvt'] = True fpeas['red_mag_err'] = False fpeas['mag_obs'] = 0.0 """ Explanation: Get ASVT data and make it look more like acq stats data End of explanation """ data_all = vstack([acqs[ok]['year', 'fail', 'mag_aca', 't_ccd', 'halfwidth', 'quarter', 'color', 'asvt', 'red_mag_err', 'mag_obs'], fpeas]) data_all.sort('year') """ Explanation: Combine flight acqs and ASVT data End of explanation """ # Adjust probability (in probit space) for box size. data_all['box_delta'] = get_box_delta(data_all['halfwidth']) data_all = data_all.group_by('quarter') data_all0 = data_all.copy() # For later augmentation with simulated red_mag_err stars data_mean = data_all.groups.aggregate(np.mean) """ Explanation: Compute box probit delta term based on box size End of explanation """ def t_ccd_normed(t_ccd): return (t_ccd + 8.0) / 8.0 def p_fail(pars, t_ccd, tc2=None, box_delta=0, rescale=True): """ Acquisition probability model :param pars: 7 parameters (3 x offset, 3 x scale, p_fail for bright stars) :param tc, tc2: t_ccd, t_ccd ** 2 :param box_delta: search box half width (arcsec) """ p0, p1, p2 = pars tc = t_ccd_normed(t_ccd) if rescale else t_ccd if tc2 is None: tc2 = tc ** 2 # Make sure box_delta has right dimensions tc, box_delta = np.broadcast_arrays(tc, box_delta) probit_p_fail = p0 + p1 * tc + p2 * tc2 + box_delta p_fail = stats.norm.cdf(probit_p_fail.clip(-8, 8)) # transform from probit to linear probability return p_fail def p_acq_fail(data=None): """ Sherpa fit function wrapper to ensure proper use of data in fitting. 
""" if data is None: data = data_all tc = t_ccd_normed(data['t_ccd']) tc2 = tc ** 2 box_delta = data['box_delta'] def sherpa_func(pars, x=None): return p_fail(pars, tc, tc2, box_delta, rescale=False) return sherpa_func def calc_binom_stat(data, model, staterror=None, syserror=None, weight=None, bkg=None): """ Calculate log-likelihood for a binomial probability distribution for a single trial at each point. Defining p = model, then probability of seeing data == 1 is p and probability of seeing data == 0 is (1 - p). Note here that ``data`` is strictly either 0.0 or 1.0, and np.where interprets those float values as False or True respectively. """ fit_stat = -np.sum(np.log(np.where(data, model, 1.0 - model))) return fit_stat, np.ones(1) def fit_poly_model(data_mask=None): from sherpa import ui data = data_all if data_mask is None else data_all[data_mask] comp_names = ['p0', 'p1', 'p2'] data_id = 1 ui.set_method('simplex') ones = np.ones(len(data)) ui.load_user_stat('binom_stat', calc_binom_stat, lambda x: ones) ui.set_stat(binom_stat) # ui.set_stat('cash') ui.load_user_model(p_acq_fail(data), 'model') ui.add_user_pars('model', comp_names) ui.set_model(data_id, 'model') ui.load_arrays(data_id, np.array(data['year']), np.array(data['fail'], dtype=np.float)) # Initial fit values from fit of all data fmod = ui.get_model_component('model') for comp_name in comp_names: setattr(fmod, comp_name, 0.0) comp = getattr(fmod, comp_name) comp.max = 10 comp.min = -10 ui.fit(data_id) return ui.get_fit_results() """ Explanation: Model definition End of explanation """ def plot_fails_mag_aca_vs_t_ccd(mag_bins, data_all=data_all, year0=2014.0): ok = (data_all['year'] > year0) & ~data_all['fail'].astype(bool) da = data_all[ok] fuzzx = np.random.uniform(-0.3, 0.3, len(da)) fuzzy = np.random.uniform(-0.125, 0.125, len(da)) plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C0', markersize=4) ok = (data_all['year'] > year0) & data_all['fail'].astype(bool) da = data_all[ok] fuzzx = 
np.random.uniform(-0.3, 0.3, len(da)) fuzzy = np.random.uniform(-0.125, 0.125, len(da)) plt.plot(da['t_ccd'] + fuzzx, da['mag_aca'] + fuzzy, '.C1', markersize=4, alpha=0.8) # plt.xlim(-18, -10) # plt.ylim(7.0, 11.1) x0, x1 = plt.xlim() for y in mag_bins: plt.plot([x0, x1], [y, y], '-', color='r', linewidth=2, alpha=0.8) plt.xlabel('T_ccd (C)') plt.ylabel('Mag_aca') plt.title(f'Acq successes (blue) and failures (orange) since {year0}') plt.grid() def plot_fit_grouped(pars, group_col, group_bin, mask=None, log=False, colors='br', label=None, probit=False): data = data_all if mask is None else data_all[mask] data['model'] = p_acq_fail(data)(pars) group = np.trunc(data[group_col] / group_bin) data = data.group_by(group) data_mean = data.groups.aggregate(np.mean) len_groups = np.diff(data.groups.indices) data_fail = data_mean['fail'] model_fail = np.array(data_mean['model']) fail_sigmas = np.sqrt(data_fail * len_groups) / len_groups # Possibly plot the data and model probabilities in probit space if probit: dp = stats.norm.ppf(np.clip(data_fail + fail_sigmas, 1e-6, 1-1e-6)) dm = stats.norm.ppf(np.clip(data_fail - fail_sigmas, 1e-6, 1-1e-6)) data_fail = stats.norm.ppf(data_fail) model_fail = stats.norm.ppf(model_fail) fail_sigmas = np.vstack([data_fail - dm, dp - data_fail]) plt.errorbar(data_mean[group_col], data_fail, yerr=fail_sigmas, fmt='.' 
+ colors[1:], label=label, markersize=8) plt.plot(data_mean[group_col], model_fail, '-' + colors[0]) if log: ax = plt.gca() ax.set_yscale('log') def mag_filter(mag0, mag1): ok = (data_all['mag_aca'] > mag0) & (data_all['mag_aca'] < mag1) return ok def t_ccd_filter(t_ccd0, t_ccd1): ok = (data_all['t_ccd'] > t_ccd0) & (data_all['t_ccd'] < t_ccd1) return ok def wp_filter(wp0, wp1): ok = (data_all['warm_pix'] > wp0) & (data_all['warm_pix'] < wp1) return ok """ Explanation: Plotting and validation End of explanation """ mag_centers = np.array([6.3, 8.1, 9.1, 9.55, 9.75, 10.0, 10.25, 10.55, 10.75, 11.0]) mag_bins = (mag_centers[1:] + mag_centers[:-1]) / 2 mag_means = np.array([8.0, 9.0, 9.5, 9.75, 10.0, 10.25, 10.5, 10.75]) for m0, m1, mm in zip(mag_bins[:-1], mag_bins[1:], mag_means): ok = (data_all['asvt'] == False) & (data_all['mag_aca'] >= m0) & (data_all['mag_aca'] < m1) print(f"m0={m0:.2f} m1={m1:.2f} mean_mag={data_all['mag_aca'][ok].mean():.2f} vs. {mm}") plot_fails_mag_aca_vs_t_ccd(mag_bins) """ Explanation: Define magnitude bins for fitting and show data End of explanation """ # fit = fit_sota_model(data_all['color'] == 1.5, ms_disabled=True) mask_no_1p5 = ((data_all['red_mag_err'] == False) & (data_all['t_ccd'] > -16) & (data_all['t_ccd'] < -0.5)) mag0s, mag1s = mag_bins[:-1], mag_bins[1:] fits = {} for m0, m1 in zip(mag0s, mag1s): print(m0, m1) fits[m0, m1] = fit_poly_model(mask_no_1p5 & mag_filter(m0, m1)) colors = [f'kC{i}' for i in range(9)] # This computes probabilities for 120 arcsec boxes, corresponding to raw data t_ccds = np.linspace(-16, -0, 20) plt.figure(figsize=(13, 4)) for subplot in (1, 2): plt.subplot(1, 2, subplot) for m0_m1, color in zip(list(fits), colors): fit = fits[m0_m1] m0, m1 = m0_m1 probs = p_fail(fit.parvals, t_ccds) if subplot == 2: probs = stats.norm.ppf(probs) plt.plot(t_ccds, probs, label=f'{(m0 + m1) / 2: .2f}') plt.legend() plt.xlabel('T_ccd') plt.ylabel('P_fail' if subplot == 1 else 'Probit(p_fail)') plt.grid() for m0_m1, 
color in zip(list(fits), colors): fit = fits[m0_m1] m0, m1 = m0_m1 plot_fit_grouped(fit.parvals, 't_ccd', 1.0, mask=mask_no_1p5 & mag_filter(m0, m1), probit=True, colors=color, label=str(m0_m1)) plt.grid() plt.ylim(-3.5, 3.5) plt.ylabel('Probit(p_fail)') plt.xlabel('T_ccd') # plt.legend(); colors = [f'kC{i}' for i in range(9)] for m0_m1, color in zip(list(fits)[3:], colors): fit = fits[m0_m1] m0, m1 = m0_m1 plot_fit_grouped(fit.parvals, 't_ccd', 1.0, mask=mask_no_1p5 & mag_filter(m0, m1), probit=False, colors=color, label=str(m0_m1)) plt.grid() plt.ylabel('p_fail') plt.xlabel('T_ccd') plt.legend(fontsize='small', loc='upper left'); def print_pvals(ps, idx): ps_str = ', '.join(f'{p:.3f}' for p in ps) print(f'spline_p[{idx}] = np.array([{ps_str}])') spline_mags = np.array([8.5, 9.25, 10.0, 10.4, 10.7]) spline_mags = np.array([8.0, 9.0, 10.0, 10.5, 11]) p0s = [] p1s = [] p2s = [] mags = [] for m0_m1, fit in fits.items(): ps = fit.parvals m0, m1 = m0_m1 mags.append((m0 + m1) / 2) p0s.append(ps[0]) p1s.append(ps[1]) p2s.append(ps[2]) plt.plot(mags, p0s, '.-', label='p0') plt.plot(mags, p1s, '.-', label='p1') plt.plot(mags, p2s, '.-', label='p2') plt.legend(fontsize='small') plt.grid() print_pvals(np.interp(spline_mags, mags, p0s), 0) print_pvals(np.interp(spline_mags, mags, p1s), 1) print_pvals(np.interp(spline_mags, mags, p2s), 2) """ Explanation: Color != 1.5 fit (this is MOST acq stars) End of explanation """
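The model fitted above reduces to a single probit-link expression, p_fail = Phi(p0 + p1*tc + p2*tc**2 + box_delta). A minimal standalone sketch of that expression, outside of Sherpa, is below; the coefficient values are illustrative placeholders, not fitted results.

```python
import numpy as np
from scipy import stats

def p_fail_probit(pars, t_ccd, box_delta=0.0):
    # Quadratic in the normalized CCD temperature, pushed through the
    # probit link exactly as in p_fail above.
    p0, p1, p2 = pars
    tc = (t_ccd + 8.0) / 8.0  # same normalization as t_ccd_normed
    probit = p0 + p1 * tc + p2 * tc ** 2 + box_delta
    return stats.norm.cdf(np.clip(probit, -8, 8))

# Illustrative (made-up) coefficients for a single magnitude bin
pars = (-1.0, 0.8, 0.3)
print(p_fail_probit(pars, t_ccd=-10.0))
```

A warmer CCD gives a larger probit argument and hence a higher failure probability, which is the trend seen in the fitted curves above.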
edwardd1/phys202-2015-work
midterm/InteractEx06.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.display import Image from IPython.html.widgets import interact, interactive, fixed """ Explanation: Interact Exercise 6 Imports Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell. End of explanation """ Image('fermidist.png') """ Explanation: Exploring the Fermi distribution In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is: End of explanation """ def fermidist(energy, mu, kT): """Compute the Fermi distribution at energy, mu and kT.""" x = 1/(np.exp((energy - mu)/kT) + 1) return x assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033) assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0), np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532, 0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ])) """ Explanation: In this equation: $\epsilon$ is the single particle energy. $\mu$ is the chemical potential, which is related to the total number of particles. $k$ is the Boltzmann constant. $T$ is the temperature in Kelvin. In the cell below, typeset this equation using LaTeX: \begin{align} F(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT} + 1} \end{align} Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code. 
End of explanation """ def plot_fermidist(mu, kT): E = np.linspace(0, 10., 100) y = plt.plot(E, fermidist(E, mu, kT)) plt.xlabel('t') plt.ylabel('X(t)') return y plot_fermidist(4.0, 1.0) assert True # leave this for grading the plot_fermidist function """ Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use enegies over the range $[0,10.0]$ and a suitable number of points. Choose an appropriate x and y limit for your visualization. Label your x and y axis and the overall visualization. Customize your plot in 3 other ways to make it effective and beautiful. End of explanation """ interact(plot_fermidist, mu = (0.0,5.0), kT=(.1,10.0)); """ Explanation: Use interact with plot_fermidist to explore the distribution: For mu use a floating point slider over the range $[0.0,5.0]$. for kT use a floating point slider over the range $[0.1,10.0]$. End of explanation """
karlnapf/shogun
doc/ipython-notebooks/multiclass/KNN.ipynb
bsd-3-clause
import numpy as np import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import loadmat, savemat from numpy import random from os import path mat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat')) Xall = mat['data'] Yall = np.array(mat['label'].squeeze(), dtype=np.double) # map from 1..10 to 0..9, since shogun # requires multiclass labels to be # 0, 1, ..., K-1 Yall = Yall - 1 random.seed(0) subset = random.permutation(len(Yall)) Xtrain = Xall[:, subset[:5000]] Ytrain = Yall[subset[:5000]] Xtest = Xall[:, subset[5000:6000]] Ytest = Yall[subset[5000:6000]] Nsplit = 2 all_ks = range(1, 21) print(Xall.shape) print(Xtrain.shape) print(Xtest.shape) """ Explanation: K-Nearest Neighbors (KNN) by Chiyuan Zhang and S&ouml;ren Sonnenburg This notebook illustrates the <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href="http://en.wikipedia.org/wiki/Cover_tree">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href="http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM">Multiclass Support Vector Machines</a> is shown. The basics The training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias. In SHOGUN, you can use CKNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard CEuclideanDistance, but in general, any subclass of CDistance could be used. 
For demonstration, in this tutorial we select a random subset of the USPS digit recognition dataset (5000 training and 1000 test samples), and run 2-fold cross validation of KNN with varying K. First we load and initialize the data split: End of explanation """ %matplotlib inline import pylab as P def plot_example(dat, lab): for i in range(5): ax=P.subplot(1,5,i+1) P.title(int(lab[i])) ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest') ax.set_xticks([]) ax.set_yticks([]) _=P.figure(figsize=(17,6)) P.gray() plot_example(Xtrain, Ytrain) _=P.figure(figsize=(17,6)) P.gray() plot_example(Xtest, Ytest) """ Explanation: Let us plot the first five examples of the train data (first row) and test data (second row). End of explanation """ import shogun as sg from shogun import MulticlassLabels, features from shogun import KNN labels = MulticlassLabels(Ytrain) feats = features(Xtrain) k=3 dist = sg.distance('EuclideanDistance') knn = KNN(k, dist, labels) labels_test = MulticlassLabels(Ytest) feats_test = features(Xtest) knn.train(feats) pred = knn.apply_multiclass(feats_test) print("Predictions", pred.get_int_labels()[:5]) print("Ground Truth", Ytest[:5]) from shogun import MulticlassAccuracy evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(pred, labels_test) print("Accuracy = %2.2f%%" % (100*accuracy)) """ Explanation: Then we import shogun components and convert the data to shogun objects: End of explanation """ idx=np.where(pred != Ytest)[0] Xbad=Xtest[:,idx] Ybad=Ytest[idx] _=P.figure(figsize=(17,6)) P.gray() plot_example(Xbad, Ybad) """ Explanation: Let's plot a few misclassified examples - I guess we all agree that these are notably harder to detect. End of explanation """ knn.put('k', 13) multiple_k=knn.classify_for_multiple_k() print(multiple_k.shape) """ Explanation: Now the question is - is 97.30% accuracy the best we can do? 
While one would usually re-train KNN with different values for k here and likely perform cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\geq k$ nearest neighbors we will know the nearest neighbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step: End of explanation """ for k in range(13): print("Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest))) """ Explanation: We have the prediction for each of the 13 k's now and can quickly compute the accuracies: End of explanation """ from shogun import Time, KNN_COVER_TREE, KNN_BRUTE start = Time.get_curtime() knn.put('k', 3) knn.put('knn_solver', KNN_BRUTE) pred = knn.apply_multiclass(feats_test) print("Standard KNN took %2.1fs" % (Time.get_curtime() - start)) start = Time.get_curtime() knn.put('k', 3) knn.put('knn_solver', KNN_COVER_TREE) pred = knn.apply_multiclass(feats_test) print("Covertree KNN took %2.1fs" % (Time.get_curtime() - start)) """ Explanation: So k=3 seems to have been the optimal choice. Accelerating KNN Obviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just call set_use_covertree on the KNN machine to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. 
So let's just have a comparison utilizing the data above: End of explanation """ def evaluate(labels, feats, use_cover_tree=False): from shogun import MulticlassAccuracy, CrossValidationSplitting import time split = CrossValidationSplitting(labels, Nsplit) split.build_subsets() accuracy = np.zeros((Nsplit, len(all_ks))) acc_train = np.zeros(accuracy.shape) time_test = np.zeros(accuracy.shape) for i in range(Nsplit): idx_train = split.generate_subset_inverse(i) idx_test = split.generate_subset_indices(i) for j, k in enumerate(all_ks): #print "Round %d for k=%d..." % (i, k) feats.add_subset(idx_train) labels.add_subset(idx_train) dist = sg.distance('EuclideanDistance') dist.init(feats, feats) knn = KNN(k, dist, labels) knn.set_store_model_features(True) if use_cover_tree: knn.put('knn_solver', KNN_COVER_TREE) else: knn.put('knn_solver', KNN_BRUTE) knn.train() evaluator = MulticlassAccuracy() pred = knn.apply_multiclass() acc_train[i, j] = evaluator.evaluate(pred, labels) feats.remove_subset() labels.remove_subset() feats.add_subset(idx_test) labels.add_subset(idx_test) t_start = time.clock() pred = knn.apply_multiclass(feats) time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels() accuracy[i, j] = evaluator.evaluate(pred, labels) feats.remove_subset() labels.remove_subset() return {'eout': accuracy, 'ein': acc_train, 'time': time_test} """ Explanation: So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN: End of explanation """ labels = MulticlassLabels(Ytest) feats = features(Xtest) print("Evaluating KNN...") wo_ct = evaluate(labels, feats, use_cover_tree=False) wi_ct = evaluate(labels, feats, use_cover_tree=True) print("Done!") """ Explanation: Evaluate KNN with and without Cover Tree. 
This takes a few seconds: End of explanation """ import matplotlib fig = P.figure(figsize=(8,5)) P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*') P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*') P.legend(["Test Accuracy", "Training Accuracy"]) P.xlabel('K') P.ylabel('Accuracy') P.title('KNN Accuracy') P.tight_layout() fig = P.figure(figsize=(8,5)) P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*') P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d') P.xlabel("K") P.ylabel("time") P.title('KNN time') P.legend(["Plain KNN", "CoverTree KNN"], loc='center right') P.tight_layout() """ Explanation: Generate plots with the data collected in the evaluation: End of explanation """ from shogun import GMNPSVM width=80 C=1 gk=sg.kernel("GaussianKernel", log_width=np.log(width)) svm=GMNPSVM(C, gk, labels) _=svm.train(feats) """ Explanation: Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of object (not just numerical data) - as long as one can design a suitable distance function. In practice k-NN used with bagging can create improved and more robust results. Comparison to Multiclass Support Vector Machines In contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so-called kernels) instead of distances like KNN does. 
When applied, they are computationally as expensive as KNN in Big-O terms, but involve an additional (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to cases with a small number of classes. So for reference let us compare how a standard multiclass SVM performs with respect to KNN on the USPS data set from above. Let us first train a multiclass SVM using a Gaussian kernel (kind of the SVM equivalent of the Euclidean distance). End of explanation """ out=svm.apply(feats_test) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(out, labels_test) print("Accuracy = %2.2f%%" % (100*accuracy)) """ Explanation: Let's apply the SVM to the same test data set to compare results: End of explanation """ Xrem=Xall[:,subset[6000:]] Yrem=Yall[subset[6000:]] feats_rem=features(Xrem) labels_rem=MulticlassLabels(Yrem) out=svm.apply(feats_rem) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(out, labels_rem) print("Accuracy = %2.2f%%" % (100*accuracy)) idx=np.where(out.get_labels() != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=P.figure(figsize=(17,6)) P.gray() plot_example(Xbad, Ybad) """ Explanation: Since the SVM performs way better on this task - let's apply it to all data we did not use in training. End of explanation """
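For reference, the memorize-and-vote scheme described in this notebook, including the trick of reading predictions for every k off a single neighbor ranking (what classify_for_multiple_k exploits), can be sketched in plain NumPy. This is an illustrative brute-force stand-in, not Shogun's implementation:

```python
import numpy as np

def knn_predict_multiple_k(X_train, y_train, X_test, max_k):
    # Compute all pairwise squared distances once, rank the neighbors once,
    # then take a majority vote for every k = 1..max_k.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=1)[:, :max_k]      # nearest-neighbor indices
    neigh_labels = y_train[order]                  # shape (n_test, max_k)
    n_classes = int(y_train.max()) + 1
    preds = np.empty((X_test.shape[0], max_k), dtype=int)
    for k in range(1, max_k + 1):
        preds[:, k - 1] = np.apply_along_axis(
            lambda row: np.bincount(row, minlength=n_classes).argmax(),
            1, neigh_labels[:, :k])
    return preds

# Toy check: two well-separated 1-D clusters
Xtr = np.array([[0.0], [0.1], [5.0], [5.1]])
ytr = np.array([0, 0, 1, 1])
Xte = np.array([[0.05], [5.05]])
print(knn_predict_multiple_k(Xtr, ytr, Xte, max_k=3))
```

Each row of the result holds the predicted class for k = 1, 2, 3, so accuracy can be computed for all k from one ranking pass, as in the notebook.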
NeuroDataDesign/pan-synapse
pipeline_1/background/Precision_Recall_2.0.ipynb
apache-2.0
def generatePointSet(): center = (rand(0, 9), rand(0, 999), rand(0, 999)) toPopulate = [] bounds = (10, 1000, 1000) for z in range(-3, 2): for y in range(-3, 2): for x in range(-3, 2): curPoint = (center[0]+z, center[1]+y, center[2]+x) #only populate valid points; bounds must match the volume shape per axis valid = True for dim in range(3): if curPoint[dim] < 0 or curPoint[dim] >= bounds[dim]: valid = False if valid: toPopulate.append(curPoint) return set(toPopulate) def generateTestVolume(): #create a test volume volume = np.zeros((10, 1000, 1000)) myPointSet = set() for _ in range(rand(1000, 2000)): potentialPointSet = generatePointSet() #be sure there is no overlap while len(myPointSet.intersection(potentialPointSet)) > 0: potentialPointSet = generatePointSet() for elem in potentialPointSet: myPointSet.add(elem) #populate the true volume for elem in myPointSet: volume[elem[0], elem[1], elem[2]] = rand(40000, 60000) #introduce noise noiseVolume = np.copy(volume) for z in range(noiseVolume.shape[0]): for y in range(noiseVolume.shape[1]): for x in range(noiseVolume.shape[2]): if not (z, y, x) in myPointSet: toPop = rand(0, 10) if toPop == 5: noiseVolume[z][y][x] = rand(0, 60000) return volume, noiseVolume """ Explanation: Simulation Code The code below will generate a test volume and populate it with a set of clusters. 
End of explanation """ myTestVolume, _ = generateTestVolume() #passed arbitrarily large param for upper bound to get all clusters testList = clusterThresh(myTestVolume, 0, 100000) #generate an annotation volume from the test list annotations = mv.generateAnnotations(testList, *(myTestVolume.shape)) print np.count_nonzero(np.logical_xor(annotations, myTestVolume)) """ Explanation: Cluster Extraction Validation First, be sure that our cluster extraction algorithm works as intended on the test volume End of explanation """ for i in range(5): myTestVolume, _ = generateTestVolume() testList = clusterThresh(myTestVolume, 0, 100000) annotations = mv.generateAnnotations(testList, *(myTestVolume.shape)) print np.count_nonzero(np.logical_xor(annotations, myTestVolume)) """ Explanation: The nonzero xor between the volumes proves that clustering the volume maintains the integrity of the cluster lists. This test is run 5 times in a row below to validate that the 0 difference metric is valid across trials End of explanation """ def precision_recall_f1(labels, predictions, overlapRatio): if len(predictions) == 0: print 'ERROR: prediction list is empty' return 0., 0., 0. 
labelFound = np.zeros(len(labels)) truePositives = 0 falsePositives = 0 for prediction in predictions: #casting to set is ok here since members are unique predictedMembers = set([tuple(elem) for elem in prediction.getMembers()]) detectionCutoff = overlapRatio * len(predictedMembers) found = False for idx, label in enumerate(labels): labelMembers = set([tuple(elem) for elem in label.getMembers()]) #if the predictedOverlap is over the detectionCutoff ratio if len(predictedMembers & labelMembers) >= detectionCutoff: truePositives +=1 found=True labelFound[idx] = 1 if not found: falsePositives +=1 precision = truePositives/float(truePositives + falsePositives) recall = np.count_nonzero(labelFound)/float(len(labels)) f1 = 0 try: f1 = 2 * (precision*recall)/(precision + recall) except ZeroDivisionError: f1 = 0 return precision, recall, f1 """ Explanation: The set of 5 zeros validates that any error in the F1 code does not stem from the extraction of the cluster lists Precision Recall F1 2.0 End of explanation """ myTestVolume, _ = generateTestVolume() testList = clusterThresh(myTestVolume, 0, 100000) #run the code on a test volume identical labels precision, recall, f1 = precision_recall_f1(testList, testList, 1) print precision, recall, f1 """ Explanation: Precision Recall F1 2.0 Testing First, the algorithm will be tested on a case where it should get 100% precision and 100% recall End of explanation """ statList = [] for i in range(1, 11): percentile = i/10. 
predictions = np.random.choice(testList, int(percentile * len(testList)), replace=False) precision, recall, f1 = precision_recall_f1(testList, predictions, 1) statList.append([percentile, precision, recall]) print 'Percentile: ', percentile print '\t precision: ', precision print '\t recall: ', recall fig = plt.figure() elemwiseStats = zip(*(statList)) plt.title('Recall vs Percentile Passed') plt.scatter(elemwiseStats[0], elemwiseStats[2]) plt.show() fig = plt.figure() plt.title('Precision vs Percentile Passed') plt.scatter(elemwiseStats[0], elemwiseStats[1], c='r') plt.show() """ Explanation: The 1 precision, recall and f1 metrics show that the algorithm performs as expected on the case of labels being their own predictions Next, the recall will be tested by randomly selecting a percentile of the true clusters to be passed to the prediction. This should modulate only the recall, and not the precision, of the data, as all clusters being passed are still "correct" predictions. If the algorithm works as expected, I should see a constant precision metric and an upwardly sloping recall metric with slope of 10% That is, for every additional 10% of the labels included in the predictions, the recall should increase by 10% End of explanation """ statList = [] #get list points in data that I can populate noise clusters with noCluster = zip(*(np.where(myTestVolume == 0))) for i in range(0, 10): #get the number of noise clusters that must be added for data to be diluted #to target percent percentile = i/10. 
numNoise = int(percentile * len(testList)/float(1-percentile)) #generate the prediction + noise list noiseList =[] for j in range(numNoise): badPoint = noCluster[rand(0, len(noCluster)-1)] noiseList.append(Cluster([list(badPoint)])) predictions = testList + noiseList precision, recall, f1 = precision_recall_f1(testList, predictions, 1) statList.append([percentile, precision, recall]) print 'Percentile: ', percentile print '\t precision: ', precision print '\t recall: ', recall fig = plt.figure() elemwiseStats = zip(*(statList)) plt.title('Recall vs Fraction of Predictions that is Noise') plt.scatter(elemwiseStats[0], elemwiseStats[2]) plt.show() fig = plt.figure() plt.title('Precision vs Fraction of Predictions that is Noise') plt.scatter(elemwiseStats[0], elemwiseStats[1], c='r') plt.show() """ Explanation: The linearly increasing nature of the recall plot demonstrates that the recall portion of the code indeed corresponds to the ratio of true labels passed in. Additionally, the precision plot being constant shows that modulating only the recall has no effect on the precision of the data Next, the data will be diluted such that it contains a portion of noise clusters. This should change the precision, but not the recall, of the data. If this algorithm works as expected, it will produce a constant recall and a downward sloping precision curve with a slope of 10%. That is, for every additional 10% of noise added to the predictions, the precision should drop by 10% End of explanation """ statList = [] #generate the list of eroded clusters erodedList = [] for idx, cluster in enumerate(testList): percentile = (idx%10)/10. + .1 members = cluster.getMembers() erodedList.append(Cluster(members[:int(len(members)*percentile)])) for i in range(1, 11): percentile = i/10. 
precision, recall, f1 = precision_recall_f1(erodedList, testList, percentile) statList.append([percentile, precision, recall]) print 'Percentile: ', percentile print '\t precision: ', precision print '\t recall: ', recall fig = plt.figure() elemwiseStats = zip(*(statList)) plt.title('Recall vs Percent Required Overlap') plt.scatter(elemwiseStats[0], elemwiseStats[2]) plt.show() fig = plt.figure() elemwiseStats = zip(*(statList)) plt.title('Precision vs Percent Required Overlap') plt.scatter(elemwiseStats[0], elemwiseStats[1]) plt.show() """ Explanation: As these plots demonstrate, adding noise to the list of all true clusters delivers the expected result, that the precision drops, and the recall remains constant. In our data, there is not a guarantee that the predicted clusters and the actual clusters will exactly overlap. In fact, this is likely not the case. However, we would not like to consider a cluster a false positive if it only differs from the true cluster by one pixel. For this reason, I have included an overlapRatio parameter to vary how much overlap between a prediction and a true cluster must exist for the prediction to be considered correct In the following simulation, the cluster labels will be evenly divided and then eroded between 10% and 100%. I will then run the precision recall code against them with an ever increasing percent overlap metric. If the code works, I expect both the precision and the recall to drop by about 10% for every 10% increase in the percent overlap metric. End of explanation """
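The overlap-ratio matching rule implemented by precision_recall_f1 above can be restated on plain Python sets of voxel tuples, independent of the pipeline's Cluster class. The helper name prf1 below is hypothetical, introduced only for this sketch:

```python
def prf1(label_sets, pred_sets, overlap_ratio):
    # A prediction counts as a true positive when its overlap with a label
    # reaches overlap_ratio * len(prediction), mirroring the rule above.
    found = [False] * len(label_sets)
    tp = fp = 0
    for pred in pred_sets:
        cutoff = overlap_ratio * len(pred)
        hit = False
        for i, lab in enumerate(label_sets):
            if len(pred & lab) >= cutoff:
                tp += 1
                hit = True
                found[i] = True
        if not hit:
            fp += 1
    precision = tp / float(tp + fp)
    recall = sum(found) / float(len(label_sets))
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

labels = [{(0, 0, 0), (0, 0, 1)}, {(5, 5, 5), (5, 5, 6)}]
preds = [{(0, 0, 0), (0, 0, 1)}, {(9, 9, 9)}]  # one exact match, one noise cluster
print(prf1(labels, preds, 1.0))  # (0.5, 0.5, 0.5)
```

With one of two predictions matching and one of two labels found, precision and recall are both 0.5, so F1 is 0.5 as well.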
random-forests/tensorflow-workshop
archive/extras/estimators-comparison.ipynb
apache-2.0
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import collections from sklearn.datasets import make_moons, make_circles, make_blobs from sklearn.preprocessing import StandardScaler from sklearn.model_selection import train_test_split import tensorflow as tf import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap %matplotlib inline """ Explanation: Note: Just started hacking on this, the code is a mess =D Comparing Estimators on a few toy datasets Inspired by this excellent sample from sklearn: http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html End of explanation """ n_samples = 100 random_state = 0 datasets = collections.OrderedDict([ ('Blobs', make_blobs(n_samples=n_samples, centers=2, cluster_std=0.5, random_state=random_state)), ('Circles', make_circles(n_samples=n_samples, factor=.5, noise=.03, random_state=random_state)), ('Moons', make_moons(n_samples=n_samples, noise=.03, random_state=random_state)) ]) """ Explanation: Create a few toy datasets for binary classification. 'Blobs' is linearly separable, the others are not. End of explanation """
End of explanation """ def make_estimators(feature_columns, n_classes): estimators = collections.OrderedDict([ ('Linear', tf.estimator.LinearClassifier( feature_columns=feature_columns, n_classes=n_classes, model_dir="./graphs/canned/linear" )), ('Deep', tf.estimator.DNNClassifier( hidden_units=[128, 128], feature_columns=feature_columns, n_classes=n_classes, model_dir="./graphs/canned/deep" )), # Note: the value of this model is when we # use different types of feature engineering # for the linear and dnn features # see the Wide and Deep tutorial on tensorflow.org # a non-trivial use-case. ('Wide_Deep', tf.estimator.DNNLinearCombinedClassifier( dnn_hidden_units=[100, 50], linear_feature_columns=feature_columns, dnn_feature_columns=feature_columns, n_classes=n_classes, model_dir="./graphs/canned/wide_n_deep" )), ]) return estimators """ Explanation: This method creates a number of estimators for us to experiment with. It takes a description of the features to use as a parameter. We'll create this description separately for each dataset, later in the notebook. End of explanation """ def get_predictions(estimator, input_fn): predictions = [] for prediction in estimator.predict(input_fn=input_fn): probs = prediction['probabilities'] # If instead you'd like to return just the predicted class index # you can use this code. #cls = np.argmax(probs) #predictions.append(cls) predictions.append(probs) return predictions """ Explanation: Calling predict on an estimator returns a generator object. For convenience, this method will give us a list of predictions. Here, we're returning the probabilities for each class. End of explanation """ # We'll use these objects to store results. # Each maps from a tuple of (dataset_name, estimator_name) to the results. 
evaluations = {} predictions = {} mesh_predictions = {} # === # Parameters # === # Training sets steps = 100 # Step size in the mesh h = .02 for ds_name in datasets: # This is the entire dataset X, y = datasets[ds_name] # Standardize values to 0 mean and unit standard deviation X = StandardScaler().fit_transform(X) # Split in to train / test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) n_features = X_train.shape[1] n_classes = len(np.unique(y_train)) feature_columns = [tf.feature_column.numeric_column('x', shape=n_features)] estimators = make_estimators(feature_columns, n_classes) # Create a mesh grid. # The idea is we'll make a prediction for every coordinate # in this space, so we display them later. x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) for es_name in estimators: print("Training", es_name, "on", ds_name,"...") estimator = estimators[es_name] train_input_fn = tf.estimator.inputs.numpy_input_fn( {'x': X_train}, y_train, num_epochs=None, # Repeat forever shuffle=True ) test_input_fn = tf.estimator.inputs.numpy_input_fn( {'x': X_test}, y_test, num_epochs=1, # Repeat forever shuffle=False ) # An input function for each point on the mes surface_input_fn = tf.estimator.inputs.numpy_input_fn( {'x': np.c_[xx.ravel(), yy.ravel()]}, num_epochs=1, # Repeat forever shuffle=False ) estimator.train(train_input_fn, steps=steps) # evaluate on the test data evaluation = estimator.evaluate(test_input_fn) # store the evaluation for later evaluations[(ds_name, es_name)] = evaluation # make a prediction for every coordinate in the mesh predictions = np.array(get_predictions(estimator, input_fn=surface_input_fn)) # store the mesh predictions for later mesh_predictions[(ds_name, es_name)] = predictions print("Finished") """ Explanation: Let's train each Estimator on each dataset, and record the 
predictions for each test point, and the evaluation (which contains stats like overall accuracy) as we go. End of explanation """ n_datasets = len(datasets) n_estimators = len(estimators) figure = plt.figure(figsize=(n_datasets * 6, n_estimators * 2)) plot_num = 1 row = 0 for ds_name in datasets: X, y = datasets[ds_name] # Standardize values to 0 mean and unit standard deviation X = StandardScaler().fit_transform(X) # Split in to train/test X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Step size in the mesh h = .02 x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Plot the dataset cm = plt.cm.RdBu cm_bright = ListedColormap(['#FF0000', '#0000FF']) ax = plt.subplot(len(datasets), len(estimators) + 1, plot_num) plot_num += 1 if row == 0: ax.set_title("Input data") # Plot the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors='k') # Plot the testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6, edgecolors='k') ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) col = 1 for es_name in estimators: evaluation = evaluations[(ds_name,es_name)] accuracy = evaluation["accuracy"] ax = plt.subplot(len(datasets), len(estimators) + 1, plot_num) plot_num += 1 # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, x_max]x[y_min, y_max]. 
Z = mesh_predictions[(ds_name, es_name)][:, 1] # Put the result into a color plot Z = Z.reshape(xx.shape) ax.contourf(xx, yy, Z, cmap=cm, alpha=.8) # Plot also the training points ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright, edgecolors='k') # and testing points ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, edgecolors='k', alpha=0.6) ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xticks(()) ax.set_yticks(()) if row == 0: ax.set_title(es_name) ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % accuracy).lstrip('0'), size=15, horizontalalignment='right') col += 1 row += 1 plt.tight_layout() plt.show() """ Explanation: Let's plot the results. End of explanation """
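The ravel-predict-reshape pattern used for the decision surfaces above is worth isolating. A minimal NumPy sketch, with a hypothetical linear scorer standing in for a trained estimator (the helper name and its weights are illustrative, not part of the notebook):

```python
import numpy as np

# Toy stand-in for an estimator's predict step (not the TF API):
# a hypothetical linear rule p(x, y) = sigmoid(w . [x, y] + b).
def toy_predict_proba(points, w=np.array([1.0, -1.0]), b=0.0):
    z = points @ w + b
    return 1.0 / (1.0 + np.exp(-z))

# Build the same kind of mesh the notebook uses for its decision surfaces.
h = 0.5  # step size in the mesh
xx, yy = np.meshgrid(np.arange(-2, 2, h), np.arange(-2, 2, h))

# Flatten the grid to (n_points, 2), predict, then reshape back for contourf.
grid_points = np.c_[xx.ravel(), yy.ravel()]
Z = toy_predict_proba(grid_points).reshape(xx.shape)

print(Z.shape == xx.shape)  # True: one probability per mesh coordinate
```

The reshape back to `xx.shape` is what lets `ax.contourf(xx, yy, Z, ...)` paint one color per grid cell.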
samkennerly/TruckVotes
books/model.ipynb
bsd-2-clause
%load_ext autoreload %autoreload 2 %autosave 0 from truckvotes import * """ Explanation: choose a model End of explanation """ def show_error(predicted,actual): fTrueRed = (predicted > 0.5) & (actual > 0.5) fTrueBlue = (predicted < 0.5) & (actual < 0.5) fCorrect = fTrueRed | fTrueBlue fClose = (predicted - actual).abs() < 0.1 err = actual - predicted rms = np.sqrt( (err*err).mean() ) mad = err.abs().median() worst = err.abs().max() bias = err.mean() print( "Correct:\t%s of %s samples" % (fCorrect.sum(),len(actual)) ) print( "Within 10%%:\t%s of %s samples" % (fClose.sum(),len(actual)) ) print( "RMS error:\t", round(rms,3) ) print( "Typical error:\t", round(mad,3) ) print( "Worst error:\t", round(worst,3) ) print( "Bias:\t\t", round(bias,3) ) bar_colors = redwhiteblue(0.5+np.r_[err]) err.plot(kind='bar',color=bar_colors) plt.ylabel('Error') plt.ylim((-0.3,0.3)) show_error(TruckVotes['LinearFit'],TruckVotes['RedPct']) """ Explanation: read inputs <h2>Plot the predictor and target variable</h2> The (silly) idea is to use the popularity of pickup trucks in a state to predict Republican/Democratic popularity. Define a predictor and a target: <ul> <li><b>Truckness</b> = number of new pickup truck registrations in 2008 per capita <li><b>RedPct</b> = fraction of <i>non-third-party</i> voters who voted Republican </ul> Align Truckness and RedPct in a DataFrame called <b>TruckVotes</b>. For plotting purposes, append Population and sort by Truckness. Check for missing or mis-aligned data, then scatterplot Truckness vs RedPct. Scale dots by population size, and color them by RedPct. <h2>Linear regression</h2> <b>Ordinary least-squares linear regression</b> assumes a linear$^1$ model: $$ R_j = \alpha + \beta T_j + \epsilon_j $$ where $R_j$ is RedPct of the $j$th sample, $T_j$ is its Truckness, $\alpha, \beta$ are two real constants to be determined, and the <b>error terms</b> ${ \epsilon_j }$ are random variables. 
Assume the errors $\epsilon_j$ are all independent, normally-distributed, have the same variance, and have mean 0.
Under these (unrealistic) assumptions, the maximum-likelihood estimates of $\alpha$ and $\beta$ are whatever values minimize the total square error:
$$ \sum_{j=0}^{50} \epsilon_j^2 = \sum_{j=0}^{50} (R_j - \alpha - \beta T_j)^2 $$
Nearly all statistics packages include routines to calculate $\alpha$ and $\beta$. I used scipy.stats.linregress():
<small>1] To be precise, it's an <i>affine</i> model unless $\alpha=0$. But people usually call this model "linear" anyway.</small>
run some trials
That $r^2$ value won't win any awards, but it isn't terrible. Calculate some other measures of error, and inspect the error term for each state:
End of explanation
"""

def logit(s):
    return np.log(s) - np.log(1-s)

TruckVotes['LogTruckness'] = np.log(TruckVotes['Truckness'])
TruckVotes['Redness'] = logit(TruckVotes['RedPct'])

predicted_redness = linear_fit(TruckVotes,'LogTruckness','Redness')

redbluedots(TruckVotes,'LogTruckness','Redness')
plt.plot(TruckVotes['LogTruckness'],predicted_redness,'ks-',linewidth=2)

"""
Explanation: <h2>Linear regression in new coordinates</h2>
Linear models assume the predictor and target can be any real numbers. But in our case,
<ul>
<li>Truckness is always positive: $T \in [0,\infty)$
<li>RedPct is always a number between 0 and 1: $R \in [0,1]$
</ul>
For the $x$-axis, I want a smooth, monotonic transformation that maps $[0,\infty) \to (-\infty,\infty)$. I chose a logarithmic$^2$ transform:
<b>LogTruckness</b> $\tau \equiv \log(T)$
For the $y$-axis, I want the inverse of a smooth sigmoid$^3$ function.
The <b>logistic</b> function is a personal favorite which shows up in logistic regression and statistical physics (including my <a href='https://sites.google.com/site/samkennerly/maths'>dissertation</a>): $$ \textrm{logistic}(x) = \frac{e^x}{1+e^x} = \frac{1}{1 + e^{-x}} = \tfrac{1}{2} + \tfrac{1}{2}\tanh\left(\tfrac{1}{2}x\right) $$ The inverse of the logistic function is the <b>logit</b> function: $$ \textrm{logit}(x) = \log \left(\frac{x}{1-x}\right) = \log(x) - \log(1-x) $$ Define a new $y$ coordinate to be the logit of RedPct: <b>Redness</b> $\rho \equiv \textrm{logit}(R) = \log(R) - \log(1-R)$ I don't have a rigorous justification for either of these choices - they just have an appropriate domain and range. The plot below shows the result of least-squares linear regression in these new coordinates: <small> 2] In NumPy, log() is the natural logarithm. Any other logarithm base would also work for our purposes. 3] A <b>sigmoid function</b> is bounded, differentiable, and monotonically increasing. By convention, the bounds are usually $[0,1]$. </small> End of explanation """ def logistic(s): return 1.0 / (1.0 + np.exp(-s)) TruckVotes['FancyFit'] = logistic(predicted_redness) redbluedots(TruckVotes,'Truckness','RedPct') plt.plot(TruckVotes['Truckness'],TruckVotes['FancyFit'],'ks-',linewidth=2) """ Explanation: That $r^2$ is a little more respectable. Transform the new prediction back to old coordinates: End of explanation """ show_error(TruckVotes['FancyFit'],TruckVotes['RedPct']) """ Explanation: <i>FancyFit</i> did a better job fitting the extreme values DC and WY, called 2 fewer states wrong, and had a better RMS error. (Its median error and "Within 10%" results were slightly worse, so it wasn't superior in every way.) 
End of explanation """ # Get 2012 data ObamaRomney = get_csv('ObamaRomney') Population2012 = get_csv('Population2012')['2012'] Trucks2012 = get_csv('Trucks2012') # Calculate Truckness and RedPct VotePct2012 = ObamaRomney.apply(lambda x: x / ObamaRomney['Total']) VotePct2012 = VotePct2012[['Obama','Romney','AllOthers']] RedPct2012 = VotePct2012['Romney'] / (VotePct2012['Romney'] + VotePct2012['Obama']) Truckness2012 = Trucks2012['Pickup'] / Population2012 # Align, sort, and check for bad data TruckVotes2012 = pd.concat([Truckness2012,RedPct2012,Population2012],axis=1,join='outer') TruckVotes2012.columns = ['Truckness','RedPct','Population'] TruckVotes2012.sort_values('Truckness',inplace=True) fBadRow = pd.isnull(TruckVotes2012).any(axis=1) print( TruckVotes2012.head() ) print( "%s of %s rows are missing data" % (fBadRow.sum(),len(TruckVotes)) ) # Make a predictor function calibrated to 2008 results. def predict_redness(truckness): x = np.log(truckness) y = 0.6516*x + 1.1727 return logistic(y) # Compare predicted results with actual 2012 results TruckVotes2012['FancyFit'] = predict_redness(TruckVotes2012['Truckness']) redbluedots(TruckVotes2012,'Truckness','RedPct') plt.plot(TruckVotes2012['Truckness'],TruckVotes2012['FancyFit'],'ks-',linewidth=2) """ Explanation: <i>FancyFit</i> is a simplified$^4$ <b>generalized linear model</b>. GLMs assume a transformed version of a linear model can predict the response variable. 
The idea is to choose an invertible <b>link function</b> g( ) and attempt to predict $g(y)$ instead of $y$ itself: $$ E[y] = g^{-1}( \alpha + \beta x ) $$ Using LogTruckness as a predictor and logit as a link function, $$ \textrm{logit}\big(E[R]\big) = \alpha + \beta \log(T) $$ or equivalently, defining $\gamma \equiv e^{-\alpha}$, $$ E[R] = \textrm{logistic}\left( \alpha + \beta \log(T) \right) = \frac{1}{1+\exp(-\alpha -\beta\log(T))} = \frac{1}{1+\gamma T^{-\beta}} = \frac{T^{\beta}}{T^{\beta} + \gamma} $$ Given a positive Truckness as input, this model "knows" that expected RedPct is bounded. Check what happens in extreme cases: $$ \lim_{T\to 0} E[R] = 0 \quad \textrm{and} \quad \lim_{T\to \infty} E[R] = 1 $$ <small> 4] A full GLM would assume a probability distribution for the error terms and optimize $\alpha$ and $\beta$ accordingly. For simplicity and laziness, I just used least-squares. </small> <h2>Test the model on 2012 data</h2> Load data from 2012, use the same values of $\alpha, \beta$, and see how well this model predicts the 2012 presidential election: End of explanation """ show_error(TruckVotes2012['FancyFit'],TruckVotes2012['RedPct']) """ Explanation: Calculate overall error and errors for each state: End of explanation """ show_error(TruckVotes['RedPct'],TruckVotes2012['RedPct']) """ Explanation: Compare with a <b>null model</b> which "predicts" 2012 results by assuming they will be exactly the same as 2008 results: End of explanation """
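The link-function algebra behind the model is easy to sanity-check numerically. A small self-contained NumPy sketch — the coefficients are the 2008 values hard-coded in predict_redness above, and everything else is just the logistic/logit identities:

```python
import numpy as np

def logistic(s):
    return 1.0 / (1.0 + np.exp(-s))

def logit(s):
    return np.log(s) - np.log(1.0 - s)

# logit and logistic should undo each other anywhere on (0, 1)
p = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
roundtrip = logistic(logit(p))
print(np.allclose(roundtrip, p))  # True

# E[R] = logistic(alpha + beta*log(T)) is algebraically the same as
# T**beta / (T**beta + gamma) with gamma = exp(-alpha); check with the
# 2008 coefficients used in predict_redness (alpha=1.1727, beta=0.6516)
alpha, beta = 1.1727, 0.6516
T = np.array([0.01, 0.05, 0.2])
lhs = logistic(alpha + beta * np.log(T))
rhs = T**beta / (T**beta + np.exp(-alpha))
print(np.allclose(lhs, rhs))  # True
```

The second check is exactly the $E[R] = T^{\beta}/(T^{\beta} + \gamma)$ identity derived in the GLM explanation.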
anhiga/poliastro
docs/source/examples/Propagation using Cowell's formulation.ipynb
mit
import numpy as np from astropy import units as u from matplotlib import ticker from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D plt.ion() from scipy.integrate import ode from poliastro.bodies import Earth from poliastro.twobody import Orbit from poliastro.examples import iss from poliastro.twobody.propagation import func_twobody from poliastro.util import norm from ipywidgets.widgets import interact, fixed def state_to_vector(ss): r, v = ss.rv() x, y, z = r.to(u.km).value vx, vy, vz = v.to(u.km / u.s).value return np.array([x, y, z, vx, vy, vz]) u0 = state_to_vector(iss) u0 t = np.linspace(0, 10 * iss.period, 500).to(u.s).value t[:10] dt = t[1] - t[0] dt k = Earth.k.to(u.km**3 / u.s**2).value """ Explanation: Cowell's formulation For cases where we only study the gravitational forces, solving the Kepler's equation is enough to propagate the orbit forward in time. However, when we want to take perturbations that deviate from Keplerian forces into account, we need a more complex method to solve our initial value problem: one of them is Cowell's formulation. In this formulation we write the two body differential equation separating the Keplerian and the perturbation accelerations: $$\ddot{\mathbb{r}} = -\frac{\mu}{|\mathbb{r}|^3} \mathbb{r} + \mathbb{a}_d$$ <div class="alert alert-info">For an in-depth exploration of this topic, still to be integrated in poliastro, check out https://github.com/Juanlu001/pfc-uc3m</div> First example Let's setup a very simple example with constant acceleration to visualize the effects on the orbit. 
End of explanation """ def constant_accel_factory(accel): def constant_accel(t0, u, k): v = u[3:] norm_v = (v[0]**2 + v[1]**2 + v[2]**2)**.5 return accel * v / norm_v return constant_accel constant_accel_factory(accel=1e-5)(t[0], u0, k) help(func_twobody) """ Explanation: To provide an acceleration depending on an extra parameter, we can use closures like this one: End of explanation """ res = np.zeros((t.size, 6)) res[0] = u0 ii = 1 accel = 1e-5 rr = ode(func_twobody).set_integrator('dop853') # All parameters by default rr.set_initial_value(u0, t[0]) rr.set_f_params(k, constant_accel_factory(accel)) while rr.successful() and rr.t + dt < t[-1]: rr.integrate(rr.t + dt) res[ii] = rr.y ii += 1 res[:5] """ Explanation: Now we setup the integrator manually using scipy.integrate.ode. We cannot provide the Jacobian since we don't know the form of the acceleration in advance. End of explanation """ fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection='3d') ax.plot(*res[:, :3].T) ax.view_init(14, 70) """ Explanation: And we plot the results: End of explanation """ from poliastro.twobody.propagation import cowell def plot_iss(thrust=0.1, mass=2000.): r0, v0 = iss.rv() k = iss.attractor.k t = np.linspace(0, 10 * iss.period, 500).to(u.s).value u0 = state_to_vector(iss) res = np.zeros((t.size, 6)) res[0] = u0 accel = thrust / mass # Perform the whole integration r0 = r0.to(u.km).value v0 = v0.to(u.km / u.s).value k = k.to(u.km**3 / u.s**2).value ad = constant_accel_factory(accel) r, v = r0, v0 for ii in range(1, len(t)): r, v = cowell(k, r, v, t[ii] - t[ii - 1], ad=ad) x, y, z = r vx, vy, vz = v res[ii] = [x, y, z, vx, vy, vz] fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111, projection='3d') ax.set_xlim(-20e3, 20e3) ax.set_ylim(-20e3, 20e3) ax.set_zlim(-20e3, 20e3) ax.view_init(14, 70) return ax.plot(*res[:, :3].T) interact(plot_iss, thrust=(0.0, 0.2, 0.001), mass=fixed(2000.)) """ Explanation: Interactivity This is the last time we used 
scipy.integrate.ode directly. Instead, we can now import a convenient function from poliastro: End of explanation """ rtol = 1e-13 full_periods = 2 u0 = state_to_vector(iss) tf = ((2 * full_periods + 1) * iss.period / 2).to(u.s).value u0, tf iss_f_kep = iss.propagate(tf * u.s, rtol=1e-18) r0, v0 = iss.rv() r, v = cowell(k, r0.to(u.km).value, v0.to(u.km / u.s).value, tf, rtol=rtol) iss_f_num = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, iss.epoch + tf * u.s) iss_f_num.r, iss_f_kep.r assert np.allclose(iss_f_num.r, iss_f_kep.r, rtol=rtol, atol=1e-08 * u.km) assert np.allclose(iss_f_num.v, iss_f_kep.v, rtol=rtol, atol=1e-08 * u.km / u.s) #assert np.allclose(iss_f_num.a, iss_f_kep.a, rtol=rtol, atol=1e-08 * u.km) #assert np.allclose(iss_f_num.ecc, iss_f_kep.ecc, rtol=rtol) #assert np.allclose(iss_f_num.inc, iss_f_kep.inc, rtol=rtol, atol=1e-08 * u.rad) #assert np.allclose(iss_f_num.raan, iss_f_kep.raan, rtol=rtol, atol=1e-08 * u.rad) #assert np.allclose(iss_f_num.argp, iss_f_kep.argp, rtol=rtol, atol=1e-08 * u.rad) #assert np.allclose(iss_f_num.nu, iss_f_kep.nu, rtol=rtol, atol=1e-08 * u.rad) """ Explanation: Error checking End of explanation """ u0 = state_to_vector(iss) full_periods = 4 tof_vector = np.linspace(0, ((2 * full_periods + 1) * iss.period / 2).to(u.s).value, num=100) rtol_vector = np.logspace(-3, -12, num=30) res_array = np.zeros((rtol_vector.size, tof_vector.size)) for jj, tof in enumerate(tof_vector): rf, vf = iss.propagate(tof * u.s, rtol=1e-12).rv() for ii, rtol in enumerate(rtol_vector): rr = ode(func_twobody).set_integrator('dop853', rtol=rtol, nsteps=1000) rr.set_initial_value(u0, 0.0) rr.set_f_params(k, constant_accel_factory(0.0)) # Zero acceleration rr.integrate(rr.t + tof) if rr.successful(): uf = rr.y r, v = uf[:3] * u.km, uf[3:] * u.km / u.s res = max(norm((r - rf) / rf), norm((v - vf) / vf)) else: res = np.nan res_array[ii, jj] = res fig, ax = plt.subplots(figsize=(16, 6)) xx, yy = np.meshgrid(tof_vector, rtol_vector) cs = 
ax.contourf(xx, yy, res_array, levels=np.logspace(-12, -1, num=12), locator=ticker.LogLocator(), cmap=plt.cm.Spectral_r)
fig.colorbar(cs)

for nn in range(full_periods + 1):
    lf = ax.axvline(nn * iss.period.to(u.s).value, color='k', ls='-')
    lh = ax.axvline((2 * nn + 1) * iss.period.to(u.s).value / 2, color='k', ls='--')

ax.set_yscale('log')

ax.set_xlabel("Time of flight (s)")
ax.set_ylabel("Relative tolerance")
ax.set_title("Maximum relative difference")
ax.legend((lf, lh), ("Full period", "Half period"))
"""
Explanation: Too bad I cannot access the internal state of the solver. I will have to do it in a blackbox way.
End of explanation
"""
ss = Orbit.circular(Earth, 500 * u.km)
tof = 20 * ss.period

ad = constant_accel_factory(1e-7)

r0, v0 = ss.rv()
r, v = cowell(k, r0.to(u.km).value, v0.to(u.km / u.s).value, tof.to(u.s).value, ad=ad)

# Use the actual time of flight for the new epoch
# (the original cell reused rr.t, a leftover from the earlier integrator runs)
ss_final = Orbit.from_vectors(Earth, r * u.km, v * u.km / u.s, ss.epoch + tof)

da_a0 = (ss_final.a - ss.a) / ss.a
da_a0

dv_v0 = abs(norm(ss_final.v) - norm(ss.v)) / norm(ss.v)
2 * dv_v0

np.allclose(da_a0, 2 * dv_v0, rtol=1e-2)

dv = abs(norm(ss_final.v) - norm(ss.v))
dv

# Compare against the acceleration actually applied (1e-7 km/s^2) over tof
# (not the stale accel and t variables left over from the first notebook example)
accel_dt = 1e-7 * u.km / u.s**2 * tof.to(u.s)
accel_dt

np.allclose(dv, accel_dt, rtol=1e-2, atol=1e-8 * u.km / u.s)
"""
Explanation: Numerical validation
According to [Edelbaum, 1961], a coplanar semimajor-axis change with tangential thrust is defined by:
$$\frac{\operatorname{d}\!a}{a_0} = 2 \frac{F}{m V_0}\operatorname{d}\!t, \qquad \frac{\Delta{V}}{V_0} = \frac{1}{2} \frac{\Delta{a}}{a_0}$$
So let's create a new circular orbit and perform the necessary checks, assuming constant mass and thrust (i.e. constant acceleration):
End of explanation
"""
ss_final.ecc
"""
Explanation: This means we successfully validated the model against an extremely simple orbit transfer with an approximate analytical solution. Notice that the final eccentricity, as originally noticed by Edelbaum, is nonzero:
End of explanation
"""
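The Edelbaum relation used in the validation can also be checked with plain arithmetic, independent of poliastro. A rough sketch — the constants are standard textbook values, and the scenario mirrors the 500 km circular orbit with constant 1e-7 km/s² tangential acceleration over 20 periods:

```python
import math

# Edelbaum's low-thrust, tangential-acceleration estimate for a circular orbit:
#   da/a0 = 2*(F/(m*V0))*dt   and   dV/V0 = 0.5 * da/a0
mu = 398600.4418            # km^3/s^2, Earth's gravitational parameter
a0 = 6378.0 + 500.0         # km, semimajor axis of a 500 km circular orbit
v0 = math.sqrt(mu / a0)     # km/s, circular orbital speed
accel = 1e-7                # km/s^2, constant tangential acceleration
period = 2 * math.pi * math.sqrt(a0**3 / mu)
dt = 20 * period            # 20 orbital periods, matching the validation run

dv = accel * dt             # accumulated delta-v for constant acceleration
da_over_a0 = 2 * dv / v0    # Edelbaum's first-order prediction for da/a0
print(dv, da_over_a0)
```

These back-of-the-envelope numbers are what the `np.allclose(..., rtol=1e-2)` checks above compare the propagated orbit against.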
TheMitchWorksPro/DataTech_Playground
PY_Basics/Walkthroughs/TMWP_Num_Seq_As_Num_Experiment.ipynb
mit
from __future__ import print_function # only need this line for Python 2.7 ... by importing print() we also get support for unpacking within print # * for unpacking is not recognized in this context in Python 2.7 normally # arguments on print and behavior of print in this example is also Python 3.x which "from _future__" is importing # assume input of 9: N = 9 print(*range(N+1), sep='', end='') # this larger number test is just for comparison with what follows in the attempt to do this mathematically N = 113 print(*range(N+1), sep='', end='') """ Explanation: <div align="right">Python [conda env:PY27_Test]</div> <div align="right"></div> Convert Number To String Sequence BookMarks This content is organized almost like a story. Start to finish it demonstrates a bunch of related concepts in Python programming. For later reference, these bookmarks jump to specific sections: Loop Driven Solution - Solve the problem with loops and incrementers map(), reduce() and lambda Solutions - Solution code using map(), reduce(), and lambda map() and lambda() demo and explanation - Explanations of map() and lambda from map() solution reduce() explanation - explanation of reduce() solution Returning to the Original Problem of Eliminating the Loops - keeps all original abilities like "showWork" but eliminates loops from the code. The Problem This problem was inspired by www.hackerrank.com. The original problem was to take any number N (input by the user), and output a "123...N" result without using any strings to do it. The most elegant solution accepted on www.hackerrank.com is to utilize Python 3 print() function syntax in a single statement that prints range(N) by unpacking it first with an asterick: *. End of explanation """ '''Initial experiment: use powers of 10, but start from the end so that we get: 1 * 10**N + 2 * 10**N-1 + ... 
This idea fails, however, as soon as numbers get bigger than 9.
This example outputs the source list generated by range() ahead of the answer just to show it.
'''

def num_to_seqStr(N, showList = True):
    lst = range(1,N+1)
    answer = sum([i*10**(N-i) for i in lst])
    if showList == True:
        print(lst)
    return answer

for i in range(11):
    print("Answer: %s" %num_to_seqStr(i))

"""
Explanation: Note: At 10, it all goes wrong and the number ends in 7900 instead of 78910. This is because the earlier numbers were not multiplied by sufficient powers of 10 for the numbers to sum together correctly into the final answer.
<a id="loopdrivensol" name="loopdrivensol"></a>
Solving It With Loops and Incrementers
There are better ways to do all of this, but using loops and incrementers to get the right amount of zeroes for each power of ten encountered establishes a simple baseline for what the program internally needs to do.
End of explanation
"""

def findTens(N):  # find the powers of 10 inside a number
    incrementer = 1
    while True:
        if N - 10**incrementer < 0:
            break
        else:
            incrementer += 1
        if incrementer == 100:
            break  # debugging condition
    return incrementer - 1

findTensTests = [findTens(0), findTens(7), findTens(112), findTens(13), findTens(1009)]
findTensTests

def create_seqNum(N, reverse=False, showWork=False, returnDescr=False, divLength=100):
    '''create_seqNum() --> input N, and get back a number built from the sequence of 1234...N
       Arguments: reverse=True to get the sequence in reverse,
                  showWork=True to see numbers that add up to final,
                  returnDescr=True to print the answer in a sentence as well as returning it as a number.'''
    num = 0
    tensIncr = 0
    answer = 0
    Ntens = findTens(N)
    modifier = 0  # modifies counter when increment of 10 occurs

    if reverse == True:  # create range() inputs
        rstart = 1
        rend = N+1
        rinc = 1
    else:
        rstart = N
        rend = 0
        rinc = -1

    for i in range(rstart, rend, rinc):
        itens = findTens(i)
        num = i
* 10**tensIncr # how many zeroes do we need on the end of each num? tensIncr += 1 + itens pad = (Ntens - itens) if showWork == True: print(("For %d" + " "*pad + " Add: %d") %(i, num)) answer += num if showWork == True: print("#"*divLength) if showWork == True or returnDescr == True: print("Answer: %d" %answer) print("#"*divLength) return answer print(create_seqNum.__doc__) for i in [1, 5, 9, 10, 11, 13, 98, 99, 100, 101, 102, 107, 1012]: create_seqNum(i, reverse=True, returnDescr=True) create_seqNum(i, returnDescr=True) create_seqNum(102, showWork=True) """ Explanation: Note: At 10, it all goes wrong and the number ends in 7900 instead of 78910. This is because the earlier numbers were not multiplied by sufficient powers of 10 so the numbers will sum together correctly into the final answer. <a id="loopdrivensol" name="loopdrivensol"></a> Solving It With Loops and Incrementers There are better ways to do all of this, but using loops and incrementers to get the right amount of zeroes for each power of ten encountered establishes a simple baseline for what the program internally needs to do. 
End of explanation """ import math # needed for log functions def powOfTens(N): # find the powers of 10 inside a number return int(math.log10(N)) # converts decimal to whole integer # int() rounds down no matter how big the decimal # note: math.log(x, 10) produces floating point rounding errors # output is of form: (oritinalNumber, powersOfTens) countTensTest = [(1, powOfTens(1)), (7, powOfTens(7)), (112, powOfTens(112)), (13, powOfTens(13)), (1009, powOfTens(1009))] countTensTest listofints = [1,2,3,9,10,11,12,19,99,100,101,102, 999, 1000, 1001, 50102030] for i in listofints: print(i, powOfTens(i), math.log10(i)) # show what we are really calculating: # (original, function result, un-modified log) """ Explanation: log() To The Rescue for Part Of The Problem As suggested on Stack Overflow by userID: eruciform, taking the base-10 logarithm of a number should replace the loop logic used in the earlier "findTens()" function. This helps us calculate the powers of 10 in a number to use to help calculate how many zeroes will be needed on each term. This allows the terms to add together into a final that displays them all. End of explanation """ # source: eruciform on StackOverflow # note: powOfTens(x) is just int(math.log10(x)) import math # to get math.log listofints = [1,2,3,9,10,11,12,19,99,100,101,102, 999, 1000, 1001, 50102030] n = reduce(lambda x,y:[x[0]*(10**(y[1]+1))+y[0],0],map(lambda x:[x,powOfTens(x)], listofints))[0] # to do this in one line with no external functions, replace powOfTens(x) w/: # int(math.log10(x)) print(n) # source: eruciform on StackOverflow # we can also more simply crack this problem with just reduce() n = reduce(lambda x,y:x*(10**(powOfTens(y)+1))+y,listofints) # to do this in one line with no external functions, replace powOfTens(x) w/: # int(math.log10(y)) print(n) """ Explanation: <a id="eruciform_start" name="eruciform_start"></a> Using map(), reduce() and lambda() To Solve It This code addresses the underlying calculations. 
It initially tests the code using a number sequence that is not the full 123... range progression of the original problem, but we can tells from the tests that these approaches work. This code is also designed to quickly solve the problem in as few lines as possible (unlike the loop which was also designed to give options on how to show its inner workings). End of explanation """ listofints = [1,2,3,9,10,11,12,19,99,100,101,102, 999, 1000, 1001, 50102030] map(lambda x:[x,int(math.log10(x))], listofints) """ Explanation: Deconstructing the code from the inside out, this is what it is doing. <a id="maplambdareduce" name="maplambdareduce"></a> map() example with reduce() and lambda() Starting with the first more complex example using map(), lambda creates an anonymous function that "maps" to all the elements in the list, in this case listofints. The anonymous lambda function applies int(math.log10(x)) to a single x, and then map() "maps" each x in listofints to the lambda function. End of explanation """ reduce(lambda x,y:[x[0]*(10**(y[1]+1))+y[0],0],map(lambda x:[x,int(math.log10(x))], listofints)) """ Explanation: Each sublist now contains [original_number, number_of_tens_in_number]. reduce() "reduces" the list to a single number by applying the lambda function fed into it. It takes each term in the list as function(n1 , n2), then the result goes back into the function as function(result, n3), then function(result2, n4) ... and so on until the entire list is consumed and one answer is spit back. End of explanation """ # asume this is a subset from a list like this [ ... 999, 1001, 1002, ...] 
# after mapping it with powOfTens() or int(math.log10(x)), the first two terms in the sample would look like this: testVal = [[999, 2], [1001, 3]] # and it would continue with more terms testFun = lambda x,y:[x[0]*(10**(y[1]+1))+y[0],0] testFun(testVal[0], testVal[1]) # then, as reduce works its way up the list, the answer and the next term feed in like this: testVal = [[9991001, 2], [1002, 3]] testFun(testVal[0], testVal[1]) """ Explanation: The lambda function being used to reduce it, grabs the first term of of each sublist and multiplies by 10 to the power of the second term of the next sublist + 1 + the next term. The original format of the sublist is preserved by wrapping the whole thing in [] to make it a list that is then used to replace the original [term, number_powers_of_tens] sublist in listofints. This test function can better show what is happening: End of explanation """ listofints = [1,2,3,9,10,11,12,19,99,100,101,102, 999, 1000, 1001, 50102030] n = reduce(lambda x,y:x*(10**(int(math.log10(y))+1))+y,listofints) print(n) """ Explanation: Extracting just the first term is why the whole thing ended with [0] in the original code: <pre> n = reduce(lambda x,y:[x[0]*(10**(y[1]+1))+y[0],0],map(lambda x:[x,int(math.log(x,10))], listofints))[0]</pre> The original code: End of explanation """ # for comparison, here is the lambda logic split in two functions from the earlier example testFun_r = lambda x,y:[x[0]*(10**(y[1]+1))+y[0],0] # used in outer () to feed into reduce() testFun_m = lambda x:[x,int(math.log10(x))] # used in inner () to feed into map() # for this idea, only one lambda does it all, feeding directly into reduce(): testFun2 = lambda x,y:x*(10**(int(math.log10(y))+1))+y # test it: testVal = [999, 1001] testFun2(testVal[0], testVal[1]) # now reduce() applies it across the whole original list: function(n1, n2) = result1, function(result1, n3) = result2 ... # until it produces a final answer to return ("reducing the list down to one number"). 
listofints = [1,2,3,10,12,19,99,100,101,50102030] n = reduce(lambda x,y:x*(10**(int(math.log10(y))+1))+y,listofints) print(n) 1*10**(int(math.log10(2))+1)+2 """ Explanation: <a id="lambdareduce" name="lambdareduce"></a> Simpler reduce() and lambda() Example When the code is simplified to just use use reduce(), now only one lambda function is able to set up the power-of-tens calculation for each number and merge it all into terms needed for reduce(). Picking it apart from the inside, just the lambda logic is demonstrated here: End of explanation """ # this function presented and tested in earlier sections # repeated here (unchanged) for conveninece: import math # needed for log functions def powOfTens(N): # find the powers of 10 inside a number if N < 1: N = 1 # less than 1 would throw an error, this is a work-around based on # how this function is used in the code return int(math.log10(N)) # converts decimal to whole integer # int() rounds down no matter how big the decimal # note: math.log(x, 10) produces floating point rounding errors from __future__ import print_function def create_seqNumber(N, reverse=False, showWork=False, returnDescr=False, divLength=100): '''create_seqNumber() --> iput N, and get back a number built from the sequence of 1234...N Arguments: reverse=True to get the sequence in revers, showWork=True to see numbers that add up to final, returnDescr=True to print the answer in a sentence as well as returning it as a number.''' num = 0 tensIncr = 0 answer = 0 if isinstance(N, list): if reverse == False: listofints = N else: listofints = N[::-1] elif isinstance(N, int): Ntens = powOfTens(N) if reverse == False: # create range builder inputs rstart = 1 rend = N+1 rinc = 1 else: rstart = N rend = 0 rinc = -1 listofints = range(rstart, rend, rinc) else: print(type(N)) raise TypeError("Error: for create_seqNumber(N), N must be a list or an integer.") answer = reduce(lambda x,y:x*(10**(powOfTens(y)+1))+y,listofints) if showWork == True: print("Show 
Work:") print("#"*divLength) worklist = [reduce(lambda x,y:x*(10**(powOfTens(y)+1))+y, listofints[0:2])] worklist.append([reduce(lambda a,b:a*(10**(powOfTens(b)+1))+b, [worklist[-1], x]) for ind, x in enumerate(listofints[2:])]) worklist = [worklist[0]] + worklist[1] # print("worklist: %s" %worklist) # print("worklist[-1]", worklist[-1]) NpOfT = powOfTens(worklist[-1]) NpOfT2 = powOfTens(len(worklist)-1) [(x, print(("%d)" + " "*(NpOfT2 - powOfTens(ind)) + " "*(NpOfT - powOfTens(x) + 1) + "%s") %(ind, x)))[0] for ind, x in enumerate(worklist)] print("#"*divLength) if showWork == True or returnDescr == True: print("Answer: %d" %answer) print("#"*divLength) return answer """ Explanation: <a id="originalProblem" name="originalProblem"></a> Returning to the original Problem and Eliminating The Loops After exploring different options, if we are to preserve all the functionality but lose the loops, a solution is chosen that combines reduce() with list comprehensions. Some notes on this solution: - this version supports the original 123...N where user passes in N - it also supports the listofints used in the quick solutions above - showWork now shows what reduce is fusing together to form the final output - in accepting either N or listofints it tests the input argument for type. This is not considered good practice, but is kept here to illustrate how to do it. Coding sites generally recommend code be refactored to avoid using type testing within its operation. 
End of explanation """ create_seqNumber(15, showWork=False, returnDescr=True) create_seqNumber(15, reverse=True, showWork=False, returnDescr=True) create_seqNumber(102, reverse=False, showWork=False, returnDescr=True) for i in [1, 5, 9, 10, 11, 13, 98, 99, 100, 101, 102, 107, 1012]: # returnDescr = False by default print("F: %s" %(create_seqNumber(i, reverse=False))) print("-----"*25) print("R: %s" %(create_seqNumber(i, reverse=True))) print("#####"*25) tstLst = [1,2,3,9,10,11,12,59,99,100,101,50102030] print("F: %s" %(create_seqNumber(tstLst, reverse=False))) print("-----"*25) print("R: %s" %(create_seqNumber(tstLst, reverse=True))) print("#####"*25) """ Explanation: Testing with showWork Off These test cases show functionality that supports forward and reverse lists generated from a single number, or feeding in a custom list and spitting it back in forward or reverse. End of explanation """ create_seqNumber(3, reverse=False, showWork=True, returnDescr=True) create_seqNumber(13, reverse=True, showWork=True, returnDescr=True) tstLst = [1,2,3,9,10,11,12,59,99,100,101,50102030] create_seqNumber(tstLst, reverse=False, showWork=True, returnDescr=True) create_seqNumber(1003, reverse=False, showWork=True, returnDescr=True) create_seqNumber('15', showWork=False, returnDescr=True) # demo of the TypeError the code raises for wrong input """ Explanation: Testing with showWork On The code to do this is a bit clumsy and inefficient. Because the output is lengthy, only a few test cases are shown. End of explanation """
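As a sanity check on the reduce() approach above, here is a minimal, self-contained sketch — not part of the notebook, and digits()/concat_ints() are hypothetical helper names — that performs the same digit-wise fusion and cross-checks it against plain string joining:

```python
from functools import reduce  # reduce() is a builtin in Python 2 but must be imported in Python 3
import math

def digits(n):
    # number of decimal digits in a positive integer n
    return int(math.log10(n)) + 1

def concat_ints(ints):
    # shift the accumulator left by the width of the next number, then add it
    return reduce(lambda acc, y: acc * 10 ** digits(y) + y, ints)

print(concat_ints([1, 2, 3, 10, 12]))            # 1231012
print(concat_ints([1, 2, 3, 10, 12]) ==
      int("".join(map(str, [1, 2, 3, 10, 12])))) # True
```

The string-join form is simpler and immune to the floating-point edge cases of math.log10, which makes it a handy cross-check for the arithmetic version.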
hashiprobr/redes-sociais
encontro10.ipynb
gpl-3.0
import sys sys.path.append('..') import socnet as sn """ Explanation: Meeting 10: Structural Holes The assignment statement for Writing 4 continues throughout this notebook. Pay attention to the parts in bold. Importing the library: End of explanation """ sn.node_size = 10 sn.node_color = (0, 0, 0) sn.edge_width = 1 """ Explanation: Configuring the library: End of explanation """ g = sn.generate_complete_graph(15) sn.show_graph(g) """ Explanation: Generating a complete graph: End of explanation """ from random import shuffle def randomize_types(g, num_openers, num_closers, num_chummies): if num_openers + num_closers + num_chummies != g.number_of_nodes(): raise Exception('the sum of the types is not equal to the number of nodes') nodes = g.nodes() shuffle(nodes) for _ in range(num_openers): g.node[nodes.pop()]['type'] = 'opener' for _ in range(num_closers): g.node[nodes.pop()]['type'] = 'closer' for _ in range(num_chummies): g.node[nodes.pop()]['type'] = 'chummy' randomize_types(g, 15, 0, 0) """ Explanation: This will be the community graph. 
Randomly assigning types to the nodes: End of explanation """ from random import random def randomize_existences(g): for n, m in g.edges(): if random() < 0.5: g.edge[n][m]['exists'] = 0 else: g.edge[n][m]['exists'] = 1 randomize_existences(g) """ Explanation: Randomly assigning existences to the edges: End of explanation """ def convert_types_and_existences_to_colors(g): for n in g.nodes(): if g.node[n]['type'] == 'opener': g.node[n]['color'] = (255, 0, 0) elif g.node[n]['type'] == 'closer': g.node[n]['color'] = (0, 255, 0) else: g.node[n]['color'] = (0, 0, 255) for n, m in g.edges(): if g.edge[n][m]['exists'] == 0: g.edge[n][m]['color'] = (192, 192, 192) else: g.edge[n][m]['color'] = (0, 0, 0) convert_types_and_existences_to_colors(g) sn.show_graph(g) """ Explanation: Converting types and existences into colors for visualization: End of explanation """ def neighbors(g, n): return [m for m in g.neighbors(n) if g.edge[n][m]['exists'] == 1] # Regardless of type, the constraint is always between 0 and 2. def calculate_constraint(g, n): neighbors_n = neighbors(g, n) degree_n = len(neighbors_n) # All types avoid isolation. The constraint is maximal in that case. if degree_n == 0: return 2 # For a chummy, the constraint is the inverse of the degree. A small # normalization is needed to guarantee it lies between 0 and 2. if g.node[n]['type'] == 'chummy': return 2 * (g.number_of_nodes() - degree_n - 1) / (g.number_of_nodes() - 1) # Burt's formula. constraint = 0 for m in neighbors_n: neighbors_m = neighbors(g, m) degree_m = len(neighbors_m) sub_constraint = 1 / degree_n for l in neighbors_m: if n != l and g.edge[n][l]['exists'] == 1: sub_constraint += (1 / degree_n) * (1 / degree_m) constraint += sub_constraint ** 2 # For a closer, the constraint is the inverse of Burt's formula. if g.node[n]['type'] == 'closer': return 2 - constraint # For an opener, the constraint is Burt's formula. 
return constraint """ Explanation: Defining a constraint measure: End of explanation """ from random import choice def equals(a, b): return abs(a - b) < 0.000000001 def invert_existence(g, n, m): g.edge[n][m]['exists'] = 1 - g.edge[n][m]['exists'] def update_existences(g): # For each node n... for n in g.nodes(): # Compute the constraint of n. cn = calculate_constraint(g, n) # Initialize the gains dictionary. g.node[n]['gains'] = {} # For each node m different from n... for m in g.nodes(): if n != m: # Compute the constraint of m. cm = calculate_constraint(g, m) # Temporarily invert the existence of (n, m) to see what happens. invert_existence(g, n, m) # If the inversion represents an addition and it does not make the # constraint of m decrease, then the gain is zero because that inversion # is not possible: adding is only possible if both nodes want it. if g.edge[n][m]['exists'] == 1 and calculate_constraint(g, m) >= cm: g.node[n]['gains'][m] = 0 # Otherwise, the gain is simply the difference of the constraints. else: g.node[n]['gains'][m] = cn - calculate_constraint(g, n) # Restore the original existence of (n, m), since the inversion was temporary. invert_existence(g, n, m) # Get the largest gain of n. g.node[n]['max_gain'] = max(g.node[n]['gains'].values()) # Get the largest gain among all nodes. max_gain = max([g.node[n]['max_gain'] for n in g.nodes()]) # If the largest gain is not positive, return False indicating the graph has stabilized. if max_gain <= 0: return False # Otherwise, randomly pick an edge matching the largest gain and invert its existence. n = choice([n for n in g.nodes() if equals(g.node[n]['max_gain'], max_gain)]) m = choice([m for m in g.node[n]['gains'] if equals(g.node[n]['gains'][m], max_gain)]) invert_existence(g, n, m) # Return True indicating the graph has not yet stabilized. 
return True """ Explanation: Defining an existence update: End of explanation """ from math import inf from statistics import stdev from queue import Queue def calculate_variables(g, verbose=False): # Create a copy of the graph in which the edges really exist # or do not exist. This makes the calculations easier. gc = g.copy() for n, m in g.edges(): if g.edge[n][m]['exists'] == 0: gc.remove_edge(n, m) # Number of edges. (density) num_edges = gc.number_of_edges() if verbose: print('number of edges:', num_edges) # Number of components. (fragmentation) for n in gc.nodes(): gc.node[n]['label'] = 0 label = 0 q = Queue() for s in gc.nodes(): if gc.node[s]['label'] == 0: label += 1 gc.node[s]['label'] = label q.put(s) while not q.empty(): n = q.get() for m in gc.neighbors(n): if gc.node[m]['label'] == 0: gc.node[m]['label'] = label q.put(m) num_components = label if verbose: print('number of components:', num_components) # Standard deviation of component sizes. (fragmentation) sizes = {label: 0 for label in range(1, num_components + 1)} for n in gc.nodes(): sizes[gc.node[n]['label']] += 1 if num_components == 1: dev_components = 0 else: dev_components = stdev(sizes.values()) if verbose: print('std. dev. of component sizes: {:05.2f}\n'.format(dev_components)) # Standard deviation of betweenness. (inequality) # Mean betweenness per type. 
# (which profiles became central) sn.build_betweenness(gc) betweenness = [] mean_betweenness = { 'closer': 0, 'opener': 0, 'chummy': 0, } for n in gc.nodes(): betweenness.append(gc.node[n]['theoretical_betweenness']) mean_betweenness[gc.node[n]['type']] += betweenness[-1] if verbose: print('betweenness of node {:2} ({}): {:05.2f}'.format(n, gc.node[n]['type'], betweenness[-1])) dev_betweenness = stdev(betweenness) for key in mean_betweenness: length = len([n for n in gc.nodes() if gc.node[n]['type'] == key]) if length == 0: mean_betweenness[key] = 0 else: mean_betweenness[key] /= length if verbose: print('\nstd. dev. of betweenness: {:05.2f}\n'.format(dev_betweenness)) for key, value in mean_betweenness.items(): print('mean betweenness of {}: {:05.2f}'.format(key, value)) return num_edges, num_components, dev_components, dev_betweenness, mean_betweenness """ Explanation: Defining a calculator for the variables. End of explanation """ TIMES = 25 """ Explanation: Simulating, several times, the process in which the nodes invert edge existences until they can no longer decrease their constraint. 
End of explanation """ num_openers = 15 num_closers = 0 num_chummies = 0 mean_num_edges = 0 mean_num_components = 0 mean_dev_components = 0 mean_dev_betweenness = 0 mean_mean_betweenness = { 'opener': 0, 'closer': 0, 'chummy': 0, } for _ in range(TIMES): randomize_types(g, num_openers, num_closers, num_chummies) randomize_existences(g) while update_existences(g): pass num_edges, num_components, dev_components, dev_betweenness, mean_betweenness = calculate_variables(g) mean_num_edges += num_edges mean_num_components += num_components mean_dev_components += dev_components mean_dev_betweenness += dev_betweenness for key, value in mean_betweenness.items(): mean_mean_betweenness[key] += value mean_num_edges /= TIMES mean_num_components /= TIMES mean_dev_components /= TIMES mean_dev_betweenness /= TIMES for key in mean_mean_betweenness: mean_mean_betweenness[key] /= TIMES print('mean number of edges: {:05.2f}'.format(mean_num_edges)) print('mean number of components: {:05.2f}'.format(mean_num_components)) print('mean std. dev. of component sizes: {:05.2f}'.format(mean_dev_components)) print('mean std. dev. of betweenness: {:05.2f}'.format(mean_dev_betweenness)) for key, value in mean_mean_betweenness.items(): print('mean of the mean betweenness of {}: {:05.2f}'.format(key, value)) """ Explanation: At the end of the simulations, the mean of each variable is printed. 
End of explanation """ def update_positions(g, invert=False): if invert: for n, m in g.edges(): g.edge[n][m]['notexists'] = 1 - g.edge[n][m]['exists'] sn.update_positions(g, 'notexists') else: sn.update_positions(g, 'exists') def snapshot(g, frames): convert_types_and_existences_to_colors(g) frame = sn.generate_frame(g) frames.append(frame) frames = [] randomize_types(g, num_openers, num_closers, num_chummies) randomize_existences(g) sn.randomize_positions(g) snapshot(g, frames) while update_existences(g): update_positions(g, False) snapshot(g, frames) sn.show_animation(frames) _, _, _, _, _ = calculate_variables(g, verbose=True) """ Explanation: To gain insight into what is happening, don't forget to examine the animated version of the simulation! End of explanation """
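The per-type logic above wraps Burt's measure with inversions for closers and chummies; stripped of the socnet scaffolding, the core formula by itself can be sketched in plain Python (burt_constraint() is a hypothetical standalone helper, shown only to make the closed-triad vs. broker contrast concrete):

```python
def burt_constraint(adj, n):
    # Burt's constraint for node n of an undirected graph given as an
    # adjacency dict {node: set_of_neighbors}. p_ij = 1/degree(i) assumes
    # equal investment in every neighbor, as in calculate_constraint above.
    neighbors = adj[n]
    if not neighbors:
        return float('inf')  # isolated node: treat constraint as maximal
    p = 1.0 / len(neighbors)
    total = 0.0
    for m in neighbors:
        # direct investment in m plus indirect investment through shared contacts q
        indirect = sum(p * (1.0 / len(adj[q]))
                       for q in neighbors if q != m and q in adj[m])
        total += (p + indirect) ** 2
    return total

# Closed triad: every contact of 'a' also knows the other one -> high constraint.
triad = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
# Broker: 'd' bridges a structural hole between 'e' and 'f' -> low constraint.
broker = {'d': {'e', 'f'}, 'e': {'d'}, 'f': {'d'}}
print(burt_constraint(triad, 'a'))   # 1.125
print(burt_constraint(broker, 'd'))  # 0.5
```

This is exactly the contrast the simulation exploits: openers seek the broker position (low constraint), closers seek the closed triad (high constraint, which their inverted measure rewards).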
bosscha/alma-calibrator
notebooks/selecting_source/alma_database_selection5.ipynb
gpl-2.0
file_listcal = "alma_sourcecat_searchresults_20180419.csv" q = databaseQuery() """ Explanation: New functions to make a list and to select calibrators I add a function to retrieve all the fluxes from the ALMA Calibrator list, with their frequencies and observing dates, and to retrieve the redshift (z) from NED. End of explanation """ listcal = q.read_calibratorlist(file_listcal, fluxrange=[0.1, 999999]) len(listcal) print("Name: ", listcal[0][0]) print("J2000 RA, dec: ", listcal[0][1], listcal[0][2]) print("Alias: ", listcal[0][3]) print("Flux density: ", listcal[0][4]) print("Band: ", listcal[0][5]) print("Freq: ", listcal[0][6]) print("Obs date: ", listcal[0][4]) """ Explanation: Example: retrieve all the calibrators with a flux > 0.1 Jy: End of explanation """ report, resume = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \ maxFreqRes=999999999, array='12m', \ excludeCycle0=True, \ selectPol=False, \ minTimeBand={3:60., 6:60., 7:60.}, \ silent=True) """ Explanation: Select all calibrators that have been observed in at least 3 bands [>60 s in B3, B6, B7], already queried and converted to SQL; exclude Cycle 0, 12m array End of explanation """ print("Name: ", resume[0][0]) print("From NED: ") print("Name: ", resume[0][3]) print("J2000 RA, dec: ", resume[0][4], resume[0][5]) print("z: ", resume[0][6]) print("Total # of projects: ", resume[0][7]) print("Total # of UIDs: ", resume[0][8]) print("Gal lon: ", resume[0][9]) print("Gal lat: ", resume[0][10]) """ Explanation: We can write a "report file" or only use the "resume data"; some will have redshift data retrieved from NED. End of explanation """ for i, obj in enumerate(resume): for j, cal in enumerate(listcal): if obj[0] == cal[0]: # same name obj.append(cal[4:]) # add [flux, band, flux obsdate] in the "resume" """ Explanation: Sometimes there is no redshift information found in NED. Combining listcal and resume information. 
End of explanation """ def collect_z_and_flux(Band): z = [] flux = [] for idata in resume: if idata[6] is not None: # select objects which have redshift information fluxnya = idata[11][0] bandnya = idata[11][1] freqnya = idata[11][2] datenya = idata[11][3] for i, band in enumerate(bandnya): if band == str(Band): # take only first data flux.append(fluxnya[i]) z.append(idata[6]) break return z, flux z3, f3 = collect_z_and_flux(3) print("Number of selected sources in B3: ", len(z3)) z6, f6 = collect_z_and_flux(6) print("Number of selected sources in B6: ", len(z6)) z7, f7 = collect_z_and_flux(7) print("Number of selected sources in B7: ", len(z7)) """ Explanation: Select objects which have redshift; collect the flux, band, frequency, and observing date; plot based on the band End of explanation """ plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, f3, 'ro') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B3") plt.subplot(222) plt.plot(z6, f6, 'go') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B6") plt.subplot(223) plt.plot(z7, f7, 'bo') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B7") plt.subplot(224) plt.plot(z3, f3, 'ro', z6, f6, 'go', z7, f7, 'bo', alpha=0.3) plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B3, B6, B7") """ Explanation: Plot flux vs. redshift. The same object will be located at the same z; some objects will not have flux in all 3 bands. 
End of explanation """ from astropy.cosmology import FlatLambdaCDM cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Tcmb0=2.725) """ Explanation: Plot log(Luminosity) vs redshift End of explanation """ def calc_power(z, flux): """ z = redshift flux in Jy """ z = np.array(z) flux = np.array(flux) dL = cosmo.luminosity_distance(z).to(u.meter).value # Luminosity distance luminosity = 4.0*np.pi*dL*dL/(1.0+z) * flux * 1e-26 return z, luminosity """ Explanation: How to calculate luminosity: $$L_{\nu} (\nu_{e}) = \frac{4 \pi D_{L}^2}{1+z} \cdot S_{\nu} (\nu_{o})$$ Notes: - Calculate Luminosity or Power in a specific wavelength (without k-correction e.g. using spectral index) - $L_{\nu}$ in watt/Hz, in emited freq - $S_{\nu}$ in watt/m$^2$/Hz, in observed freq - $D_L$ is luminosity distance, calculated using astropy.cosmology function - need to calculate distance in meter - need to convert Jy to watt/m$^2$/Hz ----- $\times 10^{-26}$ End of explanation """ z3, l3 = calc_power(z3, f3) z6, l6 = calc_power(z6, f6) z7, l7 = calc_power(z7, f7) zdummy = np.linspace(0.001, 2.5, 100) fdummy = 0.1 # Jy, our cut threshold zdummy, Ldummy0 = calc_power(zdummy, fdummy) zdummy, Ldummy3 = calc_power(zdummy, np.max(f3)) zdummy, Ldummy6 = calc_power(zdummy, np.max(f6)) zdummy, Ldummy7 = calc_power(zdummy, np.max(f7)) plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, np.log10(l3), 'r*', \ zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy3), 'r--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3") plt.subplot(222) plt.plot(z6, np.log10(l6), 'g*', \ zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy6), 'g--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6") plt.subplot(223) plt.plot(z7, np.log10(l7), 'b*', \ zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy7), 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7") plt.subplot(224) plt.plot(z3, np.log10(l3), 'r*', 
z6, np.log10(l6), 'g*', z7, np.log10(l7), 'b*', \ zdummy, np.log10(Ldummy0), 'k--', \ zdummy, np.log10(Ldummy3), 'r--', \ zdummy, np.log10(Ldummy6), 'g--', \ zdummy, np.log10(Ldummy7), 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7") """ Explanation: Plot $\log_{10}(L)$ vs $z$ End of explanation """ plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, l3, 'r*', zdummy, Ldummy0, 'k--', zdummy, Ldummy3, 'r--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3") plt.subplot(222) plt.plot(z6, l6, 'g*', zdummy, Ldummy0, 'k--', zdummy, Ldummy6, 'g--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6") plt.subplot(223) plt.plot(z7, l7, 'b*', zdummy, Ldummy0, 'k--', zdummy, Ldummy7, 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7") plt.subplot(224) plt.plot(z3, l3, 'r*', z6, l6, 'g*', z7, l7, 'b*', \ zdummy, Ldummy0, 'k--', zdummy, Ldummy3, 'r--', \ zdummy, Ldummy6, 'g--', zdummy, Ldummy7, 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7") """ Explanation: Black-dashed line are for 0.1 Jy flux. Without log10 End of explanation """
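For readers without astropy at hand, the luminosity computation above can be cross-checked with a dependency-free sketch: the astropy luminosity_distance call is replaced by a direct trapezoidal integral for a flat ΛCDM model with H0 = 70 km/s/Mpc and Om0 = 0.3 (the Tcmb0 radiation term is ignored here, so the result agrees with astropy only to a fraction of a percent):

```python
import math

C_KM_S = 299792.458               # speed of light [km/s]
H0, OM0 = 70.0, 0.3
MPC_M = 3.0856775814913673e22     # metres per megaparsec

def luminosity_distance_m(z, steps=10000):
    # comoving distance via trapezoidal integration of 1/E(z'), then D_L = (1+z) D_C
    def inv_e(zp):
        return 1.0 / math.sqrt(OM0 * (1.0 + zp) ** 3 + (1.0 - OM0))
    h = z / steps
    integral = h * (0.5 * (inv_e(0.0) + inv_e(z)) + sum(inv_e(i * h) for i in range(1, steps)))
    d_c_mpc = (C_KM_S / H0) * integral          # comoving distance [Mpc]
    return (1.0 + z) * d_c_mpc * MPC_M          # luminosity distance [m]

def spectral_luminosity(z, flux_jy):
    # L_nu(nu_e) = 4 pi D_L^2 / (1+z) * S_nu * 1e-26   [W/Hz], as in calc_power()
    d_l = luminosity_distance_m(z)
    return 4.0 * math.pi * d_l * d_l / (1.0 + z) * flux_jy * 1e-26

print('%.2e' % spectral_luminosity(1.0, 1.0))   # roughly 2.6e27 W/Hz for 1 Jy at z = 1
```

Comparing a few values of spectral_luminosity() against calc_power() is a quick way to confirm the Jy-to-W/m²/Hz conversion and the (1+z) factor are applied consistently.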
Merinorus/adaisawesome
Homework/01 - Pandas and Data Wrangling/Intro to Pandas.ipynb
gpl-3.0
import pandas as pd import numpy as np pd.options.mode.chained_assignment = None # default='warn' """ Explanation: Table of Contents <p><div class="lev1"><a href="#Introduction-to-Pandas"><span class="toc-item-num">1&nbsp;&nbsp;</span>Introduction to Pandas</a></div><div class="lev2"><a href="#Pandas-Data-Structures"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Pandas Data Structures</a></div><div class="lev3"><a href="#Series"><span class="toc-item-num">1.1.1&nbsp;&nbsp;</span>Series</a></div><div class="lev3"><a href="#DataFrame"><span class="toc-item-num">1.1.2&nbsp;&nbsp;</span>DataFrame</a></div><div class="lev3"><a href="#Exercise-1"><span class="toc-item-num">1.1.3&nbsp;&nbsp;</span>Exercise 1</a></div><div class="lev3"><a href="#Exercise-2"><span class="toc-item-num">1.1.4&nbsp;&nbsp;</span>Exercise 2</a></div><div class="lev2"><a href="#Importing-data"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Importing data</a></div><div class="lev3"><a href="#Microsoft-Excel"><span class="toc-item-num">1.2.1&nbsp;&nbsp;</span>Microsoft Excel</a></div><div class="lev2"><a href="#Pandas-Fundamentals"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Pandas Fundamentals</a></div><div class="lev3"><a href="#Manipulating-indices"><span class="toc-item-num">1.3.1&nbsp;&nbsp;</span>Manipulating indices</a></div><div class="lev2"><a href="#Indexing-and-Selection"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Indexing and Selection</a></div><div class="lev3"><a href="#Exercise-3"><span class="toc-item-num">1.4.1&nbsp;&nbsp;</span>Exercise 3</a></div><div class="lev2"><a href="#Operations"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Operations</a></div><div class="lev2"><a href="#Sorting-and-Ranking"><span class="toc-item-num">1.6&nbsp;&nbsp;</span>Sorting and Ranking</a></div><div class="lev3"><a href="#Exercise-4"><span class="toc-item-num">1.6.1&nbsp;&nbsp;</span>Exercise 4</a></div><div class="lev2"><a href="#Hierarchical-indexing"><span 
class="toc-item-num">1.7&nbsp;&nbsp;</span>Hierarchical indexing</a></div><div class="lev2"><a href="#Missing-data"><span class="toc-item-num">1.8&nbsp;&nbsp;</span>Missing data</a></div><div class="lev3"><a href="#Exercise-5"><span class="toc-item-num">1.8.1&nbsp;&nbsp;</span>Exercise 5</a></div><div class="lev2"><a href="#Data-summarization"><span class="toc-item-num">1.9&nbsp;&nbsp;</span>Data summarization</a></div><div class="lev2"><a href="#Writing-Data-to-Files"><span class="toc-item-num">1.10&nbsp;&nbsp;</span>Writing Data to Files</a></div><div class="lev3"><a href="#Advanced-Exercise:-Compiling-Ebola-Data"><span class="toc-item-num">1.10.1&nbsp;&nbsp;</span>Advanced Exercise: Compiling Ebola Data</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.11&nbsp;&nbsp;</span>References</a></div> # Introduction to Pandas **pandas** is a Python package providing fast, flexible, and expressive data structures designed to work with *relational* or *labeled* data both. It is a fundamental high-level building block for doing practical, real world data analysis in Python. pandas is well suited for: - Tabular data with heterogeneously-typed columns, as in an SQL table or Excel spreadsheet - Ordered and unordered (not necessarily fixed-frequency) time series data. - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and column labels - Any other form of observational / statistical data sets. 
The data actually need not be labeled at all to be placed into a pandas data structure Key features: - Easy handling of **missing data** - **Size mutability**: columns can be inserted and deleted from DataFrame and higher dimensional objects - Automatic and explicit **data alignment**: objects can be explicitly aligned to a set of labels, or the data can be aligned automatically - Powerful, flexible **group by functionality** to perform split-apply-combine operations on data sets - Intelligent label-based **slicing, fancy indexing, and subsetting** of large data sets - Intuitive **merging and joining** data sets - Flexible **reshaping and pivoting** of data sets - **Hierarchical labeling** of axes - Robust **IO tools** for loading data from flat files, Excel files, databases, and HDF5 - **Time series functionality**: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc. End of explanation """ counts = pd.Series([632, 1638, 569, 115]) counts """ Explanation: Pandas Data Structures Series A Series is a single vector of data (like a NumPy array) with an index that labels each element in the vector. End of explanation """ counts.values counts.index """ Explanation: If an index is not specified, a default sequence of integers is assigned as the index. A NumPy array comprises the values of the Series, while the index is a pandas Index object. End of explanation """ bacteria = pd.Series([632, 1638, 569, 115], index=['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']) bacteria """ Explanation: We can assign meaningful labels to the index, if they are available: End of explanation """ bacteria['Actinobacteria'] bacteria bacteria[[name.endswith('bacteria') for name in bacteria.index]] [name.endswith('bacteria') for name in bacteria.index] """ Explanation: These labels can be used to refer to the values in the Series. 
End of explanation """ bacteria[0] """ Explanation: Notice that the indexing operation preserved the association between the values and the corresponding indices. We can still use positional indexing if we wish. End of explanation """ bacteria.name = 'counts' bacteria.index.name = 'phylum' bacteria """ Explanation: We can give both the array of values and the index meaningful labels themselves: End of explanation """ np.log(bacteria) """ Explanation: NumPy's math functions and other operations can be applied to Series without losing the data structure. End of explanation """ bacteria[bacteria>1000] """ Explanation: We can also filter according to the values in the Series: End of explanation """ bacteria_dict = {'Firmicutes': 632, 'Proteobacteria': 1638, 'Actinobacteria': 569, 'Bacteroidetes': 115} pd.Series(bacteria_dict) """ Explanation: A Series can be thought of as an ordered key-value store. In fact, we can create one from a dict: End of explanation """ bacteria2 = pd.Series(bacteria_dict, index=['Cyanobacteria','Firmicutes', 'Proteobacteria','Actinobacteria']) bacteria2 bacteria2.isnull() """ Explanation: Notice that the Series is created in key-sorted order. If we pass a custom index to Series, it will select the corresponding values from the dict, and treat indices without corrsponding values as missing. Pandas uses the NaN (not a number) type for missing values. 
End of explanation """ bacteria + bacteria2 """ Explanation: Critically, the labels are used to align data when used in operations with other Series objects: End of explanation """ data = pd.DataFrame({'value':[623, 1638, 569, 115, 433, 1130, 754, 555], 'patient':[1, 1, 1, 1, 2, 2, 2, 2], 'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes', 'Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes']}) data """ Explanation: Contrast this with NumPy arrays, where arrays of the same length will combine values element-wise; adding Series combined values with the same label in the resulting series. Notice also that the missing values were propogated by addition. DataFrame Inevitably, we want to be able to store, view and manipulate data that is multivariate, where for every index there are multiple fields or columns of data, often of varying data type. A DataFrame is a tabular data structure, encapsulating multiple series like columns in a spreadsheet. Data are stored internally as a 2-dimensional object, but the DataFrame allows us to represent and manipulate higher-dimensional data. End of explanation """ data[['phylum','value','patient']] """ Explanation: Notice the DataFrame is sorted by column name. We can change the order by indexing them in the order we desire: End of explanation """ data.columns """ Explanation: A DataFrame has a second index, representing the columns: End of explanation """ data.dtypes """ Explanation: The dtypes attribute reveals the data type for each column in our DataFrame. 
int64 is numeric integer values object strings (letters and numbers) float64 floating-point values End of explanation """ data['patient'] data.patient type(data.value) data[['value']] """ Explanation: If we wish to access columns, we can do so either by dict-like indexing or by attribute: End of explanation """ data.loc[0] """ Explanation: Notice this is different than with Series, where dict-like indexing retrieved a particular element (row). If we want access to a row in a DataFrame, we index its loc attribute. End of explanation """ data.head() """ Explanation: Exercise 1 Try out these commands to see what they return: data.head() data.tail(3) data.shape End of explanation """ data.tail(3) """ Explanation: data.head shows all of the data up to a certain number, if it is indicated in the parantheses End of explanation """ data.shape """ Explanation: this shows the last 3 rows of data of the data row End of explanation """ data = pd.DataFrame([{'patient': 1, 'phylum': 'Firmicutes', 'value': 632}, {'patient': 1, 'phylum': 'Proteobacteria', 'value': 1638}, {'patient': 1, 'phylum': 'Actinobacteria', 'value': 569}, {'patient': 1, 'phylum': 'Bacteroidetes', 'value': 115}, {'patient': 2, 'phylum': 'Firmicutes', 'value': 433}, {'patient': 2, 'phylum': 'Proteobacteria', 'value': 1130}, {'patient': 2, 'phylum': 'Actinobacteria', 'value': 754}, {'patient': 2, 'phylum': 'Bacteroidetes', 'value': 555}]) data """ Explanation: this gives you the number of rows of data (8) and columns (3) An alternative way of initializing a DataFrame is with a list of dicts: End of explanation """ vals = data.value vals vals[5] = 0 vals """ Explanation: Its important to note that the Series returned when a DataFrame is indexted is merely a view on the DataFrame, and not a copy of the data itself. 
So you must be cautious when manipulating this data: End of explanation """ vals = data.value vals vals[5] = 0 vals """ Explanation: If we plan on modifying an extracted Series, it's a good idea to make a copy. End of explanation """ vals = data.value.copy() vals[5] = 1000 vals """ Explanation: We can create or modify columns by assignment: End of explanation """ data.value[3,4,5,6] = [14, 21, 1130,5] data type(data) data['year'] = 2013 data """ Explanation: But note, we cannot use the attribute indexing method to add a new column: End of explanation """ data.treatment = 1 data data.treatment """ Explanation: Exercise 2 From the data table above, create an index to return all rows for which the phylum name ends in "bacteria" and the value is greater than 1000. End of explanation """ # get all of the values from the data where the phylum name ends with bacteria data[[phylum.endswith('bacteria') for phylum in data.phylum]] # take all of the rows where the value is above 1000 data[data.value>1000] data data.phylum """ Explanation: Specifying a Series as a new column causes its values to be added according to the DataFrame's index: End of explanation """ treatment = pd.Series([0]*4 + [1]*2) treatment data['treatment'] = treatment data """ Explanation: Other Python data structures (ones without an index) need to be the same length as the DataFrame: End of explanation """ month = ['Jan', 'Feb', 'Mar', 'Apr'] data['month'] = month data['month'] = ['Jan']*len(data) data """ Explanation: We can use the drop method to remove rows or columns, which by default drops rows. 
We can be explicit by using the axis argument: End of explanation """ data.values """ Explanation: We can extract the underlying data as a simple ndarray by accessing the values attribute: End of explanation """ df = pd.DataFrame({'foo': [1,2,3], 'bar':[0.4, -1.0, 4.5]}) df.values """ Explanation: Notice that because of the mix of string and integer (and NaN) values, the dtype of the array is object. The dtype will automatically be chosen to be as general as needed to accomodate all the columns. End of explanation """ data.index """ Explanation: Pandas uses a custom data structure to represent the indices of Series and DataFrames. End of explanation """ data.index[0] = 15 """ Explanation: Index objects are immutable: End of explanation """ bacteria2.index = bacteria.index bacteria2 """ Explanation: This is so that Index objects can be shared between data structures without fear that they will be changed. End of explanation """ !cat Data/microbiome.csv """ Explanation: Importing data A key, but often under-appreciated, step in data analysis is importing the data that we wish to analyze. Though it is easy to load basic data structures into Python using built-in tools or those provided by packages like NumPy, it is non-trivial to import structured data well, and to easily convert this input into a robust data structure: genes = np.loadtxt("genes.csv", delimiter=",", dtype=[('gene', '|S10'), ('value', '&lt;f4')]) Pandas provides a convenient set of functions for importing tabular data in a number of formats directly into a DataFrame object. These functions include a slew of options to perform type inference, indexing, parsing, iterating and cleaning automatically as data are imported. Let's start with some more bacteria data, stored in csv format. 
End of explanation """ mb = pd.read_csv("data/microbiome.csv") mb """ Explanation: This table can be read into a DataFrame using read_csv: End of explanation """ pd.read_csv("Data/microbiome.csv", header=None).head() """ Explanation: Notice that read_csv automatically considered the first row in the file to be a header row. We can override default behavior by customizing some the arguments, like header, names or index_col. End of explanation """ mb = pd.read_table("Data/microbiome.csv", sep=',') """ Explanation: read_csv is just a convenience function for read_table, since csv is such a common format: End of explanation """ mb = pd.read_csv("Data/microbiome.csv", index_col=['Patient','Taxon']) mb.head(5) """ Explanation: The sep argument can be customized as needed to accomodate arbitrary separators. For example, we can use a regular expression to define a variable amount of whitespace, which is unfortunately very common in some data formats: sep='\s+' For a more useful index, we can specify the first two columns, which together provide a unique index to the data. End of explanation """ pd.read_csv("Data/microbiome.csv", skiprows=[3,4,6]).head() """ Explanation: This is called a hierarchical index, which we will revisit later in the section. If we have sections of data that we do not wish to import (for example, known bad data), we can populate the skiprows argument: End of explanation """ pd.read_csv("Data/microbiome.csv", nrows=4) """ Explanation: If we only want to import a small number of rows from, say, a very large data file we can use nrows: End of explanation """ pd.read_csv("Data/microbiome.csv", chunksize=14) data_chunks = pd.read_csv("Data/microbiome.csv", chunksize=14) mean_tissue = pd.Series({chunk.Taxon[0]: chunk.Tissue.mean() for chunk in data_chunks}) mean_tissue """ Explanation: Alternately, if we want to process our data in reasonable chunks, the chunksize argument will return an iterable object that can be employed in a data processing loop. 
For example, our microbiome data are organized by bacterial phylum, with 14 patients represented in each: End of explanation """ !cat Data/microbiome_missing.csv pd.read_csv("Data/microbiome_missing.csv").head(20) """ Explanation: Most real-world data is incomplete, with values missing due to incomplete observation, data entry or transcription error, or other reasons. Pandas will automatically recognize and parse common missing data indicators, including NA and NULL. End of explanation """ pd.isnull(pd.read_csv("Data/microbiome_missing.csv")).head(20) """ Explanation: Above, Pandas recognized NA and an empty field as missing data. End of explanation """ pd.read_csv("Data/microbiome_missing.csv", na_values=['?', -99999]).head(20) """ Explanation: Unfortunately, there will sometimes be inconsistency with the conventions for missing data. In this example, there is a question mark "?" and a large negative number where there should have been a positive integer. We can specify additional symbols with the na_values argument: End of explanation """ mb = pd.read_excel('Data/microbiome/MID2.xls', sheetname='Sheet 1', header=None) mb.head() """ Explanation: These can be specified on a column-wise basis using an appropriate dict as the argument for na_values. Microsoft Excel Since so much financial and scientific data ends up in Excel spreadsheets (regrettably), Pandas' ability to directly import Excel spreadsheets is valuable. This support is contingent on having one or two dependencies (depending on what version of Excel file is being imported) installed: xlrd and openpyxl (these may be installed with either pip or easy_install). 
The read_excel convenience function in pandas imports a specific sheet from an Excel file.
End of explanation
"""
mb = pd.read_excel('Data/microbiome/MID2.xls', sheetname='Sheet 1', header=None)
mb.head()
"""
Explanation: There are several other data formats that can be imported into Python and converted into DataFrames, with the help of built-in or third-party libraries. These include JSON, XML, HDF5, relational and non-relational databases, and various web APIs. These are beyond the scope of this tutorial, but are covered in Python for Data Analysis.
Pandas Fundamentals
This section introduces the new user to the key functionality of Pandas that is required to use the software effectively. For some variety, we will leave our digestive tract bacteria behind and employ some baseball data.
End of explanation
"""
baseball = pd.read_csv("Data/baseball.csv", index_col='id')
baseball.head()
"""
Explanation: Notice that we specified the id column as the index, since it appears to be a unique identifier. We could try to create a unique index ourselves by combining player and year:
End of explanation
"""
player_id = baseball.player + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_id
baseball_newind.head()
"""
Explanation: This looks okay, but let's check:
End of explanation
"""
baseball_newind.index.is_unique
"""
Explanation: So, indices need not be unique. Our choice is not unique because some players change teams within years.
End of explanation
"""
pd.Series(baseball_newind.index).value_counts()
"""
Explanation: The most important consequence of a non-unique index is that indexing by label will return multiple values for some labels:
End of explanation
"""
baseball_newind.loc['wickmbo012007']
"""
Explanation: We will learn more about indexing below.
We can create a truly unique index by combining player, team and year:
End of explanation
"""
player_unique = baseball.player + baseball.team + baseball.year.astype(str)
baseball_newind = baseball.copy()
baseball_newind.index = player_unique
baseball_newind.head()
baseball_newind.index.is_unique
"""
Explanation: We can create meaningful indices more easily using a hierarchical index; for now, we will stick with the numeric id field as our index.
Manipulating indices
Reindexing allows users to manipulate the data labels in a DataFrame. It forces a DataFrame to conform to the new index, and optionally, fill in missing data if requested.
A simple use of reindex is to alter the order of the rows:
End of explanation
"""
baseball.reindex(baseball.index[::-1]).head()
"""
Explanation: Notice that the id index is not sequential. Say we wanted to populate the table with every id value. We could specify an index that is a sequence from the first to the last id numbers in the database, and Pandas would fill in the missing data with NaN values:
End of explanation
"""
id_range = range(baseball.index.values.min(), baseball.index.values.max() + 1)
baseball.reindex(id_range).head()
"""
Explanation: Missing values can be filled as desired, either with selected values, or by rule:
End of explanation
"""
baseball.reindex(id_range, method='ffill', columns=['player','year']).head()
baseball.reindex(id_range, fill_value='charliebrown', columns=['player']).head()
"""
Explanation: Keep in mind that reindex does not work if we pass a non-unique index series. We can remove rows or columns via the drop method:
End of explanation
"""
baseball.shape
baseball.drop([89525, 89526])
baseball.drop(['ibb','hbp'], axis=1)
"""
Explanation: Indexing and Selection
Indexing works analogously to indexing in NumPy arrays, except we can use the labels in the Index object to extract values in addition to arrays of integers.
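As a quick sketch of the label/position distinction on a throwaway Series (not the baseball data):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])

# Label-based access goes through the index values...
assert s.loc['b'] == 20

# ...while positional access works like a NumPy array.
assert s.iloc[1] == 20

# Label slices include the endpoint; positional slices do not.
print(list(s.loc['a':'b']))  # [10, 20]
print(list(s.iloc[0:1]))     # [10]
```

The inclusive endpoint on label slices is easy to forget, and it matters in the slicing examples that follow.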
End of explanation """ hits['womacto01CHN2006':'gonzalu01ARI2006'] hits['womacto01CHN2006':'gonzalu01ARI2006'] = 5 hits """ Explanation: We can also slice with data labels, since they have an intrinsic order within the Index: End of explanation """ baseball_newind[['h','ab']] baseball_newind[baseball_newind.ab>500] """ Explanation: In a DataFrame we can slice along either or both axes: End of explanation """ baseball_newind.query('ab > 500') """ Explanation: For a more concise (and readable) syntax, we can use the new query method to perform selection on a DataFrame. Instead of having to type the fully-specified column, we can simply pass a string that describes what to select. The query above is then simply: End of explanation """ min_ab = 450 baseball_newind.query('ab > @min_ab') """ Explanation: The DataFrame.index and DataFrame.columns are placed in the query namespace by default. If you want to refer to a variable in the current namespace, you can prefix the variable with @: End of explanation """ baseball_newind.loc['gonzalu01ARI2006', ['h','X2b', 'X3b', 'hr']] baseball_newind.loc[:'myersmi01NYA2006', 'hr'] """ Explanation: The indexing field loc allows us to select subsets of rows and columns in an intuitive way: End of explanation """ baseball_newind.iloc[:5, 5:8] """ Explanation: In addition to using loc to select rows and columns by label, pandas also allows indexing by position using the iloc attribute. So, we can query rows and columns by absolute position, rather than by name: End of explanation """ baseball_newind[baseball_newind['team'].isin(['LAN','SFN'])] #there are 15 players who are either LAN or SFN """ Explanation: Exercise 3 You can use the isin method query a DataFrame based upon a list of values as follows: data['phylum'].isin(['Firmacutes', 'Bacteroidetes']) Use isin to find all players that played for the Los Angeles Dodgers (LAN) or the San Francisco Giants (SFN). How many records contain these values? 
End of explanation """ hr2006 = baseball.loc[baseball.year==2006, 'hr'] hr2006.index = baseball.player[baseball.year==2006] hr2007 = baseball.loc[baseball.year==2007, 'hr'] hr2007.index = baseball.player[baseball.year==2007] hr2007 """ Explanation: Operations DataFrame and Series objects allow for several operations to take place either on a single object, or between two or more objects. For example, we can perform arithmetic on the elements of two objects, such as combining baseball statistics across years. First, let's (artificially) construct two Series, consisting of home runs hit in years 2006 and 2007, respectively: End of explanation """ hr_total = hr2006 + hr2007 hr_total """ Explanation: Now, let's add them together, in hopes of getting 2-year home run totals: End of explanation """ hr_total[hr_total.notnull()] """ Explanation: Pandas' data alignment places NaN values for labels that do not overlap in the two Series. In fact, there are only 6 players that occur in both years. End of explanation """ hr2007.add(hr2006, fill_value=0) """ Explanation: While we do want the operation to honor the data labels in this way, we probably do not want the missing values to be filled with NaN. We can use the add method to calculate player home run totals by using the fill_value argument to insert a zero for home runs where labels do not overlap: End of explanation """ baseball.hr - baseball.hr.max() """ Explanation: Operations can also be broadcast between rows or columns. 
For example, if we subtract the maximum number of home runs hit from the hr column, we get how many fewer than the maximum were hit by each player:
End of explanation
"""
baseball.hr - baseball.hr.max()
"""
Explanation: Or, looking at things row-wise, we can see how a particular player compares with the rest of the group with respect to important statistics.
End of explanation
"""
baseball.loc[89521, "player"]
stats = baseball[['h','X2b', 'X3b', 'hr']]
diff = stats - stats.loc[89521]
diff[:10]
"""
Explanation: We can also apply functions to each column or row of a DataFrame.
End of explanation
"""
stats.apply(np.median)
def range_calc(x):
    return x.max() - x.min()
stat_range = lambda x: x.max() - x.min()
stats.apply(stat_range)
"""
Explanation: Let's use apply to calculate a meaningful baseball statistic, slugging percentage:
$$SLG = \frac{1B + (2 \times 2B) + (3 \times 3B) + (4 \times HR)}{AB}$$
And just for fun, we will format the resulting estimate.
End of explanation
"""
def slugging(x):
    bases = x['h']-x['X2b']-x['X3b']-x['hr'] + 2*x['X2b'] + 3*x['X3b'] + 4*x['hr']
    ab = x['ab']+1e-6
    return bases/ab
baseball.apply(slugging, axis=1).round(3)
"""
Explanation: Sorting and Ranking
Pandas objects include methods for re-ordering data.
End of explanation
"""
baseball_newind.sort_index().head()
baseball_newind.sort_index(ascending=False).head()
"""
Explanation: Try sorting the columns instead of the rows, in ascending order:
End of explanation
"""
baseball_newind.sort_index(axis=1).head()
"""
Explanation: We can also use sort_values to sort a Series by value, rather than by label.
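The label-versus-value distinction can be sketched on a throwaway Series (made-up values, not the baseball data):

```python
import pandas as pd

s = pd.Series([3, 1, 2], index=['c', 'a', 'b'])

# sort_index reorders by the labels...
print(s.sort_index())   # index order: a, b, c

# ...while sort_values reorders by the data themselves.
print(s.sort_values())  # value order: 1, 2, 3
```

Both return a new, reordered object and leave the original Series untouched.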
End of explanation """ baseball[['player','sb','cs']].sort_values(ascending=[False,True], by=['sb', 'cs']).head(10) """ Explanation: For a DataFrame, we can sort according to the values of one or more columns using the by argument of sort_values: End of explanation """ baseball.hr.rank() """ Explanation: Ranking does not re-arrange data, but instead returns an index that ranks each value relative to others in the Series. End of explanation """ pd.Series([100,100]).rank() """ Explanation: Ties are assigned the mean value of the tied ranks, which may result in decimal values. End of explanation """ baseball.hr.rank(method='first') """ Explanation: Alternatively, you can break ties via one of several methods, such as by the order in which they occur in the dataset: End of explanation """ baseball.rank(ascending=False).head() baseball[['r','h','hr']].rank(ascending=False).head() """ Explanation: Calling the DataFrame's rank method results in the ranks of all columns: End of explanation """ def onbasepercent(x): onbases = x['h'] + x['bb'] + x['hbp'] basetotal = x['ab'] + x['bb'] + x['hbp'] + x['sf']+1e-6 return onbases/basetotal baseball.apply(onbasepercent, axis=1).round(3) """ Explanation: Exercise 4 Calculate on base percentage for each player, and return the ordered series of estimates. $$OBP = \frac{H + BB + HBP}{AB + BB + HBP + SF}$$ End of explanation """ baseball_h = baseball.set_index(['year', 'team', 'player']) baseball_h.head(10) """ Explanation: Hierarchical indexing In the baseball example, I was forced to combine 3 fields to obtain a unique index that was not simply an integer value. A more elegant way to have done this would be to create a hierarchical index from the three fields. End of explanation """ baseball_h.index[:10] baseball_h.index.is_unique """ Explanation: This index is a MultiIndex object that consists of a sequence of tuples, the elements of which is some combination of the three columns used to create the index. 
Where there are multiple repeated values, Pandas does not print the repeats, making it easy to identify groups of values.
End of explanation
"""
baseball_h.index[:10]
baseball_h.index.is_unique
"""
Explanation: Try using this hierarchical index to retrieve Julio Franco (francju01), who played for the Atlanta Braves (ATL) in 2007:
End of explanation
"""
baseball_h.loc[(2007, 'ATL', 'francju01')]
"""
Explanation: Recall earlier we imported some microbiome data using two index columns. This created a 2-level hierarchical index:
End of explanation
"""
mb = pd.read_csv("Data/microbiome.csv", index_col=['Taxon','Patient'])
mb.head(10)
"""
Explanation: With a hierarchical index, we can select subsets of the data based on a partial index:
End of explanation
"""
mb.loc['Proteobacteria']
"""
Explanation: Hierarchical indices can be created on either or both axes. Here is a trivial example:
End of explanation
"""
frame = pd.DataFrame(np.arange(12).reshape((4, 3)),
                     index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
                     columns=[['Ohio', 'Ohio', 'Colorado'], ['Green', 'Red', 'Green']])
frame
"""
Explanation: If you want to get fancy, both the row and column indices themselves can be given names:
End of explanation
"""
frame.index.names = ['key1', 'key2']
frame.columns.names = ['state', 'color']
frame
"""
Explanation: With this, we can do all sorts of custom indexing:
End of explanation
"""
frame.loc['a', 'Ohio']
"""
Explanation: Try retrieving the value corresponding to b2 in Colorado:
End of explanation
"""
# Write your answer here
frame.loc[('b', 2), 'Colorado']
"""
Explanation: Additionally, the order of the set of indices in a hierarchical MultiIndex can be changed by swapping them pairwise:
End of explanation
"""
mb.swaplevel('Patient', 'Taxon').head()
"""
Explanation: Data can also be sorted by any index level, using sortlevel:
End of explanation
"""
mb.sortlevel('Patient', ascending=False).head()
"""
Explanation: Missing data
The occurrence of missing data is so
prevalent that it pays to use tools like Pandas, which seamlessly integrates missing data handling so that it can be dealt with easily, and in the manner required by the analysis at hand. Missing data are represented in Series and DataFrame objects by the NaN floating point value. However, None is also treated as missing, since it is commonly used as such in other contexts (e.g. NumPy). End of explanation """ bacteria2 bacteria2.dropna() bacteria2.isnull() bacteria2[bacteria2.notnull()] """ Explanation: Missing values may be dropped or indexed out: End of explanation """ data.dropna() """ Explanation: By default, dropna drops entire rows in which one or more values are missing. End of explanation """ data.dropna(how='all') """ Explanation: This can be overridden by passing the how='all' argument, which only drops a row when every field is a missing value. End of explanation """ data.loc[7, 'year'] = np.nan data data.dropna(thresh=5) """ Explanation: This can be customized further by specifying how many values need to be present before a row is dropped via the thresh argument. End of explanation """ # Write your answer here data.dropna(axis=1) """ Explanation: This is typically used in time series applications, where there are repeated measurements that are incomplete for some subjects. Exercise 5 Try using the axis argument to drop columns with missing values: End of explanation """ bacteria2.fillna(0) data.fillna({'year': 2013, 'treatment':2}) """ Explanation: Rather than omitting missing data from an analysis, in some cases it may be suitable to fill the missing value in, either with a default value (such as zero) or a value that is either imputed or carried forward/backward from similar data points. We can do this programmatically in Pandas with the fillna argument. 
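One common filling rule is mean imputation; a minimal sketch on a toy Series (not the bacteria data):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan])

# Replace each missing value with the mean of the observed values.
filled = s.fillna(s.mean())
print(filled.tolist())  # [1.0, 2.0, 3.0, 2.0]
```

Note that s.mean() itself skips the NaN values, so the fill value is computed only from the data that are actually present.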
End of explanation
"""
bacteria2.fillna(0)
data.fillna({'year': 2013, 'treatment': 2})
"""
Explanation: Notice that fillna by default returns a new object with the desired filling behavior, rather than changing the Series or DataFrame in place (in general, we like to do this, by the way!). We can alter values in-place using inplace=True.
End of explanation
"""
data.year.fillna(2013, inplace=True)
data
"""
Explanation: Missing values can also be interpolated, using any one of a variety of methods:
End of explanation
"""
bacteria2.fillna(method='bfill')
"""
Explanation: Data summarization
We often wish to summarize data in Series or DataFrame objects, so that they can more easily be understood or compared with similar data. The NumPy package contains several functions that are useful here, but several summarization or reduction methods are built into Pandas data structures.
End of explanation
"""
baseball.sum()
"""
Explanation: Clearly, sum is more meaningful for some columns than others. For methods like mean, for which application to string variables is not just meaningless but impossible, these columns are automatically excluded:
End of explanation
"""
baseball.mean()
"""
Explanation: The important difference between NumPy's functions and Pandas' methods is that the latter have built-in support for handling missing data.
End of explanation
"""
bacteria2
bacteria2.mean()
"""
Explanation: Sometimes we may not want to ignore missing values, and allow the nan to propagate.
End of explanation
"""
bacteria2.mean(skipna=False)
"""
Explanation: Passing axis=1 will summarize over rows instead of columns, which only makes sense in certain situations.
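A toy example of the axis distinction (hypothetical counts, not the baseball data):

```python
import pandas as pd

df = pd.DataFrame({'X2b': [2, 0], 'X3b': [1, 1], 'hr': [0, 3]})

# axis=0 (the default) sums down each column...
print(df.sum().tolist())        # [2, 2, 3]

# ...while axis=1 sums across each row.
print(df.sum(axis=1).tolist())  # [3, 4]
```

Summing across a row is meaningful here because the columns share units (counts of hits); it would not be for, say, mixed ages and weights.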
End of explanation """ baseball.describe() """ Explanation: A useful summarization that gives a quick snapshot of multiple statistics for a Series or DataFrame is describe: End of explanation """ baseball.player.describe() """ Explanation: describe can detect non-numeric data and sometimes yield useful information about it. End of explanation """ baseball.hr.cov(baseball.X2b) """ Explanation: We can also calculate summary statistics across multiple columns, for example, correlation and covariance. $$cov(x,y) = \sum_i (x_i - \bar{x})(y_i - \bar{y})$$ End of explanation """ baseball.hr.corr(baseball.X2b) baseball.ab.corr(baseball.h) """ Explanation: $$corr(x,y) = \frac{cov(x,y)}{(n-1)s_x s_y} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}$$ End of explanation """ # Write answer here baseball.corr() """ Explanation: Try running corr on the entire baseball DataFrame to see what is returned: End of explanation """ mb.head() mb.sum(level='Taxon') """ Explanation: If we have a DataFrame with a hierarchical index (or indices), summary statistics can be applied with respect to any of the index levels: End of explanation """ mb.to_csv("mb.csv") """ Explanation: Writing Data to Files As well as being able to read several data input formats, Pandas can also export data to a variety of storage formats. We will bring your attention to just a couple of these. End of explanation """ baseball.to_pickle("baseball_pickle") """ Explanation: The to_csv method writes a DataFrame to a comma-separated values (csv) file. You can specify custom delimiters (via sep argument), how missing values are written (via na_rep argument), whether the index is writen (via index argument), whether the header is included (via header argument), among other options. An efficient way of storing data to disk is in binary format. Pandas supports this using Python’s built-in pickle serialization. 
End of explanation """ pd.read_pickle("baseball_pickle") """ Explanation: The complement to to_pickle is the read_pickle function, which restores the pickle to a DataFrame or Series: End of explanation """ # from http://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe import glob # Guinea path ='Data/ebola/guinea_data' # file path allFiles = glob.glob(path + "/*.csv") guinea_frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_csv(file_,index_col=None, header=0) list_.append(df) guinea_frame = pd.concat(list_) # Liberia path ='Data/ebola/liberia_data' # file path allFiles = glob.glob(path + "/*.csv") liberia_frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_csv(file_,index_col=None, header=0) list_.append(df) liberia_frame = pd.concat(list_) # Sierra Leone path ='Data/ebola/sl_data' # file path allFiles = glob.glob(path + "/*.csv") sl_frame = pd.DataFrame() list_ = [] for file_ in allFiles: df = pd.read_csv(file_,index_col=None, header=0) list_.append(df) sl_frame = pd.concat(list_) #do isin to get only New cases of confirmed and New deaths registered for each country guinea_frame_column = guinea_frame[guinea_frame['Description'].isin(['New cases of confirmed', 'New deaths registered'])] guinea_frame_column.head() #select only the relevant columns for us - date, description, totals guinea_frame_new= guinea_frame_column[['Date', 'Description', 'Totals']] guinea_frame_new.head() """ Explanation: As Wes warns in his book, it is recommended that binary storage of data via pickle only be used as a temporary storage format, in situations where speed is relevant. This is because there is no guarantee that the pickle format will not change with future versions of Python. Advanced Exercise: Compiling Ebola Data The Data/ebola folder contains summarized reports of Ebola cases from three countries during the recent outbreak of the disease in West Africa. 
For each country, there are daily reports that contain various information about the outbreak in several cities in each country. From these data files, use pandas to import them and create a single data frame that includes the daily totals of new cases and deaths for each country. End of explanation """ #do isin to get only New cases of confirmed and New deaths registered for each country liberia_frame_column = liberia_frame[liberia_frame['Variable'].isin(['New case/s (confirmed)', 'Newly reported deaths'])] liberia_frame_column.head() #select only the relevant columns for us - date, variable, national liberia_frame_new= liberia_frame_column[['Date', 'Variable', 'National']] liberia_frame_new.head() #do isin to get only New cases of confirmed and New deaths registered for each country sl_frame_column = sl_frame[sl_frame['variable'].isin(['new_confirmed', 'death_confirmed'])] sl_frame_column.head() #select only the relevant columns for us - date, variable, national sl_frame_new= sl_frame_column[['date', 'variable', 'National']] sl_frame_new.head() #for each country #made the dates the index and drop the date column #add a column with the Country name at the end guinea_frame_new.index = guinea_frame_new.Date guinea_drop = guinea_frame_new.drop('Date', axis=1) guinea_drop['Country']=['Guinea']*len(guinea_drop) guinea_drop.head() liberia_frame_new.index = liberia_frame_new.Date liberia_drop = liberia_frame_new.drop('Date', axis=1) liberia_drop['Country']=['Liberia']*len(liberia_drop) #liberia_drop['Date'] = liberia_drop['Date'].astype('datetime64[ns]') liberia_drop.head() sl_frame_new.index = sl_frame_new.date sl_drop = sl_frame_new.drop('date', axis=1) sl_drop['Country']=['Sierra Leone']*len(sl_drop) sl_drop['Country']=['Sierra Leone'] #sl_drop['date'] = sl_drop['date'].astype('datetime64[ns]') sl_drop.head() #Change the names of the columns to the same guinea_drop.columns = liberia_drop.columns = sl_drop.columns = ['Description', 'National Total', 'Country'] #Change 
# the names of the date column to the same
guinea_drop.index.name = liberia_drop.index.name = sl_drop.index.name = 'Date'
#Merge the files
dataframe = pd.concat([liberia_drop, guinea_drop, sl_drop])
dataframe[['Country','Description','National Total']]
"""
Explanation: References
Python for Data Analysis, Wes McKinney
End of explanation
"""
liupengyuan/python_tutorial
chapter3/python正则表达式.ipynb
mit
import re
"""
Explanation: A quick primer on Python regular expressions
The term "regular expression" is not easy to guess the meaning of from the name alone (how it came to be translated as 正则表达式 is not something I have looked into); its English name, Regular Expression, translates directly as "an expression with a regular pattern". Such an expression is simply a sequence of characters that captures some character-level regularity, and it is used to process strings via string pattern matching. Most high-level languages support processing strings with regular expressions.
Python's regular expression documentation can be found at: https://docs.python.org/3/library/re.html
End of explanation
"""
s = 'Blow low, follow in of which low. lower, lmoww oow aow bow cow 23742937 dow kdiieur998.'
p = 'low'
"""
Explanation: First, import Python's regular expression library, re.
1. First steps
End of explanation
"""
m = re.findall(p, s)
m
"""
Explanation: Suppose we want to find the word low in the string s. Since the pattern of that word is simply low, we can use low itself as a regular expression, named p.
End of explanation
"""
p = input('Enter a pattern, then press Enter!\n')
m = re.findall(p, s)
if not m:
    print('No match found!')
else:
    print('Match found!')
"""
Explanation: findall(pattern, string) is a function in the re module that extracts every substring of string matching the regular expression pattern, and returns them as a list. It scans from left to right, and the returned list stores the matches in left-to-right order.
The regular expression low matches every occurrence of the word low, but it also matches the low inside strings such as lower and Blow.
End of explanation
"""
p = r'\blow\b'
m = re.findall(p, s)
m
"""
Explanation: If there is no matching pattern, an empty list is returned, so whether the list is empty can be used as a branch condition.
End of explanation
"""
p = r'[lmo]ow'
m = re.findall(p, s)
m
"""
Explanation: \b (boundary) is a special character in regular expressions denoting a word boundary. The regular expression r'\blow\b' matches low on its own, with word boundaries on both sides (boundaries are whitespace and the like, but the boundary itself is not consumed by the match).
End of explanation
"""
p = r'[a-d]ow'
m = re.findall(p, s)
m
"""
Explanation: [lmo] matches any one of the letters l, m, o.
End of explanation
"""
p = r'\d'
m = re.findall(p, s)
m
"""
Explanation: [a-d] matches any one of the letters a, b, c, d.
End of explanation
"""
p = r'\d+'
m = re.findall(p, s)
m
"""
Explanation: \d (digit) matches a single digit character.
End of explanation
"""
m = re.findall(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
m
"""
Explanation: + matches one or more repetitions of the preceding pattern, so \d+ matches any run of digits of length at least 1.
2. Basic matches and examples

Pattern|Matches|Equivalent to
----|---|--
[a-d]|One character of: a, b, c, d|[abcd]
[^a-d]|One character except: a, b, c, d|[^abcd]
abc丨def|abc or def|
\d|One digit|[0-9]
\D|One non-digit|[^0-9]
\s|One whitespace|[ \t\n\r\f\v]
\S|One non-whitespace|[^ \t\n\r\f\v]
\w|One word character|[a-zA-Z0-9_]
\W|One non-word character|[^a-zA-Z0-9_]
.|Any character (except newline)|[^\n]

Anchor|Matches
----|---
^|Start of the string
$|End of the string
\b|Boundary between word and non-word characters

Quantifier|Matches
----|---
{5}|Match expression exactly 5 times
{2,5}|Match expression 2 to 5 times
{2,}|Match expression 2 or more times
{,5}|Match expression 0 to 5 times
*|Match expression 0 or more times
{,}|Match expression 0 or more times
?|Match expression 0 or 1 times
{0,1}|Match expression 0 or 1 times
+|Match expression 1 or more times
{1,}|Match expression 1 or more times

Escape|Matches
----|---
\.|. character
\\|\ character
\||| character
\+|+ character
\?|? character
\{|{ character
\)|) character
\[|[ character
End of explanation
"""
m = re.findall(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
m
"""
Explanation: Match phone numbers: the area code may be 3 or 4 digits, the number itself is 8 digits, and there may or may not be a - in between.
End of explanation
"""
m = re.findall(r'[\u4e00-\u9fa5]', '测试 汉 字,abc,测试xia,可以')
m
"""
Explanation: Matching Chinese characters.
A few combined examples:

Regular expression|Matches
----|---
[A-Za-z0-9]|Letters and digits
[\u4E00-\u9FA5A-Za-z0-9_]|Chinese characters, letters, digits and underscore
^[a-zA-Z][a-zA-Z0-9_]{4,15}$|A valid account name, 5 to 16 characters long, using only letters, digits and underscores, with the first character a letter

3. Going further
3.1 Commonly used re functions in Python

Function|Purpose|Usage
----|---|---
re.search|Return a match object if pattern found in string|re.search(r'[pat]tern', 'string')
re.finditer|Return an iterable of match objects (one for each match)|re.finditer(r'[pat]tern', 'string')
re.findall|Return a list of all matched strings (different when capture groups)|re.findall(r'[pat]tern', 'string')
re.split|Split string by regex delimiter & return string list|re.split(r'[ -]', 'st-ri ng')
re.compile|Compile a regular expression pattern for later use|re.compile(r'[pat]tern')
re.sub|Replaces all occurrences of the RE pattern in string with repl, substituting all occurrences unless max provided. This method returns the modified string|re.sub(r'[pat]tern', repl, 'string')
End of explanation
"""
m = re.search(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
m
m.group()
"""
Explanation: search always returns the first successful match; if there is no match it returns None.
The group() function extracts the content of the match object.
End of explanation
"""
m.span()
"""
Explanation: span() returns the start and end positions of the matched string.
End of explanation
"""
ms = re.finditer(r'\d{3,4}-?\d{8}', '010-66677788,02166697788, 0451-22882828')
for m in ms:
    print(m.group())
"""
Explanation: finditer() returns all matches as an iterable; each match is a match object like the one returned by search(), containing the details of that match.
You can iterate over it to obtain each match object, and from it the match details.
The difference from findall() is that findall() only collects the matched strings into a list, and keeps no further information about where each match occurred in the original string.
End of explanation
"""
words = re.split(r'[,-]', '010-66677788,02166697788,0451-22882828')
words
"""
Explanation: split() under re is an enhanced version of the ordinary split() function: it splits a string on the characters matched by a regular expression and returns the resulting list.
End of explanation
"""
p = re.compile(r'[,-]')
p.split('010-66677788,02166697788,0451-22882828')
"""
Explanation: Compiling a regular expression with compile() speeds up later runs when the pattern will be used many times.
3.2 Groups and references

Group Type|Expression
----|---
Capturing|( ... )
Non-capturing|(?: ... )
Capturing group named Y|(?P<Y> ... )
Match the Y'th captured group|\Y
Match the named group Y|(?P=Y)

( ... ) gathers the part inside the parentheses into a unit, a group, and the group as a whole is matched against the string.
A group can be referenced later in the same regular expression, either by its position or by its name; this is called a backreference.
End of explanation
"""
p = re.compile('(ab)+')
p.search('ababababab').group()
p.search('ababababab').groups()
"""
Explanation: When groups are present, groups() extracts all of the matched groups.
End of explanation
"""
p = re.compile('(\d)-(\d)-(\d)')
p.search('1-2-3').group()
p.search('1-2-3').groups()
s = '喜欢/v 你/x 的/u 眼睛/n 和/u 深情/n 。/w'
p = re.compile(r'(\S+)/n')
m = p.findall(s)
m
"""
Explanation: Capture the nouns (/n) in order of appearance.
End of explanation
"""
p = re.compile('(?P<first>\d)-(\d)-(\d)')
p.search('1-2-3').group()
"""
Explanation: Inside a group, the form ?P<name> gives that group a name, where name is the chosen group name.
End of explanation
"""
p.search('1-2-3').group('first')
"""
Explanation: group('name') fetches the matched group directly by its name.
End of explanation
"""
s = 'age:13,name:Tom;age:18,name:John'
p = re.compile(r'age:(\d+),name:(\w+)')
m = p.findall(s)
m
p = re.compile(r'age:(?:\d+),name:(\w+)')
m = p.findall(s)
m
"""
Explanation: (?:\d+) matches the pattern but does not capture the group, so the digits are not captured here.
End of explanation
"""
s = 'abcdebbcde'
p = re.compile(r'([ab])\1')
m = p.search(s)
print('The match is {}, the capture group is {}'.format(m.group(), m.groups()))
"""
Explanation: This is a backreference.
Once the a or b inside the group ([ab]) matches, \1 goes on to match the same character that the group matched, so this regular expression matches aa or bb.
Similarly, r'([a-z])\1{3}' matches four consecutive identical lowercase letters.
End of explanation
"""
s = '12,56,89,123,56,98, 12'
p = re.compile(r'\b(\d+)\b.*\b\1\b')
m = p.search(s)
m.group(1)
"""
Explanation: Using a backreference to test whether the string contains a repeated number; this extracts the first repeated number.
Here \1 refers to what the preceding group matched.
End of explanation
"""
s = '12,56,89,123,56,98, 12'
p = re.compile(r'\b(?P<name>\d+)\b.*\b(?P=name)\b')
m = p.search(s)
m.group(1)
"""
Explanation: Similar to the previous example, but using a backreference by group name.
3.3 Greedy and lazy matching

Quantifier|Matches
----|---
{2,5}?|Match 2 to 5 times (less preferred)
{2,}?|Match 2 or more times (less preferred)
{,5}?|Match 0 to 5 times (less preferred)
*?|Match 0 or more times (less preferred)
{,}?|Match 0 or more times (less preferred)
??|Match 0 or 1 times (less preferred)
{0,1}?|Match 0 or 1 times (less preferred)
+?|Match 1 or more times (less preferred)
{1,}?|Match 1 or more times (less preferred)

When a regular expression contains quantifiers that accept repetition, the usual behaviour is to match as many characters as possible (while still allowing the whole expression to match); this is greedy matching.
Lazy matching matches as few characters as possible instead; it is obtained by appending a ? to the repetition.
End of explanation
"""
p = re.compile('<.*>')
p.search('<python>perl>').group()
"""
Explanation: Greedy matching (the default) matches as many repetitions as possible.
End of explanation
"""
p = re.compile('<.*?>')
p.search('<python>perl>').group()
"""
Explanation: Lazy (non-greedy) matching matches as few repetitions as possible.
End of explanation
"""
p = re.compile('(ab)+')
p.search('ababababab').group()
p = re.compile('(ab)+?')
p.search('ababababab').group()
"""
Explanation: The lazy version matches as few repetitions of the group as possible.
End of explanation
"""
with open(r'test_re.txt', encoding='utf-8') as f:
    text = f.read()
"""
Explanation: 4. Some applications in text processing
4.1 Extract substrings matching a pattern, transform them, and substitute them back
End of explanation
"""
p = '[\S]+/t[^/]+/t'
lines = re.findall(p, text)
lines[:10]
"""
Explanation: Read in the text file.
End of explanation
"""
import re
p = '((?:[\S]+/t\s){2,})'
matchs = re.findall(p, text)
matchs[:10]
"""
Explanation: Extract substrings whose part-of-speech pattern is two consecutive time tags (/t).
The pattern can be built as: one or more non-whitespace characters, /t, one or more characters that are not /, /t.
findall() returns all matching substrings; here we inspect the first 10.
End of explanation
"""
def express(line):
    return line.group().replace('/t ', '') + '/t '
txt = re.sub(p, express, text)
txt.split('\n')[:5]
"""
Explanation: Capture runs of two or more consecutive time-tagged words.
When findall() is applied to a pattern with groups, it captures the groups rather than the full matching substrings, so here the whole pattern is wrapped in a single group; ?: marks a group as non-capturing.
End of explanation
"""
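Since test_re.txt is not included here, the same tag-merging substitution can be sketched on an inline tagged sentence (the tokens below are made up, but follow the same /t part-of-speech convention):

```python
import re

tagged = 'yesterday/t morning/t weather/n very/d good/a'
p = r'((?:\S+/t\s){2,})'

def merge_time_words(m):
    # Drop the inner /t tags in a run of consecutive time words,
    # keeping a single /t on the merged token.
    return m.group().replace('/t ', '') + '/t '

print(re.sub(p, merge_time_words, tagged))
# yesterdaymorning/t weather/n very/d good/a
```

Passing a function as the repl argument of re.sub lets the replacement depend on the match object itself, exactly as in the tutorial's final example.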
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/05_review/labs/5_train_bqml.ipynb
apache-2.0
PROJECT = 'cloud-training-demos' # Replace with your PROJECT
BUCKET = 'cloud-training-bucket' # Replace with your BUCKET
REGION = 'us-central1' # Choose an available region for Cloud MLE
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%load_ext google.cloud.bigquery
"""
Explanation: Predicting Babyweight Using BigQuery ML
Learning Objectives
- Explore the machine learning capabilities of BigQuery
- Learn how to train a linear regression model in BigQuery
- Examine the TRAINING_INFO produced by training a model
- Make predictions with a trained model in BigQuery using ML.PREDICT
Introduction
BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat. But for more complex models, such as neural networks, you will need to pull the data out of BigQuery and into an ML framework like TensorFlow. That being said, what BQML gives up in complexity, it gains in ease of use.
Please see this notebook for more context on this problem and how the features were chosen.
We'll start as usual by setting our environment variables.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT * FROM publicdata.samples.natality
WHERE year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
"""
Explanation: Exploring the data
Here, we will be taking natality data and training on features to predict the birth weight.
The CDC's Natality data has details on US births from 1969 to 2008 and is available in BigQuery as a public data set. More details: https://bigquery.cloud.google.com/table/publicdata:samples.natality?tab=details
Let's start by looking at the data since 2000 with useful values, i.e., those greater than zero!
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
  weight_pounds, -- this is the label; because it is continuous, we need to use regression
  CAST(is_male AS STRING) AS is_male,
  mother_age,
  CAST(plurality AS STRING) AS plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM publicdata.samples.natality
WHERE year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
"""
Explanation: Define features
Looking over the data set, there are a few columns of interest that could be leveraged into features for a reasonable prediction of approximate birth weight.
Further, some feature engineering may be accomplished with the BigQuery CAST function -- in BQML, all strings are considered categorical features and all numeric types are considered continuous ones.
The hashmonth is added so that we can repeatably split the data without leakage -- we want all babies that share a birthday to be either in the training set or in the test set, and not spread between them (otherwise, there would be information leakage when it comes to triplets, etc.).
End of explanation
"""
%%bash
bq --location=US mk -d demo
"""
Explanation: Train a model in BigQuery
With the relevant columns chosen to accomplish predictions, it is then possible to create (train) the model in BigQuery. First, a dataset will be needed to store the model. (If this throws an error in Datalab, simply create the dataset from the BigQuery console.)
End of explanation """ %%bigquery --project $PROJECT # TODO: Your code goes here WITH natality_data AS ( SELECT weight_pounds,-- this is the label; because it is continuous, we need to use regression CAST(is_male AS STRING) AS is_male, mother_age, CAST(plurality AS STRING) AS plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 AND gestation_weeks > 0 AND mother_age > 0 AND plurality > 0 AND weight_pounds > 0 ) SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM natality_data WHERE ABS(MOD(hashmonth, 4)) < 3 -- select 75% of the data as training """ Explanation: With the demo dataset ready, it is possible to create a linear regression model to train the model. This will take approximately 4 minutes to run and will show Done when complete. Exercise 1 Complete the TODOs in the cell below to train a linear regression model in BigQuery using weight_pounds as the label. Name your model babyweight_model_asis; it will reside within the demo dataset we created above. Have a look at the documentation for CREATE MODEL in BQML to see examples of the correct syntax. End of explanation """ %%bigquery --project $PROJECT # TODO: Your code goes here """ Explanation: Explore the training statistics During the model training (and after the training), it is possible to see the model's training evaluation statistics. For each training run, a table named &lt;model_name&gt;_eval is created. This table has basic performance statistics for each iteration. While the new model is training, review the training statistics in the BigQuery UI to see the below model training: https://bigquery.cloud.google.com/. Since these statistics are updated after each iteration of model training, you will see different values for each refresh while the model is training. The training details may also be viewed after the training completes from this notebook. 
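As an aside, the repeatable hash-based split used in the training query above (keep rows whose hash mod 4 is less than 3, i.e. roughly 75%) can be sketched in plain Python. Here hashlib's md5 stands in for BigQuery's FARM_FINGERPRINT; that substitution is an illustrative assumption, not the same hash function:

```python
import hashlib

def hash_bucket(key, num_buckets=4):
    # Deterministic: the same key always lands in the same bucket,
    # so the train/eval split is repeatable across runs.
    digest = hashlib.md5(key.encode('utf-8')).hexdigest()
    return int(digest, 16) % num_buckets

# Hypothetical year-month keys, mimicking the hashmonth idea.
months = ['%d-%02d' % (y, m) for y in range(2001, 2009) for m in range(1, 13)]
train = [k for k in months if hash_bucket(k) < 3]   # ~75% of months
evalu = [k for k in months if hash_bucket(k) >= 3]  # ~25% of months
print(len(train), len(evalu))
```

Because the bucket is a pure function of the key, every row sharing a birth month ends up entirely in one side of the split, which is exactly the leakage-avoidance property described earlier.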
Exercise 2
The cell below is missing the SQL query to examine the training statistics of our trained model. Complete the TODO below to view the results of our training job above. Look back at the usage of the ML.TRAINING_INFO function and its correct syntax.
End of explanation
"""
%%bigquery --project $PROJECT
# TODO: Your code goes here
"""
Explanation: Some of these columns are obvious; although, what do the non-specific ML columns mean (specific to BQML)?
training_run - Will be zero for a newly created model. If the model is re-trained using warm_start, this will increment for each re-training.
iteration - Number of the associated training_run, starting with zero for the first iteration.
duration_ms - Indicates how long the iteration took (in ms).
Note: you can also see these stats by refreshing the BigQuery UI window, finding the <model_name> table, selecting it, and then viewing the Training Stats sub-header.
Let's plot the training and evaluation loss to see whether the model is overfitting.
End of explanation
"""
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
df = bq.query("SELECT * FROM ML.TRAINING_INFO(MODEL demo.babyweight_model_asis)").to_dataframe()
# plot both lines in same graph
import matplotlib.pyplot as plt
plt.plot( 'iteration', 'loss', data=df, marker='o', color='orange', linewidth=2)
plt.plot( 'iteration', 'eval_loss', data=df, marker='', color='green', linewidth=2, linestyle='dashed')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend();
"""
Explanation: As you can see, the training loss and evaluation loss are essentially identical. We do not seem to be overfitting.
Make a prediction with BQML using the trained model
With a trained model, it is now possible to make a prediction on the values. The only difference from the second query above is the reference to the model. The data has been limited (LIMIT 100) to reduce the amount of data returned.
When the ML.PREDICT function is leveraged, the output prediction column name for the model is predicted_<label_column_name>.
Exercise 3
Complete the TODO in the cell below to make predictions in BigQuery with our newly trained model demo.babyweight_model_asis on the public.samples.natality table. You'll need to preprocess the data for training by selecting only those examples which have
- year greater than 2000
- gestation_weeks greater than 0
- mother_age greater than 0
- plurality greater than 0
- weight_pounds greater than 0
Look at the expected syntax for the ML.PREDICT function.
Hint: You will need to cast the features is_male and plurality as STRINGs.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
  *
FROM
  # TODO: Your code goes here
LIMIT 100
"""
Explanation: More advanced...
In the original example, we were taking into account the idea that if no ultrasound has been performed, some of the features (e.g. is_male) will not be known. Therefore, we augmented the dataset with such masked features and trained a single model to deal with both of these scenarios.
In addition, during data exploration, we learned that the data for mothers older than 45 was quite sparse, so we will discretize the mother age.
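The mother-age discretization just described can be sketched as a small Python helper mirroring the SQL IF logic (an illustrative stand-in, not part of the lab itself):

```python
def bucket_mother_age(age):
    # Mirrors IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(... AS STRING))):
    # sparse extremes are collapsed into 'LOW'/'HIGH' buckets, the rest kept as strings.
    if age < 18:
        return 'LOW'
    if age > 45:
        return 'HIGH'
    return str(age)

print([bucket_mother_age(a) for a in (15, 28, 50)])  # ['LOW', '28', 'HIGH']
```

Collapsing the sparse tails this way gives the categorical feature enough examples per value to learn from.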
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
  weight_pounds,
  CAST(is_male AS STRING) AS is_male,
  IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(mother_age AS STRING))) AS mother_age,
  CAST(plurality AS STRING) AS plurality,
  CAST(gestation_weeks AS STRING) AS gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM publicdata.samples.natality
WHERE year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 25
"""
Explanation: On the same dataset, we will also suppose that it is unknown whether the child is male or female, to simulate that an ultrasound was not performed.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
  weight_pounds,
  'Unknown' AS is_male,
  IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(mother_age AS STRING))) AS mother_age,
  IF(plurality > 1, 'Multiple', 'Single') AS plurality,
  CAST(gestation_weeks AS STRING) AS gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM publicdata.samples.natality
WHERE year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 25
"""
Explanation:
Bringing these two separate data sets together, we now have a single dataset that covers both cases: children whose sex was determined with an ultrasound, and children for whom it is 'Unknown' because no ultrasound was performed.
End of explanation
"""
%%bigquery --project $PROJECT
WITH with_ultrasound AS (
  SELECT
    weight_pounds,
    CAST(is_male AS STRING) AS is_male,
    IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(mother_age AS STRING))) AS mother_age,
    CAST(plurality AS STRING) AS plurality,
    CAST(gestation_weeks AS STRING) AS gestation_weeks,
    FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
  FROM publicdata.samples.natality
  WHERE year > 2000
  AND gestation_weeks > 0
  AND mother_age > 0
  AND plurality > 0
  AND weight_pounds > 0
),
without_ultrasound AS (
  SELECT
    weight_pounds,
    'Unknown' AS is_male,
    IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(mother_age AS STRING))) AS mother_age,
    IF(plurality > 1, 'Multiple', 'Single') AS plurality,
    CAST(gestation_weeks AS STRING) AS gestation_weeks,
    FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
  FROM publicdata.samples.natality
  WHERE year > 2000
  AND gestation_weeks > 0
  AND mother_age > 0
  AND plurality > 0
  AND weight_pounds > 0
),
preprocessed AS (
  SELECT * from with_ultrasound
  UNION ALL
  SELECT * from without_ultrasound
)
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks
FROM preprocessed
WHERE ABS(MOD(hashmonth, 4)) < 3
LIMIT 25
"""
Explanation: Create a new model
With a data set that has been feature engineered, we are ready to create a model with the CREATE OR REPLACE MODEL statement.
This will take 5-10 minutes and will show Done when complete.
Exercise 4
As in Exercise 1 above, below you are asked to complete the TODO in the cell below to train a linear regression model in BigQuery using weight_pounds as the label. This time, since we're using the supplemented dataset containing without_ultrasound data, name your model babyweight_model_fc.
This model will reside within the demo dataset. Have a look at the documentation for CREATE MODEL in BQML to see examples of the correct syntax.
End of explanation
"""
%%bigquery --project $PROJECT
# TODO: Your code goes here
WITH with_ultrasound AS (
  SELECT
    weight_pounds,
    CAST(is_male AS STRING) AS is_male,
    IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(mother_age AS STRING))) AS mother_age,
    CAST(plurality AS STRING) AS plurality,
    CAST(gestation_weeks AS STRING) AS gestation_weeks,
    FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
  FROM publicdata.samples.natality
  WHERE year > 2000
  AND gestation_weeks > 0
  AND mother_age > 0
  AND plurality > 0
  AND weight_pounds > 0
),
without_ultrasound AS (
  SELECT
    weight_pounds,
    'Unknown' AS is_male,
    IF(mother_age < 18, 'LOW', IF(mother_age > 45, 'HIGH', CAST(mother_age AS STRING))) AS mother_age,
    IF(plurality > 1, 'Multiple', 'Single') AS plurality,
    CAST(gestation_weeks AS STRING) AS gestation_weeks,
    FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
  FROM publicdata.samples.natality
  WHERE year > 2000
  AND gestation_weeks > 0
  AND mother_age > 0
  AND plurality > 0
  AND weight_pounds > 0
),
preprocessed AS (
  SELECT * from with_ultrasound
  UNION ALL
  SELECT * from without_ultrasound
)
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks
FROM preprocessed
WHERE ABS(MOD(hashmonth, 4)) < 3
"""
Explanation: Training Statistics
While the new model is training, review the training statistics in the BigQuery UI to see the below model training: https://bigquery.cloud.google.com/
The training details may also be viewed after the training completes from this notebook.
Exercise 5
Just as in Exercise 2 above, let's plot the train and eval curve using the TRAINING_INFO from the model training job for the babyweight_model_fc model we trained above. Complete the TODO to create a Pandas dataframe that has the TRAINING_INFO from the training job.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
df = # TODO: Your code goes here
# plot both lines in same graph
import matplotlib.pyplot as plt
plt.plot( 'iteration', 'loss', data=df, marker='o', color='orange', linewidth=2)
plt.plot( 'iteration', 'eval_loss', data=df, marker='', color='green', linewidth=2, linestyle='dashed')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend();
"""
Explanation: Make a prediction with the new model
Perhaps it is of interest to make a prediction of the baby's weight given a number of other factors: the baby is male, the mother is 28 years old, the mother will only have one child, and the baby was born after 38 weeks of pregnancy. To make this prediction, these values will be passed into the SELECT statement.
Exercise 6
Use your newly trained babyweight_model_fc to predict the birth weight of a baby that has the following characteristics
- the baby is male
- the mother's age is 28
- there are not multiple babies (i.e., no twins, triplets, etc.)
- the baby had 38 weeks of gestation
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
  *
FROM
  # TODO: Your code goes here
"""
Explanation:
oscarmore2/deep-learning-study
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. 
End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function a = 0 b = 255 return (x-a)/(b-a) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. End of explanation """ def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. 
: x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function num = len(x) arr = np.zeros((num, 10)) for i, xl in enumerate(x): arr[i][xl] = 1 return arr """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode) """ Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) """ Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. 
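Before moving on, the two preprocessing steps above can be sanity-checked standalone. The functions below are re-implemented with NumPy so the check runs without the CIFAR-10 files; the underscore-suffixed names are illustrative stand-ins, not the notebook's actual functions:

```python
import numpy as np

def normalize_(x):
    # Same math as the normalize solution above: scale 0..255 pixel values into [0, 1].
    return x / 255.0

def one_hot_(labels, n_classes=10):
    # Row i gets a 1 in column labels[i], zeros elsewhere.
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1
    return out

imgs = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype(np.float32)
norm = normalize_(imgs)
print(norm.min() >= 0.0 and norm.max() <= 1.0)  # values stay inside [0, 1]

labels = [3, 0, 9, 3]
enc = one_hot_(labels)
print(enc.argmax(axis=1).tolist())  # argmax recovers the original labels
```

The round trip through argmax is a quick way to confirm the encoding is stable across calls, which is exactly what the unit test above demands.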
End of explanation
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
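It also helps to check the SAME-padding shape arithmetic before wiring the layers up: with SAME padding, the output spatial size is ceil(input / stride). A small framework-free sketch (the specific strides are just an example configuration):

```python
import math

def same_out(size, stride):
    # SAME padding keeps out = ceil(in / stride), independent of kernel size.
    return math.ceil(size / stride)

# Chasing a 32x32 input through a stride-2 convolution then a stride-2 max pool:
after_conv = same_out(32, 2)          # 16
after_pool = same_out(after_conv, 2)  # 8
print(after_conv, after_pool)
```

Knowing the expected spatial size at each stage makes it much easier to debug shape mismatches in the layer functions that follow.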
End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function print(x_tensor) #print(conv_num_outputs) print(conv_ksize) #print(conv_strides) #print(pool_ksize) #print(pool_strides) dimension = x_tensor.get_shape().as_list() shape = list(conv_ksize + (dimension[-1],) + (conv_num_outputs,)) print(shape) weights = tf.Variable(tf.truncated_normal(shape, 0, 0.1)) bias = tf.Variable(tf.zeros(conv_num_outputs)) #print(conv_num_outputs) #print(weights) #print(bias) conv_layer = tf.nn.conv2d(x_tensor, weights, strides = list((1,)+conv_strides+(1,)), padding='SAME') conv_layer = tf.nn.bias_add(conv_layer, bias) conv_layer = tf.nn.relu(conv_layer) conv_layer = tf.nn.max_pool( conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME') #print("max pool conv_layer") #print(conv_layer) return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. 
* Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation """ from numpy import prod def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function dimension = x_tensor.get_shape().as_list() return tf.reshape(x_tensor,[-1,prod(dimension[1:])]) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten) """ Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. 
""" # TODO: Implement Function print("Full_conn Variables") #print(x_tensor) dimension = x_tensor.get_shape().as_list() shape = list( (dimension[-1],) + (num_outputs,)) #print(shape) weight = tf.Variable(tf.truncated_normal(shape,0,0.1)) bias = tf.Variable(tf.zeros(num_outputs)) return tf.nn.relu(tf.add(tf.matmul(x_tensor,weight), bias)) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function #print(x_tensor) #print(num_outputs) #print("Full_conn Variables") dimension = x_tensor.get_shape().as_list() shape = list( (dimension[-1],) + (num_outputs,)) #print(shape) weight = tf.Variable(tf.truncated_normal(shape,0,0.01)) bias = tf.Variable(tf.zeros(num_outputs)) return tf.add(tf.matmul(x_tensor,weight), bias) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. 
End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: #print(x) #print(keep_prob) n_class = 18 conNet = conv2d_maxpool(x, conv_num_outputs=n_class, conv_ksize=(4, 4), conv_strides=(2, 2), pool_ksize=(8, 8), pool_strides=(2, 2)) conNet = tf.nn.dropout(conNet, keep_prob) #conNet_Max_2 = conv2d_maxpool(conNet_Max_1, n_class, [1, 1], [2, 2], [2, 2], [2, 2], weights['wc2'], biases['bc2']) # TODO: Apply a Flatten Layer # Function Definition from Above: conNet = flatten(conNet) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: conNet = fully_conn(conNet, 384) conNet = tf.nn.dropout(conNet, keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: conNet = output(conNet, 10) # TODO: return output return conNet """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') #print(accuracy) tests.test_conv_net(conv_net) """ Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function session.run(optimizer, feed_dict={x:feature_batch, y:label_batch, keep_prob:keep_probability}) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. 
The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    # Use the passed-in session (not a global) and a keep probability of 1.0
    valid_acc = session.run(accuracy, feed_dict={
        x: valid_features,
        y: valid_labels,
        keep_prob: 1.0})
    print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(
        loss,
        valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy.  Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 128
batch_size = 256
keep_probability = 0.5
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for.  Most people set them to common sizes of memory:
 * 64
 * 128
 * 256
 * ...
* Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. 
Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
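The batched-accuracy loop in test_model relies on helper.batch_features_labels from the course helper module to split the test set into fixed-size chunks. A minimal sketch of what such a generator might look like (a hypothetical stand-in in plain NumPy, not the actual helper implementation):

```python
import numpy as np

def batch_features_labels(features, labels, batch_size):
    """Yield (features, labels) chunks of at most batch_size samples each."""
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        yield features[start:end], labels[start:end]

# 10 samples split into batches of 4 -> batch sizes 4, 4, 2
features = np.arange(30).reshape(10, 3)
labels = np.arange(10)
sizes = [len(f) for f, _ in batch_features_labels(features, labels, 4)]
print(sizes)  # [4, 4, 2]
```

Note that averaging per-batch accuracies, as test_model does, is only exact when every batch has the same size; the final short batch introduces a small bias, which is usually negligible for a large test set.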
ES-DOC/esdoc-jupyterhub
notebooks/ec-earth-consortium/cmip6/models/ec-earth3/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: EC-EARTH-CONSORTIUM Source ID: EC-EARTH3 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:59 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. 
Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. 
Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. 
Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. 
Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 16.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. 
General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 24.2. 
Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --&gt; Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2.
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) """ Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 32.5. 
Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 35. 
Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. 
Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. 
Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5. 
Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. 
Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
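All of the TODO cells above follow the same DOC.set_id / DOC.set_value pattern, and each ENUM property lists its allowed entries under "Valid Choices". As a hedged illustration of how one might guard those entries against typos before calling DOC.set_value, here is a small self-contained helper; `check_enum_choice` is a hypothetical name, not part of the pyesdoc DOC API:

```python
def check_enum_choice(value, valid_choices):
    """Return True if value is an allowed ENUM entry.

    A 'Other: [Please specify]' choice accepts any 'Other: ...' free text.
    Hypothetical helper -- not part of the pyesdoc DOC API.
    """
    if value in valid_choices:
        return True
    if "Other: [Please specify]" in valid_choices and value.startswith("Other: "):
        return True
    return False


# Valid Choices for 54.1 Volcanoes Implementation, copied from the cell above
choices_54_1 = [
    "high frequency solar constant anomaly",
    "stratospheric aerosols optical thickness",
    "Other: [Please specify]",
]

assert check_enum_choice("stratospheric aerosols optical thickness", choices_54_1)
assert check_enum_choice("Other: prescribed AOD time series", choices_54_1)
assert not check_enum_choice("stratospheric aerosol", choices_54_1)  # misspelling caught
```

A guard like this can run just before the corresponding DOC.set_value(...) call, so a misspelled choice fails loudly instead of being silently recorded in the document.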
Source: ghvn7777/ghvn7777.github.io, content/fluent_python/20_describe.ipynb (apache-2.0 license)
class Quantity: # 描述符类 def __init__(self, storage_name): # storage_name 是托管实例中存储值的属性的名称 self.storage_name = storage_name # 设置托管属性赋值会调用 __set__方法 # 这里的 self 是描述符实例,即 LineItem.weight 或 LineItem.price # instance 是托管实例(LineItem 实例),value 是要设定的值 def __set__(self, instance, value): if value > 0: # 这里必须设值 __dict__ 属性,如果使用内置的 setattr 会再次调用 __set__ 无限递归 instance.__dict__[self.storage_name] = value else: raise ValueError('value must be > 0') class LineItem: # 托管类 weight = Quantity('weight') # 第一个描述符实例绑定到 weight price = Quantity('price') def __init__(self, description, weight, price): self.description = description self.weight = weight self.price = price def subtotal(self): return self.weight * self.price """ Explanation: 我们上一章使用特性工厂函数编程模式避免重复写读值和设值方法,这里继续,把 quantity 特性工厂函数重构为 Quantity 描述符类 LineItem 类第三版:一个简单的描述符 实现了 __get__, __set__ 或 __delete__ 方法的类是描述符。描述符的用法是,创建一个实例,作为另一个类的属性 我们将定义一个 Quantity 描述符,LineItem 会用到两个 Quantity 实例,一个管理 weight 属性,一个管理 price 属性。 Quantity 实例是 LineItem 类的属性。 End of explanation """ truffle = LineItem('White truffle', 100, 10) truffle.weight # 其实是通过 Quantitiy.__get__ 方法返回的 truffle.__dict__['weight'] = 13 # 真实值存在这里,用 Quantitiy 类实例覆盖了它 truffle.weight truffle = LineItem('White truffle', 100, 0) # 代码正常运行,禁止 0 美元 """ Explanation: 上面的读值方法不需要特殊逻辑,所以 Quantity 类不需要定义 __get__ 方法 End of explanation """ class Quantity: __counter = 0 #类变量,为了为不同的实例创建不同的 sorage_name def __init__(self): cls = self.__class__ # Quantity 类的引用 prefix = cls.__name__ index = cls.__counter self.storage_name = '_{}#{}'.format(prefix, index) #独一无二的 storage_name cls.__counter += 1 # 因为托管属性名与 storage_name 不同,我们要实现 __get__ 方法 # 稍后说明 owner 参数 def __get__(self, instance, owner): return getattr(instance, self.storage_name) # 使用内置的 getattr 从 instance 获取值 def __set__(self, instance, value): if value > 0: setattr(instance, self.storage_name, value) # 使用内置的 setattr 向 instance 设置值 else: raise ValueError('value must be > 0') class LineItem: # 托管类 weight = Quantity() # 不用传入托管属性名称 price = Quantity() def 
__init__(self, description, weight, price): self.description = description self.weight = weight self.price = price def subtotal(self): return self.weight * self.price """ Explanation: 编写 __set__ 方法时,要记住 self 和 instance 参数的意思:self 是描述符实例,instance 是托管实例。管理实例属性的描述符应该把值存到托管实例中,因此,Python 才为描述符中的那个方法提供了 instance 参数 你可能想把各个托管属性的值直接存在描述符,但是这种做法是错误的。也就是说,在 __set__ 方法中,应该这么写: instance.__dict__[self.storage_name] = value 而不能试图下面这种错误的写法: self.__dict__[self.storage_name] = value 因为 self 是描述符实例,它其实是托管类(LineItem)的属性,同一时刻,内存中可能有几个 LineItem 实例,不过只会有两个描述符实例:LineItem.weight 和 LineItem.price(因为这是类属性而不是实例属性)。因此,存储在描述符实例中的数据,其实会变成 LineItem 类的类属性,从而由全部 LineItem 实例共享 上面有个缺点,在托管类的定义体中实例化描述符时要重复输入属性的名称。如果 LineItem 类像下面这样声明就好了。 class LineItem: weight = Quantity() price = Quantity() ... 但问题是,赋值语句右手边表达式先执行,此时变量还不存在,Quantity() 表达式计算的结果是创建描述符实例,而此时 Quantity 类中的代码无法猜出要把描述符绑定给哪个变量(例如 weight 或 price) 因此必须明确指明各个 Quantity 实例的名称,这么不仅麻烦,而且危险,如果程序员直接复制粘贴而忘记了编辑名称,例如 price = Quantity('weight') 就会出大事 下面我们先介绍一个不太优雅的解决方案,更优雅的下章介绍 LineItem 类第四版:自动获取存储属性的名称 我们不用管用户传什么名称,每个 Quantity 描述符有独一无二的 storage_name 就可以了 End of explanation """ coconuts = LineItem('Brazilian coconut', 20, 17.95) coconuts.weight, coconuts.price getattr(coconuts, '_Quantity#0'), getattr(coconuts, '_Quantity#1') """ Explanation: 这里可以使用 getattr 函数和 setattr 获取值,无需使用 instance.dict,因为托管属性和存储属性名称不同 End of explanation """ LineItem.weight """ Explanation: get 方法有 3 个参数,self, instance 和 owner。owner 参数是托管类(如 LineItem)的引用(注意是类而不是实例,instance 是类的实例),通过描述符从托管类中获取属性时用得到。 如果使用 LineItem.weight 从类中获取托管属性,描述符 __get__ instance 参数收到的值是 None,因此会抛出 AttributeError 异常 End of explanation """ class Quantity: __counter = 0 def __init__(self): cls = self.__class__ prefix = cls.__name__ index = cls.__counter self.storage_name = '_{}#{}'.format(prefix, index) cls.__counter += 1 def __get__(self, instance, owner): if instance is None: return self # 不是通过实例调用,返回描述符自身 else: return getattr(instance, self.storage_name) def __set__(self, instance, value): if value > 0: 
setattr(instance, self.storage_name, value) else: raise ValueError('value must be > 0') class LineItem: # 托管类 weight = Quantity() # 不用传入托管属性名称 price = Quantity() def __init__(self, description, weight, price): self.description = description self.weight = weight self.price = price def subtotal(self): return self.weight * self.price coconuts = LineItem('Brazilian coconut', 20, 17.95) LineItem.weight coconuts.price """ Explanation: 抛出 AttributeError 异常是实现 __get__ 方法方式之一,如果选择这么做,应该修改错误信息,去掉令人困惑的 NoneType 和 _Quantity#0,改成 'LineItem' class has no such attribute 更好。最好能给出缺少的属性名,但是在这里描述符不知道托管属性的名称,所以只能做到这样 此外,为了个用户提供内省和其它元编程技术支持,通过类访问托管属性时,最高让 __get__ 方法返回描述符实例。下面对 __get__ 做了一些改动 End of explanation """ def quantity(storage_name): try: quantity.counter += 1 except AttributeError: quantity.counter = 0 # 第一次赋值 # 借助闭包每次创建不同的 storage_name storage_name = '_{}:{}'.format('quantity', quantity.counter) def qty_getter(instance): return instance.__dict__[storage_name] def qty_setter(instance, value): if value > 0: instance.__dict__[storage_name] = value else: raise ValueError('value must be > 0') return property(qty_getter, qty_setter) """ Explanation: 看了上面例子,你可能觉得为了管理几个描述符写这么多代码不值得,但是开发框架的话,描述符会在一个单独的实用工具模块中定义,以便在整个应用中使用,就很值得了 ``` import model_v4c as model class LineItem: weight = model.Quantity() price = model.Quantity() ... 
``` 就像上面这样,把描述符放到单独模块中。现在来说,Quantity 描述符能出色完成工作,唯一缺点是,存储属性的名称是生成的(如 _Quantity#0),导致难以调试,如果想自动把存储属性的名称设为与托管属性的名称类似,需要使用到类装饰器或元类,下章讨论 我们上一章的特性工厂函数其实也很容易实现与描述符同样的功能,如下 End of explanation """ def quantity(storage_name): try: quantity.counter += 1 except AttributeError: quantity.counter = 0 # 第一次赋值 # 借助闭包每次创建不同的 storage_name storage_name = '_{}:{}'.format('quantity', quantity.counter) def qty_getter(instance): return instance.__dict__[storage_name] def qty_setter(instance, value): if value > 0: instance.__dict__[storage_name] = value else: raise ValueError('value must be > 0') return property(qty_getter, qty_setter) """ Explanation: LineItem 类第五版:一种新型描述符 假如有机食物网站遇到问题,有个食品描述为空,为了解决这个问题,我们要再创建一个描述符 NonBlank,它和 Quantity 很像,只是验证逻辑不同 我们可以重构一下代码,创建两个基类。 AutoStorage: 自动管理存储属性的描述符类 Validated: 扩展 AutoStorage 类的抽象子类,覆盖 __set__ 方法,调用由子类实现的 
validate 方法 End of explanation """ ## 辅助函数,仅用于显示 ## def cls_name(obj_or_cls): cls = type(obj_or_cls) if cls is type: cls = obj_or_cls return cls.__name__.split('.')[-1] def display(obj): cls = type(obj) if cls is type: return '<class {}>'.format(obj.__name__) elif cls in [type(None), int]: return repr(obj) else: return '<{} object>'.format(cls_name(obj)) def print_args(name, *args): pseudo_args = ', '.join(display(x) for x in args) print('-> {}.__{}__({})'.format(cls_name(args[0]), name, pseudo_args)) ### 对这个示例重要的类 class Overriding: '''也称数据描述符或强制描述符''' def __get__(self, instance, owner): print_args('get', self, instance, owner) def __set__(self, instance, value): print_args('set', self, instance, value) class OverridingNoGet: '''没有 __get__ 方法的覆盖型描述符''' def __set__(self, instance, owner): print_args('set', self, instance, owner) class NonOverriding: '''也称非数据描述符或遮盖型描述符''' def __get__(self, instance, owner): print_args('get', self, instance, owner) class Managed: over = Overriding() over_no_get = OverridingNoGet() non_over = NonOverriding() def spam(self): print('-> Managed.spam({})'.format(display(self))) """ Explanation: 本章所举的几个 LineItem 实例演示了描述符的典型用途 -- 管理数据属性。这种描述符也叫覆盖型描述符,因为描述符的 __set__ 方法使用托管实例中同名属性覆盖(即插手接管)了要设置的属性,不过也有非覆盖型描述符,下节介绍两种区别 覆盖型与非覆盖型描述符对比 如前面所说,Python 存取属性方式特别不对等,通过实例读取属性,通常返回是实例中定义的属性名,但是如果实例中没有指定的属性,那么会获取类属性,而为实例中属性赋值时,通常会在实例中创建属性,根本不影响类 这种不对等处理方式对描述符也有影响,其实根据是否定义 __set__ 方法,描述符行为差异,我们需要几个类(下面的 print_args 是为了显示好看,cls_name 和 display 是辅助函数,这几个函数没必要研究): End of explanation """ obj = Managed() obj.over # get 方法,第二个参数是托管实例 obj Managed.over #第二个参数是 None obj.over = 7 # 触发描述符的 __set__ 方法,最后一个参数是 7 obj.over # 仍然触发描述符的 __get__ 方法 obj.__dict__['over'] = 8 # 直接通过 obj.__dict__ 属性赋值 vars(obj) #确认值在 obj.__dict__ 下 obj.over # 即使有名为 over 的实例属性,Managed.over 描述符仍然会覆盖读取 obj.over 操作 """ Explanation: 覆盖型描述符 实现 __set__ 方法的描述符属于覆盖型描述符,因为虽然描述符是类属性,但是实现 __set__ 方法的话,会覆盖对实例属性的赋值操作。特性也是覆盖型描述符,如果没提供设置函数,property 会抛出 AttributeError 异常,指明那个属性是只读的。我们可以用上面代码测试覆盖型描述符行为: End of 
explanation """ obj.over_no_get Managed.over_no_get obj.over_no_get = 7 obj.over_no_get obj.__dict__['over_no_get'] = 9 obj.over_no_get obj.over_no_get = 7 obj.over_no_get """ Explanation: 没有 __get__ 方法的覆盖型描述符 如果描述符只设置 __set__ 方法,那么只有写操作由描述符处理,通过实例读描述符会返回描述符对象本身,因为没有处理操作的 __get__ 方法。如果直接通过实例的 __dict__ 属性创建同名实例属性,以后再设置那个属性时,仍然会由 __set__ 方法插手接管,但是读取那个属性的话,就会直接从实例中返回新赋予的值,而不是返回描述符对象。也就是说,实例属性会遮盖描述符,不过只有读操作如此 End of explanation """ obj = Managed() obj.non_over obj.non_over = 7 obj.non_over Managed.non_over del obj.non_over obj.non_over """ Explanation: 非覆盖型描述符 没有实现 __set__ 方法的描述符是非覆盖型描述符。如果设置了同名的实例属性,描述符会被遮盖,致使描述符无法处理那个实例的属性。方法是以非覆盖型描述符实现的。 End of explanation """ obj = Managed() Managed.over = 1 # 覆盖了描述符 Managed.over_no_get = 2 Managed.non_over = 3 obj.over, obj.over_no_get, obj.non_over """ Explanation: 在上面例子中,我们为几个与描述符同名的实例属性赋了值,结果根据描述符有没有 __set__ 方法不同。依附在类上的描述符无法控制为类属性赋值的操作。其实,这意味着类属性赋值能覆盖描述符属性 再类中覆盖描述符 不管描述符是不是覆盖类型,为类属性赋值都能覆盖描述符,这是一种猴子补丁技术: End of explanation """ obj = Managed() obj.spam # 获取的是绑定方法对象 Managed.spam # 获取的是函数 obj.spam = 7 # 遮盖类属性,导致无法通过 obj.spam 访问 spam 方法 obj.spam """ Explanation: 上面揭示了读写属性的另一种不对等,读类属性的操作可以由依附在托管类上定义有 __get__ 方法的描述符处理,但是写类属性的操作不会由依附在托管类上定义有 __set__ 方法的描述符处理 若想控制类属性的操作,要把描述符依附到类上,即依附到元类上。默认情况,对用户定义的类来说,其元类是 type,而我们不能为 type 添加属性,不过在下一章,我们会自己创建元类 方法是描述符 在类中定义的函数属于绑定方法,因为用户定义的函数都有 __get__ 方法,所以依附到类上时,就相当于描述符。下面演示了从 Managed 类中读取 spam 方法 End of explanation """ import collections class Text(collections.UserString): def __repr__(self): return 'Text({!r})'.format(self.data) def reverse(self): return self[::-1] word = Text('forward') word word.reverse() Text.reverse(Text('backward')) # 在类上调用方法相当于调用函数 type(Text.reverse), type(word.reverse) # 类型不相同,一个 function,一个 method list(map(Text.reverse, ['repaid', (10, 20, 30), Text('stressed')])) # Text.reverse 相当于函数,甚至可以处理 Text 实例外其它对象 Text.reverse.__get__(word) # 函数都是非覆盖型描述符。在函数上调用 __get__ 方法传入实例,得到的是绑定到那个实例的方法 word.reverse # 其实会调用 Text.reverse.__get__(word) 方法,返回对应绑定方法。 word.reverse.__self__ # 
绑定放方法对象有个 __self__ 属性,其值是调用这个方法的实例引用 word.reverse.__func__ is Text.reverse # 绑定方法的 __func__ 是依附在托管类上的原始函数引用 """ Explanation: 函数没有实现 __set__ 方法,因此是非覆盖型描述符。 从上面能看出一个信息,obj.spam 和 Managed.spam 获取的是不同的对象,与描述符一样,通过托管类访问时,函数的 __get__ 方法会返回自身的引用。但是通过实例访问时,函数的 __get__ 方法返回的是绑定方法对象,一种可调用的对象,里面包装着函数,并把托管实例(如 obj)绑定给函数的第一个参数(即 self),这与 functools.partial 函数行为一致 End of explanation """
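The binding machinery described above can be sketched in a few self-contained lines. This is a minimal illustration (using a simplified Text based on str rather than UserString): the bound method that attribute access produces can also be manufactured by hand, either by calling the function's __get__ or with types.MethodType.

```python
import types

class Text(str):
    """Simplified stand-in for the Text class above, based on str."""
    def reverse(self):
        return self[::-1]

word = Text('forward')

# Attribute lookup through the instance invokes the function's __get__,
# which builds a bound method on the fly; we can do the same by hand:
manual = Text.reverse.__get__(word, Text)
assert manual() == word.reverse() == 'drawrof'

# types.MethodType performs the same binding explicitly:
bound = types.MethodType(Text.reverse, word)
assert bound.__self__ is word          # the instance the method is bound to
assert bound.__func__ is Text.reverse  # the original function on the class
assert bound() == 'drawrof'
```

Because functions are non-overriding descriptors, assigning word.reverse = 7 would shadow the method on that instance, exactly as obj.spam = 7 did above.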
ellisonbg/talk-2014
Visualization.ipynb
mit
from IPython.display import display, Image, HTML
from talktools import website, nbviewer
"""
Explanation: Plotting and visualization
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import numpy as np
"""
Explanation: One of the main use cases for this display architecture is plotting and visualization. In the last two years, there has been an explosion of plotting and visualization libraries in Python and other languages. That explosion has largely been fueled by visualization moving to the web (d3.js) in IPython and other similar environments. Giving a detailed and thorough overview of visualization in Python would require an entirely separate talk. The purpose here is to show a few of the visualization tools and their integration with the IPython Notebook.
matplotlib
The foundation for plotting and visualization in Python is matplotlib. While there are newer visualization libraries, almost all of them use matplotlib as a base layer. IPython has a long history of tight integration with matplotlib. Inline plotting in the Notebook is enabled using %matplotlib inline:
End of explanation
"""
# example data
mu = 100  # mean of distribution
sigma = 15  # standard deviation of distribution
x = mu + sigma * np.random.randn(10000)

num_bins = 50
# the histogram of the data
n, bins, patches = plt.hist(x, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)
plt.plot(bins, y, 'r--')
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title(r'Histogram of IQ: $\mu=100$, $\sigma=15$')

# Tweak spacing to prevent clipping of ylabel
plt.subplots_adjust(left=0.15)
"""
Explanation: Here is a simple plot from the matplotlib gallery:
End of explanation
"""
import mpld3
mpld3.enable_notebook()
"""
Explanation: mpld3
The d3.js JavaScript library offers a powerful approach for interactive visualization in modern web browsers. The mpld3 Python package adds d3-based rendering to matplotlib.
This provides interactivity (pan, zoom, hover, etc.) while maintaining the same matplotlib APIs. These interactive visualizations also display on http://nbviewer.ipython.org. End of explanation """ fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE')) N = 100 scatter = ax.scatter(np.random.normal(size=N), np.random.normal(size=N), c=np.random.random(size=N), s=1000 * np.random.random(size=N), alpha=0.3, cmap=plt.cm.jet) ax.grid(color='white', linestyle='solid') ax.set_title("Scatter Plot (with tooltips!)", size=20) labels = ['point {0}'.format(i + 1) for i in range(N)] tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels) mpld3.plugins.connect(fig, tooltip) """ Explanation: Here is an example of a 2d scatter plot with tooltips for each data point: End of explanation """ import vincent import pandas as pd import pandas.io.data as web import datetime all_data = {} date_start = datetime.datetime(2010, 1, 1) date_end = datetime.datetime(2014, 1, 1) for ticker in ['AAPL', 'IBM', 'YHOO', 'MSFT']: all_data[ticker] = web.DataReader(ticker, 'yahoo', date_start, date_end) price = pd.DataFrame({tic: data['Adj Close'] for tic, data in all_data.items()}) vincent.initialize_notebook() line = vincent.Line(price[['AAPL', 'IBM', 'YHOO', 'MSFT']], width=600, height=300) line.axis_titles(x='Date', y='Price') line.legend(title='Ticker') display(line) """ Explanation: Vincent Vincent is a visualization library that uses the Vega visualization grammar to build d3.js based visualizations in the Notebook and on http://nbviewer.ipython.org. Visualization objects in Vincent utilize IPython's display architecture with HTML and JavaScript representations. 
End of explanation """ import plotly py = plotly.plotly('IPython.Demo', '1fw3zw2o13') nr = np.random distributions = [nr.uniform, nr.normal , lambda size: nr.normal(0, 0.2, size=size), lambda size: nr.beta(a=0.5, b=0.5, size=size), lambda size: nr.beta(a=0.5, b=2, size=size)] names = ['Uniform(0,1)', 'Normal(0,1)', 'Normal(0, 0.2)', 'beta(a=0.5, b=0.5)', 'beta(a=0.5, b=2)'] boxes = [{'y': dist(size=50), 'type': 'box', 'boxpoints': 'all', 'jitter': 0.5, 'pointpos': -1.8, 'name': name} for dist, name in zip(distributions, names)] layout = {'title': 'A few distributions', 'showlegend': False, 'xaxis': {'ticks': '', 'showgrid': False, 'showline': False}, 'yaxis': {'zeroline': False, 'ticks': '', 'showline': False}, } py.iplot(boxes, layout = layout, filename='Distributions', fileopt='overwrite') """ Explanation: Plotly Plotly Analyze and Visualize Data, Together. Plotly is a web-based data analysis and plotting tool that has IPython integration and uses d3.js for its visualizations. It goes beyond plotting and enables the sharing of plots and analyses across a wide range of programming languages (Python, Matlab, R, Julia). End of explanation """
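mpld3, Vincent, and Plotly all hook into the same IPython rich-display protocol referred to above: when an object defines a _repr_html_ method, IPython publishes its return value as HTML in the notebook. A minimal sketch of that hook follows; the BarTable class and its markup are illustrative only, not part of any of these libraries.

```python
class BarTable:
    """Toy object that knows how to render itself as an HTML table."""
    def __init__(self, values):
        self.values = values

    def _repr_html_(self):
        # IPython checks for this method; its return value is sent to the
        # frontend with the text/html MIME type and rendered in the notebook.
        rows = ''.join('<tr><td>{}</td><td>{}</td></tr>'.format(k, v)
                       for k, v in self.values.items())
        return '<table>{}</table>'.format(rows)

bars = BarTable({'AAPL': 3, 'IBM': 5})
html = bars._repr_html_()
assert html == ('<table><tr><td>AAPL</td><td>3</td></tr>'
                '<tr><td>IBM</td><td>5</td></tr></table>')
```

In a notebook, display(bars) -- or simply evaluating bars as the last expression in a cell -- would render the table; outside IPython the object behaves like any other Python object.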
ES-DOC/esdoc-jupyterhub
notebooks/mpi-m/cmip6/models/sandbox-1/atmos.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'mpi-m', 'sandbox-1', 'atmos') """ Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: MPI-M Source ID: SANDBOX-1 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:17 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. 
Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) """ Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) """ Explanation: 6.2. 
Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 7. 
Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.4. 
Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --&gt; Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --&gt; Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --&gt; Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --&gt; Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --&gt; Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name for the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --&gt; Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --&gt; Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --&gt; Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --&gt; Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --&gt; Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --&gt; Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --&gt; Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) """ Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) """ Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) """ Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1.
Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.3. 
Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) """ Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 47.5.
Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 50.2. 
Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Solar constant transient characteristics (W m-2) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) """ Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation """
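All of the cells above follow the same two-step pattern: `DOC.set_id` selects a property, then one or more `DOC.set_value` calls record its value(s). As a rough illustration only — this is not the real pyesdoc `DOC` API, which also validates types, cardinalities, and enum choices — such a recorder could be sketched as:

```python
# Hypothetical sketch of a property recorder mimicking the DOC.set_id /
# DOC.set_value pattern used throughout this notebook.
class PropertyDoc(object):
    def __init__(self):
        self._current = None   # property selected by the last set_id call
        self.properties = {}   # property id -> list of recorded values

    def set_id(self, prop_id):
        # Select which property the following set_value calls will fill.
        self._current = prop_id
        self.properties.setdefault(prop_id, [])

    def set_value(self, value):
        # Record one value; ENUMs with cardinality 0.N / 1.N may take several.
        if self._current is None:
            raise RuntimeError("call set_id before set_value")
        self.properties[self._current].append(value)

doc = PropertyDoc()
doc.set_id('cmip6.atmos.cloud_scheme.processes')
doc.set_value('entrainment')
doc.set_value('detrainment')
```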
repo_name: datastax-demos/Muvr-Analytics
path: ipython-analysis/exercise-cnn.ipynb
license: bsd-3-clause
%matplotlib inline import logging logging.basicConfig(level=10) logger = logging.getLogger() import shutil from os import remove import cPickle as pkl from os.path import expanduser, exists """ Explanation: CNN Experiments on muvr data First we need to set up the environment and import all the necessary stuff. End of explanation """ # Dataset creation import numpy as np import math import random import csv from neon.datasets.dataset import Dataset class WorkoutDS(Dataset): # Number of features per example feature_count = None # Number of examples num_train_examples = None num_test_examples = None # Number of classes num_labels = None # Indicator if the data has been loaded yet initialized = False # Mapping of integer class labels to strings human_labels = {} def human_label_for(self, id): return self.human_labels[id] # Convert an integer representation to a one-hot-vector def as_one_hot(self, i, n): v = np.zeros(n) v[i] = 1 return v # Convert a one-hot-vector to an integer representation def as_int_rep(self, oh): return np.where(oh == 1)[0][0] # Loads a label mapping from file. The file should contain a CSV table mapping integer labels to human readable # labels. Integer class labels should start with 1 def load_label_mapping(self, filename): with open(expanduser(filename), 'rb') as csvfile: dialect = csv.Sniffer().sniff(csvfile.read(1024)) csvfile.seek(0) csv_data = csv.reader(csvfile, dialect) next(csv_data, None) # skip the headers label_mapping = {} for row in csv_data: # We need to offset the labels by one since counting starts at 0 in python... label_mapping[int(row[0]) - 1] = row[1] return label_mapping # Load examples from given CSV file. 
The dataset should already be split into test and train externally def load_examples(self, filename): with open(expanduser(filename), 'rb') as csvfile: dialect = csv.Sniffer().sniff(csvfile.read(1024)) csvfile.seek(0) csv_data = csv.reader(csvfile, dialect) next(csv_data, None) # skip the headers y = [] X = [] for row in csv_data: label = int(row[2]) - 1 y.append(self.as_one_hot(label, self.num_labels)) X.append(map(int, row[3:])) X = np.reshape(np.asarray(X, dtype = float), (len(X), len(X[0]))) y = np.reshape(np.asarray(y, dtype = float), (X.shape[0], self.num_labels)) return X,y # Load label mapping and train / test data from disk. def initialize(self): logger.info("Loading DS from files...") self.human_labels = self.load_label_mapping(expanduser('~/data/labeled_exercise_data_f400_LABELS.csv')) self.num_labels = len(self.human_labels) X_train, y_train = self.load_examples(expanduser('~/data/labeled_exercise_data_f400_TRAIN.csv')) X_test, y_test = self.load_examples(expanduser('~/data/labeled_exercise_data_f400_TEST.csv')) self.num_train_examples = X_train.shape[0] self.num_test_examples = X_test.shape[0] self.feature_count = X_train.shape[1] self.X_train = X_train self.y_train = y_train self.X_test = X_test self.y_test = y_test self.initialized = True # Get the dataset ready for Neon training def load(self, **kwargs): if not self.initialized: self.initialize() # Assign training and test datasets # INFO: This assumes the data is already shuffled! Make sure it is! 
self.inputs['train'] = self.X_train self.targets['train'] = self.y_train self.inputs['test'] = self.X_test self.targets['test'] = self.y_test self.format() dataset = WorkoutDS() dataset.initialize() print "Number of training examples:", dataset.num_train_examples print "Number of test examples:", dataset.num_test_examples print "Number of features:", dataset.feature_count print "Number of labels:", dataset.num_labels """ Explanation: This time we are not going to generate the data but rather use real world annotated training examples. End of explanation """ from ipy_table import * from operator import itemgetter import numpy as np train_dist = np.reshape(np.transpose(np.sum(dataset.y_train, axis=0)), (dataset.num_labels,1)) test_dist = np.reshape(np.transpose(np.sum(dataset.y_test, axis=0)), (dataset.num_labels,1)) train_ratio = train_dist / dataset.num_train_examples test_ratio = test_dist / dataset.num_test_examples # Fiddle around to get it into table shape table = np.hstack((np.zeros((dataset.num_labels,1), dtype=int), train_dist, train_ratio, test_dist, test_ratio)) table = np.vstack((np.zeros((1, 5), dtype=int), table)).tolist() human_labels = map(dataset.human_label_for, range(0,dataset.num_labels)) for i,s in enumerate(human_labels): table[i + 1][0] = s table.sort(lambda x,y: cmp(x[1], y[1])) table[0][0] = "" table[0][1] = "Train" table[0][2] = "Train %" table[0][3] = "Test" table[0][4] = "Test %" make_table(table) set_global_style(float_format='%0.0f', align="center") set_column_style(2, float_format='%0.2f%%') set_column_style(4, float_format='%0.2f%%') set_column_style(0, align="left") """ Explanation: At first we want to inspect the class distribution of the training and test examples. 
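The counts in the table are obtained by summing the one-hot target rows class by class; the same computation in isolation (with a tiny made-up target matrix standing in for `dataset.y_train`) looks like this:

```python
import numpy as np

# Toy one-hot target matrix standing in for dataset.y_train:
# 4 examples, 3 classes, exactly one 1 per row.
y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

# Summing over the example axis gives one count per class,
class_counts = y.sum(axis=0)              # array([1., 2., 1.])
# and dividing by the number of examples gives the class ratios.
class_ratios = class_counts / y.shape[0]  # array([0.25, 0.5, 0.25])
```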
End of explanation """ from matplotlib import pyplot, cm from pylab import * # Choose some random examples to plot from the training data number_of_examples_to_plot = 3 plot_ids = np.random.random_integers(0, dataset.num_train_examples - 1, number_of_examples_to_plot) print "Ids of plotted examples:", plot_ids # Retrieve a human readable label given the idx of an example def label_of_example(i): label_id = np.where(dataset.y_train[i] == 1)[0][0] return dataset.human_label_for(label_id) figure(figsize=(20,10)) ax1 = subplot(311) setp(ax1.get_xticklabels(), visible=False) ax1.set_ylabel('X - Acceleration') ax2 = subplot(312, sharex=ax1) setp(ax2.get_xticklabels(), visible=False) ax2.set_ylabel('Y - Acceleration') ax3 = subplot(313, sharex=ax1) ax3.set_ylabel('Z - Acceleration') for i in plot_ids: c = np.random.random((3,)) ax1.plot(range(0, dataset.feature_count / 3), dataset.X_train[i,0:400], '-o', c=c) ax2.plot(range(0, dataset.feature_count / 3), dataset.X_train[i,400:800], '-o', c=c) ax3.plot(range(0, dataset.feature_count / 3), dataset.X_train[i,800:1200], '-o', c=c) legend(map(label_of_example, plot_ids)) suptitle('Feature values for three randomly chosen training examples', fontsize=16) xlabel('Time') show() """ Explanation: Let's have a look at the data. We will plot some of the examples of the different classes. 
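Each example in this dataset is a flat vector of `feature_count = 1200` values: three accelerometer channels (x, y, z) of 400 samples each, concatenated back to back. The slicing used by the plot can be sketched as a small helper (the helper name is mine, not part of the notebook):

```python
import numpy as np

SAMPLES_PER_CHANNEL = 400  # = dataset.feature_count / 3 for this dataset

def split_channels(example):
    # Split a flat (1200,) example into its x, y and z channels.
    n = SAMPLES_PER_CHANNEL
    return example[0:n], example[n:2 * n], example[2 * n:3 * n]

# Toy example vector standing in for one row of dataset.X_train.
example = np.arange(3 * SAMPLES_PER_CHANNEL, dtype=float)
x, y, z = split_channels(example)
```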
End of explanation """ from neon.backends import gen_backend from neon.layers import * from neon.models import MLP from neon.transforms import RectLin, Tanh, Logistic, CrossEntropy from neon.experiments import FitPredictErrorExperiment from neon.params import val_init from neon.util.persist import serialize # General settings max_epochs = 75 epoch_step_size = 1 batch_size = 30 # max(10, min(100, dataset.num_train_examples/10)) random_seed = 42 # Take your lucky number # Storage director of the model and its snapshots file_path = expanduser('~/data/workout-cnn/workout-cnn.prm') #if exists(file_path): # remove(file_path) # Captured errors for the different epochs train_err = [] test_err = [] print 'Epochs: %d Batch-Size: %d' % (max_epochs, batch_size) # Generate layers and a MLP model using the given settings def model_gen(lrate, momentum_coef, num_epochs, batch_size): layers = [] lrule = { 'lr_params': { 'learning_rate': lrate, 'momentum_params': { 'coef': momentum_coef, 'type': 'constant' }}, 'type': 'gradient_descent_momentum' } weight_init = val_init.UniformValGen(low=-0.1, high=0.1) layers.append(DataLayer( nofm=3, ofmshape=[400,1], is_local=True )) layers.append(ConvLayer( name="cv_1", nofm=16, fshape = [5,1], stride = 1, lrule_init=lrule, weight_init= weight_init, activation=RectLin() )) layers.append(PoolingLayer( name="po_1", op="max", fshape=[2,1], stride=2, )) layers.append(ConvLayer( name="cv_2", nofm=32, fshape = [5,1], stride = 1, lrule_init=lrule, weight_init= weight_init, activation=RectLin() )) layers.append(PoolingLayer( name="po_2", op="max", fshape=[2,1], stride=2, )) layers.append(DropOutLayer( name="do_1", keep = 0.9 ) ) layers.append(FCLayer( name="fc_1", nout=100, lrule_init=lrule, weight_init=weight_init, activation=RectLin() )) layers.append(DropOutLayer( name="do_2", keep = 0.9 ) ) layers.append(FCLayer( name="fc_2", nout=dataset.num_labels, lrule_init=lrule, weight_init=weight_init, activation = Logistic() )) layers.append(CostLayer( 
        name='cost',
        ref_layer=layers[0],
        cost=CrossEntropy()
    ))
    model = MLP(num_epochs=num_epochs, batch_size=batch_size,
                layers=layers, serialized_path=file_path)
    return model

# Set logging output...
for name in ["neon.util.persist", "neon.datasets.dataset", "neon.models.mlp"]:
    dslogger = logging.getLogger(name)
    dslogger.setLevel(20)

print "Starting training..."
for num_epochs in range(26,max_epochs+1, epoch_step_size):
    if num_epochs > 230:
        lrate = 0.0000003
    elif num_epochs > 60:
        lrate = 0.00001
    elif num_epochs > 40:
        lrate = 0.00003
    else:
        lrate = 0.0001

    # set up the model and experiment
    model = model_gen(lrate = lrate,
                      momentum_coef = 0.9,
                      num_epochs = num_epochs,
                      batch_size = batch_size)

    # Uncomment line below to run on CPU backend
    backend = gen_backend(rng_seed=random_seed)
    # Uncomment line below to run on GPU using cudanet backend
    # backend = gen_backend(rng_seed=0, gpu='cudanet')

    experiment = FitPredictErrorExperiment(model=model,
                                           backend=backend,
                                           dataset=dataset)

    # Run the training, and dump weights
    dest_path = expanduser('~/data/workout-cnn/workout-ep' + str(num_epochs) + '.prm')
    if num_epochs > 0:
        res = experiment.run()
        train_err.append(res['train']['MisclassPercentage_TOP_1'])
        test_err.append(res['test']['MisclassPercentage_TOP_1'])
        # Save the weights at this epoch
        shutil.copy2(file_path, dest_path)
        print "Finished epoch " + str(num_epochs)
    else:
        model.epochs_complete = 0
        serialize(model.get_params(), dest_path)

print "Finished training!"
"""
Explanation: Now we are going to create a neon model: a simple convolutional network with two convolution/pooling stages, dropout, and two fully connected layers on top.
End of explanation """ import numpy as np import math from matplotlib import pyplot, cm from pylab import * from IPython.html import widgets from IPython.html.widgets import interact def closestSqrt(i): N = int(math.sqrt(i)) while True: M = int(i / N) if N * M == i: return N, M N -= 1 def plot_filters(**kwargs): n = kwargs['n'] layer_name = kwargs['layer'] dest_path = expanduser('~/data/workout-cnn/workout-ep' + str(n) + '.prm') params = pkl.load(open(dest_path, 'r')) wts = params[layer_name]['weights'] nrows, ncols = closestSqrt(wts.shape[0]) fr, fc = closestSqrt(wts.shape[1]) fi = 0 W = np.zeros((fr*nrows, fc*ncols)) for row, col in [(row, col) for row in range(nrows) for col in range(ncols)]: W[fr*row:fr*(row+1):,fc*col:fc*(col+1)] = wts[fi].reshape(fr,fc) fi = fi + 1 matshow(W, cmap=cm.gray) title('Visualizing weights of '+layer_name+' in epoch ' + str(n) ) show() layer_names = map(lambda l: l[1].name+"_"+str(l[0]), filter(lambda l: l[1].has_params, enumerate(model.layers))) _i = interact(plot_filters, layer=widgets.widget_selection.ToggleButtons(options = layer_names), n=widgets.IntSliderWidget(description='epochs', min=0, max=max_epochs, value=0, step=epoch_step_size)) print "Lowest test error: %0.2f%%" % np.min(test_err) print "Lowest train error: %0.2f%%" % np.min(train_err) pyplot.plot(range(epoch_step_size*26, max_epochs+1, epoch_step_size), train_err, linewidth=3, label='train') pyplot.plot(range(epoch_step_size*26, max_epochs+1, epoch_step_size), test_err, linewidth=3, label='test') pyplot.grid() pyplot.legend() pyplot.xlabel("epoch") pyplot.ylabel("error %") pyplot.show() """ Explanation: To check weather the network is learning something we will plot the weight matrices of the different training epochs. 
End of explanation """ from sklearn.metrics import confusion_matrix from ipy_table import * # confusion_matrix(y_true, y_pred) predicted, actual = model.predict_fullset(dataset, "test") y_pred = np.argmax(predicted.asnumpyarray(), axis = 0) y_true = np.argmax(actual.asnumpyarray(), axis = 0) confusion_mat = confusion_matrix(y_true, y_pred, range(0,dataset.num_labels)) # Fiddle around with cm to get it into table shape confusion_mat = vstack((np.zeros((1,dataset.num_labels), dtype=int), confusion_mat)) confusion_mat = hstack((np.zeros((dataset.num_labels + 1, 1), dtype=int), confusion_mat)) table = confusion_mat.tolist() human_labels = map(dataset.human_label_for, range(0,dataset.num_labels)) for i,s in enumerate(human_labels): table[0][i+1] = s table[i+1][0] = s table[0][0] = "actual \ predicted" mt = make_table(table) set_row_style(0, color='lightGray', rotate = "315deg") set_column_style(0, color='lightGray') set_global_style(align='center') for i in range(1, dataset.num_labels + 1): for j in range(1, dataset.num_labels + 1): if i == j: set_cell_style(i,j, color='lightGreen', width = 80) elif table[i][j] > 20: set_cell_style(i,j, color='Pink') elif table[i][j] > 0: set_cell_style(i,j, color='lightYellow') mt """ Explanation: Let's also have a look at the confusion matrix for the test dataset. End of explanation """
batfish/pybatfish
docs/source/notebooks/routingProtocols.ipynb
apache-2.0
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
"""
Explanation: Routing Protocol Sessions and Policies
This category of questions reveals information regarding which routing protocol sessions are compatibly configured and which ones are established. It also allows you to analyze BGP routing policies.
BGP Session Compatibility
BGP Session Status
BGP Edges
OSPF Session Compatibility
OSPF Edges
Test Route Policies
Search Route Policies
End of explanation
"""
result = bf.q.bgpSessionCompatibility().answer().frame()
"""
Explanation: BGP Session Compatibility
Returns the compatibility of configured BGP sessions.
Checks the settings of each configured BGP peering and reports any issue with those settings locally or incompatibility with its remote counterparts. Each row represents one configured BGP peering on a node and contains information about the session it is meant to establish. For dynamic peers, there is one row per compatible remote peer.
Statuses that indicate an independently misconfigured peering include NO_LOCAL_AS, NO_REMOTE_AS, NO_LOCAL_IP (for eBGP single-hop peerings), LOCAL_IP_UNKNOWN_STATICALLY (for iBGP or eBGP multi-hop peerings), NO_REMOTE_IP (for point-to-point peerings), and NO_REMOTE_PREFIX (for dynamic peerings). INVALID_LOCAL_IP indicates that the peering's configured local IP does not belong to any active interface on the node; UNKNOWN_REMOTE indicates that the configured remote IP is not present in the network.
A locally valid point-to-point peering is deemed HALF_OPEN if it has no compatible remote peers, UNIQUE_MATCH if it has exactly one compatible remote peer, or MULTIPLE_REMOTES if it has multiple compatible remote peers. A locally valid dynamic peering is deemed NO_MATCH_FOUND if it has no compatible remote peers, or DYNAMIC_MATCH if it has at least one compatible remote peer.
Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Include sessions whose first node matches this specifier. | NodeSpec | True | remoteNodes | Include sessions whose second node matches this specifier. | NodeSpec | True | status | Only include sessions for which compatibility status matches this specifier. | BgpSessionCompatStatusSpec | True | type | Only include sessions that match this specifier. | BgpSessionTypeSpec | True | Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Node | The node where this session is configured | str VRF | The VRF in which this session is configured | str Local_AS | The local AS of the session | int Local_Interface | Local interface of the session | Interface Local_IP | The local IP of the session | str Remote_AS | The remote AS or list of ASes of the session | str Remote_Node | Remote node for this session | str Remote_Interface | Remote interface for this session | Interface Remote_IP | Remote IP or prefix for this session | str Address_Families | Address Families participating in this session | Set of str Session_Type | The type of this session | str Configured_Status | Configured status | str Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Print the first row of the returned Dataframe End of explanation """ result = bf.q.bgpSessionStatus().answer().frame() """ Explanation: BGP Session Status Returns the dynamic status of configured BGP sessions. Checks whether configured BGP peerings can be established. Each row represents one configured BGP peering and contains information about the session it is configured to establish. For dynamic peerings, one row is shown per compatible remote peer. Possible statuses for each session are NOT_COMPATIBLE, ESTABLISHED, and NOT_ESTABLISHED. 
NOT_COMPATIBLE sessions are those where one or both peers are misconfigured; the BgpSessionCompatibility question provides further insight into the nature of the configuration error. NOT_ESTABLISHED sessions are those that are configured compatibly but will not come up because peers cannot reach each other (e.g., due to being blocked by an ACL). ESTABLISHED sessions are those that are compatible and are expected to come up. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Include sessions whose first node matches this specifier. | NodeSpec | True | remoteNodes | Include sessions whose second node matches this specifier. | NodeSpec | True | status | Only include sessions for which status matches this specifier. | BgpSessionStatusSpec | True | type | Only include sessions that match this specifier. | BgpSessionTypeSpec | True | Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Node | The node where this session is configured | str VRF | The VRF in which this session is configured | str Local_AS | The local AS of the session | int Local_Interface | Local interface of the session | Interface Local_IP | The local IP of the session | str Remote_AS | The remote AS or list of ASes of the session | str Remote_Node | Remote node for this session | str Remote_Interface | Remote interface for this session | Interface Remote_IP | Remote IP or prefix for this session | str Address_Families | Address Families participating in this session | Set of str Session_Type | The type of this session | str Established_Status | Established status | str Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Print the first row of the returned Dataframe End of explanation """ result = bf.q.bgpEdges().answer().frame() """ Explanation: BGP Edges Returns BGP 
adjacencies. Lists all BGP adjacencies in the network. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Include adjacencies whose first node matches this name or regex. | NodeSpec | True | . remoteNodes | Include adjacencies whose second node matches this name or regex. | NodeSpec | True | . Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Node | Node from which the edge originates | str IP | IP at the side of originator | str Interface | Interface at which the edge originates | str AS_Number | AS Number at the side of originator | str Remote_Node | Node at which the edge terminates | str Remote_IP | IP at the side of the responder | str Remote_Interface | Interface at which the edge terminates | str Remote_AS_Number | AS Number at the side of responder | str Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Print the first row of the returned Dataframe End of explanation """ result = bf.q.ospfSessionCompatibility().answer().frame() """ Explanation: OSPF Session Compatibility Returns compatible OSPF sessions. Returns compatible OSPF sessions in the network. A session is compatible if the interfaces involved are not shutdown and do run OSPF, are not OSPF passive and are associated with the same OSPF area. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Include nodes matching this name or regex. | NodeSpec | True | remoteNodes | Include remote nodes matching this name or regex. | NodeSpec | True | statuses | Only include sessions matching this status specifier. 
| OspfSessionStatusSpec | True | Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Interface | Interface | Interface VRF | VRF | str IP | Ip | str Area | Area | int Remote_Interface | Remote Interface | Interface Remote_VRF | Remote VRF | str Remote_IP | Remote IP | str Remote_Area | Remote Area | int Session_Status | Status of the OSPF session | str Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Print the first row of the returned Dataframe End of explanation """ result = bf.q.ospfEdges().answer().frame() """ Explanation: OSPF Edges Returns OSPF adjacencies. Lists all OSPF adjacencies in the network. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Include adjacencies whose first node matches this name or regex. | NodeSpec | True | . remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | . Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Interface | Interface from which the edge originates | Interface Remote_Interface | Interface at which the edge terminates | Interface Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Print the first row of the returned Dataframe End of explanation """ result = bf.q.testRoutePolicies(policies='/as1_to_/', direction='in', inputRoutes=list([BgpRoute(network='10.0.0.0/24', originatorIp='4.4.4.4', originType='egp', protocol='bgp', asPath=[[64512, 64513], [64514]], communities=['64512:42', '64513:21'])])).answer().frame() """ Explanation: Test Route Policies Evaluates the processing of a route by a given policy. 
Find how the specified route is processed through the specified routing policies. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Only examine filters on nodes matching this specifier. | NodeSpec | True | policies | Only consider policies that match this specifier. | RoutingPolicySpec | True | inputRoutes | The BGP route announcements to test the policy on. | List of BgpRoute | False | direction | The direction of the route, with respect to the device (IN/OUT). | str | False | Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Node | The node that has the policy | str Policy_Name | The name of this policy | str Input_Route | The input route | BgpRoute Action | The action of the policy on the input route | str Output_Route | The output route, if any | BgpRoute Difference | The difference between the input and output routes, if any | BgpRouteDiffs Trace | Route policy trace that shows which clauses/terms matched the input route. If the trace is empty, either nothing matched or tracing is not yet been implemented for this policy type. This is an experimental feature whose content and format is subject to change. | List of TraceTree Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Print the first row of the returned Dataframe End of explanation """ result = bf.q.searchRoutePolicies(nodes='/^as1/', policies='/as1_to_/', inputConstraints=BgpRouteConstraints(prefix=["10.0.0.0/8:8-32", "172.16.0.0/28:28-32", "192.168.0.0/16:16-32"]), action='permit').answer().frame() """ Explanation: Search Route Policies Finds route announcements for which a route policy has a particular behavior. This question finds route announcements for which a route policy has a particular behavior. 
The behaviors can be: that the policy permits the route (permit) or that it denies the route (deny). Constraints can be imposed on the input route announcements of interest and, in the case of a permit action, also on the output route announcements of interest. Route policies are selected using node and policy specifiers, which might match multiple policies. In this case, a (possibly different) answer will be found for each policy. Note: This question currently does not support all of the route policy features that Batfish supports. The question only supports common forms of matching on prefixes, communities, and AS-paths, as well as common forms of setting communities, the local preference, and the metric. The question logs all unsupported features that it encounters as warnings. Due to unsupported features, it is possible for the question to return no answers even for route policies that can in fact exhibit the specified behavior. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- nodes | Only examine policies on nodes matching this specifier. | NodeSpec | True | policies | Only consider policies that match this specifier. | RoutingPolicySpec | True | inputConstraints | Constraints on the set of input BGP route announcements to consider. | BgpRouteConstraints | True | action | The behavior to be evaluated. Specify exactly one of permit or deny. | str | True | outputConstraints | Constraints on the set of output BGP route announcements to consider. 
| BgpRouteConstraints | True | Invocation End of explanation """ result.head(5) """ Explanation: Return Value Name | Description | Type --- | --- | --- Node | The node that has the policy | str Policy_Name | The name of this policy | str Input_Route | The input route | BgpRoute Action | The action of the policy on the input route | str Output_Route | The output route, if any | BgpRoute Difference | The difference between the input and output routes, if any | BgpRouteDiffs Trace | Route policy trace that shows which clauses/terms matched the input route. If the trace is empty, either nothing matched or tracing is not yet been implemented for this policy type. This is an experimental feature whose content and format is subject to change. | List of TraceTree Print the first 5 rows of the returned Dataframe End of explanation """ result.iloc[0] """ Explanation: Print the first row of the returned Dataframe End of explanation """
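All of these questions return ordinary pandas DataFrames from .answer().frame(), so standard pandas operations apply for post-processing. The sketch below uses a hand-built stand-in frame (the node names are invented; the column names come from the Return Value tables above):

```python
import pandas as pd

# Stand-in for bf.q.bgpSessionStatus().answer().frame(); a real answer
# frame has the same columns listed in the Return Value table.
frame = pd.DataFrame({
    'Node': ['as1border1', 'as1border2', 'as2core1'],
    'Remote_Node': ['as2core1', 'as2core1', 'as1border1'],
    'Established_Status': ['ESTABLISHED', 'NOT_ESTABLISHED', 'ESTABLISHED'],
})

# Count peerings per status and pull out the sessions that did not come up.
counts = frame['Established_Status'].value_counts().to_dict()
broken = frame[frame['Established_Status'] != 'ESTABLISHED']
```

The same pattern (boolean filtering plus value_counts) works for any of the session-status and edge frames shown in this notebook.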
mne-tools/mne-tools.github.io
0.21/_downloads/f01121873dbae065a1740e6c0c20d1d5/plot_eeg_no_mri.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr> # Joan Massich <mailsik@gmail.com> # # License: BSD Style. import os.path as op import mne from mne.datasets import eegbci from mne.datasets import fetch_fsaverage # Download fsaverage files fs_dir = fetch_fsaverage(verbose=True) subjects_dir = op.dirname(fs_dir) # The files live in: subject = 'fsaverage' trans = 'fsaverage' # MNE has a built-in fsaverage transformation src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif') bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif') """ Explanation: EEG forward operator with a template MRI This tutorial explains how to compute the forward operator from EEG data using the standard template MRI subject fsaverage. .. caution:: Source reconstruction without an individual T1 MRI from the subject will be less accurate. Do not over interpret activity locations which can be off by multiple centimeters. :depth: 2 End of explanation """ raw_fname, = eegbci.load_data(subject=1, runs=[6]) raw = mne.io.read_raw_edf(raw_fname, preload=True) # Clean channel names to be able to use a standard 1005 montage new_names = dict( (ch_name, ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp')) for ch_name in raw.ch_names) raw.rename_channels(new_names) # Read and set the EEG electrode locations montage = mne.channels.make_standard_montage('standard_1005') raw.set_montage(montage) raw.set_eeg_reference(projection=True) # needed for inverse modeling # Check that the locations of EEG electrodes is correct with respect to MRI mne.viz.plot_alignment( raw.info, src=src, eeg=['original', 'projected'], trans=trans, show_axes=True, mri_fiducials=True, dig='fiducials') """ Explanation: Load the data We use here EEG data from the BCI dataset. 
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages available in MNE-Python.</p></div> End of explanation """ fwd = mne.make_forward_solution(raw.info, trans=trans, src=src, bem=bem, eeg=True, mindist=5.0, n_jobs=1) print(fwd) # for illustration purposes use fwd to compute the sensitivity map eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed') eeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir, clim=dict(lims=[5, 50, 100])) """ Explanation: Setup source space and compute forward End of explanation """
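For intuition about what the sensitivity map computes in 'fixed' mode: it is essentially the per-source norm of the gain-matrix columns, rescaled. A standalone NumPy sketch of that idea on a random stand-in gain matrix (not the real fwd object, whose gain lives in fwd['sol']['data']):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 1000
# Random stand-in for a forward (gain) matrix: sensors x sources
gain = rng.standard_normal((n_sensors, n_sources))

# Sensitivity per source: column norm, normalized so the best-seen
# source has sensitivity 1. Sources with small norms are ones the
# sensor array can barely see.
sens = np.linalg.norm(gain, axis=0)
sens /= sens.max()
```

This is only a sketch of the fixed-orientation case; mne.sensitivity_map handles orientation modes and projections properly.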
danielwe/gridcell
example.ipynb
apache-2.0
%matplotlib inline """ Explanation: Example usage of the gridcell package End of explanation """ # Select data source datafile = '../../data/FlekkenBen/data.mat' # Load raw data from file from scipy import io raw_data = io.loadmat(datafile, squeeze_me=True) #print(raw_data) # Create sessions dict from the data from gridcell import transform positions = [pos.T for pos in raw_data['allpos']] spike_times = raw_data['allst'] data = transform.sessions(positions, spike_times) # Transform axes tight_range = ((-74.5, 74.5), (-74.5, 74.5)) data = transform.transform_sessions(data, global_=False, range_=tight_range, translate=True, rotate=True) """ Explanation: Importing from files We will use recorded data stored in the file ../data/FlekkenBen.mat. We have to manually work through this file and present the data it contains in a way that the gridcell package can understand. To this end, functions from the transform module may come in handy, both for formatting the data and transforming it to axes we want to use. End of explanation """ # Define the binning of the experimental environment bins = (50, 50) range_ = ((-75.0, 75.0), (-75.0, 75.0)) # Set filter parameters (use the same length unit as range_, # and the same time unit as in the raw data) speed_window = 0.5 min_speed = 5.0 position_kw = dict(speed_window=speed_window, min_speed=min_speed) bandwidth = 3.3 threshold = 0.2 # Only a default cell_kw = dict(bandwidth=bandwidth, threshold=threshold) # Instantiate CellCollection from gridcell import CellCollection cells = CellCollection.from_multiple_sessions( data, bins, range_, position_kw=position_kw, cell_kw=cell_kw) print("Number of cells: {}".format(len(cells))) """ Explanation: Setting up the CellCollection The representation of the data provided by data is just a temporary interface. The functionality of the package is provided mainly through a class Cell representing the cells, and a container class CellCollection representing several cells. 
The standardized dataset representation from transform.session can be used to initialize an instance of CellCollection, creating Cell instances for each cell in the process. End of explanation """ # To improve on the matplotlib aesthetics, we import the seaborn # library and choose some nice colormaps import seaborn seaborn.set(rc={'figure.facecolor': '.98', 'legend.frameon': True}) ratecmap = 'YlGnBu_r' corrcmap = 'RdBu_r' """ Explanation: Note that the CellCollection.from_multiple_sessions constructor takes a number of arguments affecting different aspects of the analysis. See the documentation for details. Plotting and iterating the parameters End of explanation """ # Select a cell to have a closer look at cell = cells[109] """ Explanation: Now, lets take a look at what we just created. The CellCollection instance can be accessed (and modified) like a list. End of explanation """ # Create a square patch representing the experimental environment from matplotlib import patches xmin, xmax = range_[0] ymin, ymax = range_[1] dx, dy = xmax - xmin, ymax - ymin box = patches.Rectangle((xmin, ymin), dx, dy, fill=False, label="Box") # Plot the path and spikes with seaborn.axes_style('ticks'): path = cell.position.plot_path(label='Path')[0] axes = path.axes cell.plot_spikes(axes=axes, alpha=0.2, label='Spikes') axes.add_patch(box) axes.set(xlim=[xmin - 0.05 * dx, xmax + 0.55 * dx], ylim=[ymin - 0.05 * dy, ymax + 0.05 * dy], xticks=[xmin, xmax], yticks=[xmin, xmax]) axes.legend(loc=5) seaborn.despine(offset=0, trim=True) """ Explanation: Let's begin by plotting the raw data -- the path of the rat, with the spike locations of this cell superimposed. End of explanation """ cell.plot_ratemap(cmap=ratecmap) """ Explanation: That looks promising. Let's plot the firing rate map. This map has been passed through a smoothing filter with filter size given by the parameter filter_size in the CellCollection instantiation. 
End of explanation """ cell.plot_acorr(cmap=corrcmap) """ Explanation: This definitely looks like a grid cell, with a firing fields spread out in a nice pattern. However, the difference in firing field strength is substantial. Let's see how the autocorrelogram looks. End of explanation """ cell.plot_acorr(cmap=corrcmap, threshold=True) """ Explanation: Pretty nice. But how does the default threshold work with those weak peaks? End of explanation """ cell.plot_acorr(cmap=corrcmap, threshold=0.12) """ Explanation: Two of the peaks are to low for this threshold. Let's find out what the threshold for this cell should be, assuming as a rule that the threshold should be as close as possible to the default value (0.20), while allowing all the six inner peaks to be identified and separated from each other and background noise, with at least four pixels per peak above the threshold. End of explanation """ cell.params['threshold'] = 0.12 """ Explanation: That's it! We had to go all the way down to 0.12 to get the required four pixels per peak. 
Let's update the 'threshold' parameter of the cell to reflect this
End of explanation
"""
cell.params['threshold'] = 0.12
"""
Explanation: We should check that the problem has been fixed:
End of explanation
"""
cell.plot_acorr(cmap=corrcmap, threshold=True, grid_peaks=True, grid_ellipse=True)
"""
Explanation: Notice how the detected peak centers, and the ellipse fitted through them, were added using the keywords grid_peaks and grid_ellipse. These keywords are provided for convenience, and use hardcoded defaults for the appearance of the peaks and ellipse. For more fine-grained control, use the plot_grid_peaks and plot_grid_ellipse methods of the Cell instance instead.
End of explanation """ # Find modules among the cells # The grid scale is weighted a little more than the other features # when clustering feat_kw = dict(weights={'logscale': 2.1}) k_means_kw = dict(n_clusters=4, n_runs=10, feat_kw=feat_kw) # We expect 4 modules from gridcell import Module labels = cells.k_means(**k_means_kw) modules, outliers = Module.from_labels(cells, labels) modules.sort(key=lambda mod: mod.template().scale()) """ Explanation: Clustering and modules The next step is to try to cluster the cells into modules. There are several clustering algorithms available for this purpose. Here, we use the K-means algorithm, implemented using the k_means function from scikit-learn. We anticipate 4 modules. End of explanation """ for (i, mod) in enumerate(modules): line = mod.plot_features(('scale',), label="Module {}".format(i + 1))[0] axes = line.axes axes.set_ylim(bottom=0.0) axes.legend(loc=0) for (i, mod) in enumerate(modules): line = mod.plot_ellpars(label="Module {}".format(i + 1))[0] axes = line.axes axes.legend(loc=0) """ Explanation: All clustering methods have a common return signature: modules, outliers. The variable modules is a list containing a Module instance for each of the detected modules. Module is a subclass of CellCollection, implementing some extra module-specific functionality for analyzing the phases of the cells in the module. The variable outliers is a CellCollection instance containing the cells that were not assigned to any module. When using the K-means algorithm, all cells are assigned to a module, so outliers is empty. Let's take a look at the clustering by plotting the scales, orientation angles and ellipse parameters of the cells in each module next to each other. End of explanation """
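The feat_kw weights passed to k_means above scale individual features before clustering, so a heavier weight (here 2.1 on 'logscale') makes that feature dominate the distance metric. The idea in isolation, as a plain NumPy sketch with invented feature values (illustrative only, not the gridcell API):

```python
import numpy as np

def weighted_features(features, weights):
    # Multiply each feature column by its weight (default 1.0) and
    # stack the columns into the matrix a clustering algorithm sees.
    cols = [np.asarray(col, dtype=float) * weights.get(name, 1.0)
            for name, col in features.items()]
    return np.column_stack(cols)

feats = {'logscale': [0.0, 0.1, 1.0, 1.1],  # invented values
         'angle': [0.2, 0.3, 0.2, 0.3]}
X = weighted_features(feats, {'logscale': 2.1})
```

After this scaling, Euclidean distances along the 'logscale' axis are stretched by the weight, which is what biases K-means toward splitting clusters by grid scale.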
ES-DOC/esdoc-jupyterhub
notebooks/nims-kma/cmip6/models/sandbox-1/atmoschem.ipynb
gpl-3.0
# DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-1', 'atmoschem') """ Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: NIMS-KMA Source ID: SANDBOX-1 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:28 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation """ # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) """ Explanation: Document Authors Set document authors End of explanation """ # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) """ Explanation: Document Contributors Specify document contributors End of explanation """ # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) """ Explanation: Document Publication Specify document publication status End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. 
Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposphere" # "stratosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 1.5.
Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2. Key Properties --&gt; Software Properties Software properties of atmospheric chemistry code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component.
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.3.
Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmospheric chemistry grid End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1.
Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution.
End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric chemistry? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry emissions End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. 
Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 13. Gas Phase Chemistry Atmospheric gas phase chemistry 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of gas phase atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) """ Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry stratospheric heterogeneous chemistry 14.1.
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) """ Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) """ Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation included in the stratospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) """ Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) """ Explanation: 15.6. 
Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation included in the tropospheric heterogeneous chemistry scheme or not? End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) """ Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) """ Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation """ # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) """ Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g.
whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation """
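All of the cells above follow the same fill-in pattern: set_id selects a CMIP6 property, then set_value records the model's value for it. A toy stand-in for the ES-DOC DOC object makes the pattern concrete (purely illustrative — the real object comes from the ES-DOC notebook scaffolding, and the value 10 below is made up, not from any real model):

```python
class Doc:
    """Minimal mock of the ES-DOC `DOC` object, for illustration only."""
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, property_id):
        # Select which CMIP6 property the next set_value() calls refer to.
        self._current = property_id

    def set_value(self, value):
        # Cardinality 1.N properties may call set_value repeatedly, so append.
        self.values.setdefault(self._current, []).append(value)

DOC = Doc()
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
DOC.set_value(10)  # hypothetical value, for illustration only
print(DOC.values)
```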
zzsza/Datascience_School
10. 기초 확률론3 - 확률 분포 모형/03. 이항 확률 분포 (파이썬 버전).ipynb
mit
N = 10 theta = 0.6 rv = sp.stats.binom(N, theta) rv """ Explanation: The binomial probability distribution Consider performing $N$ Bernoulli trials, each with success probability $\theta$. In the luckiest case all $N$ trials succeed, and in the unluckiest case none of them do. If we let the random variable $X$ be the number of successes among the $N$ trials, then $X$ takes one of the integer values from 0 to $N$. Such a random variable is said to follow a binomial distribution, written as follows. $$ X \sim \text{Bin}(x;N,\theta) $$ Let us describe the binomial distribution as a formula. Assume a random variable $Y$ that follows a Bernoulli distribution, taking the value 0 or 1. $$ Y \sim \text{Bern}(y;\theta) $$ Let $y_1, y_2, \cdots, y_N$ be $N$ samples of this random variable. Since every sample is either 0 (failure) or 1 (success), the number of successes among the $N$ trials is the sum of the $N$ sample values. $$ X = \sum_{i=1}^N y_i $$ Written as a formula, the binomial distribution is as follows. $$ \text{Bin}(x;N,\theta) = \binom N x \theta^x(1-\theta)^{N-x} $$ In this formula the $()$ symbol and the $!$ symbol denote the combination and the factorial, respectively, defined as $$ \binom N x =\dfrac{N!}{x!(N-x)!} $$ $$ N! = N\cdot (N-1) \cdots 2 \cdot 1 $$ Simulating the binomial distribution with SciPy The binom class in SciPy's stats subpackage implements the binomial distribution. Its parameters are set with the n and p arguments. End of explanation """ xx = np.arange(N + 1) plt.bar(xx, rv.pmf(xx), align="center") plt.ylabel("P(x)") plt.title("pmf of binomial distribution") plt.show() """ Explanation: The pmf method computes the probability mass function (pmf). End of explanation """ np.random.seed(0) x = rv.rvs(100) x sns.countplot(x) plt.show() """ Explanation: To run a simulation, use the rvs method. End of explanation """ y = np.bincount(x, minlength=N+1)/float(len(x)) df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack() df = df.reset_index() df.columns = ["value", "type", "ratio"] df.pivot("value", "type", "ratio") sns.barplot(x="value", y="ratio", hue="type", data=df) plt.show() """ Explanation: To display the theoretical distribution and the sample distribution together, use the following code. End of explanation """
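As a quick cross-check (a standalone sketch, not part of the original notebook), the pmf formula above can be evaluated directly in pure Python; for N = 10 and theta = 0.6 it reproduces the values that rv.pmf computes:

```python
from math import comb

def binom_pmf(x, N, theta):
    """P(X = x) for X ~ Bin(N, theta), evaluated straight from the formula
    C(N, x) * theta**x * (1 - theta)**(N - x)."""
    return comb(N, x) * theta**x * (1 - theta)**(N - x)

# Same parameters as the notebook: N = 10, theta = 0.6
pmf = [binom_pmf(x, 10, 0.6) for x in range(11)]
print(round(pmf[6], 6))   # mode near N*theta = 6 -> 0.250823
print(round(sum(pmf), 6)) # the probabilities sum to 1 -> 1.0
```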
cochoa0x1/integer-programming-with-python
05-routes-and-schedules/traveling_salesman2_vehicle_routing.ipynb
mit
from pulp import * import numpy as np import matplotlib.pyplot as plt %matplotlib inline import seaborn as sn """ Explanation: Multiple Traveling Salesman and the Problem of routing vehicles Imagine that, instead of one salesman traveling to all the sites, the workload is shared among many salesmen. This generalization of the traveling salesman problem is called the multiple traveling salesman problem or mTSP. In much of the literature it is studied under the name of Vehicle Routing Problem or VRP, but it is equivalent. The problem goes back to the early 1960s, when it was applied to oil delivery issues [1]. This is another NP-hard problem, so for large numbers of locations a solution might take a long time to find. We can solve it for small instances with Pulp though. [1] : https://andresjaquep.files.wordpress.com/2008/10/2627477-clasico-dantzig.pdf End of explanation """ #a handful of sites sites = ['org','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P'] print(len(sites)-1) #make some positions (so we can plot this) positions = dict( ( a, (np.random.rand()-.5, np.random.rand()-.5)) for a in sites) positions['org']=(0,0) for s in positions: p = positions[s] plt.plot(p[0],p[1],'o') plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center') plt.gca().axis('off'); #straight line distance for simplicity d = lambda p1,p2: np.sqrt( (p1[0]-p2[0])**2+ (p1[1]-p2[1])**2) #calculate all the pairs distances=dict( ((s1,s2), d(positions[s1],positions[s2])) for s1 in positions for s2 in positions if s1!=s2) """ Explanation: 1.
First let's make some fake data End of explanation """ K = 4 #the number of sales people #create the problem prob=LpProblem("vehicle",LpMinimize) #indicator variable if site i is connected to site j in the tour x = LpVariable.dicts('x',distances, 0,1,LpBinary) #dummy vars to eliminate subtours u = LpVariable.dicts('u', sites, 0, len(sites)-1, LpInteger) #the objective cost = lpSum([x[(i,j)]*distances[(i,j)] for (i,j) in distances]) prob+=cost #constraints for k in sites: cap = 1 if k != 'org' else K #inbound connection prob+= lpSum([ x[(i,k)] for i in sites if (i,k) in x]) ==cap #outbound connection prob+=lpSum([ x[(k,i)] for i in sites if (k,i) in x]) ==cap #subtour elimination N=len(sites)/K for i in sites: for j in sites: if i != j and (i != 'org' and j!= 'org') and (i,j) in x: prob += u[i] - u[j] <= (N)*(1-x[(i,j)]) - 1 """ Explanation: 2. The model With a few modifications, the original traveling salesman problem can support multiple salesmen. Each facility is still visited exactly once, except the origin, which is now visited multiple times. If we have two salesmen then the origin is visited exactly twice, and so on. For $K$ vehicles or salespeople, with $N$ sites in total: Variables: indicators: $$x_{i,j} = \begin{cases} 1, & \text{if site i comes exactly before j in the tour} \\ 0, & \text{otherwise} \end{cases} $$ order dummy variables: $$u_{i} : \text{order site i is visited}$$ Minimize: $$\sum_{i,j \space i \neq j} x_{i,j} Distance(i,j)$$ Subject to: $$\sum_{i \neq j} x_{i,j} = 1 \space \forall j \text{ except the origin}$$ $$\sum_{i \neq origin} x_{i,origin} = K$$ $$\sum_{j \neq i} x_{i,j} = 1 \space \forall i \text{ except the origin}$$ $$\sum_{j \neq origin} x_{origin,j} = K$$ $$u_{i}-u_{j} \leq (N / K)(1-x_{i,j}) - 1 \ \forall i,j \text{ except origins}$$ End of explanation """ %time prob.solve() #prob.solve(GLPK_CMD(options=['--simplex'])) print(LpStatus[prob.status]) """ Explanation: Solve it!
End of explanation """ non_zero_edges = [ e for e in x if value(x[e]) != 0 ] def get_next_site(parent): '''helper function to get the next edge''' edges = [e for e in non_zero_edges if e[0]==parent] for e in edges: non_zero_edges.remove(e) return edges tours = get_next_site('org') tours = [ [e] for e in tours ] for t in tours: while t[-1][1] !='org': t.append(get_next_site(t[-1][1])[-1]) """ Explanation: And the result: End of explanation """ for t in tours: print(' -> '.join([ a for a,b in t]+['org'])) #draw the tours colors = [np.random.rand(3) for i in range(len(tours))] for t,c in zip(tours,colors): for a,b in t: p1,p2 = positions[a], positions[b] plt.plot([p1[0],p2[0]],[p1[1],p2[1]], color=c) #draw the map again for s in positions: p = positions[s] plt.plot(p[0],p[1],'o') plt.text(p[0]+.01,p[1],s,horizontalalignment='left',verticalalignment='center') plt.gca().axis('off'); print(value(prob.objective)) """ Explanation: The optimal tours: End of explanation """
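To make the degree constraints of the model concrete, here is a tiny framework-free sketch (hand-built toy data, not taken from the notebook): with K vehicles, the origin must have exactly K inbound and K outbound selected edges, while every other site has exactly one of each.

```python
K = 2
sites = ['org', 'A', 'B', 'C', 'D']
# Two hand-built tours: org -> A -> B -> org and org -> C -> D -> org
chosen = {('org', 'A'), ('A', 'B'), ('B', 'org'),
          ('org', 'C'), ('C', 'D'), ('D', 'org')}

def degree(site):
    """(inbound, outbound) count of selected edges at a site."""
    inbound = sum(1 for i, j in chosen if j == site)
    outbound = sum(1 for i, j in chosen if i == site)
    return inbound, outbound

print(degree('org'))                             # (2, 2): K in, K out
print([degree(s) for s in sites if s != 'org'])  # every other site is (1, 1)
```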
mumuwoyou/vnpy-master
sonnet/contrib/examples/CartPole_policy_gradient.ipynb
mit
import gym import numpy as np, pandas as pd import matplotlib.pyplot as plt %matplotlib inline env = gym.make("CartPole-v0") #gym compatibility: unwrap TimeLimit if hasattr(env,'env'): env=env.env env.reset() n_actions = env.action_space.n state_dim = env.observation_space.shape plt.imshow(env.render("rgb_array")) """ Explanation: REINFORCE in Sonnet This notebook implements a basic reinforce algorithm a.k.a. policy gradient for CartPole env. It has been deliberately written to be as simple and human-readable. Authors: Practical_RL course team The notebook assumes that you have openai gym installed. In case you're running on a server, use xvfb End of explanation """ import tensorflow as tf import sonnet as snt #create input variables. We only need <s,a,R> for REINFORCE states = tf.placeholder('float32',(None,)+state_dim,name="states") actions = tf.placeholder('int32',name="action_ids") cumulative_rewards = tf.placeholder('float32', name="cumulative_returns") def make_network(inputs): lin1 = snt.Linear(output_size=100)(inputs) elu1 = tf.nn.elu(lin1) logits = snt.Linear(output_size=n_actions)(elu1) policy = tf.nn.softmax(logits) log_policy = tf.nn.log_softmax(logits) return logits, policy, log_policy net = snt.Module(make_network,name="policy_network") logits,policy,log_policy = net(states) #utility function to pick action in one given state get_action_proba = lambda s: policy.eval({states:[s]})[0] """ Explanation: Building the network for REINFORCE For REINFORCE algorithm, we'll need a model that predicts action probabilities given states. End of explanation """ #REINFORCE objective function actions_1hot = tf.one_hot(actions,n_actions) log_pi_a = -tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=actions_1hot) J = tf.reduce_mean(log_pi_a * cumulative_rewards) #regularize with entropy entropy = -tf.reduce_mean(policy*log_policy) #all network weights all_weights = net.get_variables() #weight updates. 
maximizing J is the same as minimizing -J loss = -J -0.1*entropy update = tf.train.AdamOptimizer().minimize(loss,var_list=all_weights) """ Explanation: Loss function and updates We now need to define the objective and the update rule for the policy gradient. The objective function can be defined as follows: $$ J \approx \sum_i \log \pi_\theta (a_i | s_i) \cdot R(s_i,a_i) $$ When you compute the gradient of that function with respect to the network weights $ \theta $, it becomes exactly the policy gradient. End of explanation """ def get_cumulative_rewards(rewards, #rewards at each step gamma = 0.99 #discount for reward ): """ take a list of immediate rewards r(s,a) for the whole session compute cumulative rewards R(s,a) (a.k.a. G(s,a) in Sutton '16) R_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ... The simple way to compute cumulative rewards is to iterate from last to first time tick and compute R_t = r_t + gamma*R_{t+1} recurrently You must return an array/list of cumulative rewards with as many elements as in the initial rewards.
""" cumulative_rewards = [] R = 0 for r in rewards[::-1]: R = r + gamma*R cumulative_rewards.insert(0,R) return cumulative_rewards assert len(get_cumulative_rewards(range(100))) == 100 assert np.allclose(get_cumulative_rewards([0,0,1,0,0,1,0],gamma=0.9),[1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0]) assert np.allclose(get_cumulative_rewards([0,0,1,-2,3,-4,0],gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0]) assert np.allclose(get_cumulative_rewards([0,0,1,2,3,4,0],gamma=0), [0, 0, 1, 2, 3, 4, 0]) print("looks good!") def train_step(_states,_actions,_rewards): """given full session, trains agent with policy gradient""" _cumulative_rewards = get_cumulative_rewards(_rewards) update.run({states:_states,actions:_actions,cumulative_rewards:_cumulative_rewards}) """ Explanation: Computing cumulative rewards End of explanation """ def generate_session(t_max=1000): """play env with REINFORCE agent and train at the session end""" #arrays to record session states,actions,rewards = [],[],[] s = env.reset() for t in range(t_max): #action probabilities array aka pi(a|s) action_probas = get_action_proba(s) a = np.random.choice(n_actions,p=action_probas) new_s,r,done,info = env.step(a) #record session history to train later states.append(s) actions.append(a) rewards.append(r) s = new_s if done: break train_step(states,actions,rewards) return sum(rewards) s = tf.InteractiveSession() s.run(tf.global_variables_initializer()) for i in range(100): rewards = [generate_session() for _ in range(100)] #generate new sessions print ("mean reward:%.3f"%(np.mean(rewards))) if np.mean(rewards) > 300: print ("You Win!") break """ Explanation: Playing the game End of explanation """ #record sessions import gym.wrappers env = gym.wrappers.Monitor(gym.make("CartPole-v0"),directory="videos",force=True) sessions = [generate_session() for _ in range(100)] env.close() #show video from IPython.display import HTML import os video_names = list(filter(lambda 
s:s.endswith(".mp4"),os.listdir("./videos/"))) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices #That's all, thank you for your attention! """ Explanation: Results & video End of explanation """
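The recurrence that get_cumulative_rewards implements is framework-independent; a minimal standalone restatement (added here for illustration, not part of the original notebook) reproduces the notebook's own test values:

```python
def discounted_returns(rewards, gamma=0.99):
    """R_t = r_t + gamma * R_{t+1}, computed from the last step backwards."""
    returns, R = [], 0.0
    for r in reversed(rewards):
        R = r + gamma * R
        returns.append(R)
    return returns[::-1]

print(discounted_returns([0, 0, 1, 0, 0, 1, 0], gamma=0.9))
# close to [1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0]
```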
amcdawes/QMlabs
Lab 1 - Vectors and Matrices Solutions.ipynb
mit
from numpy import array, dot, outer, sqrt, matrix from numpy.linalg import eig, eigvals from matplotlib.pyplot import hist %matplotlib inline rv = array([1,2]) # a row vector rv cv = array([[3],[4]]) # a column vector cv """ Explanation: Lab 1 - Vectors and Matrices This notebook demonstrates the use of vectors and matrices in IPython. Note that the basis is not explicit in any of these operations. You must keep track of the basis yourself (using variable names, or notes etc). End of explanation """ dot(rv,cv) dot(cv,rv) """ Explanation: Two kinds of vector products we'll see: inner product (dot product) and outer product 1) Use the function dot(vector1, vector2) to find the dot product of rv and cv. Does the order of the arguments matter? End of explanation """ outer(rv,cv) outer(cv,rv) """ Explanation: 2) Use the function outer(vector1, vector2) to find the outer product of rv and cv. Does the order of the arguments matter? End of explanation """ # Complex numbers in python have a j term: a = 1+2j v1 = array([1+2j, 3+2j, 5+1j, 4+0j]) """ Explanation: II. Complex vectors End of explanation """ v1.conjugate() """ Explanation: The complex conjugate changes the sign of the imaginary part: End of explanation """ dot(v1.conjugate(),v1) """ Explanation: 3) Use dot() and .conjugate() to find the dot product of v1 and it's own conjugate: End of explanation """ # a two-dimensional array m1 = array([[2,1],[2,1]]) m1 # can find transpose with the T method: m1.T # find the eigenvalues and eigenvectors of a matrix: eig(m1) """ Explanation: III. Matrices End of explanation """ m2 = matrix( [[2,1],[2,1]]) m2.H eig(m2) # use a question mark to get help on a command eig? """ Explanation: Can also use the matrix type which is like array but restricts to 2D. Also, matrix adds .H and .I methods for hermitian and inverse, respectively. 
For more information, see Stack Overflow question #4151128 End of explanation """ M14 = array([[0,1],[-2,3]]) eig(M14) """ Explanation: Examples: Example 1.4 Find the eigenvalues and eigenvectors of M = ([0,1],[-2,3]]) End of explanation """ 1/sqrt(2) # this is the value for both entries in the first eigenvector 1/sqrt(5) # this is the first value in the second eigenvector 2/sqrt(5) # this is the second value in the second eigenvector eigvals(M14) """ Explanation: Interpret this result: the two eigenvalues are 1 and 2 the eigenvectors are strange decimals, but we can check them against the stated solution: End of explanation """ M16 = array([[0,-1j],[1j,0]]) evals, evecs = eig(M16) evecs evecs[:,0] evecs[:,1] dot(evecs[:,0].conjugate(),evecs[:,1]) """ Explanation: Signs are opposite compared to the book, but it turns out that (-) doesn't matter in the interpretation of eigenvectors: only "direction" matters (the relative size of the entries). Example: Problem 1.16 using Ipython functions End of explanation """ from qutip import * # Create a row vector: qv = Qobj([[1,2]]) qv # Find the corresponding column vector qv.dag() qv2 = Qobj([[1+2j,4-1j]]) qv2 qv2.dag() """ Explanation: Part 2: Using QuTiP Keeping track of row and column vectors in Ipython is somewhat artificial and tedious. 
The QuTiP library is designed to take care of many of these headaches End of explanation """ qv2*qv2.dag() # inner product (dot product) qv2.dag()*qv2 # outer product """ Explanation: Vector products in QuTiP Only need to know one operator: "*" The product will depend on the order, either inner or outer End of explanation """ qm = Qobj([[1,2],[2,1]]) qm qm.eigenenergies() # in quantum (as we will learn) eigenvalues often correspond to energy levels evals, evecs = qm.eigenstates() evecs evecs[0] """ Explanation: Matrix in QuTiP End of explanation """ # Solution n, bins, patches = hist([10,13,14,14,6,8,7,9,12,14,13,11,10,7,7],bins=5,range=(5,14)) # Solution n # Solution pvals = n/n.sum() """ Explanation: Practice: Problem 1.2 using the hist() function. End of explanation """ # Solution from sympy import * c,a,x = symbols("c a x") Q.positive((c,a)) first = integrate(c*exp(-a*x),(x,0,oo),conds='none') print("first = ",first) second = integrate(a*exp(-a*x),(x,0,oo),conds='none') print("second = ",second) """ Explanation: Problem 1.8 Hint: using sympy, we can calculate the relevant integral. The conds='none' asks the solver to ignore any strange conditions on the variables in the integral. This is fine for most of our integrals. Usually the variables are real and well-behaved numbers. End of explanation """
andreabduque/GAFE
GAFE tutorial.ipynb
mit
#Implements functional expansions from functions.FE import FE #Evaluates accuracy in a dataset for a particular classifier from fitness import Classifier #Implements gafe using DEAP toolbox import ga """ Explanation: Import modules from GAFE End of explanation """ from sklearn.preprocessing import MinMaxScaler import numpy as np import pandas as pd """ Explanation: Import modules from scikit-learn, numpy and pandas to help us deal with the data End of explanation """ iris = pd.read_csv("data/iris.data", sep=",") #Isolate the attributes columns irisAtts = iris.drop("class", 1) #Isolate the class column target = iris["class"] """ Explanation: Load data using pandas. We will use the famous Iris Dataset End of explanation """ scaledIris = MinMaxScaler().fit_transform(irisAtts) """ Explanation: Prior to expanding the data, put all values into the interval [0,1] for better results End of explanation """ bestSingleMatch = {'knn': [(1,5) for x in range(4)], 'cart': [(3,2) for x in range(4)], 'svm': [(7,4) for x in range(4)]} """ Explanation: If we didn't use GAFE, after testing all 49 (7*7) FE-ES combinations, this configuration would be the best for each classifier. Note we are applying the same FE-ES pair to every data column End of explanation """ functionalExp = FE() for cl in ['knn', 'cart', 'svm']: #Folds are the number of folds used in crossvalidation #Jobs are the number of CPUs used in crossvalidation and in some classifiers' training steps. #You can also change some classifier parameters, such as k_neigh for neighbors in knn, C in svm and others. #If you do not specify, it will use the article's defaults.
model = Classifier(cl, target, folds=10, jobs=6) #The class internally normalizes data, so no need to send normalized data when classifying #accuracy without expanding print("original accuracy " + cl + " " + str(model.getAccuracy(irisAtts))) #Expand the scaled data expandedData = functionalExp.expandMatrix(scaledIris, bestSingleMatch[cl]) print("single match expansion accuracy " + cl + " " + str(model.getAccuracy(expandedData))) #If scaled is False, it will scale data in range [0,1] gafe = ga.GAFE(model, scaledIris, target, scaled=True) #Specify how many iterations of GAFE you wish with n_iter #Note that this is a slow method, so have patience if n_iter is high avg, bestPair = gafe.runGAFE(n_population=21, n_iter=1, verbose=True) print("gafe " + cl + " " + str(avg) ) """ Explanation: Now let's calculate the accuracy results for the original data, the single match expansion, and GAFE. End of explanation """
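The [0,1] rescaling that MinMaxScaler performs before the expansion is simply $(v - \min)/(\max - \min)$ applied per column; a dependency-free sketch (illustrative values, not the actual Iris data):

```python
def min_max_scale(column):
    """Rescale one attribute column into [0, 1], as MinMaxScaler does per column."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

print(min_max_scale([4.3, 5.8, 7.9]))  # the endpoints map to 0.0 and 1.0
```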
blackjax-devs/blackjax
examples/SGLD.ipynb
apache-2.0
import jax import jax.nn as nn import jax.numpy as jnp import jax.scipy.stats as stats import numpy as np """ Explanation: MNIST digit recognition with a 3-layer Perceptron This example is inspired by this notebook in the SGMCMCJax repository. We try to use a 3-layer neural network to recognise the digits in the MNIST dataset. End of explanation """ import tensorflow_datasets as tfds mnist_data, _ = tfds.load( name="mnist", batch_size=-1, with_info=True, as_supervised=True ) mnist_data = tfds.as_numpy(mnist_data) data_train, data_test = mnist_data["train"], mnist_data["test"] """ Explanation: Data preparation We download the MNIST data using tensorflow-datasets: End of explanation """
End of explanation """ def predict_fn(parameters, X): """Returns the probability for the image represented by X to be in each category given the MLP's weights vakues. """ activations = X for W, b in parameters[:-1]: outputs = jnp.dot(W, activations) + b activations = nn.softmax(outputs) final_W, final_b = parameters[-1] logits = jnp.dot(final_W, activations) + final_b return nn.log_softmax(logits) def logprior_fn(parameters): """Compute the value of the log-prior density function.""" logprob = 0.0 for W, b in parameters: logprob += jnp.sum(stats.norm.logpdf(W)) logprob += jnp.sum(stats.norm.logpdf(b)) return logprob def loglikelihood_fn(parameters, data): """Categorical log-likelihood""" X, y = data return jnp.sum(y * predict_fn(parameters, X)) def compute_accuracy(parameters, X, y): """Compute the accuracy of the model. To make predictions we take the number that corresponds to the highest probability value. """ target_class = jnp.argmax(y, axis=1) predicted_class = jnp.argmax( jax.vmap(predict_fn, in_axes=(None, 0))(parameters, X), axis=1 ) return jnp.mean(predicted_class == target_class) """ Explanation: Model: 3-layer perceptron We will use a very simple (bayesian) neural network in this example: A MLP with gaussian priors on the weights. We first need a function that computes the model's logposterior density given the data and the current values of the parameters. If we note $X$ the array that represents an image and $y$ the array such that $y_i = 0$ if the image is in category $i$, $y_i=1$ otherwise, the model can be written as: \begin{align} \boldsymbol{p} &= \operatorname{NN}(X)\ \boldsymbol{y} &\sim \operatorname{Categorical}(\boldsymbol{p}) \end{align} End of explanation """ def init_parameters(rng_key, sizes): """ Parameter ---------- rng_key PRNGKey used by JAX to generate pseudo-random numbers sizes List of size for the subsequent layers. The first size must correspond to the size of the input data and the last one to the number of categories. 
""" num_layers = len(sizes) keys = jax.random.split(rng_key, num_layers) return [ init_layer(rng_key, m, n) for rng_key, m, n in zip(keys, sizes[:-1], sizes[1:]) ] def init_layer(rng_key, m, n, scale=1e-2): """Initialize the weights for a single layer.""" key_W, key_b = jax.random.split(rng_key) return (scale * jax.random.normal(key_W, (n, m))), scale * jax.random.normal( key_b, (n,) ) """ Explanation: Sample from the posterior distribution of the perceptron's weights Now we need to get initial values for the parameters, and we simply sample from their prior distribution: End of explanation """ %%time import blackjax from blackjax.sgmcmc.gradients import grad_estimator data_size = len(y_train) batch_size = int(0.01 * data_size) layer_sizes = [784, 100, 10] step_size = 5e-5 num_warmup = 1000 num_samples = 2000 # Batch the data rng_key = jax.random.PRNGKey(1) batches = batch_data(rng_key, (X_train, y_train), batch_size, data_size) # Build the SGLD kernel schedule_fn = lambda _: step_size # constant step size grad_fn = grad_estimator(logprior_fn, loglikelihood_fn, data_size) sgld = blackjax.sgld(grad_fn, schedule_fn) # Set the initial state init_positions = init_parameters(rng_key, layer_sizes) state = sgld.init(init_positions, next(batches)) # Sample from the posterior accuracies = [] samples = [] steps = [] for step in range(num_samples + num_warmup): _, rng_key = jax.random.split(rng_key) batch = next(batches) state = sgld.step(rng_key, state, batch) if step % 100 == 0: accuracy = compute_accuracy(state.position, X_test, y_test) accuracies.append(accuracy) steps.append(step) if step > num_warmup: samples.append(state.position) """ Explanation: We now sample from the model's posteriors. We discard the first 1000 samples until the sampler has reached the typical set, and then take 2000 samples. We record the model's accuracy with the current values every 100 steps. 
End of explanation
"""
import matplotlib.pylab as plt

fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(steps, accuracies)
ax.set_xlabel("Number of sampling steps")
ax.set_ylabel("Prediction accuracy")
ax.set_xlim([0, num_warmup + num_samples])
ax.set_ylim([0, 1])
ax.set_yticks([0.1, 0.3, 0.5, 0.7, 0.9])
plt.title("Sample from 3-layer MLP posterior (MNIST dataset) with SGLD")
plt.plot()

print(f"The average accuracy in the sampling phase is {np.mean(accuracies[10:]):.2f}")
"""
Explanation: Let us plot the accuracy at different points in the sampling process:
End of explanation
"""
predicted_class = np.exp(
    np.stack([jax.vmap(predict_fn, in_axes=(None, 0))(s, X_test) for s in samples])
)
max_predicted = [np.argmax(predicted_class[:, i, :], axis=1) for i in range(60000)]
freq_max_predicted = np.array(
    [
        (max_predicted[i] == np.argmax(np.bincount(max_predicted[i]))).sum() / 2000
        for i in range(60000)
    ]
)
certain_mask = freq_max_predicted > 0.95
"""
Explanation: Which is not a bad accuracy at all for such a simple model and after only 1000 steps! Remember though that we draw samples from the posterior distribution of the digit probabilities; we can thus use this information to filter out examples for which the model is "unsure" of its prediction. Here we will say that the model is unsure of its prediction for a given image if the digit that is most often predicted for this image is predicted less than 95% of the time.
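The agreement rule looks like this in miniature (the prediction matrix below is made up for illustration, with 4 posterior draws and 4 images instead of 2000 draws and the full test set):

```python
import numpy as np

# Hypothetical predicted digits: one row per posterior draw, one column per image.
sampled_digits = np.array([
    [3, 3, 3, 3],
    [3, 3, 7, 3],
    [3, 3, 7, 3],
    [3, 3, 7, 9],
])

n_draws = sampled_digits.shape[0]
# For each image: how often does its most frequent predicted digit occur?
freqs = np.array([
    np.bincount(sampled_digits[:, i]).max() / n_draws
    for i in range(sampled_digits.shape[1])
])
certain = freqs > 0.95  # the 95% agreement rule described above
```

The first two images are predicted consistently across draws and count as "certain"; the last two are not.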
End of explanation """ most_uncertain_idx = np.argsort(freq_max_predicted) for i in range(10): print(np.bincount(max_predicted[most_uncertain_idx[i]]) / 2000) fig = plt.figure() plt.imshow(X_test[most_uncertain_idx[i]].reshape(28, 28), cmap="gray") plt.show() """ Explanation: Let's plot a few examples where the model was very uncertain: End of explanation """ avg_accuracy = np.mean( [compute_accuracy(s, X_test[certain_mask], y_test[certain_mask]) for s in samples] ) print( f"The average accuracy removing the samples for which the model is uncertain is {avg_accuracy:.3f}" ) """ Explanation: And now compute the average accuracy over all the samples without these uncertain predictions: End of explanation """
james-prior/cohpy
20170424-cohpy-lbyl-v-eafp.ipynb
mit
numbers = (3, 1, 0, -1, -2) def foo(x): return 10 // x for x in numbers: y = foo(x) print(f'foo({x}) --> {y}') """ Explanation: LBYL versus EAFP In some other languages, one can not recover from an error, or it is difficult to recover from an error, so one tests input before doing something that could provoke the error. This technique is called Look Before You Leap (LBYL) For example, one must avoid dividing by zero. Below is code that divides by numbers. When it gets the zero, it crashes. End of explanation """ def foo(x): if x == 0: y = 0 else: y = 10 // x return y for x in numbers: y = foo(x) print(f'foo({x}) --> {y}') """ Explanation: So one checks before dividing as shown below. Checking before doing something is called "Look Before You Leap" (LBYL). End of explanation """ def foo(x): try: y = 10 // x except ZeroDivisionError: y = 0 return y for x in numbers: y = foo(x) print(f'foo({x}) --> {y}') """ Explanation: Another technique is to just try stuff, and if it blows up, do something else. This technique is called Easier to Ask Forgiveness than Permission (EAFP). Python makes it very easy to do something else when something blows up. End of explanation """ import re def is_float(s, debug=False): digit = f'([0-9])' digitpart = f'({digit}(_?{digit})*)' # digit (["_"] digit)* fraction = f'([.]{digitpart})' # "." digitpart pointfloat = f'(({digitpart}?{fraction}) | ({digitpart}[.]))' # [digitpart] fraction | digitpart "." 
exponent = f'([eE][-+]?{digitpart})' # ("e" | "E") ["+" | "-"] digitpart exponentfloat = f'(({digitpart} | {pointfloat}) {exponent})' # (digitpart | pointfloat) exponent floatnumber = f'^({pointfloat} | {exponentfloat})$' # pointfloat | exponentfloat floatnumber = f'^[-+]?({pointfloat} | {exponentfloat} | {digitpart})$' # allow signs and ints if debug: regular_expressions = ( digit, digitpart, fraction, pointfloat, exponent, exponentfloat, floatnumber, ) for s in regular_expressions: print(repr(s)) # print(str(s)) float_pattern = re.compile(floatnumber, re.VERBOSE) return re.match(float_pattern, s) floats = ''' 2 0 -1 +17. . -.17 17e-3 -19.e-3 hello '''.split() floats is_float('', debug=True) for s in floats: print(f'{s!r} -> {bool(is_float(s))}') import re def is_float(s): digit = f'([0-9])' digitpart = f'({digit}(_?{digit})*)' # digit (["_"] digit)* fraction = f'([.]{digitpart})' # "." digitpart pointfloat = f'(({digitpart}?{fraction}) | ({digitpart}[.]))' # [digitpart] fraction | digitpart "." exponent = f'([eE][-+]?{digitpart})' # ("e" | "E") ["+" | "-"] digitpart exponentfloat = f'(({digitpart} | {pointfloat}) {exponent})' # (digitpart | pointfloat) exponent floatnumber = f'^({pointfloat} | {exponentfloat})$' # pointfloat | exponentfloat floatnumber = f'^[-+]?({pointfloat} | {exponentfloat} | {digitpart})$' # allow signs and ints float_pattern = re.compile(floatnumber, re.VERBOSE) return re.match(float_pattern, s) def safe_float(s, default=0.): if is_float(s): x = float(s) else: x = default return x def main(lines): total = sum(safe_float(line) for line in lines) print(f'total is {total}') main(floats) """ Explanation: For that simple example, EAFP does not have much if any benefit over LBYL. For that simple example, there is not much benefit in the size or readability of the code. However, for more complicated problems, EAFP lets one write much simpler and readable code. We will use the example of determining if a string is a valid float for Python. 
See 2.4.6. Floating point literals for what constitutes a valid float. floatnumber ::= pointfloat | exponentfloat pointfloat ::= [digitpart] fraction | digitpart "." exponentfloat ::= (digitpart | pointfloat) exponent digitpart ::= digit (["_"] digit)* fraction ::= "." digitpart exponent ::= ("e" | "E") ["+" | "-"] digitpart Some code for that follows. End of explanation """ def safe_float(s, default=0.): try: x = float(s) except ValueError: x = default return x def main(lines): total = sum(safe_float(line) for line in lines) print(f'total is {total}') main(floats) """ Explanation: Now we try EAFP technique below. End of explanation """ from glob import glob import os for filename in sorted(glob('20170424-cohpy-except-*.py')): print(79 * '#') print(filename) print() with open(filename) as f: print(f.read()) print() """ Explanation: The EAFP code is much much simpler. The LBYL version was very complicated. If there was a bug in the LBYL version, how would you find it? If you fixed it, how much confidence would you have that your fix is correct? How hard would it be to have test cases that covered all the edge cases? How much confidence would you have that the test cases were comprehensive? EAFP makes code easier to read, simpler, and more reliable. This is what makes try/except one of Python's superpowers!!! try/except best practices Should: always specify at least one exception put as little as possible in the try clause Because the 20170424-except-*.py programs can lock up Jupyter notebook, run them outside of the notebook. The cell below shows what they look like, but does not run them. End of explanation """
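The two best practices above can be shown in a minimal sketch (the function name and defaults here are made up for illustration): catch a specific exception, and keep the try clause as small as possible.

```python
def read_int(s, default=0):
    try:
        value = int(s)      # only the risky call lives in the try clause
    except ValueError:      # a specific exception, never a bare `except:`
        value = default
    return value
```

With a small try clause, an unrelated bug elsewhere in the function cannot be silently swallowed, and the specific exception type ensures that, say, a TypeError still surfaces loudly.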
phoebe-project/phoebe2-docs
2.3/tutorials/l3.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.3,<2.4" """ Explanation: "Third" Light Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() """ Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation """ b.filter(qualifier='l3_mode') """ Explanation: Relevant Parameters An l3_mode parameter exists for each LC dataset, which determines whether third light will be provided in flux units, or as a fraction of the total flux. Since this is passband dependent and only used for flux measurments - it does not yet exist for a new empty Bundle. End of explanation """ b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01') """ Explanation: So let's add a LC dataset End of explanation """ print(b.filter(qualifier='l3*')) """ Explanation: We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible. End of explanation """ print(b.filter(qualifier='l3*')) print(b.get_parameter('l3')) """ Explanation: l3_mode = 'flux' When l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset. End of explanation """ print(b.compute_l3s()) """ Explanation: To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options. Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically. 
End of explanation """ b.set_value('l3_mode', 'fraction') print(b.filter(qualifier='l3*')) print(b.get_parameter('l3_frac')) """ Explanation: l3_mode = 'fraction' When l3_mode is set to 'fraction', the l3 parameter is now replaced by an l3_frac parameter. End of explanation """ print(b.compute_l3s()) """ Explanation: Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s. Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically. End of explanation """ b.run_compute(irrad_method='none', model='no_third_light') b.set_value('l3_mode', 'flux') b.set_value('l3', 5) b.run_compute(irrad_method='none', model='with_third_light') """ Explanation: Influence on Light Curves (Fluxes) "Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy. To see this we'll compare a light curve with and without "third" light. End of explanation """ afig, mplfig = b['lc01'].plot(model='no_third_light') afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True) """ Explanation: As expected, adding 5 W/m^3 of third light simply shifts the light curve up by that exact same amount. 
End of explanation """ b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01']) b.set_value('l3', 0.0) b.run_compute(irrad_method='none', model='no_third_light', overwrite=True) b.set_value('l3', 5) b.run_compute(irrad_method='none', model='with_third_light', overwrite=True) print("no_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light'))) print("with_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light'))) print("no_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light'))) print("with_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light'))) """ Explanation: Influence on Meshes (Intensities) "Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes. NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details. To see this we can run both of our models again and look at the values of the intensities in the mesh. End of explanation """
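The light-curve behavior above can be illustrated in miniature with a toy eclipse shape (the curve below is made up, not a PHOEBE model): third light in flux units is a constant additive offset, so the absolute eclipse depth is unchanged while the relative depth is diluted.

```python
import numpy as np

phases = np.linspace(0, 1, 101)
flux = 1.0 - 0.3 * np.exp(-((phases - 0.5) / 0.05) ** 2)  # toy eclipse

l3 = 5.0
flux_l3 = flux + l3   # third light just shifts the curve up

depth = flux.max() - flux.min()
depth_l3 = flux_l3.max() - flux_l3.min()
rel_depth = depth / flux.max()
rel_depth_l3 = depth_l3 / flux_l3.max()
```

The absolute depths agree, but the eclipse is a much smaller fraction of the total light once the offset is added, which is why third light matters when fitting normalized light curves.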
AllenDowney/ModSimPy
notebooks/oem.ipynb
mit
# Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * """ Explanation: Modeling and Simulation in Python Case study. Copyright 2017 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation """ radian = UNITS.radian m = UNITS.meter s = UNITS.second minute = UNITS.minute hour = UNITS.hour km = UNITS.kilometer kg = UNITS.kilogram N = UNITS.newton rpm = UNITS.rpm """ Explanation: Electric car Olin Electric Motorsports is a club at Olin College that designs and builds electric cars, and participates in the Formula SAE Electric competition. The goal of this case study is to use simulation to guide the design of a car intended to accelerate from standing to 100 kph as quickly as possible. The world record for this event, using a car that meets the competition requirements, is 1.513 seconds. We'll start with a simple model that takes into account the characteristics of the motor and vehicle: The motor is an Emrax 228 high voltage axial flux synchronous permanent magnet motor; according to the data sheet, its maximum torque is 240 Nm, at 0 rpm. But maximum torque decreases with motor speed; at 5000 rpm, maximum torque is 216 Nm. The motor is connected to the drive axle with a chain drive with speed ratio 13:60 or 1:4.6; that is, the axle rotates once for each 4.6 rotations of the motor. The radius of the tires is 0.26 meters. The weight of the vehicle, including driver, is 300 kg. To start, we will assume no slipping between the tires and the road surface, no air resistance, and no rolling resistance. Then we will relax these assumptions one at a time. First we'll add drag, assuming that the frontal area of the vehicle is 0.6 square meters, with coefficient of drag 0.6. 
Next we'll add rolling resistance, assuming a coefficient of 0.2. Finally we'll compute the peak acceleration to see if the "no slip" assumption is credible. We'll use this model to estimate the potential benefit of possible design improvements, including decreasing drag and rolling resistance, or increasing the speed ratio. I'll start by loading the units we need. End of explanation """ params = Params(r_wheel=0.26 * m, speed_ratio=13/60, C_rr=0.2, C_d=0.5, area=0.6*m**2, rho=1.2*kg/m**3, mass=300*kg) """ Explanation: And store the parameters in a Params object. End of explanation """ def make_system(params): """Make a system object. params: Params object returns: System object """ init = State(x=0*m, v=0*m/s) rpms = [0, 2000, 5000] torques = [240, 240, 216] interpolate_torque = interpolate(Series(torques, rpms)) return System(params, init=init, interpolate_torque=interpolate_torque, t_end=3*s) """ Explanation: make_system creates the initial state, init, and constructs an interp1d object that represents torque as a function of motor speed. End of explanation """ system = make_system(params) system.init """ Explanation: Testing make_system End of explanation """ def compute_torque(omega, system): """Maximum peak torque as a function of motor speed. omega: motor speed in radian/s system: System object returns: torque in Nm """ factor = (1 * radian / s).to(rpm) x = magnitude(omega * factor) return system.interpolate_torque(x) * N * m compute_torque(0*radian/s, system) omega = (5000 * rpm).to(radian/s) compute_torque(omega, system) """ Explanation: Torque and speed The relationship between torque and motor speed is taken from the Emrax 228 data sheet. The following functions reproduce the red dotted line that represents peak torque, which can only be sustained for a few seconds before the motor overheats. 
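A quick plain-Python sanity check (no modsim units; numbers taken from the parameters above): at the motor's 5000 rpm, does the 13:60 ratio and 0.26 m wheel even allow more than 100 kph?

```python
import math

r_wheel = 0.26          # m
speed_ratio = 13 / 60   # wheel revolutions per motor revolution

omega_motor = 5000 * 2 * math.pi / 60      # rad/s at 5000 rpm
omega_wheel = omega_motor * speed_ratio
v_kph = omega_wheel * r_wheel * 3.6        # km/h
```

This comes out to roughly 106 kph, so the 100 kph target falls just inside the motor speed range covered by the torque interpolator.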
End of explanation """ xs = linspace(0, 525, 21) * radian / s taus = [compute_torque(x, system) for x in xs] plot(xs, taus) decorate(xlabel='Motor speed (rpm)', ylabel='Available torque (N m)') """ Explanation: Plot the whole curve. End of explanation """ def slope_func(state, t, system): """Computes the derivatives of the state variables. state: State object t: time system: System object returns: sequence of derivatives """ x, v = state r_wheel, speed_ratio = system.r_wheel, system.speed_ratio mass = system.mass # use velocity, v, to compute angular velocity of the wheel omega2 = v / r_wheel # use the speed ratio to compute motor speed omega1 = omega2 / speed_ratio # look up motor speed to get maximum torque at the motor tau1 = compute_torque(omega1, system) # compute the corresponding torque at the axle tau2 = tau1 / speed_ratio # compute the force of the wheel on the ground F = tau2 / r_wheel # compute acceleration a = F/mass return v, a """ Explanation: Simulation Here's the slope function that computes the maximum possible acceleration of the car as a function of it current speed. End of explanation """ test_state = State(x=0*m, v=10*m/s) slope_func(test_state, 0*s, system) """ Explanation: Testing slope_func at linear velocity 10 m/s. End of explanation """ results, details = run_ode_solver(system, slope_func) details """ Explanation: Now we can run the simulation. End of explanation """ results.tail() """ Explanation: And look at the results. End of explanation """ v_final = get_last_value(results.v) v_final.to(km/hour) """ Explanation: After 3 seconds, the vehicle could be at 40 meters per second, in theory, which is 144 kph. 
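A back-of-envelope check of the model's scale, in plain Python: if peak torque were a constant 240 Nm, the acceleration would be constant and the time to 100 kph follows directly. The real model does a bit worse because torque falls off with motor speed.

```python
tau_motor = 240.0          # Nm, peak torque at 0 rpm
speed_ratio = 13 / 60
r_wheel = 0.26             # m
mass = 300.0               # kg

force = tau_motor / speed_ratio / r_wheel   # N at the contact patch
a = force / mass                            # m/s^2
t_100 = (100 / 3.6) / a                     # s to reach 100 kph
```

The constant-torque estimate is about 14.2 m/s^2 and just under 2 seconds, a lower bound that is consistent with the simulated results.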
End of explanation """ def plot_position(results): plot(results.x, label='x') decorate(xlabel='Time (s)', ylabel='Position (m)') plot_position(results) """ Explanation: Plotting x End of explanation """ def plot_velocity(results): plot(results.v, label='v') decorate(xlabel='Time (s)', ylabel='Velocity (m/s)') plot_velocity(results) """ Explanation: Plotting v End of explanation """ def event_func(state, t, system): """Stops when we get to 100 km/hour. state: State object t: time system: System object returns: difference from 100 km/hour """ x, v = state # convert to km/hour factor = (1 * m/s).to(km/hour) v = magnitude(v * factor) return v - 100 results, details = run_ode_solver(system, slope_func, events=event_func) details """ Explanation: Stopping at 100 kph We'll use an event function to stop the simulation when we reach 100 kph. End of explanation """ subplot(2, 1, 1) plot_position(results) subplot(2, 1, 2) plot_velocity(results) savefig('figs/chap11-fig02.pdf') """ Explanation: Here's what the results look like. End of explanation """ t_final = get_last_label(results) * s """ Explanation: According to this model, we should be able to make this run in just over 2 seconds. End of explanation """ state = results.last_row() """ Explanation: At the end of the run, the car has gone about 28 meters. End of explanation """ v, a = slope_func(state, 0, system) v.to(km/hour) a g = 9.8 * m/s**2 (a / g).to(UNITS.dimensionless) """ Explanation: If we send the final state back to the slope function, we can see that the final acceleration is about 13 $m/s^2$, which is about 1.3 times the acceleration of gravity. End of explanation """ def drag_force(v, system): """Computes drag force in the opposite direction of `v`. 
v: velocity system: System object returns: drag force """ rho, C_d, area = system.rho, system.C_d, system.area f_drag = -np.sign(v) * rho * v**2 * C_d * area / 2 return f_drag """ Explanation: It's not easy for a vehicle to accelerate faster than g, because that implies a coefficient of friction between the wheels and the road surface that's greater than 1. But racing tires on dry asphalt can do that; the OEM team at Olin has tested their tires and found a peak coefficient near 1.5. So it's possible that our no slip assumption is valid, but only under ideal conditions, where weight is distributed equally on four tires, and all tires are driving. Exercise: How much time do we lose because maximum torque decreases as motor speed increases? Run the model again with no drop off in torque and see how much time it saves. Drag In this section we'll see how much effect drag has on the results. Here's a function to compute drag force, as we saw in Chapter 21. End of explanation """ drag_force(20 * m/s, system) """ Explanation: We can test it with a velocity of 20 m/s. End of explanation """ drag_force(20 * m/s, system) / system.mass """ Explanation: Here's the resulting acceleration of the vehicle due to drag. End of explanation """ def slope_func2(state, t, system): """Computes the derivatives of the state variables. state: State object t: time system: System object returns: sequence of derivatives """ x, v = state r_wheel, speed_ratio = system.r_wheel, system.speed_ratio mass = system.mass omega2 = v / r_wheel * radian omega1 = omega2 / speed_ratio tau1 = compute_torque(omega1, system) tau2 = tau1 / speed_ratio F = tau2 / r_wheel a_motor = F / mass a_drag = drag_force(v, system) / mass a = a_motor + a_drag return v, a """ Explanation: We can see that the effect of drag is not huge, compared to the acceleration we computed in the previous section, but it is not negligible. Here's a modified slope function that takes drag into account. 
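Before that, the drag magnitudes above can be re-checked by hand in plain Python, using the same values as the Params object (rho = 1.2 kg/m^3, C_d = 0.5, area = 0.6 m^2) at 20 m/s:

```python
rho, C_d, area = 1.2, 0.5, 0.6
v = 20.0                                  # m/s
f_drag = 0.5 * rho * v**2 * C_d * area    # drag force magnitude, N
a_drag = f_drag / 300.0                   # resulting deceleration, m/s^2
```

That is 72 N of drag and 0.24 m/s^2 of deceleration, small next to the roughly 13-14 m/s^2 the motor delivers, which matches the conclusion that drag is noticeable but not dominant.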
End of explanation """ results2, details = run_ode_solver(system, slope_func2, events=event_func) details """ Explanation: And here's the next run. End of explanation """ t_final2 = get_last_label(results2) * s """ Explanation: The time to reach 100 kph is a bit higher. End of explanation """ t_final2 - t_final """ Explanation: But the total effect of drag is only about 2/100 seconds. End of explanation """ system.set(unit_rr = 1 * N / kg) def rolling_resistance(system): """Computes force due to rolling resistance. system: System object returns: force """ return -system.C_rr * system.mass * system.unit_rr """ Explanation: That's not huge, which suggests we might not be able to save much time by decreasing the frontal area, or coefficient of drag, of the car. Rolling resistance Next we'll consider rolling resistance, which the force that resists the motion of the car as it rolls on tires. The cofficient of rolling resistance, C_rr, is the ratio of rolling resistance to the normal force between the car and the ground (in that way it is similar to a coefficient of friction). The following function computes rolling resistance. End of explanation """ rolling_resistance(system) rolling_resistance(system) / system.mass """ Explanation: The acceleration due to rolling resistance is 0.2 (it is not a coincidence that it equals C_rr). End of explanation """ def slope_func3(state, t, system): """Computes the derivatives of the state variables. 
state: State object t: time system: System object returns: sequence of derivatives """ x, v = state r_wheel, speed_ratio = system.r_wheel, system.speed_ratio mass = system.mass omega2 = v / r_wheel * radian omega1 = omega2 / speed_ratio tau1 = compute_torque(omega1, system) tau2 = tau1 / speed_ratio F = tau2 / r_wheel a_motor = F / mass a_drag = drag_force(v, system) / mass a_roll = rolling_resistance(system) / mass a = a_motor + a_drag + a_roll return v, a """ Explanation: Here's a modified slope function that includes drag and rolling resistance. End of explanation """ results3, details = run_ode_solver(system, slope_func3, events=event_func) details """ Explanation: And here's the run. End of explanation """ t_final3 = get_last_label(results3) * s t_final3 - t_final2 """ Explanation: The final time is a little higher, but the total cost of rolling resistance is only 3/100 seconds. End of explanation """ def time_to_speed(speed_ratio, params): """Computes times to reach 100 kph. speed_ratio: ratio of wheel speed to motor speed params: Params object returns: time to reach 100 kph, in seconds """ params = Params(params, speed_ratio=speed_ratio) system = make_system(params) system.set(unit_rr = 1 * N / kg) results, details = run_ode_solver(system, slope_func3, events=event_func) t_final = get_last_label(results) a_initial = slope_func(system.init, 0, system) return t_final """ Explanation: So, again, there is probably not much to be gained by decreasing rolling resistance. In fact, it is hard to decrease rolling resistance without also decreasing traction, so that might not help at all. Optimal gear ratio The gear ratio 13:60 is intended to maximize the acceleration of the car without causing the tires to slip. In this section, we'll consider other gear ratios and estimate their effects on acceleration and time to reach 100 kph. Here's a function that takes a speed ratio as a parameter and returns time to reach 100 kph. 
End of explanation
"""
def time_to_speed(speed_ratio, params):
    """Computes the time to reach 100 kph.

    speed_ratio: ratio of wheel speed to motor speed
    params: Params object

    returns: time to reach 100 kph, in seconds
    """
    params = Params(params, speed_ratio=speed_ratio)
    system = make_system(params)
    system.set(unit_rr = 1 * N / kg)
    results, details = run_ode_solver(system, slope_func3, events=event_func)
    t_final = get_last_label(results)
    a_initial = slope_func(system.init, 0, system)
    return t_final
"""
Explanation: We can test it with the default ratio:
End of explanation
"""
time_to_speed(13/60, params)
"""
Explanation: Now we can try it with different numbers of teeth on the motor gear (assuming that the axle gear has 60 teeth):
End of explanation
"""
for teeth in linrange(8, 18):
    print(teeth, time_to_speed(teeth/60, params))
"""
Explanation: Wow!  The speed ratio has a big effect on the results.  At first glance, it looks like we could break the world record (1.513 seconds) just by decreasing the number of teeth.  But before we try it, let's see what effect that has on peak acceleration.
End of explanation
"""
def initial_acceleration(speed_ratio, params):
    """Maximum acceleration as a function of speed ratio.

    speed_ratio: ratio of wheel speed to motor speed
    params: Params object

    returns: peak acceleration, in m/s^2
    """
    params = Params(params, speed_ratio=speed_ratio)
    system = make_system(params)
    a_initial = slope_func(system.init, 0, system)[1] * m/s**2
    return a_initial
"""
Explanation: Here are the results:
End of explanation
"""
for teeth in linrange(8, 18):
    print(teeth, initial_acceleration(teeth/60, params))
"""
Explanation: As we decrease the speed ratio, the peak acceleration increases.  With 8 teeth on the motor gear, we could break the world record, but only if we can accelerate at 2.3 times the acceleration of gravity, which is impossible without very sticky tires and a vehicle that generates a lot of downforce.
End of explanation
"""
23.07 / 9.8
"""
Explanation: As we decrease the speed ratio, the peak acceleration increases. With 8 teeth on the motor gear, we could break the world record, but only if we can accelerate at 2.3 times the acceleration of gravity, which is impossible without very sticky tires and a vehicle that generates a lot of downforce.
End of explanation
"""
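The friction argument above, in numbers: with the peak tire-road friction coefficient of about 1.5 that the team measured, the traction-limited acceleration is mu times g, well short of the roughly 23 m/s^2 that the 8-tooth gear would need.

```python
mu = 1.5                      # measured peak tire-road friction coefficient
g = 9.8                       # m/s^2
a_traction_limit = mu * g     # best case, all wheels driving

a_required = 23.07            # m/s^2, peak acceleration with 8 teeth (from above)
```

Even in the best case (all four tires driving, weight evenly distributed) traction caps acceleration around 14.7 m/s^2, so the record-breaking gear ratio is not physically usable.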
aschaffn/phys202-2015-work
assignments/assignment03/NumpyEx03.ipynb
mit
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
"""
Explanation: Numpy Exercise 3
Imports
End of explanation
"""
def brownian(maxt, n):
    """Return one realization of a Brownian (Wiener) process with n steps and a max time of maxt."""
    t = np.linspace(0.0,maxt,n)
    h = t[1]-t[0]
    Z = np.random.normal(0.0,1.0,n-1)
    dW = np.sqrt(h)*Z
    W = np.zeros(n)
    W[1:] = dW.cumsum()
    return t, W
"""
Explanation: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
End of explanation
"""
t, W = brownian(1.0, 1000)

assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
"""
Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
End of explanation
"""
plt.plot(t,W)
plt.xlabel("$t$")
plt.ylabel("$W(t)$")

assert True # this is for grading
"""
Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
End of explanation
"""
dW = np.diff(W)
dW.mean(), dW.std()

assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
"""
Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
End of explanation
"""
def geo_brownian(t, W, X0, mu, sigma):
    """Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
    # The exponent matches the equation below: (mu - sigma**2/2)*t + sigma*W(t).
    # (The earlier version computed 0.5*t*(mu - sigma)**2, which does not.)
    exponent = (mu - 0.5 * sigma**2) * t + sigma * W
    return X0 * np.exp(exponent)

assert True # leave this for grading
"""
Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$ X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))} $$
Use Numpy ufuncs and no loops in your function.
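The formula can be spot-checked by Monte Carlo: for geometric Brownian motion with this exponent, the mean at time $t$ is $E[X(t)] = X_0 e^{\mu t}$, because the $-\sigma^2 t/2$ term exactly cancels the mean of the lognormal noise.

```python
import numpy as np

rng = np.random.default_rng(42)
X0, mu, sigma, t1 = 1.0, 0.5, 0.3, 1.0

W_t = rng.normal(0.0, np.sqrt(t1), size=200_000)   # W(t) ~ Normal(0, t)
X_t = X0 * np.exp((mu - sigma**2 / 2) * t1 + sigma * W_t)

mean_est = float(X_t.mean())
mean_exact = X0 * np.exp(mu * t1)    # e^0.5, about 1.649
```

With 200,000 draws the sample mean lands within a fraction of a percent of the exact value, which is a quick way to catch a sign or factor error in the exponent.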
End of explanation """ plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3)) plt.xlabel("$t$") plt.ylabel("$X(t)$") assert True # leave this for grading """ Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above. Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. End of explanation """
DJCordhose/ai
notebooks/workshops/d2d/cnn-intro.ipynb
mit
import warnings
warnings.filterwarnings('ignore')

%matplotlib inline
%pylab inline

import matplotlib.pylab as plt
import numpy as np

from distutils.version import StrictVersion

import sklearn
print(sklearn.__version__)

assert StrictVersion(sklearn.__version__) >= StrictVersion('0.18.1')

import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)

assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')

# We need keras 2.0.6 or later as this is the version our GPU based system used to create some models
!pip install keras --upgrade
# after installation call Restart & Run All in Kernel menu

import keras
print(keras.__version__)

assert StrictVersion(keras.__version__) >= StrictVersion('2.0.6')

import pandas as pd
print(pd.__version__)

assert StrictVersion(pd.__version__) >= StrictVersion('0.19.0')
"""
Explanation: Introduction to CNNs
We have a realistic example https://twitter.com/art_sobolev/status/907857395757481985?s=03: I don't think it's fine to only list experiments on toy datasets, and hide failures on more complicated cases.
Which is why, unfortunately, we cannot train everything on local machines or on Azure.
End of explanation
"""
!curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/speed-limit-signs.zip
from zipfile import ZipFile
zip = ZipFile(r'speed-limit-signs.zip')
zip.extractall('.')
!ls -l speed-limit-signs
!cat speed-limit-signs/README.md
"""
Explanation: Loading and preparing the image data
End of explanation
"""
import os
import skimage.data
import skimage.transform
from keras.utils.np_utils import to_categorical
import numpy as np

def load_data(data_dir, type=".ppm"):
    num_categories = 6

    # Get all subdirectories of data_dir. Each represents a label.
    directories = [d for d in os.listdir(data_dir)
                   if os.path.isdir(os.path.join(data_dir, d))]
    # Loop through the label directories and collect the data in
    # two lists, labels and images.
    labels = []
    images = []
    for d in directories:
        label_dir = os.path.join(data_dir, d)
        file_names = [os.path.join(label_dir, f)
                      for f in os.listdir(label_dir) if f.endswith(type)]
        # For each label, load its images and add them to the images list.
        # And add the label number (i.e. directory name) to the labels list.
        for f in file_names:
            images.append(skimage.data.imread(f))
            labels.append(int(d))
    images64 = [skimage.transform.resize(image, (64, 64)) for image in images]
    return images64, labels

# Load datasets.
ROOT_PATH = "./"
original_dir = os.path.join(ROOT_PATH, "speed-limit-signs")
images, labels = load_data(original_dir, type=".ppm")

import matplotlib
import matplotlib.pyplot as plt

def display_images_and_labels(images, labels):
    """Display the first image of each label."""
    unique_labels = set(labels)
    plt.figure(figsize=(15, 15))
    i = 1
    for label in unique_labels:
        # Pick the first image for each label.
        image = images[labels.index(label)]
        plt.subplot(8, 8, i)  # A grid of 8 rows x 8 columns
        plt.axis('off')
        plt.title("Label {0} ({1})".format(label, labels.count(label)))
        i += 1
        _ = plt.imshow(image)

display_images_and_labels(images, labels)
"""
Explanation: Big Kudos to Waleed Abdulla for providing the initial idea and many of the functions used to prepare and display the images: https://medium.com/@waleedka/traffic-sign-recognition-with-tensorflow-629dffc391a6#.i728o84ib
End of explanation
"""
# again a little bit of feature engineering
y = np.array(labels)
X = np.array(images)

from keras.utils.np_utils import to_categorical
num_categories = 6

y = to_categorical(y, num_categories)

from keras.models import Model
from keras.layers import Dense, Dropout, Flatten, Input
from keras.layers import Convolution2D, MaxPooling2D

# input tensor for a 3-channel 64x64 image
inputs = Input(shape=(64, 64, 3))

# one block of convolutional layers
x = Convolution2D(64, 3, activation='relu', padding='same')(inputs)
x = Convolution2D(64, 3, activation='relu', padding='same')(x)
x =
Convolution2D(64, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

# one more block
x = Convolution2D(128, 3, activation='relu', padding='same')(x)
x = Convolution2D(128, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

# one more block
x = Convolution2D(256, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)

x = Flatten()(x)
x = Dense(256, activation='relu')(x)

# softmax activation, 6 categories
predictions = Dense(6, activation='softmax')(x)

model = Model(input=inputs, output=predictions)
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
"""
Explanation: Model architecture
http://cs231n.github.io/neural-networks-1/#power
Layout of a typical CNN: http://cs231n.github.io/convolutional-networks/
Classic VGG-like architecture
We use a VGG-like architecture based on https://arxiv.org/abs/1409.1556
Basic idea: sequential, deep, small convolutional filters; use dropouts to reduce overfitting
16/19 layers are typical; we choose fewer layers because we have limited resources
Convolutional blocks: cascading many convolutional layers with downsampling in between
http://cs231n.github.io/convolutional-networks/#conv
Example of a convolution: many convolutional filters applied over all channels of the original image
http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html
Downsampling layer: reduces data size and the risk of overfitting
http://cs231n.github.io/convolutional-networks/#pool
Choose one of the two hands-on exercises
Hands-On Alternative 1 (overview)
Experiment with all the layer types: https://transcranial.github.io/keras-js/#/mnist-cnn
The MNIST architecture is simpler, but contains all the kinds of layers we also use here
Draw a few digits and look at the intermediate results in every layer
By the way: Keras.js can also run your own Keras models in the browser
Hands-On Alternative 2 (how it works)
Try out filter kernels for CNNs: http://setosa.io/ev/image-kernels/
* Use at least Sharpen and Blur
* Try both on a traffic sign: https://github.com/DJCordhose/speed-limit-signs/raw/master/data/real-world/4/100-sky-cutoff-detail.jpg
* Create a filter of your own
* Can you create a filter that produces an all-black output?
End of explanation
"""
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.9, random_state=42, stratify=y)
X_train.shape, y_train.shape

# %time model.fit(X_train, y_train, epochs=50, validation_split=0.3)
%time model.fit(X_train, y_train, epochs=100, validation_split=0.3)
"""
Explanation: Optimizers: Adam and RMSprop seem nice
http://cs231n.github.io/neural-networks-3/#ada
First we sanity-check our model: we deliberately overfit it on a small amount of data, to see whether it can be trained at all
http://cs231n.github.io/neural-networks-3/#sanitycheck
End of explanation
"""
from keras.models import Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input
from keras.layers import Convolution2D, MaxPooling2D

# this is important, try and vary between .4 and .75
drop_out = 0.7

# input tensor for a 3-channel 64x64 image
inputs = Input(shape=(64, 64, 3))

# one block of convolutional layers
x = Convolution2D(64, 3, activation='relu', padding='same')(inputs)
x = Convolution2D(64, 3, activation='relu', padding='same')(x)
x = Convolution2D(64, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(drop_out)(x)

# one more block
x = Convolution2D(128, 3, activation='relu', padding='same')(x)
x = Convolution2D(128, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(drop_out)(x)

# one more block
x = Convolution2D(256, 3, activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(drop_out)(x)

x = Flatten()(x)
x = Dense(256,
activation='relu')(x) x = Dropout(drop_out)(x) # softmax activation, 6 categories predictions = Dense(6, activation='softmax')(x) model = Model(input=inputs, output=predictions) model.summary() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y) """ Explanation: Verlauf der Metriken beim Overfitting Accuracy Validation Accuracy Hands-On: Hauptübung Vereinfache die Architektur so weit, bis sie nicht mehr overfitten kann reduziere dazu entweder die Layers oder die Feature Channels Reduziere die Epochen auf ca. 50, damit du schnell experimentieren kannst Ein solches Modell kann extrem einfach sein und schon dadurch Overfittung verhindern. Wir verfolgen hier aber eine andere Philosophie: Wir machen das Modell so komplex wie es unsere Hardware zulässt und nutzen eine andere Methode, um Overfitting zu verhindern. Overfitting vermeiden mit Dropout End of explanation """ # https://keras.io/callbacks/#tensorboard tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log') # To start tensorboard # tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log # open http://localhost:6006 early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=100, verbose=1) checkpoint_callback = keras.callbacks.ModelCheckpoint('./model-checkpoints/weights.epoch-{epoch:02d}-val_loss-{val_loss:.2f}.hdf5'); !rm -r tf_log # Depends on harware GPU architecture, set as high as possible (this works well on K80) BATCH_SIZE = 500 # %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback, early_stopping_callback]) %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback]) # %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2) """ Explanation: Training Auf einem GPU basierten System geht 
das in ein paar Minuten Azure Rechner sind relativ schnell, haben aber keine GPU Hier dauert jede Epoche ca. 10 Sekunden, bei 500 Epochen = 5000 Sekunden = 1,5 Stunden Das können wir nicht warten 2 Möglichkeiten: Du trainierst mit deinem vereinfachten Modell Wir trainieren mit diesem Notebook gemeinsam ein Modell auf einer K80 GPU dieses Modell wird dann geteilt und jeder macht dann die Auswertung wieder in seinem Notebook Während das Modell trainiert (dauert auf K80 GPU nur ein paar Minuten) sehen wir uns das TensorBoard an und verfolgen das Training Loss, Accuracy, Validation Loss, Validation Accuracy End of explanation """ model.save('conv-vgg.hdf5') !ls -lh # https://transfer.sh/ # Speichert eure Daten für 14 Tage !curl --upload-file conv-vgg.hdf5 https://transfer.sh/conv-vgg.hdf5 # Vortrainiertes Modell # loss: 0.0310 - acc: 0.9917 - val_loss: 0.4075 - val_acc: 0.9508 # https://transfer.sh/B1W8e/conv-vgg.hdf5 """ Explanation: Idealer Verlauf der Metriken beim vollen Training 100% bei Training und über 95% bei Validation sind möglich, sind bei der Datenmenge aber mit Vorsicht zu genießen Accuracy Validation Accuracy Sichern des Modells (falls in diesem Notebook trainiert wurde) unser Modell ist 55 MB groß, das ist ein wirklich großes Modell End of explanation """ !ls -lh !rm conv-vgg.hdf5 # anpassen an aktuelles Modell # Nachricht an Olli: Liegt auch local auf Ollis Rechner und kann zur Not von da hochgeladen werden (ai/models/conv-vgg.hdf5) !curl -O https://transfer.sh/B1W8e/conv-vgg.hdf5 !ls -lh from keras.models import load_model model = load_model('conv-vgg.hdf5') """ Explanation: ODER Laden des trainierten Modells End of explanation """ train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE) train_loss, train_accuracy test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE) test_loss, test_accuracy """ Explanation: Bewertung End of explanation """ import random # Pick 10 random images for test data set 
random.seed(4)  # to make this deterministic
sample_indexes = random.sample(range(len(X_test)), 10)
sample_images = [X_test[i] for i in sample_indexes]
sample_labels = [y_test[i] for i in sample_indexes]

ground_truth = np.argmax(sample_labels, axis=1)
ground_truth

X_sample = np.array(sample_images)
prediction = model.predict(X_sample)
predicted_categories = np.argmax(prediction, axis=1)
predicted_categories

# Display the predictions and the ground truth visually.
def display_prediction(images, true_labels, predicted_labels):
    fig = plt.figure(figsize=(10, 10))
    for i in range(len(true_labels)):
        truth = true_labels[i]
        prediction = predicted_labels[i]
        plt.subplot(5, 2, 1 + i)
        plt.axis('off')
        color = 'green' if truth == prediction else 'red'
        plt.text(80, 10, "Truth: {0}\nPrediction: {1}".format(truth, prediction), fontsize=12, color=color)
        plt.imshow(images[i])

display_prediction(sample_images, ground_truth, predicted_categories)
"""
Explanation: Trying the model out on a few test samples
End of explanation
"""
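The truth/prediction comparison above boils down to an argmax over the softmax outputs; a small self-contained sketch with made-up probabilities (not the model's real outputs):

```python
import numpy as np

# hypothetical softmax outputs for 4 samples over the 6 speed-limit classes
probs = np.array([
    [0.05, 0.70, 0.05, 0.05, 0.10, 0.05],
    [0.10, 0.10, 0.60, 0.10, 0.05, 0.05],
    [0.15, 0.15, 0.15, 0.40, 0.10, 0.05],
    [0.80, 0.05, 0.05, 0.05, 0.03, 0.02],
])
truth = np.array([1, 2, 5, 0])  # made-up ground-truth labels

predicted = np.argmax(probs, axis=1)      # class with the highest probability per row
accuracy = np.mean(predicted == truth)    # fraction of matching labels

assert list(predicted) == [1, 2, 3, 0]
assert accuracy == 0.75
```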
KshitijT/fundamentals_of_interferometry
3_Positional_Astronomy/3_1_equatorial_coordinates.ipynb
gpl-2.0
import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS from IPython.display import HTML HTML('../style/code_toggle.html') import healpy as hp %pylab inline pylab.rcParams['figure.figsize'] = (15, 10) import matplotlib import ephem """ Explanation: Outline Glossary 3. Positional Astronomy Previous: 3. Positional Astronomy Next: 3.2 Hour Angle (HA) and Local Sidereal Time (LST) Import standard modules: End of explanation """ arcturus = ephem.star('Arcturus') arcturus.compute('2016/2/8',epoch=ephem.J2000) print('J2000: RA:%s DEC:%s' % (arcturus.ra, arcturus.dec)) arcturus.compute('2016/2/8', epoch=ephem.B1950) print('B1950: RA:%s DEC:%s' % (arcturus.a_ra, arcturus.a_dec)) """ Explanation: 3.1 Equatorial Coordinates (RA,DEC) 3.1.1 The Celestial Sphere We can use a geographical coordinate system to uniquely identify a position on earth. We normally use the coordinates latitude $L_a$ (to measure north and south) and longitude $L_o$ (to measure east and west) to accomplish this. The equatorial coordinate system is depicted in Fig. 3.1.1 &#10549;. <a id='pos:fig:geo'><!--\label{pos:fig:geo}--></a> <img src='figures/geo.svg' width=60%> Figure 3.1.1: The geographical coordinates latitude $L_a$ and longitude $L_o$. We also require a coordinate system to map the celestial objects. For all intents and purposes we may think of our universe as being projected onto a sphere of arbitrary radius. This sphere surrounds the Earth and is known as the celestial sphere. This is not a true representation of our universe, but it is a very useful approximate astronomical construct. The celestial equator is obtained by projecting the equator of the earth onto the celestial sphere. The stars themselves do not move on the celestial sphere and therefore have a unique location on it. The Sun is an exception, it changes position in a periodic fashion during the year (as the Earth orbits around the Sun). 
The path it traverses on the celestial sphere is known as the ecliptic.
3.1.2 The NCP and SCP
The north celestial pole (NCP) is an important location on the celestial sphere and is obtained by projecting the north pole of the earth onto the celestial sphere. The star Polaris is very close to the NCP and serves as a reference when positioning a telescope. The south celestial pole (SCP) is obtained in a similar way. The celestial equator, introduced above, lies in the same plane as the equator of the earth. The southern hemisphere counterpart of Polaris is <span style="background-color:cyan">KT:GM: Do you want to add Sigma Octantis to the Glossary?</span> Sigma Octantis.
We use a specific point on the celestial equator from which we measure the location of all other celestial objects. This point is known as the first point of Aries ($\gamma$) <!--\vernal--> or the vernal equinox. The vernal equinox is the point where the ecliptic intersects the celestial equator (south to north). We discuss the vernal equinox in more detail in $\S$ 3.2.2 &#10142; <!--\ref{pos:sec:lst}-->.
3.1.3 Coordinate Definitions:
We use the equatorial coordinates to uniquely identify the location of celestial objects rotating with the celestial sphere around the SCP/NCP axis.
The Right Ascension $\alpha$ - We define the hour circle of an object as the circle on the celestial sphere that crosses the NCP and the object itself, while also perpendicularly intersecting the celestial equator. The right ascension of an object is the angular distance between the vernal equinox and the hour circle of the object, measured eastward along the celestial equator. It is measured in hours, minutes and seconds (e.g.
$\alpha = 03^\text{h}13^\text{m}32.5^\text{s}$) and spans $360^\circ$ on the celestial sphere, from $\alpha = 00^\text{h}00^\text{m}00^\text{s}$ (the coordinates of $\gamma$) to $\alpha = 23^\text{h}59^\text{m}59^\text{s}$.
The Declination $\delta$ - the declination of an object is the angular distance from the celestial equator measured along its hour circle (it is positive in the northern celestial hemisphere and negative in the southern celestial hemisphere). It is measured in degrees, arcminutes and arcseconds (e.g. $\delta = -15^\circ23'44''$) and spans from $\delta = -90^\circ00'00''$ (SCP) to $\delta = +90^\circ00'00''$ (NCP). The equatorial coordinates are presented graphically in Fig. 3.1.2 &#10549; <!--\ref{pos:fig:equatorial_coordinates}-->.
<div class=warn>
<b>Warning:</b> As for any spherical system, the Right Ascension of the NCP ($\delta=+90^\circ$) and of the SCP ($\delta=-90^\circ$) is ill-defined, and a source close to either celestial pole can have an unintuitive Right Ascension.
</div>
<a id='pos:fig:equatorial_coordinates'></a> <!--\label{pos:fig:equatorial_coordinates}-->
<img src='figures/equatorial.svg' width=500>
Figure 3.1.2: The equatorial coordinates $\alpha$ and $\delta$. The vernal equinox $\gamma$, the equatorial reference point, is also depicted. The vernal equinox is the point where the ecliptic (the path the sun traverses over one year) intersects the celestial equator. <span style="background-color:cyan">KT:XX: What are the green circles in the image? </span>
<div class=warn>
<b>Warning:</b> One arcminute on the declination axis (e.g. $00^\circ01'00''$) is not equal to one <em>minute</em> on the right ascension axis (e.g. $00^\text{h}01^\text{m}00^\text{s}$).
<br>
Indeed, in RA the 24$^\text{h}$ circle is mapped to a 360$^\circ$ circle, meaning that 1 hour spans a section of 15$^\circ$. And as 1$^\text{h}$ is 60$^\text{m}$, 1$^\text{m}$ in RA corresponds to $1^\text{m} = \frac{1^\text{h}}{60} = \frac{15^\circ}{60} = 0.25^\circ = 15'$.
<br>
You should be careful about this **factor of 15 difference between an RA minute and a DEC arcminute** - equivalently, $1'$ of declination corresponds to only $4^\text{s}$ of right ascension - so $\text{RA} \; 00^\text{h}01^\text{m}00^\text{s}\neq \text{DEC} \; 00^\circ01'00''$.
</div>
3.1.3 J2000 and B1950
We will be making use of the <cite data-cite=''>pyephem package</cite> &#10548; in the rest of this chapter to help us clarify and better understand some theoretical concepts. The two classes we will be using are the Observer and the Body class. The Observer class acts as a proxy for an array, while the Body class embodies a specific celestial object. In this section we will only make use of the Body class.
Earlier in this section I mentioned that the celestial objects do not move on the celestial sphere and therefore have fixed equatorial coordinates. This is not entirely true. Due to precession (the change in the orientation of the earth's rotational axis) the locations of the stars do in fact change minutely during the course of one generation. That is why we need to link the equatorial coordinates of a celestial object in a catalogue to a specific observational epoch (a specific instant in time). We can then easily compute the true coordinates as they would be today given the equatorial coordinates from a specific epoch as a starting point. There are two popular epochs that are often used, namely J2000 and B1950. Expressed in <cite data-cite=''>UT (Universal Time)</cite> &#10548;:
* B1950 - 1949/12/31 22:09:50 UT,
* J2000 - 2000/1/1 12:00:00 UT.
The 'B' and the 'J' serve as shorthand for the Besselian year and the Julian year respectively. They indicate the length of year used when defining the exact instants in time associated with J2000 and B1950. The Besselian year is based on the concept of a <cite data-cite=''>tropical year</cite> &#10548; and is not used anymore. The Julian year consists of 365.25 days. In the code snippet below we use pyephem to determine the J2000 and B1950 equatorial coordinates of Arcturus.
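As a quick numerical check of these epoch definitions, here is a sketch using the Fliegel-Van Flandern integer formula for the Gregorian calendar (the function names below are ours, not pyephem's):

```python
def julian_day_number(year, month, day):
    """Julian Day Number at noon UT for a Gregorian calendar date (Fliegel-Van Flandern)."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return day + (153 * m + 2) // 5 + 365 * y + y // 4 - y // 100 + y // 400 - 32045

# J2000.0 is defined as 2000/1/1 12:00:00 UT, i.e. Julian Date 2451545.0
assert julian_day_number(2000, 1, 1) == 2451545

def julian_epoch(jd):
    """Julian epoch of a Julian Date: a Julian year is exactly 365.25 days."""
    return 2000.0 + (jd - 2451545.0) / 365.25

assert julian_epoch(2451545.0) == 2000.0
assert julian_epoch(2451545.0 + 365.25) == 2001.0
```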
End of explanation """ haslam = hp.read_map('../data/fits/haslam/lambda_haslam408_nofilt.fits') matplotlib.rcParams.update({'font.size': 10}) proj_map = hp.cartview(haslam,coord=['G','C'], max=2e5, xsize=2000,return_projected_map=True,title="Haslam 408 MHz with no filtering",cbar=False) hp.graticule() """ Explanation: 3.1.4 Example: The 408 MHz Haslam map To finish things off, let's make sure that given the concepts we have learned in this section we are able to interpret a radio skymap correctly. We will be plotting and inspecting the <cite data-cite=''>Haslam 408 MHz map</cite> &#10548;. We load the Haslam map with read_map and view it with cartview. These two functions form part of the <cite data-cite=''>healpy package</cite> &#10548;. End of explanation """ fig = plt.figure() ax = fig.add_subplot(111) matplotlib.rcParams.update({'font.size': 22}) #replot the projected healpy map ax.imshow(proj_map[::-1,:],vmax=2e5, extent=[12,-12,-90,90],aspect='auto') names = np.array(["Vernal Equinox","Cassiopeia A","Sagitarius A","Cygnus A","Crab Nebula","Fornax A","Pictor A"]) ra = np.array([0,(23 + 23./60 + 24./3600)-24,(17 + 42./60 + 9./3600)-24,(19 + 59./60 + 28./3600)-24,5+34./60+32./3600,3+22./60+41.7/3600,5+19./60+49.7/3600]) dec = np.array([0,58+48./60+54./3600,-28-50./60,40+44./60+2./3600,22+52./3600,-37-12./60-30./3600,-45-46./60-44./3600]) #mark the positions of important radio sources ax.plot(ra,dec,'ro',ms=20,mfc="None") for k in xrange(len(names)): ax.annotate(names[k], xy = (ra[k],dec[k]), xytext=(ra[k]+0.8, dec[k]+5)) #create userdefined axis labels and ticks ax.set_xlim(12,-12) ax.set_ylim(-90,90) ticks = np.array([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90]) plt.yticks(ticks) ticks = np.array([12,10,8,6,4,2,0,-2,-4,-8,-6,-10,-12]) plt.xticks(ticks) plt.xlabel("Right Ascension [$h$]") plt.ylabel("Declination [$^{\circ}$]") plt.title("Haslam 408 MHz with no filtering") #relabel the tick values fig.canvas.draw() labels = [item.get_text() 
for item in ax.get_xticklabels()] labels = np.array(["12$^h$","10$^h$","8$^h$","6$^h$","4$^h$","2$^h$","0$^h$","22$^h$","20$^h$","18$^h$","16$^h$","14$^h$","12$^h$"]) ax.set_xticklabels(labels) labels = [item.get_text() for item in ax.get_yticklabels()] labels = np.array(["-90$^{\circ}$","-80$^{\circ}$","-70$^{\circ}$","-60$^{\circ}$","-50$^{\circ}$","-40$^{\circ}$","-30$^{\circ}$","-20$^{\circ}$","-10$^{\circ}$","0$^{\circ}$","10$^{\circ}$","20$^{\circ}$","30$^{\circ}$","40$^{\circ}$","50$^{\circ}$","60$^{\circ}$","70$^{\circ}$","80$^{\circ}$","90$^{\circ}$"]) ax.set_yticklabels(labels) ax.grid('on') """ Explanation: The cartview function also produces a projected map as a byproduct (it takes the form of a 2D numpy array). We can now replot this projected map using matplotlib (see Fig. 3.1.3 &#10549; <!--\ref{pos:fig:haslam_map}-->). We do so in the code snippet that follows. End of explanation """
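The axis relabelling above maps right ascension hours onto degrees of sky; the underlying sexagesimal conversions can be sketched as follows (the helper names are ours; the test values are Cassiopeia A's coordinates as listed in the plotting cell):

```python
def hms_to_deg(h, m, s):
    """Right ascension in hours/minutes/seconds -> degrees (1 hour = 15 degrees)."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def dms_to_deg(d, m, s):
    """Declination in degrees/arcmin/arcsec -> decimal degrees (sign taken from d)."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

# Cassiopeia A: RA 23h23m24s, DEC +58d48'54''
ra_deg = hms_to_deg(23, 23, 24)
dec_deg = dms_to_deg(58, 48, 54)
assert abs(ra_deg - 350.85) < 1e-9
assert abs(dec_deg - 58.815) < 1e-9
```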
google/data-pills
pills/Google Ads/[DATA_PILL]_[Google_Ads]_Frequency_and_Audience_Analysis_(ADH).ipynb
apache-2.0
# The Developer Key is used to retrieve a discovery document containing the # non-public Full Circle Query v2 API. This is used to build the service used # in the samples to make API requests. Please see the README for instructions # on how to configure your Google Cloud Project for access to the Full Circle # Query v2 API. DEVELOPER_KEY = 'xxxx' #'INSERT_DEVELOPER_KEY_HERE' # The client secrets file can be downloaded from the Google Cloud Console. CLIENT_SECRETS_FILE = 'adh-key.json' #'Make sure you have correctly renamed this file and you have uploaded it in this colab' """ Explanation: PLEASE MAKE A COPY BEFORE CHANGING Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Important This content are intended for educational and informational purposes only. Configuration ADH APIs Configuration Steps Enable the ADH v1 API in the Google Cloud Storage account you use to access the API. When searching for the API in your GCP Console API Library, use the search term “adsdatahub”. Go to the Google Developers Console and verify that you have access to the Full Circle Query project via the drop-down menu at the top of the page. If you don't see the Full Circle Query project, you should reach out to the Ads Data Hub support team to get access. From the project drop-down menu, select your Big Query project. Click on the hamburger button on the top left corner of the page and click APIs & services > Credentials. 
If you have not done so already, create an API key by clicking the Create credentials drop-down menu and select API key. This will create an API key that you will need for a later step. If you have not done so already, create a new OAuth 2.0 client ID by clicking the Create credentials button and select OAuth client ID. For the Application type select Other and optionally enter a name to be associated with the client ID. Click Create to create the new Client ID and a dialog will appear to show you your client ID and secret. On the [Credentials page]((https://console.cloud.google.com/apis/credentials) for your project, find your new client ID listed under OAuth 2.0 client IDs, and click the corresponding download icon. The downloaded file will contain your credentials, which will be needed to step through the OAuth 2.0 installed application flow. update the DEVELOPER_KEY field to match the API key you retrieved earlier. Rename the credentials file you downloaded earlier to adh-key.json and upload the file in this colab (on the left menu click on the "Files" tab and then click on the "upload" button End of explanation """ import json import sys import argparse import pprint import random import datetime import pandas as pd import plotly.plotly as py import plotly.graph_objs as go from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient import discovery from oauthlib.oauth2.rfc6749.errors import InvalidGrantError from google.auth.transport.requests import AuthorizedSession from google.auth.transport.requests import Request from google.oauth2.credentials import Credentials from plotly.offline import iplot from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter from googleapiclient.errors import HttpError from google.colab import auth auth.authenticate_user() print('Authenticated') """ Explanation: Install Dependencies End of explanation """ # Allow plot images to be displayed %matplotlib inline # Functions def 
enable_plotly_in_cell(): import IPython from plotly.offline import init_notebook_mode display(IPython.core.display.HTML(''' <script src="/static/components/requirejs/require.js"></script> ''')) init_notebook_mode(connected=False) """ Explanation: Define function to enable charting library End of explanation """ #!/usr/bin/python # # Copyright 2017 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Utilities used to step through OAuth 2.0 flow. These are intended to be used for stepping through samples for the Full Circle Query v2 API. """ _APPLICATION_NAME = 'ADH Campaign Overlap' _CREDENTIALS_FILE = 'fcq-credentials.json' #just to rewrite a new credential file, not for users to provide _SCOPES = 'https://www.googleapis.com/auth/adsdatahub' _DISCOVERY_URL_TEMPLATE = 'https://%s/$discovery/rest?version=%s&key=%s' _FCQ_DISCOVERY_FILE = 'fcq-discovery.json' _FCQ_SERVICE = 'adsdatahub.googleapis.com' _FCQ_VERSION = 'v1' _REDIRECT_URI = 'urn:ietf:wg:oauth:2.0:oob' _SCOPE = ['https://www.googleapis.com/auth/adsdatahub'] _TOKEN_URI = 'https://accounts.google.com/o/oauth2/token' MAX_PAGE_SIZE = 50 def _GetCredentialsFromInstalledApplicationFlow(): """Get new credentials using the installed application flow.""" flow = InstalledAppFlow.from_client_secrets_file( CLIENT_SECRETS_FILE, scopes=_SCOPE) flow.redirect_uri = _REDIRECT_URI # Set the redirect URI used for the flow. 
auth_url, _ = flow.authorization_url(prompt='consent') print ('Log into the Google Account you use to access the adsdatahub Query ' 'v1 API and go to the following URL:\n%s\n' % auth_url) print 'After approving the token, enter the verification code (if specified).' code = raw_input('Code: ') try: flow.fetch_token(code=code) except InvalidGrantError as ex: print 'Authentication has failed: %s' % ex sys.exit(1) credentials = flow.credentials _SaveCredentials(credentials) return credentials def _LoadCredentials(): """Loads and instantiates Credentials from JSON credentials file.""" with open(_CREDENTIALS_FILE, 'rb') as handler: stored_creds = json.loads(handler.read()) creds = Credentials(client_id=stored_creds['client_id'], client_secret=stored_creds['client_secret'], token=None, refresh_token=stored_creds['refresh_token'], token_uri=_TOKEN_URI) return creds def _SaveCredentials(creds): """Save credentials to JSON file.""" stored_creds = { 'client_id': getattr(creds, '_client_id'), 'client_secret': getattr(creds, '_client_secret'), 'refresh_token': getattr(creds, '_refresh_token') } with open(_CREDENTIALS_FILE, 'wb') as handler: handler.write(json.dumps(stored_creds)) def GetCredentials(): """Get stored credentials if they exist, otherwise return new credentials. If no stored credentials are found, new credentials will be produced by stepping through the Installed Application OAuth 2.0 flow with the specified client secrets file. The credentials will then be saved for future use. Returns: A configured google.oauth2.credentials.Credentials instance. """ try: creds = _LoadCredentials() creds.refresh(Request()) except IOError: creds = _GetCredentialsFromInstalledApplicationFlow() return creds def GetDiscoveryDocument(): """Downloads the adsdatahub v1 discovery document. Downloads the adsdatahub v1 discovery document to fcq-discovery.json if it is accessible. If the file already exists, it will be overwritten. 
Raises: ValueError: raised if the discovery document is inaccessible for any reason. """ credentials = GetCredentials() discovery_url = _DISCOVERY_URL_TEMPLATE % ( _FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY) auth_session = AuthorizedSession(credentials) discovery_response = auth_session.get(discovery_url) if discovery_response.status_code == 200: with open(_FCQ_DISCOVERY_FILE, 'wb') as handler: handler.write(discovery_response.text) else: raise ValueError('Unable to retrieve discovery document for api name "%s"' 'and version "%s" via discovery URL: %s' % _FCQ_SERVICE, _FCQ_VERSION, discovery_url) def GetService(): """Builds a configured adsdatahub v1 API service. Returns: A googleapiclient.discovery.Resource instance configured for the adsdatahub v1 service. """ credentials = GetCredentials() discovery_url = _DISCOVERY_URL_TEMPLATE % ( _FCQ_SERVICE, _FCQ_VERSION, DEVELOPER_KEY) service = discovery.build( 'adsdatahub', _FCQ_VERSION, credentials=credentials, discoveryServiceUrl=discovery_url) return service def GetServiceFromDiscoveryDocument(): """Builds a configured Full Circle Query v2 API service via discovery file. Returns: A googleapiclient.discovery.Resource instance configured for the Full Circle Query API v2 service. """ credentials = GetCredentials() with open(_FCQ_DISCOVERY_FILE, 'rb') as handler: discovery_doc = handler.read() service = discovery.build_from_document( service=discovery_doc, credentials=credentials) return service try: full_circle_query = GetService() except IOError as ex: print ('Unable to create ads data hub service - %s' % ex) print ('Did you specify the client secrets file in samples_util.py?') sys.exit(1) try: # Execute the request. 
    response = full_circle_query.customers().list().execute()
except HttpError as e:
    print (e)
    sys.exit(1)

if 'customers' in response:
    print ('ADH API Returned {} Ads Data Hub customers for the current user!'.format(len(response['customers'])))
    for customer in response['customers']:
        print(json.dumps(customer))
else:
    print ('No customers found for current user.')
"""
Explanation: Authenticate against the ADH API
ADH documentation
End of explanation
"""
#@title Define ADH configuration parameters
customer_id = 000000001 #@param
query_name = 'test1' #@param
big_query_project = 'adh-scratch' #@param Destination Project ID
big_query_dataset = 'test' #@param Destination Dataset
big_query_destination_table = 'freqanalysis_test' #@param Destination Table
big_query_destination_table_affinity = 'freqanalysis_test_affinity' #@param Affinity Destination Table
big_query_destination_table_inmarket = 'freqanalysis_test_inmarket' #@param In-market Destination Table
big_query_destination_table_age_gender = 'freqanalysis_test_age_gender' #@param Age/Gender Destination Table
start_date = '2019-12-01' #@param {type:"date", allow-input: true}
end_date = '2019-12-31' #@param {type:"date", allow-input: true}
max_freq = 100 #@param {type:"integer", allow-input: true}
cpm = 12 #@param {type:"number", allow-input: true}
id_type = "campaign_id" #@param ["", "advertiser_id", "campaign_id", "placement_id", "ad_id"] {type: "string", allow-input: false}
IDs = "12345678" #@param {type: "string", allow-input: true}
"""
Explanation: Frequency Analysis
<b>Purpose:</b> This tool should be used to guide you in defining an optimal frequency cap based on the CTR curve. Because of that, it is most useful in awareness use cases.
Key notes
For some campaigns the user ID will be <b>zeroed</b> (e.g. Google Data, ITP browsers and YouTube Data) and therefore <b>excluded</b> from the analysis.
For more information click <a href="https://support.google.com/dcm/answer/9006418" > here</a>; It will be only included in the analysis campaigns which clicks and impressions were tracked. Instructions * First of all: <b>MAKE A COPY</b> =); * Fulfill the query parameters in the Box 1; * In the menu above click in Runtime > Run All; * Authorize your credentials; * Go to the end of the colab and your figures will be ready; * After defining what should be the optimal frequency cap fill it in the Box 2 and press play. Step 1 - Instructions - Defining parameters to find the optimal frequency <b>max_freq:</b> Stands for the amount of frequency you want to plot the graphics (e.g. if you put 50, you will look for impressions that was shown up to 50 times for users); <b>id_type:</b> How do you want to filter your data (if you don't want to filter leave it blank); <b>IDs:</b> Accordingly to the id_type chosen before, fill in this field following this patterns: 'id-1111', 'id-2222', ... End of explanation """ def df_calc_fields(df): df['ctr'] = df.clicks / df.impressions df['cpc'] = df.cost / df.clicks df['cumulative_clicks'] = df.clicks.cumsum() df['cumulative_impressions'] = df.impressions.cumsum() df['cumulative_reach'] = df.reach.cumsum() df['cumulative_cost'] = df.cost.cumsum() df['coverage_clicks'] = df.cumulative_clicks / df.clicks.sum() df['coverage_impressions'] = df.cumulative_impressions / df.impressions.sum() df['coverage_reach'] = df.cumulative_reach / df.reach.sum() return df """ Explanation: Step 2 - Create a function for the final calculations From DT data Calculate metrics using pandas Pass through the pandas dataframe when you call this function End of explanation """ # Build the query dc = {} if (IDs == ""): dc['ID_filters'] = "" else: dc['id_type'] = id_type dc['IDs'] = IDs dc['ID_filters'] = '''AND {id_type} IN ({IDs})'''.format(**dc) #create global query list global_query_name = [] """ Explanation: Step 3 - Build the query Step 3a - Query for reach & 
frequency Set up the vairables End of explanation """ q1 = """ WITH imp_u_clicks AS ( SELECT user_id, query_id.time_usec AS interaction_time, 0 AS cost, 'imp' AS interaction_type FROM adh.google_ads_impressions WHERE user_id != '0' {ID_filters} """ """ Explanation: Part 1 - Find all impressions from the impression table: * Select all user IDs from the impression table * Select the event_time * Mark the interaction type as 'imp' for all of these rows * Filter for the dates set in Step 1 using the partition files to reduce bigQuery costs by only searching in files within a 2 day interval of the set date range * Filter out any user IDs that are 0 * If specific ID filters were applied in Step 1 filter the data for those IDs End of explanation """ q2 = """ UNION ALL ( SELECT user_id, click_id.time_usec AS interaction_time, advertiser_click_cost_usd AS cost, 'click' AS interaction_type FROM adh.google_ads_clicks WHERE user_id != '0' AND impression_data.{id_type} IN ({IDs}) ) ), """ """ Explanation: Part 2 - Find all clicks from the clicks table: Select all User IDs from the click table Select the event_time Mark the interaction type as 'click' for all of these rows Filter for the dates set in Step 1 using the partition files to reduce BigQuery costs by only searching in files within a 2 day interval of the set date range If specific ID filters were applied in Step 2 filter the data for those IDs Use a union to create a single table with both impressions and clicks End of explanation """ q3 = """ user_level_data AS ( SELECT user_id, SUM(IF(interaction_type = 'imp', 1, 0)) AS impressions, SUM(IF(interaction_type = 'click', 1, 0)) AS clicks, SUM(cost) AS cost FROM imp_u_clicks GROUP BY user_id) """ """ Explanation: output example: <table> <tr> <th>USER_ID</th> <th>interaction_time</th> <th>interaction_type</th> </tr> <tr> <td>001</td> <td>timestamp</td> <td>impression</td> </tr> <tr> <td>001</td> <td>timestamp</td> <td>impression</td> </tr> <tr> <td>001</td> 
<td>timestamp</td> <td>click</td> </tr> <tr> <td>002</td> <td>timestamp</td> <td>impression</td> </tr> </tr> <tr> <td>002</td> <td>timestamp</td> <td>click</td> </tr> </tr> <tr> <td>003</td> <td>timestamp</td> <td>impression</td> </tr> <tr> <td>001</td> <td>timestamp</td> <td>impression</td> </tr> </table> Part 3 - Calculate impressions and clicks per user: For each user, calculate the number of impressions and clicks using the table created in Part 1 and 2 End of explanation """ q4 = """ SELECT impressions AS frequency, SUM(clicks) AS clicks, SUM(impressions) AS impressions, COUNT(*) AS reach, SUM(cost) AS cost FROM user_level_data GROUP BY 1 ORDER BY frequency ASC """ """ Explanation: output example: <table> <tr> <th>USER_ID</th> <th>impressions</th> <th>clicks</th> </tr> <tr> <td>001</td> <td>3</td> <td>1</td> </tr> <tr> <td>002</td> <td>1</td> <td>1</td> </tr> <tr> <td>003</td> <td>1</td> <td>0</td> </tr> </table> Part 4 - Calculate metrics per frequency: Use the table created in Part 3 with metrics at user level to calculate metrics per each frequency Frequency: The number of impressions served to each user Clicks: The sum of clicks that occured at each frequency Impressions: The sum of all impressions that occured at each frequency Reach: The total number of unique users (the count of all user ids) Group by Frequency End of explanation """ query_text = (q1 + q2 + q3 + q4).format(**dc) print(query_text) """ Explanation: output example: <table> <tr> <th>frequency</th> <th>clicks</th> <th>impression</th> <th>reach</th> </tr> <tr> <td>1</td> <td>1</td> <td>2</td> <td>2</td> </tr> <tr> <td>2</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>3</td> <td>1</td> <td>3</td> <td>1</td> </tr> </table> Join the query and use pythons format method to pass in your parameters set in step 1 End of explanation """ import datetime try: full_circle_query = GetService() except IOError, ex: print 'Unable to create ads data hub service - %s' % ex print 'Did you specify the 
client secrets file?' sys.exit(1) d = datetime.datetime.today() query_create_body = { 'name': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq', 'title': query_name + '_' + d.strftime('%d-%m-%Y') + '_freq', 'queryText': query_text } try: # Execute the request. new_query = full_circle_query.customers().analysisQueries().create(body=query_create_body, parent='customers/' + str(customer_id)).execute() global_query_name.append(new_query["name"]) except HttpError as e: print e sys.exit(1) print 'New query %s created for customer ID "%s":' % (query_name, customer_id) print(json.dumps(new_query)) """ Explanation: Create the query required for ADH * When working with ADH the standard BigQuery query needs to be adapted to run in ADH * This can be done bia the API End of explanation """ qb1 = """ SELECT a.affinity_category, COUNT(imp.query_id.time_usec) AS impression, COUNT(clk.click_id.time_usec) AS clicks, COUNT(DISTINCT imp.user_id) AS reach, COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency, COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr FROM adh.google_ads_impressions imp, UNNEST(affinity) aff LEFT JOIN adh.google_ads_clicks clk USING (query_id) JOIN adh.affinity a on aff = a.affinity_id WHERE a.affinity_category != '' {ID_filters} GROUP BY 1 ORDER BY 3 DESC """ """ Explanation: Step 3b - Query for demographics & interest End of explanation """ qb2 = """ SELECT a.in_market_category, COUNT(imp.query_id.time_usec) AS impression, COUNT(clk.click_id.time_usec) AS clicks, COUNT(DISTINCT imp.user_id) AS reach, COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency, COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr FROM adh.google_ads_impressions imp, UNNEST(in_market) aff LEFT JOIN adh.google_ads_clicks clk USING (query_id) JOIN adh.in_market a on aff = a.in_market_id WHERE a.in_market_category != '' {ID_filters} GROUP BY 1 ORDER BY 3 DESC """ """ Explanation: output example: <table> <tr> 
<th>affinity</th> <th>impression</th> <th>click</th> <th>reach</th> <th>frequency</th> <th>ctr</th> </tr> <tr> <td>Gamers</td> <td>10</td> <td>2</td> <td>5</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>Music Lovers</td> <td>20</td> <td>10</td> <td>4</td> <td>5</td> <td>0.5</td> </tr> <tr> <td>Sports Fans</td> <td>40</td> <td>10</td> <td>2</td> <td>20</td> <td>0.25</td> </tr> </table> End of explanation """ qb3 = """ SELECT gen.gender_name, age.age_group_name, COUNT(imp.query_id.time_usec) AS impression, COUNT(clk.click_id.time_usec) AS clicks, COUNT(DISTINCT imp.user_id) AS reach, COUNT(imp.query_id.time_usec)/COUNT(DISTINCT imp.user_id) AS frequency, COUNT(clk.click_id.time_usec) /COUNT(imp.query_id.time_usec) AS ctr FROM adh.google_ads_impressions imp LEFT JOIN adh.google_ads_clicks clk USING (query_id) LEFT JOIN adh.age_group age ON imp.demographics.age_group = age_group_id LEFT JOIN adh.gender gen ON imp.demographics.gender = gender_id WHERE {id_type} IN ({IDs}) GROUP BY 1,2 ORDER BY 1,2 DESC """ import datetime try: full_circle_query = GetService() except IOError, ex: print 'Unable to create ads data hub service - %s' % ex print 'Did you specify the client secrets file?' sys.exit(1) # create request body method d = datetime.datetime.today() demoQuery = [qb1, qb2, qb3] def createRequestBody(arg): data = {} data['name'] = query_name + '_' + d.strftime('%d-%m-%Y') + '_demo_' +str(arg) data['title'] = query_name + '_' + d.strftime('%d-%m-%Y') + '_demo_' + str(arg) data['queryText'] = demoQuery[arg].format(**dc) return data #create multiple query for i in range(len(demoQuery)): try: # Execute the request. 
queryBody = createRequestBody(i)
    new_query = full_circle_query.customers().analysisQueries().create(body=queryBody, parent='customers/' + str(customer_id)).execute()
    global_query_name.append(new_query["name"])
  except HttpError as e:
    print(e)
    sys.exit(1)

print('New query %s created for customer ID "%s":' % (new_query["name"], customer_id))
"""
Explanation: output example:
<table>
<tr> <th>affinity</th> <th>impression</th> <th>click</th> <th>reach</th> <th>frequency</th> <th>ctr</th> </tr>
<tr> <td>SUVs</td> <td>10</td> <td>2</td> <td>5</td> <td>2</td> <td>0.2</td> </tr>
<tr> <td>Home & Garden</td> <td>20</td> <td>10</td> <td>4</td> <td>5</td> <td>0.5</td> </tr>
<tr> <td>Education</td> <td>40</td> <td>10</td> <td>2</td> <td>20</td> <td>0.25</td> </tr>
</table>
End of explanation
"""
destination_table_full_path = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table
destination_table_full_path_affinity = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_affinity
destination_table_full_path_inmarket = big_query_project + '.' + big_query_dataset + '.' + big_query_destination_table_inmarket
destination_table_full_path_age_gender = big_query_project + '.' + big_query_dataset + '.'
+ big_query_destination_table_age_gender CUSTOMER_ID = customer_id QUERY_NAME = query_name DEST_TABLES = [destination_table_full_path, destination_table_full_path_affinity, destination_table_full_path_inmarket, destination_table_full_path_age_gender] #Dates format_str = '%Y-%m-%d' # The format start_date_obj = datetime.datetime.strptime(start_date, format_str) end_date_obj = datetime.datetime.strptime(end_date, format_str) START_DATE = { "year": start_date_obj.year, "month": start_date_obj.month, "day": start_date_obj.day } END_DATE = { "year": end_date_obj.year, "month": end_date_obj.month, "day": end_date_obj.day } try: full_circle_query = GetService() except IOError, ex: print('Unable to create ads data hub service - %s' % ex) print('Did you specify the client secrets file?') sys.exit(1) query_start_body = { 'spec': { 'startDate': START_DATE, 'endDate': END_DATE }, 'destTable': '', 'customerId': CUSTOMER_ID } #run all queries for i in range(len(global_query_name)): try: # Execute the request. 
query_start_body['destTable'] = DEST_TABLES[i] operation = full_circle_query.customers().analysisQueries().start(body=query_start_body, name=global_query_name[i].encode("ascii")).execute() except HttpError as e: print(e) sys.exit(1) print('Running query with name "%s" via the following operation:' % query_name) print(json.dumps(operation)) """ Explanation: output example: <table> <tr> <th>affinity</th> <th>impression</th> <th>click</th> <th>reach</th> <th>frequency</th> <th>ctr</th> </tr> <tr> <td>SUVs</td> <td>10</td> <td>2</td> <td>5</td> <td>2</td> <td>0.2</td> </tr> <tr> <td>Home & Garden</td> <td>20</td> <td>10</td> <td>4</td> <td>5</td> <td>0.5</td> </tr> <tr> <td>Education</td> <td>40</td> <td>10</td> <td>2</td> <td>20</td> <td>0.25</td> </tr> </table> Check your query exists https://adsdatahub.google.com/#/queries Find your query in the my queries tab Check and ensure your query is valid (there will be a green tick in the top right corner) If your query is not valid hover over the red exclamation mark to see issues that need to be resolved Step 4 - Run the query Start the query Pass the query in to ADH using the full_circle_query method set at the start Pass in the dates, the destination table name in BigQuery and the customer ID End of explanation """ import time statusDone = False while statusDone is False: print("waiting for the job to complete...") updatedOperation = full_circle_query.operations().get(name=operation['name']).execute() if updatedOperation.has_key('done') and updatedOperation['done'] == True: statusDone = True time.sleep(5) print("Job completed... Getting results") #run bigQuery query dc = [big_query_dataset + '.' + big_query_destination_table, big_query_dataset + '.' + big_query_destination_table_affinity, big_query_dataset + '.' + big_query_destination_table_inmarket, big_query_dataset + '.' 
+ big_query_destination_table_age_gender] qs = ['select * from {}'.format(q) for q in dc] """ Explanation: Step 5 - Retrieve the table from BigQuery Retrieve the results from BigQuery Check to make sure the query has finished running and is saved in the new BigQuery Table When it is done we cane retrieve it End of explanation """ # Run query as save as a table (also known as dataframe) df = pd.io.gbq.read_gbq(q1, project_id=big_query_project, dialect='standard', reauth=True) dfs = [pd.io.gbq.read_gbq(q, project_id=big_query_project, dialect='standard', reauth=True) for q in qs] for i in range(4): print(dfs[i]) """ Explanation: We are using the pandas library to run the query. We pass in the query (q), the project id and set the SQL language to 'standard' (as opposed to legacy SQL) End of explanation """ # Save the original dataframe as a csv file in case you need to recover the original data dfs[0].to_csv('data_reach_freq.csv', index=False) dfs[1].to_csv('data_affinity.csv', index=False) dfs[2].to_csv('data_inmarket.csv', index=False) dfs[3].to_csv('data_age_gender.csv', index=False) """ Explanation: Save the output as a CSV End of explanation """ #prepare reach & frequency data df = pd.read_csv('data_reach_freq.csv') print(df.head()) #prepare affinity data df3 = pd.read_csv('data_affinity.csv') print(df3.head()) #prepare in_market data df4 = pd.read_csv('data_inmarket.csv') print(df4.head()) #prepare age & gender data df5 = pd.read_csv('data_age_gender.csv') print(df5.head()) """ Explanation: Setup up the dataframe and preview to check the data. 
End of explanation """ df = df[:max_freq+1] # Reduces the dataframe to have the size you set as the maximum frequency (max_freq) df = df_calc_fields(df) df2 = df.copy() # Copy the dataframe you calculated the fields in case you need to recover it graphs = [] # Variable to save all graphics df.head() """ Explanation: Step 6 - Set up the data and all the charts that will be plotted 6.1 Transform data Use the calculation function created to calculate all the values based off your data End of explanation """ # Save all data into a list to plot the graphics impressions = dict(type='bar', x=df.frequency, y=df.impressions, name='impressions', marker=dict(color='rgb(0, 29, 255)', line=dict(width=1))) ctr = dict( type='scatter', x=df.frequency, y=df.ctr, name='ctr', marker=dict(color='rgb(255, 148, 0)', line=dict(width=1)), xaxis='x1', yaxis='y2', ) layout = dict( title='Impressions and CTR Comparison on Each Frequency', autosize=True, legend=dict(x=1.15, y=1), hovermode='x', xaxis=dict(tickangle=-45, autorange=True, tickfont=dict(size=10), title='frequency', type='category'), yaxis=dict(showgrid=True, title='impressions'), yaxis2=dict(overlaying='y', anchor='x', side='right', showgrid=False, title='ctr'), ) fig = dict(data=[impressions, ctr], layout=layout) graphs.append(fig) clicks = dict(type='bar', x= df.frequency, y= df.clicks, name='Clicks', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) ctr = dict(type='scatter', x= df.frequency, y= df.cpc, name='cpc', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y2' ) layout = dict(autosize= True, title='Clicks and CPC Comparison on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category' ), yaxis=dict( showgrid=True, title= 'clicks' ), yaxis2=dict( overlaying= 'y', anchor= 'x', side= 'right', showgrid= False, title= 'cpc' ) ) fig = dict(data=[clicks, ctr], layout=layout) 
graphs.append(fig) ctr = dict(type='scatter', x= df.frequency, y= df.ctr, name='ctr', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) cpc = dict(type='scatter', x= df.frequency, y= df.cpc, name='cpc', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y2' ) layout = dict(autosize= True, title='CTR and CPC Comparison on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category', showgrid =False ), yaxis=dict( showgrid=False, title= 'ctr' ), yaxis2=dict( overlaying= 'y', anchor= 'x', side= 'right', showgrid= False, title= 'cpc' ) ) fig = dict(data=[ctr, cpc], layout=layout) graphs.append(fig) pareto = dict(type='scatter', x= df.frequency, y= df.coverage_clicks, name='Cumulative % Clicks', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) cpc = dict(type='scatter', x= df.frequency, y= df.cpc, name='cpc', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y2' ) layout = dict(autosize= True, title='Cumulative Clicks and CPC Comparison on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category' ), yaxis=dict( showgrid=True, title= 'cum clicks' ), yaxis2=dict( overlaying= 'y', anchor= 'x', side= 'right', showgrid= False, title= 'cpc' ) ) fig = dict(data=[pareto, cpc], layout=layout) graphs.append(fig) pareto = dict(type='scatter', x= df.frequency, y= df.coverage_clicks, name='Cumulative % Clicks', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) cpc = dict(type='scatter', x= df.frequency, y= df.ctr, name='ctr', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y2' ) layout = dict(autosize= True, title='Cumulative Clicks and CTR Comparison on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, 
autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category' ), yaxis=dict( showgrid=True, title= 'cum clicks' ), yaxis2=dict( overlaying= 'y', anchor= 'x', side= 'right', showgrid= False, title= 'ctr' ) ) fig = dict(data=[pareto, cpc], layout=layout) graphs.append(fig) pareto = dict(type='scatter', x= df.frequency, y= df.coverage_reach, name='Cumulative % Reach', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) cpc = dict(type='scatter', x= df.frequency, y= df.cost, name='cost', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y2' ) layout = dict(autosize= True, title='Cumulative Reach and Cost Comparison on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category' ), yaxis=dict( showgrid=True, title= 'cummulative reach' ), yaxis2=dict( overlaying= 'y', anchor= 'x', side= 'right', showgrid= False, title= 'cost' ) ) """ Explanation: Analysis 1: Frequency Analysis by user Step 1: Set up graphs End of explanation """ # Show the first 5 rows of the dataframe (data matrix) with the final data df.head() # Export the whole dataframe to a csv file that can be used in an external environment df.to_csv('freq_analysis.csv', index=False) """ Explanation: Step 2: Export all the data (optional) End of explanation """ enable_plotly_in_cell() iplot(graphs[0]) """ Explanation: Output: Visualise the data Impression and CTR on each frequency Clicks and CPC Comparison on Each Frequency CTR and CPC Comparison on Each Frequency Cumulative Clicks and CPC Comparison on Each Frequency Cumulative Clicks and CTR Comparison on Each Frequency Impression and CTR on each frequency Consider your frequency range, ensure frequency management is in place. Where is your CTR floor? At what point does your CTR drop below a level that you care about. Determine what the wasted impressions is if you don't change your frequency. 
End of explanation """ enable_plotly_in_cell() iplot(graphs[1]) """ Explanation: Clicks and CPC Comparison on Each Frequency What is your CPC ceiling Understand what the frequency is at that level Determine what impact changing your frequency will have on clicks End of explanation """ enable_plotly_in_cell() iplot(graphs[2]) """ Explanation: CTR and CPC Comparison on Each Frequency How does your CTR and CPC impact each other Make an informed decision regarding suitable goals End of explanation """ enable_plotly_in_cell() iplot(graphs[3]) """ Explanation: Cumulative Clicks and CPC Comparison on Each Frequency Understand what a suitable CPC goal might be 1. What is the change in cost for increased clicks 2. What is the incremental gains for an increased cost End of explanation """ enable_plotly_in_cell() iplot(graphs[4]) """ Explanation: Cumulative Clicks and CTR Comparison on Each Frequency At what frequency does your CTR drop below an acceptable value End of explanation """ #Understand the logic behind calculation graphs2 = [] pareto = dict(type='scatter', x= df.frequency, y= df.coverage_reach, name='Cummulative % Reach', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) ccm_imp = dict(type='scatter', x= df.frequency, y= df.coverage_impressions, name='Cummulative % Impressions', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y' ) layout = dict(autosize= True, title='Cummulative Impressions and Cummulative Reach on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category' ), yaxis=dict( showgrid=True, title= 'cummulative %' ) ) fig = dict(data=[pareto, ccm_imp], layout=layout) graphs2.append(fig) pareto = dict(type='scatter', x= df.frequency, y= df.coverage_clicks, name='Cummulative % Clicks', marker=dict(color= 'rgb(0, 29, 255)', line= dict(width= 1)) ) ccm_imp = dict(type='scatter', x= df.frequency, y= 
df.coverage_impressions, name='Cummulative % Impressions', marker=dict(color= 'rgb(255, 148, 0)', line= dict(width= 1)), xaxis='x1', yaxis='y' ) layout = dict(autosize= True, title='Cumulative Impressions and Cummulative Clicks on Each Frequency', legend= dict(x= 1.15, y= 1 ), hovermode='x', xaxis=dict(tickangle= -45, autorange=True, tickfont=dict(size= 10), title= 'frequency', type= 'category' ), yaxis=dict( showgrid=True, title= 'cummulative %' ) ) fig = dict(data=[pareto, ccm_imp], layout=layout) graphs2.append(fig) """ Explanation: Analysis 2: Understanding optimal frequency Step 1: Set up charts End of explanation """ enable_plotly_in_cell() iplot(graphs2[0]) """ Explanation: Output: Visualise the results Cummulative Impressions and Cummulative Reach on Each Frequency How do you maximise your reach without drastically increasing your impressions? To obtain my reach goals, what frequency do I need at what impression cost? With higher frequency caps you will need more impressions to maximise your reach End of explanation """ enable_plotly_in_cell() iplot(graphs2[1]) """ Explanation: Cummulative Impressions and Cummulative Clicks on Each Frequency To obtain my goals in terms of clicks, what frequency do I need, at what impression cost? End of explanation """ #@title 1.1 - Optimal Frequency optimal_freq = 3#@param {type:"integer", allow-input: true} slider_value = 1 #@param test if optimal_freq > len(df2): raise Exception('Your optimal frequency is higher than the maxmium frequency in your campaign please make sure it is lower than {}'.format(len(df2))) """ Explanation: Analysis 3: Determine impressions outside optimal frequency Step 1: Define parameter to be the Optimal Frequency This parameter below will guide the analysis of media loss talking about impressions. We will calculate the percentage of impressions that are out of the number you set as the optimal frequency. 
End of explanation """ from __future__ import division df2 = df_calc_fields(df2) df_opt, df_not_opt = df[:optimal_freq], df[optimal_freq:] total_impressions = list(df2.cumulative_impressions)[-1] total_imp_not_opt = list(df_not_opt.cumulative_impressions)[-1] - list(df_opt.cumulative_impressions)[-1] imp_not_opt_ratio = total_imp_not_opt / total_impressions total_clicks = list(df2.cumulative_clicks)[-1] total_clicks_not_opt = list(df_not_opt.cumulative_clicks)[-1] - list(df_opt.cumulative_clicks)[-1] clicks_within_opt_ratio = 1-(total_clicks_not_opt / total_clicks) print("{:.1f}% of your total impressions are out of the optimal frequency.".format(imp_not_opt_ratio*100)) print("{:,} of your impressions are out of the optimal frequency".format(total_imp_not_opt)) print("At a CPM of {} - preventing these would result in a cost saving of {:,}".format(cpm, cpm*total_imp_not_opt)) print("") print("If you limited frequency to {}, you would still achieve {:.1f}% of your clicks").format(optimal_freq, clicks_within_opt_ratio*100) """ Explanation: Output: Calculate impression loss End of explanation """ # calculate the color scale diff_affinity = df3.ctr.max(0) - df3.ctr.min(0) df3['colorscale'] = (df3.ctr - df3.ctr.min(0))/ diff_affinity *40 + 130 diff_in_market = df4.ctr.max(0) - df4.ctr.min(0) df4['colorscale'] = (df4.ctr - df4.ctr.min(0))/ diff_in_market *40 + 130 diff_in_market = df5.ctr.max(0) - df5.ctr.min(0) df5['colorscale'] = (df5.ctr - df5.ctr.min(0))/ diff_in_market *40 + 130 #prepare the graph, rerun if no graph shown #affinity graphs3 = [] size1=df3.reach affinity = dict(type='scatter', x= df3.frequency, y= df3.ctr, text= df3.affinity_category, name='Cummulative % Reach', marker=dict(color= df3.colorscale, size=size1, sizemode='area', sizeref=2.*max(size1)/(40.**2), sizemin=4, showscale=True), mode='markers' ) layout = dict(autosize= True, title='affinity bubble chart', xaxis=dict(autorange=True, title= 'frequency' ), yaxis=dict(autorange=True, title= 'CTR' ) ) 
fig = dict(data=[affinity], layout=layout) graphs3.append(fig) #in market size2=df5.reach in_market = dict(type='scatter', x= df4.frequency, y= df4.ctr, text= df4.in_market_category, name='Cummulative % Reach', marker=dict(color= df4.colorscale, size=size2, sizemode='area', sizeref=2.*max(size2)/(40.**2), sizemin=4, showscale=True), mode='markers' ) layout = dict(autosize= True, title='in_market bubble chart', xaxis=dict(autorange=True, title= 'frequency' ), yaxis=dict(autorange=True, title= 'CTR' ) ) fig = dict(data=[in_market], layout=layout) graphs3.append(fig) #age & demo male = dict(type='scatter', x= df5.frequency[df5.gender_name == 'male'], y= df5.ctr[df5.gender_name == 'male'], text= df5.age_group_name[df5.gender_name == 'male'], name='male', marker=dict(size=df5.reach[df5.gender_name == 'male'], sizemode='area', sizeref=2.*max(df5.reach[df5.gender_name == 'male'])/(40.**2), sizemin=4), mode='markers' ) female = dict(type='scatter', x= df5.frequency[df5.gender_name == 'female'], y= df5.ctr[df5.gender_name == 'female'], text= df5.age_group_name[df5.gender_name == 'female'], name='female', marker=dict(size=df5.reach[df5.gender_name == 'female'], sizemode='area', sizeref=2.*max(df5.reach[df5.gender_name == 'female'])/(40.**2), sizemin=4 ), mode='markers' ) unknown = dict(type='scatter', x= df5.frequency[df5.gender_name == 'unknown'], y= df5.ctr[df5.gender_name == 'unknown'], text= df5.age_group_name[df5.gender_name == 'unknown'], name='unknown', marker=dict(size=df5.reach[df5.gender_name == 'unknown'], sizemode='area', sizeref=2.*max(df5.reach[df5.gender_name == 'unknown'])/(40.**2), sizemin=4), mode='markers' ) layout = dict(autosize= True, title='age & gender bubble chart', xaxis=dict(autorange=True, title= 'frequency' ), yaxis=dict(autorange=True, title= 'CTR' ) ) fig = dict(data=[male,female,unknown], layout=layout) graphs3.append(fig) """ Explanation: Analysis 4: Determine which affinity work best Step 1: Set up charts End of explanation """ 
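The bubble traces above all size their markers with plotly's area-scaling convention, `sizeref = 2 * max(sizes) / (max_pixels ** 2)` together with `sizemode='area'`. As a minimal standalone sketch of that arithmetic (the `bubble_sizeref` helper and the sample values are ours, for illustration only):

```python
# Sketch of the marker sizing used by the bubble charts: with sizemode='area',
# plotly draws each marker with area proportional to size / sizeref, and
# sizeref = 2 * max(sizes) / (desired_max_px ** 2) caps the largest marker
# at roughly desired_max_px pixels.
def bubble_sizeref(sizes, desired_max_px=40.0):
    return 2.0 * max(sizes) / (desired_max_px ** 2)

reach = [100, 400, 1600]            # hypothetical per-segment reach values
ref = bubble_sizeref(reach)
print(ref)                          # 2.0  (= 2 * 1600 / 40**2)
print([s / ref for s in reach])     # [50.0, 200.0, 800.0]
```

Because the scale is relative to the largest value in each trace, segments are comparable within one chart but not across charts built from different size columns.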
enable_plotly_in_cell()
iplot(graphs3[0])
"""
Explanation: Output: Visualise the data
Affinity bubble chart
What affinity is underreaching? What affinity should I switch the cost to?
End of explanation
"""
enable_plotly_in_cell()
iplot(graphs3[1])
"""
Explanation: check whether a big bubble sits in the bottom left corner; those are the affinities with the lowest efficiency
bubbles in the upper left corner are the affinities with high potential
keep your investment in the upper right hand corner bubbles
shift budget from the bottom right hand corner to other areas
In market bubble chart
What in market segment is underreaching? What in market segment should I switch the cost to?
End of explanation
"""
enable_plotly_in_cell()
iplot(graphs3[2])
"""
Explanation: check whether a big bubble sits in the bottom left corner; those are the in market segments with the lowest efficiency
bubbles in the upper left corner are the in market segments with high potential
keep your investment in the upper right hand corner bubbles
shift budget from the bottom right hand corner to other areas
age & gender bubble chart
Which gender is underreaching? Which age group should I switch the cost to?
End of explanation
"""
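To close the loop on Analysis 3, the capping arithmetic can be distilled into a short standalone helper. This sketch is ours, not part of the original notebook; it mirrors the notebook's approximation (all impressions served beyond the cap are treated as avoidable) and prices them at CPM per 1,000 impressions, since a CPM is conventionally a cost per mille:

```python
# Illustrative recap of the Analysis 3 logic: given total impressions served at
# each user frequency (index 0 = frequency 1), sum the impressions delivered
# beyond a chosen cap and price them at the campaign CPM (cost per 1,000).
def frequency_cap_savings(imps_by_freq, cap, cpm):
    total = sum(imps_by_freq)
    beyond = sum(n for f, n in enumerate(imps_by_freq, start=1) if f > cap)
    return beyond, beyond / total, cpm * beyond / 1000.0

beyond, share, saving = frequency_cap_savings([1000, 600, 300, 100], cap=2, cpm=12)
print(beyond, share, saving)  # 400 0.2 4.8
```

With these invented counts, 20% of impressions fall beyond a cap of 2, worth 4.8 currency units at a 12 CPM.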
chetan51/nupic.research
projects/dynamic_sparse/notebooks/ExperimentAnalysis-Neurips-debug-hebbianANDmagnitude.ipynb
gpl-3.0
%load_ext autoreload
%autoreload 2

import sys
sys.path.append("../../")

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from dynamic_sparse.common.browser import *
"""
Explanation: Experiment: Compare pruning by Hebbian Learning and Weight Magnitude.
Motivation.
Verify if Hebbian Learning pruning outperforms pruning by Magnitude
Conclusions:
No pruning (0,0) leads to acc of 0.976
Pruning all connections at every epoch (1,0) leads to acc of 0.964
Best performing model is still no hebbian pruning, and weight pruning set to 0.2 (0.981)
Pruning only by hebbian learning decreases accuracy
Combining hebbian and weight magnitude is not an improvement compared to simple weight magnitude pruning
End of explanation
"""
exps = ['neurips_debug_test6', ]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)

df.head(5)

# replace NaNs in the hebbian/weight prune columns
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)

df.columns

df.shape

df.iloc[1]

df.groupby('model')['model'].count()
"""
Explanation: Load and check data
End of explanation
"""
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()

# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape

# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30 df_origin[df_origin['failed']]['epochs'] # helper functions def mean_and_std(s): return "{:.3f} ± {:.3f}".format(s.mean(), s.std()) def round_mean(s): return "{:.0f}".format(round(s.mean())) stats = ['min', 'max', 'mean', 'std'] def agg(columns, filter=None, round=3): if filter is None: return (df.groupby(columns) .agg({'val_acc_max_epoch': round_mean, 'val_acc_max': stats, 'model': ['count']})).round(round) else: return (df[filter].groupby(columns) .agg({'val_acc_max_epoch': round_mean, 'val_acc_max': stats, 'model': ['count']})).round(round) """ Explanation: ## Analysis Experiment Details End of explanation """ # ignoring experiments where weight_prune_perc = 1, results not reliable filter = (df['weight_prune_perc'] < 1) agg(['hebbian_prune_perc'], filter) """ Explanation: What are optimal levels of hebbian and weight pruning End of explanation """ filter = (df['weight_prune_perc'] < 1) agg(['weight_prune_perc'], filter) """ Explanation: No relevant difference End of explanation """ magonly = (df['hebbian_prune_perc'] == 0.0) & (df['weight_prune_perc'] < 0.6) agg(['weight_prune_perc'], magonly) """ Explanation: Optimal level between 0.2 and 0.4 (consistent with previous experiments and SET paper, where 0.3 is an optimal value) End of explanation """ pd.pivot_table(df[filter], index='hebbian_prune_perc', columns='weight_prune_perc', values='val_acc_max', aggfunc=mean_and_std) pd.pivot_table(df[filter], index='hebbian_prune_perc', columns='weight_prune_perc', values='val_acc_last', aggfunc=mean_and_std) df.shape """ Explanation: What is the optimal combination of both End of explanation """
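The `agg` helper above is a thin wrapper around a standard pandas groupby aggregation. As a tiny self-contained illustration on invented data (real runs come from `load_many`; the values here are made up):

```python
import pandas as pd

# Toy stand-in for the experiment dataframe: two pruning levels, two runs each.
toy = pd.DataFrame({
    'hebbian_prune_perc': [0.0, 0.0, 0.5, 0.5],
    'val_acc_max':        [0.96, 0.98, 0.93, 0.95],
})

# Same shape of call as the agg() helper: per-group min/max/mean of accuracy.
summary = toy.groupby('hebbian_prune_perc').agg({'val_acc_max': ['min', 'max', 'mean']})
print(summary)
print(round(summary.loc[0.0, ('val_acc_max', 'mean')], 3))  # 0.97
```

The result has a MultiIndex on the columns, which is why single cells are addressed with a `(column, statistic)` tuple.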
sofianehaddad/gosa
doc/example_gosa.ipynb
lgpl-3.0
import openturns as ot import numpy as np import pygosa %pylab inline """ Explanation: Example of using pygosa We illustrate hereafter the use of the pygosa module. End of explanation """ model = ot.SymbolicFunction(["x1","x2","x3"], ["sin(x1) + 7*sin(x2)^2 + 0.1*(x3^4)*sin(x1)"]) dist = ot.ComposedDistribution( 3 * [ot.Uniform(-np.pi, np.pi)] ) """ Explanation: We define Sobol use-case, which is very common in case of sensitivity analysis: End of explanation """ mcsp = pygosa.SensitivityDesign(dist=dist, model=model, size=1000) """ Explanation: Design of experiment We define the experiment design: End of explanation """ sam = pygosa.MeanSensitivities(mcsp) factors_m = sam.compute_factors() fig, ax = sam.boxplot() figure = pygosa.plot_mean_sensitivities(sam,set_labels=True) """ Explanation: The benefits of using a crude Monte-Carlo approach is the potential use of several contrasts. In this demonstrate example, the used contrast are : Mean contrast to derive its sensitivities Quantile contrast to derive sensitivities for some specific quantile levels Mean contrast to derive sensitivities for some specific threshold values Mean contrast & sensitivities Hereafter we apply the mean contrast to the previous design in order to get the sensitivities : End of explanation """ saq = pygosa.QuantileSensitivities(mcsp) factors_q = [saq.compute_factors(alpha=q) for q in [0.05, 0.25, 0.50, 0.75, 0.95]] fig, ax = saq.boxplot() figure = pygosa.plot_quantiles_sensitivities(saq,set_labels=True) """ Explanation: Quantile sensitivities Hereafter we apply the quantile contrast to the previous design in order to get the sensitivities for quantile levels $\alpha=(5\%, 25\%, 50\%, 75\%, 95\%)$: End of explanation """ sap = pygosa.ProbabilitySensitivities(mcsp) factors_p = [sap.compute_factors(threshold=v) for v in [-2.5, 0, 2.5, 7.0, 7.85]] fig, ax = sap.boxplot(threshold=7.85) figure = pygosa.plot_probability_sensitivities(sap, set_labels=True) """ Explanation: Probability sensitivities 
Hereafter we apply the probability contrast to the previous design in order to get the sensitivities for thresholds $t=(-2.50, 0, 2.50, 7.0, 7.85)$: End of explanation """
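The contrast-based estimators above all start from the same crude Monte Carlo sample of the model. Independently of pygosa, that sampling step can be sketched with plain numpy — the function and input ranges are the ones used above; everything else in this sketch is illustrative:

```python
import numpy as np

def ishigami(x1, x2, x3):
    # Same model as the ot.SymbolicFunction defined above
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(0)
n = 100000
x = rng.uniform(-np.pi, np.pi, size=(n, 3))   # same uniform input ranges as the design
y = ishigami(x[:, 0], x[:, 1], x[:, 2])

# Quantities the mean, quantile and probability contrasts are built around
print(y.mean())                               # analytically 7/2 = 3.5
print(np.quantile(y, [0.05, 0.25, 0.50, 0.75, 0.95]))
print((y > 7.85).mean())                      # exceedance probability at one of the thresholds above
```

The empirical mean, quantiles and exceedance probabilities computed here are exactly the targets whose sensitivities the three contrast classes decompose.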
bbalasub1/glmnet_python
docs/glmnet_vignette.ipynb
gpl-3.0
# Jupyter setup to expand cell display to 100% width on your screen (optional) from IPython.core.display import display, HTML display(HTML("<style>.container { width:100% !important; }</style>")) # Import relevant modules and setup for calling glmnet %reset -f %matplotlib inline import sys sys.path.append('../test') sys.path.append('../lib') import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings from glmnet import glmnet; from glmnetPlot import glmnetPlot from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict # parameters baseDataDir= '../data/' # load data x = scipy.loadtxt(baseDataDir + 'QuickStartExampleX.dat', dtype = scipy.float64) y = scipy.loadtxt(baseDataDir + 'QuickStartExampleY.dat', dtype = scipy.float64) # create weights t = scipy.ones((50, 1), dtype = scipy.float64) wts = scipy.row_stack((t, 2*t)) """ Explanation: Glmnet Vignette (for python) July 12, 2017 Authors Trevor Hastie, B. J. Balakumar Introduction Glmnet is a package that fits a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elasticnet penalty at a grid of values for the regularization parameter lambda. The algorithm is extremely fast, and can exploit sparsity in the input matrix x. It fits linear, logistic and multinomial, poisson, and Cox regression models. A variety of predictions can be made from the fitted models. It can also fit multi-response linear regression. The authors of glmnet are Jerome Friedman, Trevor Hastie, Rob Tibshirani and Noah Simon. The Python package is maintained by B. J. Balakumar. The R package is maintained by Trevor Hastie. The matlab version of glmnet is maintained by Junyang Qian. This vignette describes the usage of glmnet in Python. 
glmnet solves the following problem: $$ \min_{\beta_0, \beta}\frac{1}{N} \sum_{i=1}^N w_i l(y_i, \beta_0+ \beta^T x_i)+\lambda \left[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\right], $$ over a grid of values of $\lambda$ covering the entire range. Here $l(y, \eta)$ is the negative log-likelihood contribution for observation $i$; e.g. for the Gaussian case it is $\frac{1}{2} (y-\eta)^2$. The elastic-net penalty is controlled by $\alpha$, and bridges the gap between lasso ($\alpha=1$, the default) and ridge ($\alpha=0$). The tuning parameter $\lambda$ controls the overall strength of the penalty. It is known that the ridge penalty shrinks the coefficients of correlated predictors towards each other while the lasso tends to pick one of them and discard the others. The elastic-net penalty mixes these two; if predictors are correlated in groups, an $\alpha=0.5$ tends to select the groups in or out together. This is a higher level parameter, and users might pick a value upfront, else experiment with a few different values. One use of $\alpha$ is for numerical stability; for example, the elastic net with $\alpha = 1-\varepsilon$ for some small $\varepsilon>0$ performs much like the lasso, but removes any degeneracies and wild behavior caused by extreme correlations. The glmnet algorithms use cyclical coordinate descent, which successively optimizes the objective function over each parameter with others fixed, and cycles repeatedly until convergence. The package also makes use of the strong rules for efficient restriction of the active set. Due to highly efficient updates and techniques such as warm starts and active-set convergence, our algorithms can compute the solution path very fast. The code can handle sparse input-matrix formats, as well as range constraints on coefficients. The core of glmnet is a set of fortran subroutines, which make for very fast execution.
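As a concrete illustration of the objective above (not glmnet's Fortran implementation — just a direct numpy transcription for the Gaussian case, with made-up weights, data and parameter values):

```python
import numpy as np

def enet_objective(beta0, beta, X, y, lam, alpha, w=None):
    """(1/N) * sum_i w_i * 0.5*(y_i - beta0 - x_i'beta)^2 + lambda * elastic-net penalty."""
    n = len(y)
    w = np.ones(n) if w is None else w
    resid = y - beta0 - X @ beta
    loss = np.sum(w * 0.5 * resid ** 2) / n
    penalty = lam * ((1 - alpha) * 0.5 * np.sum(beta ** 2) + alpha * np.sum(np.abs(beta)))
    return loss + penalty

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -1.0, 0.0, 0.0, 0.5]) + 0.1 * rng.normal(size=50)
# At beta = 0 the penalty vanishes, so this is the pure (unpenalized) loss
print(enet_objective(0.0, np.zeros(5), X, y, lam=0.1, alpha=0.5))
```

Sweeping `lam` over a grid of values and minimizing this objective at each one is exactly the "solution path" that glmnet computes, only much faster.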
The package also includes methods for prediction and plotting, and a function that performs K-fold cross-validation. Installation Using pip (recommended, courtesy: Han Fan) pip install glmnet_py Compiled from source git clone https://github.com/bbalasub1/glmnet_python.git cd glmnet_python python setup.py install Requirement Python 3, Linux Currently, the checked-in version of GLMnet.so is compiled for the following config: Linux: Linux version 2.6.32-573.26.1.el6.x86_64 (gcc version 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC) ) OS: CentOS 6.7 (Final) Hardware: 8-core Intel(R) Core(TM) i7-2630QM gfortran: version 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) Usage import glmnet_python from glmnet import glmnet Linear Regression Linear regression here refers to two families of models. One is gaussian, the Gaussian family, and the other is mgaussian, the multiresponse Gaussian family. We first discuss the ordinary Gaussian and the multiresponse one after that. Linear Regression - Gaussian family gaussian is the default family option in the function glmnet. Suppose we have observations $x_i \in \mathbb{R}^p$ and the responses $y_i \in \mathbb{R}, i = 1, \ldots, N$. The objective function for the Gaussian family is $$ \min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}}\frac{1}{2N} \sum_{i=1}^N (y_i -\beta_0-x_i^T \beta)^2+\lambda \left[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\right], $$ where $\lambda \geq 0$ is a complexity parameter and $0 \leq \alpha \leq 1$ is a compromise between ridge ($\alpha = 0$) and lasso ($\alpha = 1$). Coordinate descent is applied to solve the problem. Specifically, suppose we have current estimates $\tilde{\beta}_0$ and $\tilde{\beta}_\ell$ $\forall \ell\in 1,\ldots,p$.
By computing the gradient at $\beta_j = \tilde{\beta}_j$ and simple calculus, the update is $$ \tilde{\beta}_j \leftarrow \frac{S(\frac{1}{N}\sum_{i=1}^N x_{ij}(y_i-\tilde{y}_i^{(j)}),\lambda \alpha)}{1+\lambda(1-\alpha)}, $$ where $\tilde{y}_i^{(j)} = \tilde{\beta}_0 + \sum_{\ell \neq j} x_{i\ell} \tilde{\beta}_\ell$, and $S(z, \gamma)$ is the soft-thresholding operator with value $\text{sign}(z)(|z|-\gamma)_+$. The formula above applies when the x variables are standardized to have unit variance (the default); it is slightly more complicated when they are not. Note that for "family=gaussian", glmnet standardizes $y$ to have unit variance before computing its lambda sequence (and then unstandardizes the resulting coefficients); if you wish to reproduce/compare results with other software, best to supply a standardized $y$ first (using the "1/N" variance formula). glmnet provides various options for users to customize the fit. We introduce some commonly used options here and they can be specified in the glmnet function. alpha is for the elastic-net mixing parameter $\alpha$, with range $\alpha \in [0,1]$. $\alpha = 1$ is the lasso (default) and $\alpha = 0$ is the ridge. weights is for the observation weights. Default is 1 for each observation. (Note: glmnet rescales the weights to sum to N, the sample size.) nlambda is the number of $\lambda$ values in the sequence. Default is 100. lambda can be provided, but is typically not and the program constructs a sequence. When automatically generated, the $\lambda$ sequence is determined by lambda.max and lambda.min.ratio. The latter is the ratio of the smallest value of the generated $\lambda$ sequence (say lambda.min) to lambda.max. The program then generates nlambda values linearly spaced on the log scale from lambda.max down to lambda.min. lambda.max is not given, but easily computed from the input $x$ and $y$; it is the smallest value for lambda such that all the coefficients are zero.
For alpha=0 (ridge) lambda.max would be $\infty$; hence for this case we pick a value corresponding to a small value for alpha close to zero. standardize is a logical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. For more information, type help(glmnet) or simply ?glmnet. Let us start by loading the data: End of explanation """ # call glmnet fit = glmnet(x = x.copy(), y = y.copy(), family = 'gaussian', \ weights = wts, \ alpha = 0.2, nlambda = 20 ) """ Explanation: As an example, we set $\alpha = 0.2$ (more like a ridge regression), and give double weights to the latter half of the observations. To avoid too long a display here, we set nlambda to 20. In practice, however, the number of values of $\lambda$ is recommended to be 100 (default) or more. In most cases, it does not come with extra cost because of the warm-starts used in the algorithm, and for nonlinear models leads to better convergence properties. End of explanation """ glmnetPrint(fit) """ Explanation: We can then print the glmnet object. End of explanation """ glmnetPlot(fit, xvar = 'lambda', label = True); """ Explanation: This displays the call that produced the object fit and a three-column matrix with columns Df (the number of nonzero coefficients), %dev (the percent deviance explained) and Lambda (the corresponding value of $\lambda$). (Note that the digits option can be used to specify significant digits in the printout.) The actual number of $\lambda$'s here is less than specified in the call. The reason lies in the stopping criteria of the algorithm. According to the default internal settings, the computations stop if either the fractional change in deviance down the path is less than $10^{-5}$ or the fraction of explained deviance reaches $0.999$.
From the last few lines, we see the fraction of deviance does not change much and therefore the computation ends when meeting the stopping criteria. We can change such internal parameters. For details, see the Appendix section or type help(glmnet.control). We can plot the fitted object as in the previous section. There are more options in the plot function. Users can decide what is on the X-axis. xvar allows three measures: "norm" for the $\ell_1$-norm of the coefficients (default), "lambda" for the log-lambda value and "dev" for %deviance explained. Users can also label the curves with variable sequence numbers simply by setting label = TRUE. Let's plot "fit" against the log-lambda value and with each curve labeled. End of explanation """ glmnetPlot(fit, xvar = 'dev', label = True); """ Explanation: Now when we plot against %deviance we get a very different picture. This is percent deviance explained on the training data. What we see here is that toward the end of the path this value is not changing much, but the coefficients are "blowing up" a bit. This lets us focus attention on the parts of the fit that matter. This will especially be true for other models, such as logistic regression. End of explanation """ any(fit['lambdau'] == 0.5) glmnetCoef(fit, s = scipy.float64([0.5]), exact = False) """ Explanation: We can extract the coefficients and make predictions at certain values of $\lambda$. Two commonly used options are: s specifies the value(s) of $\lambda$ at which extraction is made. exact indicates whether the exact values of coefficients are desired or not. That is, if exact = TRUE, and predictions are to be made at values of s not included in the original fit, these values of s are merged with object$lambda, and the model is refit before predictions are made. If exact=FALSE (default), then the predict function uses linear interpolation to make predictions for values of s that do not coincide with lambdas used in the fitting algorithm.
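That linear-interpolation behaviour is easy to picture. The standalone sketch below interpolates a coefficient path between the two neighbouring grid lambdas — a simplification of what the library does internally; the helper and the tiny path here are invented for illustration:

```python
import numpy as np

def interp_coef(lambdau, coefs, s):
    """Linearly interpolate path coefficients at a value s.

    lambdau : decreasing 1-D array of path lambdas
    coefs   : (p, len(lambdau)) array of coefficients along the path
    s       : requested lambda value (clamped to the grid range)
    """
    # locate the two grid points bracketing s (lambdau is decreasing)
    right = min(np.searchsorted(-lambdau, -s), len(lambdau) - 1)
    left = max(right - 1, 0)
    if lambdau[left] == lambdau[right]:
        return coefs[:, right]
    s = min(max(s, lambdau[right]), lambdau[left])   # clamp into [lambdau[right], lambdau[left]]
    frac = (lambdau[left] - s) / (lambdau[left] - lambdau[right])
    return (1 - frac) * coefs[:, left] + frac * coefs[:, right]

lams = np.array([1.0, 0.5, 0.25])
path = np.array([[0.0, 1.0, 2.0]])   # one coefficient along a 3-point path
print(interp_coef(lams, path, 0.75))  # → [0.5], halfway between the first two grid points
```

Requests outside the grid are clamped to the nearest endpoint, which mirrors the sensible behaviour of returning the most (or least) regularized fit available.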
A simple example is: End of explanation """ fc = glmnetPredict(fit, x[0:5,:], ptype = 'response', \ s = scipy.float64([0.05])) print(fc) """ Explanation: The output above is for exact = False; the exact = True option is not yet implemented. Users can make predictions from the fitted object. In addition to the options in coef, the primary argument is newx, a matrix of new values for x. The type option allows users to choose the type of prediction: * "link" gives the fitted values "response" the same as "link" for "gaussian" family. "coefficients" computes the coefficients at values of s "nonzero" returns a list of the indices of the nonzero coefficients for each value of s. For example, End of explanation """ warnings.filterwarnings('ignore') cvfit = cvglmnet(x = x.copy(), y = y.copy(), ptype = 'mse', nfolds = 20) warnings.filterwarnings('default') """ Explanation: gives the fitted values for the first 5 observations at $\lambda = 0.05$. If multiple values of s are supplied, a matrix of predictions is produced. Users can customize K-fold cross-validation. In addition to all the glmnet parameters, cvglmnet has its special parameters including nfolds (the number of folds), foldid (user-supplied folds), ptype (the loss used for cross-validation): "deviance" or "mse" uses squared loss "mae" uses mean absolute error As an example, End of explanation """ cvfit['lambda_min'] cvglmnetCoef(cvfit, s = 'lambda_min') cvglmnetPredict(cvfit, newx = x[0:5,], s='lambda_min') """ Explanation: does 20-fold cross-validation, based on mean squared error criterion (the default). Parallel computing is also supported by cvglmnet. Parallel processing is turned off by default. It can be turned on using parallel=True in the cvglmnet call. Parallel computing can significantly speed up the computation process, especially for large-scale problems. But for smaller problems, it could result in a reduction in speed due to the additional overhead. User discretion is advised.
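What cvglmnet automates can also be written out by hand. The sketch below runs K-fold cross-validation over a lambda grid using closed-form ridge fits as a stand-in for the glmnet path — names and data are invented, and it mimics the mechanics of the procedure, not glmnet's estimator:

```python
import numpy as np

def kfold_cv_ridge(X, y, lambdas, nfolds=5, seed=0):
    """Mean CV error for ridge fits over a lambda grid (a stand-in for a glmnet path)."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    foldid = rng.permutation(n) % nfolds          # balanced random fold assignment
    cvm = np.zeros(len(lambdas))
    for i, lam in enumerate(lambdas):
        errs = []
        for k in range(nfolds):
            tr, te = foldid != k, foldid == k
            # closed-form ridge solution on the training folds
            A = X[tr].T @ X[tr] + lam * np.eye(p)
            beta = np.linalg.solve(A, X[tr].T @ y[tr])
            errs.append(np.mean((y[te] - X[te] @ beta) ** 2))
        cvm[i] = np.mean(errs)                    # analogue of cvfit['cvm']
    return cvm

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
y = X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=100)
lambdas = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
cvm = kfold_cv_ridge(X, y, lambdas)
print(lambdas[np.argmin(cvm)])                    # the lambda minimizing CV error
```

The `lambda_min` returned by cvglmnet is the analogue of the `argmin` in the last line, computed over the full glmnet path rather than this toy ridge grid.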
Functions coef and predict on cv.glmnet object are similar to those for a glmnet object, except that two special strings are also supported by s (the values of $\lambda$ requested): "lambda.1se": the largest $\lambda$ at which the MSE is within one standard error of the minimal MSE. "lambda.min": the $\lambda$ at which the minimal MSE is achieved. End of explanation """ foldid = scipy.random.choice(10, size = y.shape[0], replace = True) cv1=cvglmnet(x = x.copy(),y = y.copy(),foldid=foldid,alpha=1) cv0p5=cvglmnet(x = x.copy(),y = y.copy(),foldid=foldid,alpha=0.5) cv0=cvglmnet(x = x.copy(),y = y.copy(),foldid=foldid,alpha=0) """ Explanation: Users can control the folds used. Here we use the same folds so we can also select a value for $\alpha$. End of explanation """ f = plt.figure() f.add_subplot(2,2,1) cvglmnetPlot(cv1) f.add_subplot(2,2,2) cvglmnetPlot(cv0p5) f.add_subplot(2,2,3) cvglmnetPlot(cv0) f.add_subplot(2,2,4) plt.plot( scipy.log(cv1['lambdau']), cv1['cvm'], 'r.') plt.hold(True) plt.plot( scipy.log(cv0p5['lambdau']), cv0p5['cvm'], 'g.') plt.plot( scipy.log(cv0['lambdau']), cv0['cvm'], 'b.') plt.xlabel('log(Lambda)') plt.ylabel(cv1['name']) plt.xlim(-6, 4) plt.ylim(0, 9) plt.legend( ('alpha = 1', 'alpha = 0.5', 'alpha = 0'), loc = 'upper left', prop={'size':6}); """ Explanation: There are no built-in plot functions to put them all on the same plot, so we are on our own here: End of explanation """ cl = scipy.array([[-0.7], [0.5]], dtype = scipy.float64) tfit=glmnet(x = x.copy(),y= y.copy(), cl = cl) glmnetPlot(tfit); """ Explanation: We see that lasso (alpha=1) does about the best here. We also see that the range of lambdas used differs with alpha. Coefficient upper and lower bounds These are recently added features that enhance the scope of the models. Suppose we want to fit our model, but limit the coefficients to be bigger than -0.7 and less than 0.5. 
This is easily achieved via the upper.limits and lower.limits arguments: End of explanation """ cl = scipy.array([[-0.7], [0.5]], dtype = scipy.float64) tfit=glmnet(x = x.copy(),y= y.copy(), cl = cl) glmnetPlot(tfit); """ Explanation: These are rather arbitrary limits; often we want the coefficients to be positive, so we can set only lower.limit to be 0. (Note, the lower limit must be no bigger than zero, and the upper limit no smaller than zero.) These bounds can be a vector, with different values for each coefficient. If given as a scalar, the same number gets recycled for all. Penalty factors This argument allows users to apply separate penalty factors to each coefficient. Its default is 1 for each parameter, but other values can be specified. In particular, any variable with penalty.factor equal to zero is not penalized at all! Let $v_j$ denote the penalty factor for the $j$th variable. The penalty term becomes $$ \lambda \sum_{j=1}^p \boldsymbol{v_j} P_\alpha(\beta_j) = \lambda \sum_{j=1}^p \boldsymbol{v_j} \left[ (1-\alpha)\frac{1}{2} \beta_j^2 + \alpha |\beta_j| \right]. $$ Note the penalty factors are internally rescaled to sum to nvars. This is very useful when people have prior knowledge or preference over the variables. In many cases, some variables may be so important that one wants to keep them all the time, which can be achieved by setting corresponding penalty factors to 0: End of explanation """ pfac = scipy.ones([1, 20]) pfac[0, 4] = 0; pfac[0, 9] = 0; pfac[0, 14] = 0 pfit = glmnet(x = x.copy(), y = y.copy(), penalty_factor = pfac) glmnetPlot(pfit, label = True); """ Explanation: We see from the labels that the three variables with 0 penalty factors always stay in the model, while the others follow typical regularization paths and are shrunken to 0 eventually. Some other useful arguments. exclude allows one to block certain variables from being in the model at all.
Of course, one could simply subset these out of x, but sometimes exclude is more useful, since it returns a full vector of coefficients, just with the excluded ones set to zero. There is also an intercept argument which defaults to True; if False the intercept is forced to be zero. Customizing plots Sometimes, especially when the number of variables is small, we want to add variable labels to a plot. Since glmnet is intended primarily for wide data, this is not supported in plot.glmnet. However, it is easy to do, as the following little toy example shows. We first generate some data, with 10 variables, and for lack of imagination and ease we give them simple character names. We then fit a glmnet model, and make the standard plot. End of explanation """ scipy.random.seed(101) x = scipy.random.rand(100,10) y = scipy.random.rand(100,1) fit = glmnet(x = x, y = y) glmnetPlot(fit); """ Explanation: We wish to label the curves with the variable names. Here's a simple way to do this, using the matplotlib library in python (and a little research into how to customize it). We need to have the positions of the coefficients at the end of the path.
End of explanation """ %%capture # Output from this sample code has been suppressed due to (possible) Jupyter limitations # The code works just fine from ipython (tested on spyder) c = glmnetCoef(fit) c = c[1:, -1] # remove intercept and get the coefficients at the end of the path h = glmnetPlot(fit) ax1 = h['ax1'] xloc = plt.xlim() xloc = xloc[1] for i in range(len(c)): ax1.text(xloc, c[i], 'var' + str(i)); """ Explanation: We have done nothing here to avoid overwriting of labels, in the event that they are close together. This would be a bit more work, but perhaps best left alone, anyway. Linear Regression - Multiresponse Gaussian Family The multiresponse Gaussian family is obtained using family = "mgaussian" option in glmnet. It is very similar to the single-response case above. This is useful when there are a number of (correlated) responses - the so-called "multi-task learning" problem. Here the sharing involves which variables are selected, since when a variable is selected, a coefficient is fit for each response. Most of the options are the same, so we focus here on the differences with the single response model. Obviously, as the name suggests, $y$ is not a vector, but a matrix of quantitative responses in this section. The coefficients at each value of lambda are also a matrix as a result. Here we solve the following problem: $$ \min_{(\beta_0, \beta) \in \mathbb{R}^{(p+1)\times K}}\frac{1}{2N} \sum_{i=1}^N ||y_i -\beta_0-\beta^T x_i||^2_F+\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_2\right].
$$ Here, $\beta_j$ is the jth row of the $p\times K$ coefficient matrix $\beta$, and we replace the absolute penalty on each single coefficient by a group-lasso penalty on each coefficient K-vector $\beta_j$ for a single predictor $x_j$. We use a set of data generated beforehand for illustration. End of explanation """ mfit = glmnet(x = x.copy(), y = y.copy(), family = 'mgaussian') """ Explanation: We fit the data, with an object "mfit" returned. End of explanation """ glmnetPlot(mfit, xvar = 'lambda', label = True, ptype = '2norm'); """ Explanation: For multiresponse Gaussian, the options in glmnet are almost the same as the single-response case, such as alpha, weights, nlambda, standardize. A exception to be noticed is that standardize.response is only for mgaussian family. The default value is FALSE. If standardize.response = TRUE, it standardizes the response variables. To visualize the coefficients, we use the plot function. End of explanation """ f = glmnetPredict(mfit, x[0:5,:], s = scipy.float64([0.1, 0.01])) print(f[:,:,0], '\n') print(f[:,:,1]) """ Explanation: Note that we set type.coef = "2norm". Under this setting, a single curve is plotted per variable, with value equal to the $\ell_2$ norm. The default setting is type.coef = "coef", where a coefficient plot is created for each response (multiple figures). xvar and label are two other options besides ordinary graphical parameters. They are the same as the single-response case. We can extract the coefficients at requested values of $\lambda$ by using the function coef and make predictions by predict. The usage is similar and we only provide an example of predict here. 
End of explanation """ warnings.filterwarnings('ignore') cvmfit = cvglmnet(x = x.copy(), y = y.copy(), family = "mgaussian") warnings.filterwarnings('default') """ Explanation: The prediction result is saved in a three-dimensional array with the first two dimensions being the prediction matrix for each response variable and the third indicating the response variables. We can also do k-fold cross-validation. The options are almost the same as the ordinary Gaussian family and we do not expand here. End of explanation """ cvglmnetPlot(cvmfit) """ Explanation: We plot the resulting cv.glmnet object "cvmfit". End of explanation """ cvmfit['lambda_min'] cvmfit['lambda_1se'] """ Explanation: To show explicitly the selected optimal values of $\lambda$, type End of explanation """ # Import relevant modules and setup for calling glmnet %reset -f %matplotlib inline import sys sys.path.append('../test') sys.path.append('../lib') import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings from glmnet import glmnet; from glmnetPlot import glmnetPlot from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict # parameters baseDataDir= '../data/' # load data x = scipy.loadtxt(baseDataDir + 'BinomialExampleX.dat', dtype = scipy.float64, delimiter = ',') y = scipy.loadtxt(baseDataDir + 'BinomialExampleY.dat', dtype = scipy.float64) """ Explanation: As before, the first one is the value at which the minimal mean squared error is achieved and the second is for the most regularized model whose mean squared error is within one standard error of the minimal. Prediction for cvglmnet object works almost the same as for glmnet object. We omit the details here. Logistic Regression Logistic regression is another widely-used model when the response is categorical. 
If there are two possible outcomes, we use the binomial distribution, else we use the multinomial. Logistic Regression: Binomial Models For the binomial model, suppose the response variable takes value in $\mathcal{G}=\{1,2\}$. Denote $y_i = I(g_i=1)$. We model $$ \mbox{Pr}(G=2|X=x)=\frac{e^{\beta_0+\beta^Tx}}{1+e^{\beta_0+\beta^Tx}}, $$ which can be written in the following form $$ \log\frac{\mbox{Pr}(G=2|X=x)}{\mbox{Pr}(G=1|X=x)}=\beta_0+\beta^Tx, $$ the so-called "logistic" or log-odds transformation. The objective function for the penalized logistic regression uses the negative binomial log-likelihood, and is $$ \min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}} -\left[\frac{1}{N} \sum_{i=1}^N y_i \cdot (\beta_0 + x_i^T \beta) - \log (1+e^{(\beta_0+x_i^T \beta)})\right] + \lambda \big[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\big]. $$ Logistic regression is often plagued with degeneracies when $p > N$ and exhibits wild behavior even when $N$ is close to $p$; the elastic-net penalty alleviates these issues, and regularizes and selects variables as well. Our algorithm uses a quadratic approximation to the log-likelihood, and then coordinate descent on the resulting penalized weighted least-squares problem. These constitute an outer and inner loop. For illustration purposes, we load pre-generated input matrix x and the response vector y from the data file. End of explanation """ # Import relevant modules and setup for calling glmnet %reset -f %matplotlib inline import sys sys.path.append('../test') sys.path.append('../lib') import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings from glmnet import glmnet; from glmnetPlot import glmnetPlot from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict # parameters baseDataDir= '../data/' # load data x = scipy.loadtxt(baseDataDir + 'BinomialExampleX.dat', dtype = scipy.float64, delimiter = ',') y = scipy.loadtxt(baseDataDir + 'BinomialExampleY.dat', dtype = scipy.float64) """ Explanation: The input matrix $x$ is the same as other families. For binomial logistic regression, the response variable $y$ should be either a factor with two levels, or a two-column matrix of counts or proportions. Other optional arguments of glmnet for binomial regression are almost the same as those for the Gaussian family. Don't forget to set the family option to "binomial".
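Before fitting, the log-odds model above is easy to sanity-check numerically: the sigmoid of the linear predictor gives $\mbox{Pr}(G=2|X=x)$, and thresholding at 0.5 gives a class label. A plain numpy illustration with made-up coefficients, independent of glmnet:

```python
import numpy as np

def predict_class(X, beta0, beta):
    """Sigmoid of the linear predictor, thresholded at probability 0.5."""
    eta = beta0 + X @ beta
    prob = 1.0 / (1.0 + np.exp(-eta))            # Pr(G = 2 | X = x)
    return prob, (prob > 0.5).astype(int) + 1    # labels in {1, 2}

X = np.array([[0.0], [2.0], [-2.0]])
prob, label = predict_class(X, beta0=0.0, beta=np.array([1.0]))
print(prob)    # roughly [0.5, 0.88, 0.12]
print(label)
```

This is what glmnetPredict does with ptype = 'response' and ptype = 'class' respectively, using the fitted coefficients at the requested lambda.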
End of explanation """ fit = glmnet(x = x.copy(), y = y.copy(), family = 'binomial') """ Explanation: Like before, we can print and plot the fitted object, extract the coefficients at specific $\lambda$'s and also make predictions. For plotting, the optional arguments such as xvar and label are similar to the Gaussian. We plot against the deviance explained and show the labels. End of explanation """ glmnetPlot(fit, xvar = 'dev', label = True); """ Explanation: Prediction is a little different for logistic from Gaussian, mainly in the option type. "link" and "response" are never equivalent and "class" is only available for logistic regression. In summary, * "link" gives the linear predictors "response" gives the fitted probabilities "class" produces the class label corresponding to the maximum probability. "coefficients" computes the coefficients at values of s "nonzero" returns a list of the indices of the nonzero coefficients for each value of s. For "binomial" models, results ("link", "response", "coefficients", "nonzero") are returned only for the class corresponding to the second level of the factor response. In the following example, we make prediction of the class labels at $\lambda = 0.05, 0.01$. End of explanation """ glmnetPredict(fit, newx = x[0:5,], ptype='class', s = scipy.array([0.05, 0.01])) """ Explanation: For logistic regression, cvglmnet has similar arguments and usage as Gaussian. nfolds, weights, lambda, parallel are all available to users. There are some differences in ptype: "deviance" and "mse" no longer both mean squared loss, and "class" is enabled. Hence, * "mse" uses squared loss. "deviance" uses actual deviance. "mae" uses mean absolute error. "class" gives misclassification error. "auc" (for two-class logistic regression ONLY) gives area under the ROC curve.
For example, End of explanation """ warnings.filterwarnings('ignore') cvfit = cvglmnet(x = x.copy(), y = y.copy(), family = 'binomial', ptype = 'class') warnings.filterwarnings('default') """ Explanation: It uses misclassification error as the criterion for 10-fold cross-validation. We plot the object and show the optimal values of $\lambda$. End of explanation """ cvglmnetPlot(cvfit) cvfit['lambda_min'] cvfit['lambda_1se'] """ Explanation: coef and predict are similar to the Gaussian case and we omit the details. We review with some examples. End of explanation """ cvglmnetCoef(cvfit, s = 'lambda_min') """ Explanation: As mentioned previously, the results returned here are only for the second level of the factor response. End of explanation """ cvglmnetPredict(cvfit, newx = x[0:10, ], s = 'lambda_min', ptype = 'class') """ Explanation: Like other GLMs, glmnet allows for an "offset". This is a fixed vector of N numbers that is added into the linear predictor. For example, you may have fitted some other logistic regression using other variables (and data), and now you want to see if the present variables can add anything. So you use the predicted logit from the other model as an offset in glmnet.
Logistic Regression - Multinomial Models For the multinomial model, suppose the response variable has $K$ levels ${\cal G}=\{1,2,\ldots,K\}$. Here we model $$\mbox{Pr}(G=k|X=x)=\frac{e^{\beta_{0k}+\beta_k^Tx}}{\sum_{\ell=1}^Ke^{\beta_{0\ell}+\beta_\ell^Tx}}.$$ Let ${Y}$ be the $N \times K$ indicator response matrix, with elements $y_{i\ell} = I(g_i=\ell)$. Then the elastic-net penalized negative log-likelihood function becomes $$ \ell(\{\beta_{0k},\beta_{k}\}_1^K) = -\left[\frac{1}{N} \sum_{i=1}^N \Big(\sum_{k=1}^K y_{ik} (\beta_{0k} + x_i^T \beta_k)- \log \big(\sum_{k=1}^K e^{\beta_{0k}+x_i^T \beta_k}\big)\Big)\right] +\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_q\right]. $$ Here we really abuse notation! $\beta$ is a $p\times K$ matrix of coefficients. $\beta_k$ refers to the kth column (for outcome category k), and $\beta_j$ the jth row (vector of K coefficients for variable j). The last penalty term is $||\beta_j||_q$; we have two options for q: $q\in \{1,2\}$. When q=1, this is a lasso penalty on each of the parameters. When q=2, this is a grouped-lasso penalty on all the K coefficients for a particular variable, which makes them all be zero or nonzero together. The standard Newton algorithm can be tedious here. Instead, we use a so-called partial Newton algorithm by making a partial quadratic approximation to the log-likelihood, allowing only $(\beta_{0k}, \beta_k)$ to vary for a single class at a time. For each value of $\lambda$, we first cycle over all classes indexed by $k$, computing each time a partial quadratic approximation about the parameters of the current class. Then the inner procedure is almost the same as for the binomial case. This is the case for lasso (q=1).
When q=2, we use a different approach, which we won't dwell on here. For the multinomial case, the usage is similar to logistic regression, and we mainly illustrate by examples and address any differences. We load a set of generated data. End of explanation """ fit = glmnet(x = x.copy(), y = y.copy(), family = 'multinomial', mtype = 'grouped') """ Explanation: The optional arguments in glmnet for multinomial logistic regression are mostly similar to binomial regression except for a few cases. The response variable can be a nc >= 2 level factor, or a nc-column matrix of counts or proportions. Internally glmnet will make the rows of this matrix sum to 1, and absorb the total mass into the weight for that observation. offset should be a nobs x nc matrix if there is one. A special option for multinomial regression is mtype, which allows the usage of a grouped lasso penalty if mtype = 'grouped'. This will ensure that the multinomial coefficients for a variable are all in or out together, just like for the multi-response Gaussian. End of explanation """ glmnetPlot(fit, xvar = 'lambda', label = True, ptype = '2norm'); """ Explanation: The options are xvar, label and ptype, in addition to other ordinary graphical parameters. xvar and label are the same as for other families, while ptype is only for multinomial regression and the multiresponse Gaussian model. It can produce a figure of coefficients for each response variable if ptype = "coef", or a single figure showing the $\ell_2$-norm of each variable's coefficients if ptype = "2norm". We can also do cross-validation and plot the returned object.
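The quantity that ptype = '2norm' plots is simply the row-wise $\ell_2$-norm of the $p\times K$ coefficient matrix; a sketch with a made-up coefficient matrix:

```python
import numpy as np

# Hypothetical p x K coefficient matrix (rows = variables, columns = classes)
beta = np.array([[0.0,  0.0,  0.0],   # variable excluded for every class
                 [0.3, -0.4,  0.0],
                 [1.2,  0.5, -0.1]])

norms = np.linalg.norm(beta, axis=1)  # one number per variable
```

Under the grouped penalty (mtype = 'grouped'), each row is either entirely zero or entirely nonzero, so a norm of zero means the variable is dropped for all K classes at once.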
End of explanation """ warnings.filterwarnings('ignore') cvfit=cvglmnet(x = x.copy(), y = y.copy(), family='multinomial', mtype = 'grouped'); warnings.filterwarnings('default') cvglmnetPlot(cvfit) """ Explanation: Note that although mtype is not a typical argument of cvglmnet, any argument that can be passed to glmnet is also valid in the argument list of cvglmnet. Parallel computing can also be used to accelerate the calculation. Users may wish to predict at the optimally selected $\lambda$: End of explanation """ cvglmnetPredict(cvfit, newx = x[0:10, :], s = 'lambda_min', ptype = 'class') """ Explanation: Poisson Models Poisson regression is used to model count data under the assumption of Poisson error, or otherwise non-negative data where the mean and variance are proportional. Like the Gaussian and binomial models, the Poisson is a member of the exponential family of distributions. We usually model its positive mean on the log scale: $\log \mu(x) = \beta_0+\beta^T x$. The log-likelihood for observations $\{x_i,y_i\}_1^N$ is given by $$ l(\beta|X, Y) = \sum_{i=1}^N \left(y_i (\beta_0+\beta^T x_i) - e^{\beta_0+\beta^T x_i}\right). $$ As before, we optimize the penalized log-likelihood: $$ \min_{\beta_0,\beta} -\frac{1}{N} l(\beta|X, Y) + \lambda \left((1-\alpha) \sum_{j=1}^p \beta_j^2/2 +\alpha \sum_{j=1}^p |\beta_j|\right).
$$ Glmnet uses an outer Newton loop, and an inner weighted least-squares loop (as in logistic regression) to optimize this criterion. First, we load a pre-generated set of Poisson data. End of explanation """ # Import relevant modules and setup for calling glmnet %reset -f %matplotlib inline import sys sys.path.append('../test') sys.path.append('../lib') import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings from glmnet import glmnet; from glmnetPlot import glmnetPlot from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict # parameters baseDataDir= '../data/' # load data x = scipy.loadtxt(baseDataDir + 'PoissonExampleX.dat', dtype = scipy.float64, delimiter = ',') y = scipy.loadtxt(baseDataDir + 'PoissonExampleY.dat', dtype = scipy.float64, delimiter = ',') """ Explanation: We apply the function glmnet with the "poisson" option. End of explanation """ fit = glmnet(x = x.copy(), y = y.copy(), family = 'poisson') """ Explanation: The optional input arguments of glmnet for the "poisson" family are similar to those for the others. offset is a particularly useful argument in Poisson models. When dealing with rate data in Poisson models, the counts collected are often based on different exposures, such as length of time observed, area, and years. A Poisson rate $\mu(x)$ is relative to a unit exposure time, so if an observation $y_i$ was exposed for $E_i$ units of time, then the expected count would be $E_i\mu(x)$, and the log mean would be $\log(E_i)+\log(\mu(x))$. In a case like this, we would supply an offset $\log(E_i)$ for each observation. Hence offset is a vector of length nobs that is included in the linear predictor. Other families can also use this option, typically for different reasons. (Warning: if offset is supplied in glmnet, offsets must also be supplied to predict to make reasonable predictions.) Again, we plot the coefficients to have a first sense of the result. End of explanation """ glmnetPlot(fit); """ Explanation: Like before, we can extract the coefficients and make predictions at certain $\lambda$'s by using coef and predict respectively. The optional input arguments are similar to those for other families. In the function predict, the option ptype, which is the type of prediction required, has its own specialties for the Poisson family.
That is, * "link" (default) gives the linear predictors as for the other families * "response" gives the fitted mean * "coefficients" computes the coefficients at the requested values for s, which can also be realized by the coef function * "nonzero" returns a list of the indices of the nonzero coefficients for each value of s. For example, we can do as follows: End of explanation """ glmnetCoef(fit, s = scipy.float64([1.0])) glmnetPredict(fit, x[0:5,:], ptype = 'response', s = scipy.float64([0.1, 0.01])) """ Explanation: We may also use cross-validation to find the optimal $\lambda$'s and thus make inferences. End of explanation """ warnings.filterwarnings('ignore') cvfit = cvglmnet(x.copy(), y.copy(), family = 'poisson') warnings.filterwarnings('default') """ Explanation: Options are almost the same as for the Gaussian family, except that for type.measure, * "deviance" (default) gives the deviance * "mse" stands for mean squared error * "mae" is for mean absolute error. We can plot the cvglmnet object. End of explanation """ cvglmnetPlot(cvfit) """ Explanation: We can also show the optimal $\lambda$'s and the corresponding coefficients. End of explanation """ optlam = scipy.array([cvfit['lambda_min'], cvfit['lambda_1se']]).reshape([2,]) cvglmnetCoef(cvfit, s = optlam) """ Explanation: The predict method is similar, and we do not repeat it here.
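Before moving on, the exposure-offset mechanics described in this section can be made concrete; a hedged numpy sketch with made-up coefficients, in which two observations share the same covariates but have different exposure times:

```python
import numpy as np

b0, b = -1.0, np.array([0.5, 0.2])    # hypothetical fitted coefficients
X = np.array([[1.0, 2.0],
              [1.0, 2.0]])            # identical covariates...
E = np.array([1.0, 3.0])              # ...but different exposures E_i

eta = b0 + X @ b + np.log(E)          # offset log(E_i) enters the linear predictor
mu = np.exp(eta)                      # expected counts E_i * mu(x_i)
```

The second expected count comes out exactly three times the first, as the rate interpretation requires.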
Cox Models The Cox proportional hazards model is commonly used for the study of the relationship between predictor variables and survival time. In the usual survival analysis framework, we have data of the form $(y_1, x_1, \delta_1), \ldots, (y_n, x_n, \delta_n)$ where $y_i$, the observed time, is a time of failure if $\delta_i$ is 1 or right-censoring if $\delta_i$ is 0. We also let $t_1 < t_2 < \ldots < t_m$ be the increasing list of unique failure times, and $j(i)$ denote the index of the observation failing at time $t_i$. The Cox model assumes a semi-parametric form for the hazard $$ h_i(t) = h_0(t) e^{x_i^T \beta}, $$ where $h_i(t)$ is the hazard for patient $i$ at time $t$, $h_0(t)$ is a shared baseline hazard, and $\beta$ is a fixed, length $p$ vector. In the classic setting $n \geq p$, inference is made via the partial likelihood $$ L(\beta) = \prod_{i=1}^m \frac{e^{x_{j(i)}^T \beta}}{\sum_{j \in R_i} e^{x_j^T \beta}}, $$ where $R_i$ is the set of indices $j$ with $y_j \geq t_i$ (those at risk at time $t_i$). Note there is no intercept in the Cox model (it is built into the baseline hazard and, like it, would cancel in the partial likelihood). We penalize the negative log of the partial likelihood, just like the other models, with an elastic-net penalty. We use a pre-generated set of sample data and response. Users can load their own data and follow a similar procedure. In this case $x$ must be an $n\times p$ matrix of covariate values; each row corresponds to a patient and each column to a covariate. $y$ is an $n \times 2$ matrix, with a column "time" of failure/censoring times, and "status" a 0/1 indicator, with 1 meaning the time is a failure time, and zero a censoring time. End of explanation """ # Import relevant modules and setup for calling glmnet %reset -f %matplotlib inline import sys sys.path.append('../test') sys.path.append('../lib') import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings from glmnet import glmnet; from glmnetPlot import glmnetPlot from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict # parameters baseDataDir= '../data/' # load data x = scipy.loadtxt(baseDataDir + 'CoxExampleX.dat', dtype = scipy.float64, delimiter = ',') y = scipy.loadtxt(baseDataDir + 'CoxExampleY.dat', dtype = scipy.float64, delimiter = ',') """ Explanation: The Surv function in the package survival can create such a matrix.
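The partial likelihood defined above is straightforward to evaluate for a given $\beta$; the following is a hedged numpy sketch (it assumes no tied failure times, and the function name is ours, not part of glmnet):

```python
import numpy as np

def neg_log_partial_lik(beta, X, time, status):
    """Negative log partial likelihood of the Cox model (no ties assumed)."""
    eta = X @ beta
    nll = 0.0
    for i in np.where(status == 1)[0]:    # loop over observed failures
        at_risk = time >= time[i]         # risk set R_i at failure time t_i
        nll -= eta[i] - np.log(np.exp(eta[at_risk]).sum())
    return nll
```

At $\beta = 0$ every subject in a risk set is equally likely to fail, so the value reduces to the sum of $\log |R_i|$ over the failures, which makes a handy sanity check.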
Note, however, that the coxph and related linear models can handle interval and other forms of censoring, while glmnet can only handle right censoring in its present form. We apply the glmnet function to compute the solution path under default settings. End of explanation """ fit = glmnet(x = x.copy(), y = y.copy(), family = 'cox') """ Explanation: All the standard options are available, such as alpha, weights, nlambda and standardize. Their usage is similar to the Gaussian case, and we omit the details here. Users can also refer to the help file help(glmnet). We can plot the coefficients. End of explanation """ glmnetPlot(fit); """ Explanation: As before, we can extract the coefficients at certain values of $\lambda$. End of explanation """ glmnetCoef(fit, s = scipy.float64([0.05])) """ Explanation: End of explanation """
JamesLuoau/deep-learning-getting-started
first-neural-network/Your_first_neural_network.ipynb
apache-2.0
%matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt """ Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation """ data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() """ Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation """ rides[:24*10].plot(x='dteday', y='cnt') """ Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
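The weekday/weekend pattern described here is easiest to see after aggregating to daily totals; a hedged pandas sketch on a tiny made-up frame with the same column names:

```python
import pandas as pd

# Tiny stand-in for the real hourly frame (same column names)
rides_mini = pd.DataFrame({
    'dteday': ['2011-01-01', '2011-01-01', '2011-01-02'],
    'cnt': [16, 40, 32],
})

# Collapse the hourly rows of each day into one daily total
daily = rides_mini.groupby('dteday')['cnt'].sum()
```

On the real frame, the same groupby('dteday')['cnt'].sum() collapses the 24 hourly rows of each day into a single daily count.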
End of explanation """ dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() """ Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation """ quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std """ Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. End of explanation """ # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] """ Explanation: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. 
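Because the split is positional rather than random, the three slices line up chronologically; a small index sketch (with a hypothetical row count) shows how they relate:

```python
import numpy as np

n_rows = 2000                   # hypothetical number of hourly records
idx = np.arange(n_rows)

test_idx = idx[-21*24:]         # last ~21 days held out for the test set
remaining = idx[:-21*24]
val_idx = remaining[-60*24:]    # last ~60 days of the rest for validation
train_idx = remaining[:-60*24]  # everything earlier is training data
```

Training data comes first in time, validation next, and the test set last, which is exactly what you want for time series.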
End of explanation """ # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] """ Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation """ class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 1 / (1 + np.exp(-x)) # Replace 0 with your sigmoid calculation. ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 0 # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid def train(self, features, targets): ''' Train the network on batch of features and targets. 
Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with your calculations. final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = y - final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Backpropagated error terms - Replace these values with your calculations. output_error_term = error # TODO: Calculate the hidden layer's contribution to the error hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T) hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # Weight step (input to hidden) delta_weights_i_h += hidden_error_term * X[:, None] # Weight step (hidden to output) delta_weights_h_o += (output_error_term * hidden_outputs)[:, None] # TODO: Update the weights - Replace these values with your calculations. 
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) """ Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. 
This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. End of explanation """ import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, 
targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194, 0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() print("run:", network.run(inputs)) self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) """ Explanation: Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project. End of explanation """ import sys ### Set the hyperparameters here ### iterations = 30000 learning_rate = 0.5 hidden_nodes = 40 output_nodes = 1 N_i = train_features.shape[1] print("features:", N_i) network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'][5000:], label='Training loss') plt.plot(losses['validation'][5000:], label='Validation loss') plt.legend() _ = plt.ylim() import sys ### Set the hyperparameters here ### iterations = 15000 learning_rate = 0.5 hidden_nodes = 40 output_nodes = 1 N_i = train_features.shape[1] print("features:", N_i) network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'][5000:], label='Training loss') plt.plot(losses['validation'][5000:], label='Validation loss') plt.legend() _ = plt.ylim() """ Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network.
The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
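One mechanical way to apply this advice, once training has finished and the losses dictionary has been filled in, is to look for the iteration where the validation loss bottoms out; a sketch with made-up loss values:

```python
import numpy as np

# Hypothetical recorded validation losses, one value per iteration
val_losses = np.array([0.90, 0.60, 0.45, 0.40, 0.42, 0.48, 0.55])

best_iter = int(np.argmin(val_losses))
# Beyond this point the validation curve rises while training loss keeps
# falling: the overfitting signature described above.
```

With the real losses dictionary you would pass losses['validation'] in place of the made-up array.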
End of explanation """ fig, ax = plt.subplots(figsize=(18,8)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions[0])) ax.legend() dates = pd.to_datetime(rides.loc[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) """ Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. End of explanation """
rasbt/python-machine-learning-book
code/ch06/ch06.ipynb
mit
%load_ext watermark %watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn """ Explanation: Copyright (c) 2015 - 2017 Sebastian Raschka https://github.com/rasbt/python-machine-learning-book MIT License Python Machine Learning - Code Examples Chapter 6 - Learning Best Practices for Model Evaluation and Hyperparameter Tuning Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s). End of explanation """ from IPython.display import Image %matplotlib inline # Added version check for recent scikit-learn 0.18 checks from distutils.version import LooseVersion as Version from sklearn import __version__ as sklearn_version """ Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark. <br> <br> Overview Streamlining workflows with pipelines Loading the Breast Cancer Wisconsin dataset Combining transformers and estimators in a pipeline Using k-fold cross-validation to assess model performance The holdout method K-fold cross-validation Debugging algorithms with learning and validation curves Diagnosing bias and variance problems with learning curves Addressing overfitting and underfitting with validation curves Fine-tuning machine learning models via grid search Tuning hyperparameters via grid search Algorithm selection with nested cross-validation Looking at different performance evaluation metrics Reading a confusion matrix Optimizing the precision and recall of a classification model Plotting a receiver operating characteristic The scoring metrics for multiclass classification Summary <br> <br> End of explanation """ import pandas as pd import urllib try: df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases' '/breast-cancer-wisconsin/wdbc.data', header=None) except urllib.error.URLError: df = 
pd.read_csv('https://raw.githubusercontent.com/rasbt/' 'python-machine-learning-book/master/code/' 'datasets/wdbc/wdbc.data', header=None) print('rows, columns:', df.shape) df.head() df.shape from sklearn.preprocessing import LabelEncoder X = df.loc[:, 2:].values y = df.loc[:, 1].values le = LabelEncoder() y = le.fit_transform(y) le.transform(['M', 'B']) if Version(sklearn_version) < '0.18': from sklearn.cross_validation import train_test_split else: from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = \ train_test_split(X, y, test_size=0.20, random_state=1) """ Explanation: Streamlining workflows with pipelines ... Loading the Breast Cancer Wisconsin dataset End of explanation """ from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline pipe_lr = Pipeline([('scl', StandardScaler()), ('pca', PCA(n_components=2)), ('clf', LogisticRegression(random_state=1))]) pipe_lr.fit(X_train, y_train) print('Test Accuracy: %.3f' % pipe_lr.score(X_test, y_test)) y_pred = pipe_lr.predict(X_test) Image(filename='./images/06_01.png', width=500) """ Explanation: <br> <br> Combining transformers and estimators in a pipeline End of explanation """ Image(filename='./images/06_02.png', width=500) """ Explanation: <br> <br> Using k-fold cross validation to assess model performance ... 
The holdout method End of explanation """ Image(filename='./images/06_03.png', width=500) import numpy as np if Version(sklearn_version) < '0.18': from sklearn.cross_validation import StratifiedKFold else: from sklearn.model_selection import StratifiedKFold if Version(sklearn_version) < '0.18': kfold = StratifiedKFold(y=y_train, n_folds=10, random_state=1) else: kfold = StratifiedKFold(n_splits=10, random_state=1).split(X_train, y_train) scores = [] for k, (train, test) in enumerate(kfold): pipe_lr.fit(X_train[train], y_train[train]) score = pipe_lr.score(X_train[test], y_train[test]) scores.append(score) print('Fold: %s, Class dist.: %s, Acc: %.3f' % (k+1, np.bincount(y_train[train]), score)) print('\nCV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores))) if Version(sklearn_version) < '0.18': from sklearn.cross_validation import cross_val_score else: from sklearn.model_selection import cross_val_score scores = cross_val_score(estimator=pipe_lr, X=X_train, y=y_train, cv=10, n_jobs=1) print('CV accuracy scores: %s' % scores) print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores))) """ Explanation: <br> <br> K-fold cross-validation End of explanation """ Image(filename='./images/06_04.png', width=600) import matplotlib.pyplot as plt if Version(sklearn_version) < '0.18': from sklearn.learning_curve import learning_curve else: from sklearn.model_selection import learning_curve pipe_lr = Pipeline([('scl', StandardScaler()), ('clf', LogisticRegression(penalty='l2', random_state=0))]) train_sizes, train_scores, test_scores =\ learning_curve(estimator=pipe_lr, X=X_train, y=y_train, train_sizes=np.linspace(0.1, 1.0, 10), cv=10, n_jobs=1) train_mean = np.mean(train_scores, axis=1) train_std = np.std(train_scores, axis=1) test_mean = np.mean(test_scores, axis=1) test_std = np.std(test_scores, axis=1) plt.plot(train_sizes, train_mean, color='blue', marker='o', markersize=5, label='training accuracy') plt.fill_between(train_sizes, train_mean + 
train_std, train_mean - train_std, alpha=0.15, color='blue') plt.plot(train_sizes, test_mean, color='green', linestyle='--', marker='s', markersize=5, label='validation accuracy') plt.fill_between(train_sizes, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green') plt.grid() plt.xlabel('Number of training samples') plt.ylabel('Accuracy') plt.legend(loc='lower right') plt.ylim([0.8, 1.0]) plt.tight_layout() # plt.savefig('./figures/learning_curve.png', dpi=300) plt.show() """ Explanation: <br> <br> Debugging algorithms with learning curves <br> <br> Diagnosing bias and variance problems with learning curves End of explanation """ if Version(sklearn_version) < '0.18': from sklearn.learning_curve import validation_curve else: from sklearn.model_selection import validation_curve param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0] train_scores, test_scores = validation_curve( estimator=pipe_lr, X=X_train, y=y_train, param_name='clf__C', param_range=param_range, cv=10) train_mean = np.mean(train_scores, axis=1) train_std = np.std(train_scores, axis=1) test_mean = np.mean(test_scores, axis=1) test_std = np.std(test_scores, axis=1) plt.plot(param_range, train_mean, color='blue', marker='o', markersize=5, label='training accuracy') plt.fill_between(param_range, train_mean + train_std, train_mean - train_std, alpha=0.15, color='blue') plt.plot(param_range, test_mean, color='green', linestyle='--', marker='s', markersize=5, label='validation accuracy') plt.fill_between(param_range, test_mean + test_std, test_mean - test_std, alpha=0.15, color='green') plt.grid() plt.xscale('log') plt.legend(loc='lower right') plt.xlabel('Parameter C') plt.ylabel('Accuracy') plt.ylim([0.8, 1.0]) plt.tight_layout() # plt.savefig('./figures/validation_curve.png', dpi=300) plt.show() """ Explanation: <br> <br> Addressing over- and underfitting with validation curves End of explanation """ from sklearn.svm import SVC if Version(sklearn_version) < '0.18': from sklearn.grid_search 
import GridSearchCV else: from sklearn.model_selection import GridSearchCV pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))]) param_range = [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0] param_grid = [{'clf__C': param_range, 'clf__kernel': ['linear']}, {'clf__C': param_range, 'clf__gamma': param_range, 'clf__kernel': ['rbf']}] gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=10, n_jobs=-1) gs = gs.fit(X_train, y_train) print(gs.best_score_) print(gs.best_params_) clf = gs.best_estimator_ clf.fit(X_train, y_train) print('Test accuracy: %.3f' % clf.score(X_test, y_test)) """ Explanation: <br> <br> Fine-tuning machine learning models via grid search <br> <br> Tuning hyperparameters via grid search End of explanation """ Image(filename='./images/06_07.png', width=500) gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=2) # Note: Optionally, you could use cv=2 # in the GridSearchCV above to produce # the 5 x 2 nested CV that is shown in the figure. 
scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5) print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores))) from sklearn.tree import DecisionTreeClassifier gs = GridSearchCV(estimator=DecisionTreeClassifier(random_state=0), param_grid=[{'max_depth': [1, 2, 3, 4, 5, 6, 7, None]}], scoring='accuracy', cv=2) scores = cross_val_score(gs, X_train, y_train, scoring='accuracy', cv=5) print('CV accuracy: %.3f +/- %.3f' % (np.mean(scores), np.std(scores))) """ Explanation: <br> <br> Algorithm selection with nested cross-validation End of explanation """ Image(filename='./images/06_08.png', width=300) from sklearn.metrics import confusion_matrix pipe_svc.fit(X_train, y_train) y_pred = pipe_svc.predict(X_test) confmat = confusion_matrix(y_true=y_test, y_pred=y_pred) print(confmat) fig, ax = plt.subplots(figsize=(2.5, 2.5)) ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.3) for i in range(confmat.shape[0]): for j in range(confmat.shape[1]): ax.text(x=j, y=i, s=confmat[i, j], va='center', ha='center') plt.xlabel('predicted label') plt.ylabel('true label') plt.tight_layout() # plt.savefig('./figures/confusion_matrix.png', dpi=300) plt.show() """ Explanation: <br> <br> Looking at different performance evaluation metrics ... 
Reading a confusion matrix
End of explanation
"""
le.transform(['M', 'B'])

confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
"""
Explanation: Additional Note
Remember that we previously encoded the class labels so that malignant samples are the "positive" class (1), and benign samples are the "negative" class (0):
End of explanation
"""
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred)
print(confmat)
"""
Explanation: Next, we printed the confusion matrix like so:
End of explanation
"""
confmat = confusion_matrix(y_true=y_test, y_pred=y_pred, labels=[1, 0])
print(confmat)
"""
Explanation: Note that the (true) class 0 samples that are correctly predicted as class 0 (true negatives) are now in the upper left corner of the matrix (index 0, 0). In order to change the ordering so that the true negatives are in the lower right corner (index 1,1) and the true positives are in the upper left, we can use the labels argument as shown below:
End of explanation
"""
from sklearn.metrics import precision_score, recall_score, f1_score

print('Precision: %.3f' % precision_score(y_true=y_test, y_pred=y_pred))
print('Recall: %.3f' % recall_score(y_true=y_test, y_pred=y_pred))
print('F1: %.3f' % f1_score(y_true=y_test, y_pred=y_pred))

from sklearn.metrics import make_scorer

scorer = make_scorer(f1_score, pos_label=0)

c_gamma_range = [0.01, 0.1, 1.0, 10.0]

param_grid = [{'clf__C': c_gamma_range,
               'clf__kernel': ['linear']},
              {'clf__C': c_gamma_range,
               'clf__gamma': c_gamma_range,
               'clf__kernel': ['rbf']}]

gs = GridSearchCV(estimator=pipe_svc,
                  param_grid=param_grid,
                  scoring=scorer,
                  cv=10,
                  n_jobs=-1)
gs = gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
"""
Explanation: We conclude: Assuming that class 1 (malignant) is the positive class in this example, our model correctly classified 71 of the samples that belong to class 0 (true negatives) and 40 samples that belong to class 1 (true positives), respectively. 
However, our model also incorrectly misclassified 1 sample from class 0 as class 1 (false positive), and it predicted that 2 samples are benign although they are malignant tumors (false negatives).
<br>
<br>
Optimizing the precision and recall of a classification model
End of explanation
"""
from sklearn.metrics import roc_curve, auc
from scipy import interp

pipe_lr = Pipeline([('scl', StandardScaler()),
                    ('pca', PCA(n_components=2)),
                    ('clf', LogisticRegression(penalty='l2',
                                               random_state=0,
                                               C=100.0))])

X_train2 = X_train[:, [4, 14]]

if Version(sklearn_version) < '0.18':
    cv = StratifiedKFold(y_train,
                         n_folds=3,
                         random_state=1)
else:
    cv = list(StratifiedKFold(n_splits=3,
                              random_state=1).split(X_train, y_train))

fig = plt.figure(figsize=(7, 5))

mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []

for i, (train, test) in enumerate(cv):
    probas = pipe_lr.fit(X_train2[train],
                         y_train[train]).predict_proba(X_train2[test])
    fpr, tpr, thresholds = roc_curve(y_train[test],
                                     probas[:, 1],
                                     pos_label=1)
    mean_tpr += interp(mean_fpr, fpr, tpr)
    mean_tpr[0] = 0.0
    roc_auc = auc(fpr, tpr)
    plt.plot(fpr,
             tpr,
             lw=1,
             label='ROC fold %d (area = %0.2f)'
                   % (i+1, roc_auc))

plt.plot([0, 1],
         [0, 1],
         linestyle='--',
         color=(0.6, 0.6, 0.6),
         label='random guessing')

mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',
         label='mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 0, 1],
         [0, 1, 1],
         lw=2,
         linestyle=':',
         color='black',
         label='perfect performance')

plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.title('Receiver Operator Characteristic')
plt.legend(loc="lower right")

plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()

pipe_lr = pipe_lr.fit(X_train2, y_train)
y_labels = pipe_lr.predict(X_test[:, [4, 14]])
y_probas = pipe_lr.predict_proba(X_test[:, [4, 14]])[:, 1]
# note that we use probabilities for roc_auc
# the `[:, 1]` selects the 
positive class label only from sklearn.metrics import roc_auc_score, accuracy_score print('ROC AUC: %.3f' % roc_auc_score(y_true=y_test, y_score=y_probas)) print('Accuracy: %.3f' % accuracy_score(y_true=y_test, y_pred=y_labels)) """ Explanation: <br> <br> Plotting a receiver operating characteristic End of explanation """ pre_scorer = make_scorer(score_func=precision_score, pos_label=1, greater_is_better=True, average='micro') """ Explanation: <br> <br> The scoring metrics for multiclass classification End of explanation """
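The `average='micro'` setting in the `pre_scorer` above pools the per-class counts before forming the ratio, whereas macro-averaging computes each class's precision first and then takes the unweighted mean. As a standalone illustration of the difference — with toy labels made up here, not taken from the chapter — both averages can be computed by hand:

```python
from collections import Counter

def micro_macro_precision(y_true, y_pred, classes):
    # Tally true positives and predicted positives per class
    tp, pp = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        pp[p] += 1
        if t == p:
            tp[p] += 1
    # Micro: pool the counts across classes, then divide once
    micro = sum(tp[c] for c in classes) / sum(pp[c] for c in classes)
    # Macro: per-class precision first, then the unweighted mean
    per_class = [tp[c] / pp[c] if pp[c] else 0.0 for c in classes]
    macro = sum(per_class) / len(classes)
    return micro, macro

y_true = [0, 1, 2, 0, 1, 2, 0, 1]
y_pred = [0, 2, 2, 0, 1, 1, 0, 1]
micro, macro = micro_macro_precision(y_true, y_pred, classes=[0, 1, 2])
```

For single-label multiclass problems, micro-averaged precision equals plain accuracy, which is why the two settings can disagree noticeably on imbalanced classes.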
whitead/numerical_stats
unit_4/hw_2017/problem_set_2.ipynb
gpl-3.0
if 10**5 > 3**9:
    print('10^5 is greater')
else:
    print('3^9 is greater')
"""
Explanation: Answer the following questions in Python. Do all calculations in Python. Your answers should have a pattern similar to this:
python
if 3 < (5 * 2):
    print('3 is less than 5 times 2')
else:
    print('3 is not less than 5 times 2')
Which is greater: $10^5$ or $3^9$?
End of explanation
"""
if 0.25 != 0.35:
    print('0.25 != 0.35')
else:
    print('hmmm')
"""
Explanation: Demonstrate that $0.25 \neq 0.35$
End of explanation
"""
if 3 // 2 != 3 / 2:
    print('3 is not divisible by 2')
else:
    print('it is divisible by 2')
"""
Explanation: Using the // operator, show that 3 is not divisible by 2.
End of explanation
"""
x = -3

if x // 2 == x / 2:
    print('{} is even'.format(x))
else:
    print('{} is odd'.format(x))

if x < 0:
    print('{} is negative'.format(x))
elif x > 0:
    print('{} is positive'.format(x))
else:
    print('{} is 0'.format(x))
"""
Explanation: Using a set of if statements, print whether a variable is odd or even and negative or positive. Use the variable name x and demonstrate your code works using x = -3, but ensure it can handle any integer (e.g., 3, 0, -100). Make sure your print statements use the value of x, not the name of the variable. For example, you should print out -3 is negative, not x is negative.
End of explanation
"""
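The parity and sign checks above can also be wrapped into a single helper that classifies any integer — a sketch of one possible shape for an answer, not the official solution:

```python
def describe(x):
    # Parity: x // 2 == x / 2 holds exactly when x is even (the // trick above)
    parity = 'even' if x // 2 == x / 2 else 'odd'
    # Sign: the same if/elif/else ladder as the example
    if x < 0:
        sign = 'negative'
    elif x > 0:
        sign = 'positive'
    else:
        sign = 'zero'
    return parity, sign

for value in (-3, 0, 3, -100):
    parity, sign = describe(value)
    print('{} is {} and {}'.format(value, parity, sign))
```

Returning the two labels (rather than printing inside the function) makes the helper easy to reuse and to test against the sample values.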
chengjun/iching
iching.ipynb
mit
import random def sepSkyEarth(data): sky = random.randint(1, data-2) earth = data - sky earth -= 1 return sky , earth def getRemainder(num): rm = num % 4 if rm == 0: rm = 4 return rm def getChange(data): sky, earth = sepSkyEarth(data) skyRemainder = getRemainder(sky) earthRemainder = getRemainder(earth) change = skyRemainder + earthRemainder + 1 data = data - change return sky, earth, change, data def getYao(data): sky, earth, firstChange, data = getChange(data) sky, earth, secondChange, data = getChange(data) sky, earth, thirdChange, data = getChange(data) yao = data/4 return yao, firstChange, secondChange, thirdChange def sixYao(): yao1 = getYao(data = 50 - 1)[0] yao2 = getYao(data = 50 - 1)[0] yao3 = getYao(data = 50 - 1)[0] yao4 = getYao(data = 50 - 1)[0] yao5 = getYao(data = 50 - 1)[0] yao6 = getYao(data = 50 - 1)[0] return[yao1, yao2, yao3, yao4, yao5, yao6] def fixYao(num): if num == 6 or num == 9: print "there is a changing predict! Also run changePredict()" return num % 2 def changeYao(num): if num == 6: num = 1 elif num == 9: num = 2 num = num % 2 return(num) def fixPredict(pred): fixprd = [fixYao(i) for i in pred] fixprd = list2str(fixprd) return fixprd def list2str(l): si = '' for i in l: si = si + str(i) return si def changePredict(pred): changeprd = [changeYao(i) for i in pred] changeprd = list2str(changeprd) return changeprd def getPredict(): pred = sixYao() fixPred = fixPredict(pred) if 6 in pred or 9 in pred: changePred = changePredict(pred) else: changePred = None return fixPred, changePred def interpretPredict(now, future): dt = {'111111':'乾','011111':'夬','000000':'坤','010001':'屯','100010':'蒙','010111':'需','111010':'讼','000010':'师', '010000':'比','110111':'小畜','111011':'履','000111':'泰','111000':'否','111101':'同人','101111':'大有','000100':'谦', '001000':'豫','011001':'随','100110':'蛊','000011':'临','110000':'观','101001':'噬嗑','100101':'贲','100000':'剥', 
'000001':'复','111001':'无妄','100111':'大畜','100001':'颐','011110':'大过','010010':'坎','101101':'离','011100':'咸', '001110':'恒','111100':'遁','001111':'大壮','101000':'晋','000101':'明夷','110101':'家人','101011':'睽','010100':'蹇', '001010':'解','100011':'损','110001':'益','111110':'姤','011000':'萃','000110':'升','011010':'困','010110':'井', '011101':'革','101110':'鼎','001001':'震','100100':'艮','110100':'渐','001011':'归妹','001101':'丰','101100':'旅', '110110':'巽','011011':'兑','110010':'涣','010011':'节','110011':'中孚','001100':'小过','010101':'既济','101010':'未济'} if future: name = dt[now] + ' & ' + dt[future] else: name = dt[now] print name def plotTransitionRemainder(N, w): import matplotlib.cm as cm import matplotlib.pyplot as plt from collections import defaultdict changes = {} for i in range(N): sky, earth, firstChange, data = getChange(data = 50 -1) sky, earth, secondChange, data = getChange(data) sky, earth, thirdChange, data = getChange(data) changes[i]=[firstChange, secondChange, thirdChange, data/4] ichanges = changes.values() firstTransition = defaultdict(int) for i in ichanges: firstTransition[i[0], i[1]]+=1 secondTransition = defaultdict(int) for i in ichanges: secondTransition[i[1], i[2]]+=1 thirdTransition = defaultdict(int) for i in ichanges: thirdTransition[i[2], i[3]]+=1 cmap = cm.get_cmap('Accent_r', len(ichanges)) for k, v in firstTransition.iteritems(): plt.plot([1, 2], k, linewidth = v*w/N) for k, v in secondTransition.iteritems(): plt.plot([2, 3], k, linewidth = v*w/N) for k, v in thirdTransition.iteritems(): plt.plot([3, 4], k, linewidth = v*w/N) plt.xlabel(u'Time') plt.ylabel(u'Changes') """ Explanation: 蓍草卜卦 大衍之数五十,其用四十有九。分而为二以象两,挂一以象三,揲之以四以象四时,归奇于扐以象闰。五岁再闰,故再扐而后挂。天一,地二;天三,地四;天五,地六;天七,地八;天九,地十。天数五,地数五。五位相得而各有合,天数二十有五,地数三十,凡天地之数五十有五,此所以成变化而行鬼神也。乾之策二百一十有六,坤之策百四十有四,凡三百六十,当期之日。二篇之策,万有一千五百二十,当万物之数也。是故四营而成《易》,十有八变而成卦,八卦而小成。引而伸之,触类而长之,天下之能事毕矣。显道神德行,是故可与酬酢,可与祐神矣。子曰:“知变化之道者,其知神之所为乎。” 
大衍之数五十,存一不用,构造天地人三者,历经三变,第一次的余数是5或9,第二次的是4或8,第三次的是4或8,剩下的数量除以4就是结果。即为一爻,算六爻要一个小时。古人构造随机数的方法太费时间啦。用Python写个程序来搞吧! End of explanation """ data = 50 - 1 """ Explanation: 大衍之数五十,存一不用 End of explanation """ sky, earth, firstChange, data = getChange(data) print sky, '\n', earth, '\n',firstChange, '\n', data """ Explanation: 一变 End of explanation """ sky, earth, secondChange, data = getChange(data) print sky, '\n', earth, '\n',secondChange, '\n', data """ Explanation: 二变 End of explanation """ sky, earth, thirdChange, data = getChange(data) print sky, '\n', earth, '\n',thirdChange, '\n', data """ Explanation: 三变 End of explanation """ getPredict() getPredict() getPredict() """ Explanation: 得到六爻及变卦 End of explanation """ fixPred, changePred = getPredict() interpretPredict(fixPred, changePred ) """ Explanation: 得到卦名 End of explanation """ #http://baike.fututa.com/zhouyi64gua/ import urllib2 from bs4 import BeautifulSoup import os # set work directory os.chdir('/Users/chengjun/github/iching/') dt = {'111111':'乾','011111':'夬','000000':'坤','010001':'屯','100010':'蒙','010111':'需','111010':'讼','000010':'师', '010000':'比','110111':'小畜','111011':'履','000111':'泰','111000':'否','111101':'同人','10111':'大有','000100':'谦', '001000':'豫','011001':'随','100110':'蛊','000011':'临','110000':'观','101001':'噬嗑','100101':'贲','100000':'剥', '000001':'复','111001':'无妄','100111':'大畜','100001':'颐','011110':'大过','010010':'坎','101101':'离','011100':'咸', '001110':'恒','111100':'遁','001111':'大壮','101000':'晋','000101':'明夷','110101':'家人','101011':'睽','010100':'蹇', '001010':'解','100011':'损','110001':'益','111110':'姤','011000':'萃','000110':'升','011010':'困','010110':'井', '011101':'革','101110':'鼎','001001':'震','100100':'艮','110100':'渐','001011':'归妹','001101':'丰','101100':'旅', '110110':'巽','011011':'兑','110010':'涣','010011':'节','110011':'中孚','001100':'小过','010101':'既济','101010':'未济'} dr = {} for i, j in dt.iteritems(): dr[unicode(j, 'utf8')]= i url = "http://baike.fututa.com/zhouyi64gua/" content = 
urllib2.urlopen(url).read() #获取网页的html文本 soup = BeautifulSoup(content) articles = soup.find_all('div', {'class', 'gualist'})[0].find_all('a') links = [i['href'] for i in articles] links[:2] dtext = {} from time import sleep num = 0 for j in links: sleep(0.1) num += 1 ghtml = urllib2.urlopen(j).read() #获取网页的html文本 print j, num gua = BeautifulSoup(ghtml, from_encoding = 'gb18030') guaName = gua.title.text.split('_')[1].split(u'卦')[0] guaId = dr[guaName] guawen = gua.find_all('div', {'class', 'gua_wen'}) guaText = [] for i in guawen: guaText.append(i.get_text() + '\n\n') guaText = ''.join(guaText) dtext[guaId] = guaText dtextu = {} for i, j in dtext.iteritems(): dtextu[i]= j.encode('utf-8') dtext.values()[0] import json with open("/Users/chengjun/github/iching/package_data.dat",'w') as outfile: json.dump(dtextu, outfile, ensure_ascii=False) #, encoding = 'utf-8') dat = json.load(open('package_data.dat'), encoding='utf-8') print dat.values()[1] now, future = getPredict() def ichingText(k): import json dat = json.load(open('iching/package_data.dat')) print dat[k] ichingText(future) %matplotlib inline plotTransitionRemainder(10000, w = 50) %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure(figsize=(15, 10),facecolor='white') plt.subplot(2, 2, 1) plotTransitionRemainder(1000, w = 50) plt.subplot(2, 2, 2) plotTransitionRemainder(1000, w = 50) plt.subplot(2, 2, 3) plotTransitionRemainder(1000, w = 50) plt.subplot(2, 2, 4) plotTransitionRemainder(1000, w = 50) dt = {'111111':u'乾','011111':u'夬','000000':u'坤','010001':u'屯','100010':u'蒙','010111':u'需','111010':u'讼','000010':'师', '010000':u'比','110111':u'小畜','111011':u'履','000111':u'泰','111000':u'否','111101':u'同人','101111':u'大有','000100':u'谦', '001000':u'豫','011001':u'随','100110':u'蛊','000011':u'临','110000':u'观','101001':u'噬嗑','100101':u'贲','100000':'u剥', '000001':u'复','111001':u'无妄','100111':u'大畜','100001':u'颐','011110':u'大过','010010':u'坎','101101':u'离','011100':u'咸', 
'001110':u'恒','111100':u'遁','001111':u'大壮','101000':u'晋','000101':u'明夷','110101':u'家人','101011':u'睽','010100':u'蹇', '001010':u'解','100011':u'损','110001':u'益','111110':u'姤','011000':u'萃','000110':u'升','011010':u'困','010110':u'井', '011101':u'革','101110':u'鼎','001001':u'震','100100':u'艮','110100':u'渐','001011':u'归妹','001101':u'丰','101100':u'旅', '110110':u'巽','011011':u'兑','110010':u'涣','010011':u'节','110011':u'中孚','001100':u'小过','010101':u'既济','101010':u'未济' } for i in dt.values(): print i dtu = {} for i, j in dt.iteritems(): dtu[i] = unicode(j, 'utf-8') def ichingDate(d): import random random.seed(d) try: print 'Your birthday & your prediction time:', str(d) except: print('Your birthday & your prediction time:', str(d)) """ Explanation: 添加卦辞 End of explanation """
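The divination cells above are written for Python 2 (`print` statements, `iteritems()`). As a quick cross-check of the classical claim quoted earlier — that the first division always yields a change of 5 or 9 — here is a hypothetical Python 3 re-sketch of `getChange`, not part of the original notebook:

```python
import random

def sep_sky_earth(data):
    # Split the stalks into "sky" and "earth", setting one stalk aside for "man"
    sky = random.randint(1, data - 2)
    earth = data - sky - 1
    return sky, earth

def remainder(n):
    # Count off in fours; an exact multiple counts as a full group of four
    r = n % 4
    return 4 if r == 0 else r

def get_change(data):
    sky, earth = sep_sky_earth(data)
    change = remainder(sky) + remainder(earth) + 1
    return change, data - change

random.seed(42)
first_changes = {get_change(50 - 1)[0] for _ in range(1000)}
```

Because 49 - 1 = 48 is a multiple of 4, the two remainders always pair up as 4 + 4 or r + (4 - r), so the first change can only total 9 or 5 — matching the classical description.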
psychemedia/parlihacks
notebooks/Apache Drill - Hansard Demo.ipynb
mit
#Download data file !wget -P /Users/ajh59/Documents/parlidata/ https://zenodo.org/record/579712/files/senti_post_v2.csv #Install some dependencies !pip3 install pydrill !pip3 install pandas !pip3 install matplotlib #Import necessary packages import pandas as pd from pydrill.client import PyDrill #Set the notebooks up for inline plotting %matplotlib inline #Get a connection to the Apache Drill server drill = PyDrill(host='localhost', port=8047) """ Explanation: Apache Drill - Hansard Demo Download and install Apache Drill. Start Apache Drill in the Apache Drill directory: bin/drill-embedded Tweak the settings as per Querying Large CSV Files With Apache Drill so you can query against column names. End of explanation """ #Test the setup drill.query(''' SELECT * from dfs.tmp.`/senti_post_v2.parquet` LIMIT 3''').to_dataframe() """ Explanation: Make things faster We can get a speed up on querying the CSV file by converting it to the parquet format. In the Apache Drill terminal, run something like the following (change the path to the CSV file as required): CREATE TABLE dfs.tmp.`/senti_post_v2.parquet` AS SELECT * FROM dfs.`/Users/ajh59/Documents/parlidata/senti_post_v2.csv`; (Running the command from the notebook suffers a timeout?) 
End of explanation """ #Get Parliament session dates from Parliament API psd=pd.read_csv('http://lda.data.parliament.uk/sessions.csv?_view=Sessions&_pageSize=50') psd def getParliamentDate(session): start=psd[psd['display name']==session]['start date'].iloc[0] end=psd[psd['display name']==session]['end date'].iloc[0] return start, end getParliamentDate('2015-2016') #Check the columns in the Hansard dataset, along with example values df=drill.query(''' SELECT * from dfs.tmp.`/senti_post_v2.parquet` LIMIT 1''').to_dataframe() print(df.columns.tolist()) df.iloc[0] # Example of count of speeches by person in the dataset as a whole q=''' SELECT proper_name, COUNT(*) AS number FROM dfs.tmp.`/senti_post_v2.parquet` GROUP BY proper_name ''' df=drill.query(q).to_dataframe() df.head() # Example of count of speeches by gender in the dataset as a whole q="SELECT gender, count(*) AS `Number of Speeches` FROM dfs.tmp.`/senti_post_v2.parquet` GROUP BY gender" drill.query(q).to_dataframe() #Query within session session='2015-2016' start,end=getParliamentDate(session) q=''' SELECT '{session}' AS session, gender, count(*) AS `Number of Speeches` FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND speech_date<='{end}' GROUP BY gender '''.format(session=session, start=start, end=end) drill.query(q).to_dataframe() #Count number of speeches per person start,end=getParliamentDate(session) q=''' SELECT '{session}' AS session, gender, mnis_id, count(*) AS `Number of Speeches` FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND speech_date<='{end}' GROUP BY mnis_id, gender '''.format(session=session, start=start, end=end) drill.query(q).to_dataframe().head() # Example of finding the average number of speeches per person by gender in a particular session q=''' SELECT AVG(gcount) AS average, gender, session FROM (SELECT '{session}' AS session, gender, mnis_id, count(*) AS gcount FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND 
speech_date<='{end}' GROUP BY mnis_id, gender) GROUP BY gender, session '''.format(session=session, start=start, end=end) drill.query(q).to_dataframe() #Note - the average is returned as a string not a numeric #We can package that query up in a Python function def avBySession(session): start,end=getParliamentDate(session) q='''SELECT AVG(gcount) AS average, gender, session FROM (SELECT '{session}' AS session, gender, mnis_id, count(*) AS gcount FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND speech_date<='{end}' GROUP BY mnis_id, gender) GROUP BY gender, session '''.format(session=session, start=start, end=end) dq=drill.query(q).to_dataframe() #Make the average a numeric type... dq['average']=dq['average'].astype(float) return dq avBySession(session) #Loop through sessions and create a dataframe containing gender based averages for each one overall=pd.DataFrame() for session in psd['display name']: overall=pd.concat([overall,avBySession(session)]) #Tidy up the index overall=overall.reset_index(drop=True) overall.head() """ Explanation: The Hansard data gives the date of each speech but not the session. To search for speeches within a particular session, we need the session dates. We can get these from the Parliament data API. End of explanation """ #Reshape the dataset overall_wide = overall.pivot(index='session', columns='gender') #Flatten the column names overall_wide.columns = overall_wide.columns.get_level_values(1) overall_wide """ Explanation: The data is currently in a long (tidy) format. To make it easier to plot, we can reshape it (unmelt it) by casting it into a wide format, with one row per session and and the gender averages arranged by column. End of explanation """ overall_wide.plot(kind='barh'); overall_wide.plot(); """ Explanation: Now we can plot the data - the session axis should sort in an appropriate way (alphanumerically). 
End of explanation """ # Example of finding the average number of speeches per person by party in a particular session # Simply tweak the query we used for gender... q=''' SELECT AVG(gcount) AS average, party, session FROM (SELECT '{session}' AS session, party, mnis_id, count(*) AS gcount FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND speech_date<='{end}' GROUP BY mnis_id, party) GROUP BY party, session '''.format(session=session, start=start, end=end) drill.query(q).to_dataframe() """ Explanation: We can generalise the approach to look at a count of split by party. End of explanation """ def avByType(session,typ): start,end=getParliamentDate(session) q='''SELECT AVG(gcount) AS average, {typ}, session FROM (SELECT '{session}' AS session, {typ}, mnis_id, count(*) AS gcount FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND speech_date<='{end}' GROUP BY mnis_id, {typ}) GROUP BY {typ}, session '''.format(session=session, start=start, end=end, typ=typ) dq=drill.query(q).to_dataframe() #Make the average a numeric type... 
dq['average']=dq['average'].astype(float) return dq def avByParty(session): return avByType(session,'party') avByParty(session) # Create a function to loop through sessions and create a dataframe containing specified averages for each one # Note that this just generalises and packages up the code we had previously def pivotAndFlatten(overall,typ): #Tidy up the index overall=overall.reset_index(drop=True) overall_wide = overall.pivot(index='session', columns=typ) #Flatten the column names overall_wide.columns = overall_wide.columns.get_level_values(1) return overall_wide def getOverall(typ): overall=pd.DataFrame() for session in psd['display name']: overall=pd.concat([overall,avByType(session,typ)]) return pivotAndFlatten(overall,typ) overallParty=getOverall('party') overallParty.head() #Note that the function means it's now just as easy to query on another single column getOverall('party_group') overallParty.plot(kind='barh', figsize=(20,20)); parties=['Conservative','Labour'] overallParty[parties].plot(); """ Explanation: Make a function out of that, as we did before. End of explanation """ def avByGenderAndParty(session): start,end=getParliamentDate(session) q='''SELECT AVG(gcount) AS average, gender, party, session FROM (SELECT '{session}' AS session, gender, party, mnis_id, count(*) AS gcount FROM dfs.tmp.`/senti_post_v2.parquet` WHERE speech_date>='{start}' AND speech_date<='{end}' GROUP BY mnis_id, gender, party) GROUP BY gender, party, session '''.format(session=session, start=start, end=end) dq=drill.query(q).to_dataframe() #Make the average a numeric type... 
dq['average']=dq['average'].astype(float) return dq gp=avByGenderAndParty(session) gp gp_overall=pd.DataFrame() for session in psd['display name']: gp_overall=pd.concat([gp_overall,avByGenderAndParty(session)]) #Pivot table it more robust than pivot - missing entries handled with NA #Also limit what parties we are interested in gp_wide = gp_overall[gp_overall['party'].isin(parties)].pivot_table(index='session', columns=['party','gender']) #Flatten column names gp_wide.columns = gp_wide.columns.droplevel(0) gp_wide gp_wide.plot(figsize=(20,10)); gp_wide.plot(kind='barh', figsize=(20,10)); """ Explanation: We can write another query to look by gender and party. End of explanation """ # Go back to the full dataset, not filtered by party gp_wide = gp_overall.pivot_table(index='session', columns=['party','gender']) #Flatten column names gp_wide.columns = gp_wide.columns.droplevel(0) gp_wide.head() sp_wide = gp_wide.reset_index().melt(id_vars=['session']).pivot_table(index=['session','party'], columns=['gender']) #Flatten column names sp_wide.columns = sp_wide.columns.droplevel(0) sp_wide#.dropna(how='all') #Sessions when F spoke more, on average, then M #Recall, this data has been previously filtered to limit data to Con and Lab #Tweak the precision of the display pd.set_option('precision',3) sp_wide[sp_wide['Female'].fillna(0) > sp_wide['Male'].fillna(0) ] """ Explanation: Automating insight... We can automate some of the observations we might want to make, such as years when M speak more, on average, than F, within a party. End of explanation """
IST256/learn-python
content/lessons/09-Dictionaries/LAB-Dictionaries.ipynb
mit
stock = {} # empty dictionary stock['symbol'] = 'AAPL' stock['name'] = 'Apple Computer' print(stock) print(stock['symbol']) print(stock['name']) """ Explanation: In-Class Coding Lab: Dictionaries The goals of this lab are to help you understand: How to use Python Dictionaries Basic Dictionary methods Dealing with Key errors How to use lists of Dictionaries How to encode / decode python dictionaries to json. Dictionaries are Key-Value Pairs. The key is unique for each Python dictionary object and is always of type str. The value stored under the key can be any Python type. This example creates a stock variable with two keys symbol and name. We access the dictionary key with ['keyname']. End of explanation """ stock = { 'name' : 'Apple Computer', 'symbol' : 'AAPL', 'value' : 125.6 } print(f"Dictionary: {stock}") print(f"{stock['name']} ({stock['symbol']}) has a value of ${stock['value']:.2f}") """ Explanation: While Python lists are best suited for storing multiple values of the same type ( like grades ), Python dictionaries are best suited for storing hybrid values, or values with multiple attributes. In the example above we created an empty dictionary {} then assigned keys symbol and name as part of individual assignment statements. We can also build the dictionary in a single statement, like this: End of explanation """ # let's add 2 new keys print("Before changes", stock) stock['low'] = 119.85 stock['high'] = 127.0 # and update the value key stock['value'] = 126.25 print("After change", stock) """ Explanation: Dictionaries are mutable This means we can change their value. We can add and remove keys and update the value of keys. This makes dictionaries quite useful for storing data. End of explanation """ # TODO: Write code here """ Explanation: 1.1 You Code Create an empty python dictionary variable called car with the following keys make, model and price. Set appropriate values and print out the dictionary. 
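A common pattern built on get() with a default is tallying counts — the default of 0 means the first occurrence of a key needs no special case. A short illustration (not part of the lab exercises):

```python
symbols = ['MSFT', 'AAPL', 'MSFT', 'IBM', 'AAPL', 'MSFT']
counts = {}
for symbol in symbols:
    # get() returns 0 the first time a symbol is seen, avoiding a KeyError
    counts[symbol] = counts.get(symbol, 0) + 1
```

Without the default you would need an explicit `if symbol not in counts:` check before incrementing.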
End of explanation
"""
print("Keys", stock.keys())
print(stock['change'])
"""
Explanation: What Happens when the key is not there?
Let's go back to our stock example. What happens when we try to read a key not present in the dictionary? The answer is that Python will report a KeyError
End of explanation
"""
try:
    print(stock['change'])
except KeyError:
    print("The key 'change' does not exist!")
"""
Explanation: No worries. We know how to handle run-time errors in Python... use try except !!!
End of explanation
"""
print(f"KEY: name, VALUE: {stock.get('name', 'no key')}")
print(f"KEY: change, VALUE: {stock.get('change', 'no key')}")
"""
Explanation: Using the get() method to avoid KeyError
You can avoid KeyError using the get() dictionary method. This method will return a default value when the key does not exist. The first argument to get() is the key to get, the second argument is the value to return when the key does not exist.
End of explanation
"""
stock = { 'name' : 'Apple Computer', 'symbol' : 'AAPL', 'value' : 125.6 }

# TODO: write code here
"""
Explanation: 1.2 You Code
Write a program to ask the user to input a key for the stock variable. If the key exists, print the value, otherwise print 'Key does not exist'. You can use the get() method or try..except
Sample Run #1
Enter key: symbol
KEY: symbol, VALUE: AAPL
Sample Run #2
Enter key: mike
KEY: mike, VALUE: Key Does Not exist
End of explanation
"""
print("KEYS")
for k in stock.keys():
    print(k)
print("VALUES")
for v in stock.values():
    print(v)
"""
Explanation: Enumerating keys and values
You can enumerate keys and values easily, using the keys() and values() methods:
End of explanation
"""
car = {'make': 'tesla', 'model': 'S', 'price': 69420, 'owner': 'bob', 'location' : 'Syracuse, NY', 'moving' : False }

# TODO: Debug this code
for key in car:
    value = car['key']
    print(f"something")
"""
Explanation: 1.3 You Code: Debug
The following program should loop over the keys in a dictionary and print the keys and values. Debug this code to get it working.
Expected output (when the code is corrected):
KEY: make VALUE: tesla
KEY: model VALUE: S
KEY: price VALUE: 69420
KEY: owner VALUE: bob
KEY: location VALUE: Syracuse, NY
KEY: moving VALUE: False
End of explanation
"""
portfolio = [
    { 'symbol' : 'AAPL', 'name' : 'Apple Computer Corp.', 'value': 136.66 },
    { 'symbol' : 'AMZN', 'name' : 'Amazon.com, Inc.', 'value': 845.24 },
    { 'symbol' : 'MSFT', 'name' : 'Microsoft Corporation', 'value': 64.62 },
    { 'symbol' : 'TSLA', 'name' : 'Tesla, Inc.', 'value': 257.00 }
]

print("first stock", portfolio[0])
print("name of first stock", portfolio[0]['name'])
print("last stock", portfolio[-1])
print("value of 2nd stock", portfolio[1]['value'])
print("Here's a loop:")
for stock in portfolio:
    print(f" {stock['name']} ${stock['value']}")
"""
Explanation: List of Dictionary
The List of Dictionary object in Python allows us to create useful in-memory data structures. It's one of the features of Python that sets it apart from other programming languages. Let's use it to build a portfolio (list of 4 stocks).
End of explanation
"""
!curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/09-Dictionaries/stocks.json -o stocks.json
"""
Explanation: Using json to read in lists of dictionary
JSON (JavaScript Object Notation) files can be read in using Python's json module, and de-serialized into a list of dictionary. This is a very powerful feature of the Python programming language and would require a lot more code to accomplish in other programming languages. It is one of the reasons Python is so attractive for the web and data manipulation. Run this code to download the json file
End of explanation
"""
import json
with open('stocks.json','r') as f:
    stocks = json.load(f)
stocks
"""
Explanation: This code will read in the json file stocks.json and create a list of dictionary, stocks
End of explanation
"""
find = 'IBM'
for stock in stocks:
    if stock['symbol'] == find:
        print(f"symbol: {find} price: ${stock['price']:.2f}")
"""
Explanation: What is so special about a list of dictionary?
You might be thinking "so what" or "big deal". Actually those 3 lines of Python code are a big deal. We created a variable stocks from a json text file. All the heavy lifting of tokenizing and parsing has been done for us! We can now use Python code to access the data. For example, to get IBM's stock price we can loop over the list until we find the symbol IBM:
End of explanation
"""
import json
with open('stocks.json','r') as f:
    stocks = json.load(f)

# TODO Write code here:
"""
Explanation: 1.4 You code
Build upon the code in the previous cell. Write a program to input a symbol and print out the price of that symbol. If the symbol is not in the list of stocks, print Symbol not found.
Example Run #1
Enter a stock symbol: MSFT
symbol: MSFT price: $212.55
Example Run #2
Enter a stock symbol: ABCD
Symbol not found.
End of explanation
"""
def loadStocks():
    import json
    with open('stocks.json','r') as f:
        stocks = json.load(f)
    return stocks

def findStock(symbolToFind):
    found = {}
    for stock in stocks:
        if stock['symbol'] == symbolToFind:
            return stock
    return found

# TODO: Write code here
"""
Explanation: Putting It All Together
Write a program to build out your personal stock portfolio.
1. Start with an empty list, called `mystocks`
2. call `loadStocks()` to get a list of stocks from the file.
3. loop
4. input a stock symbol, or type 'QUIT' to print your portfolio
5. if symbol equals 'QUIT' exit loop
6. call `findStock()` to locate the input symbol in the list of stocks.
7. if you found the stock (it's not an empty dictionary) then add it to the `mystocks` list.
8. time to print the portfolio: for each stock in the `mystocks`
9. print stock symbol and stock value, like this "AAPL $136.66"
We will write the loadStocks() and findStock() functions for you to make this a bit more manageable.
Sample Run:
Enter stock symbol to add to your portfolio or quit: IBM
Enter stock symbol to add to your portfolio or quit: FB
Enter stock symbol to add to your portfolio or quit: quit
Your Portfolio
IBM $128.39
FB $251.11
1.5 You Code
End of explanation
"""
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
"""
Explanation: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsibility for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==--
End of explanation
"""
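The loop pattern in the "Putting It All Together" steps can be sketched without user input by passing a plain list of symbols. The helper names below (find_stock, build_portfolio) and the two-stock list are made up for illustration; the lab's own loadStocks()/findStock() read from stocks.json, and the real program would use input() inside a loop instead.

```python
def find_stock(symbol, stocks):
    """Return the matching stock dict, or an empty dict when not found."""
    for stock in stocks:
        if stock['symbol'] == symbol:
            return stock
    return {}

def build_portfolio(symbols, stocks):
    """Collect the stocks whose symbols were found, skipping unknown ones."""
    portfolio = []
    for symbol in symbols:
        stock = find_stock(symbol, stocks)
        if stock:  # an empty dict is falsy, so misses are skipped
            portfolio.append(stock)
    return portfolio

stocks = [
    {'symbol': 'IBM', 'price': 128.39},
    {'symbol': 'FB',  'price': 251.11},
]
portfolio = build_portfolio(['IBM', 'FB', 'XYZ'], stocks)
for stock in portfolio:
    print(f"{stock['symbol']} ${stock['price']}")  # IBM $128.39, then FB $251.11
```

Because an empty dictionary is falsy in Python, the simple `if stock:` test is enough to skip symbols that were not found.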
Patri-meteocat/Meteocat_ANL_collaboration
notebooks/Edges_dualPRF_example.ipynb
bsd-2-clause
import matplotlib.pyplot as plt
import pylab as plb
import matplotlib as mpl
import pyart
import numpy as np
import scipy as sp
import numpy.ma as ma
from pylab import *
from scipy import ndimage
from matplotlib.backends.backend_pdf import PdfPages

def local_valid(mask, dim, Nmin=None, **kwargs):
    if Nmin is None:
        Nmin = 1
    # Count number of non-masked local values
    k = np.ones(dim)
    valid = ndimage.convolve((~mask).astype(int), k, **kwargs)
    mask_out = np.zeros(mask.shape)
    mask_out[valid<Nmin] = 1
    return valid, mask_out.astype(bool)

def maconvolve(ma_array, weights, Nmin=None, **kwargs):
    ## Convolve masked array with generic kernel ##
    k = weights
    data = ma_array.data
    mask = ma_array.mask
    # Minimum number of non-masked local values required for the convolution
    if Nmin is None:
        Nmin = 1
    # Data convolution (replace masked values by 0)
    data_conv = ndimage.convolve(ma.filled(data,0), k, **kwargs)
    # Count number of non-masked local values
    valid, mask_conv = local_valid(mask, k.shape, Nmin=Nmin, **kwargs)
    # New mask and replace masked values by required fill value
    mask_out = mask_conv | mask
    data_out = ma.masked_array(data_conv, mask_out)
    # Returns a masked array
    return data_out

def aliased_edges(radar):
    ## Find edges of aliased regions based on the horizontal velocity gradients ##
    v_field = radar.fields['velocity']
    v_data = v_field['data']
    # Gradient kernels in x,y (r,az) dimensions
    kx = np.array([[-1, 0, 1]])
    ky = np.transpose(kx)
    g_data = v_data.copy()
    for nsweep, sweep_slice in enumerate(radar.iter_slice()):
        v_data_sw = v_data[sweep_slice]
        # Horizontal gradient components
        gx_data_sw = maconvolve(v_data_sw, kx, Nmin=3, mode='wrap')
        gy_data_sw = maconvolve(v_data_sw, ky, Nmin=3, mode='reflect')
        # Magnitude and direction of horizontal gradient
        g_data_sw = np.sqrt(ma.filled(gx_data_sw,0)**2 + ma.filled(gy_data_sw,0)**2)
        # New mask: mask values only when both gradient components are masked
        mask_sw = (gx_data_sw.mask) & (gy_data_sw.mask)
        g_data[sweep_slice] = ma.array(g_data_sw, mask=mask_sw)
    g_field = v_field.copy()
    g_field['data'] = g_data
    g_field['long_name'] = 'Velocity gradient module'
    g_field['standard_name'] = 'gradient_module'
    g_field['units'] = 'm/s'
    # Returns magnitude of gradient shaped as radar object fields
    return g_field

def get_dualPRF_pars(radar):
    pars = radar.instrument_parameters
    Ny = pars['nyquist_velocity']['data'][0]
    prt_mode = pars['prt_mode']['data'][0]
    Ny_H = Ny
    Ny_L = Ny
    f = 1  # defaults so that single-PRF data does not raise a NameError
    N = 0
    prf_odd = None
    if prt_mode=='dual':
        f = pars['prt_ratio']['data'][0]
        N = round(1/(f-1))
        Ny_H = Ny/N
        Ny_L = Ny/(N+1)
        prf_odd = pars['prf_flag']['data'][0]
    return Ny, Ny_H, Ny_L, f, N, prf_odd

def dualPRF_outliers(data_ma, Ny, Ny_H, f, k_size=(3,5), Nmin=9, prf_odd=0, upper_lim_fac=1.5):
    ## Find outliers resulting from dual-PRF dealiasing errors ##
    data = data_ma.data
    mask = data_ma.mask
    # Nyquist velocities corresponding to odd and even rays
    Ny_L = Ny_H/f
    Ny_odd = Ny_H
    Ny_even = Ny_L
    if prf_odd is None:
        return
    if prf_odd==1:
        Ny_odd = Ny_L
        Ny_even = Ny_H
    # Footprint (region around the pixel where the median is computed)
    k = np.ones(k_size)
    # Convert masked data to nan and apply median filter
    data_nan = np.where(np.logical_not(mask), data, np.nan)
    med_data = sp.ndimage.generic_filter(data_nan, np.nanmedian, mode='mirror', footprint=k)
    # Absolute deviation of the pixel velocity from the local median
    dev_data = np.abs(data_nan - med_data)
    dev_data[np.where(np.isnan(dev_data))] = 0
    # Separate into odd and even rays
    dev_odd = dev_data[1::2, :]
    dev_even = dev_data[0::2, :]
    # Outlier matrix
    out = np.zeros(dev_data.shape)
    out_odd = out[1::2, :]
    out_even = out[0::2, :]
    out_odd[ma.where((dev_odd>=Ny_odd)&(dev_odd<=upper_lim_fac*Ny))] = 1
    out_even[ma.where((dev_even>=Ny_even)&(dev_even<=upper_lim_fac*Ny))] = 1
    # Keep only local medians calculated with the required minimum number of valid values
    norms = sp.ndimage.convolve(np.logical_not(mask).astype(int), weights=k, mode='mirror')
    out[np.where(norms<Nmin)] = 0
    tot_out = np.sum(out)/data_ma.count()
    out = ma.array(out, mask=mask)
    # Returns the fraction of outliers and a masked array of outlier positions
    return tot_out, out
"""
Explanation: Finding edges of aliased regions
This script contains three functions: 1) an auxiliary function to convolve masked arrays, 2) a function that finds edges of aliased regions based on the horizontal velocity gradients (1-dim Sobel filters) and 3) a function that creates an array used to mask outliers derived from dual-PRF dealiasing errors. These three functions are applied to a case example and the resulting edge (velocity gradient) histogram is compared before and after application of the outlier mask.
End of explanation
"""
## SETTINGS #####################################################################
in_path = './data/'
out_path = '/Users/patriciaaltube/Desktop/figs/'
filename = 'CDV130618145623.RAWCBRF'
sw_sel = 2 # starts counting in 0
cmap_vel = plt.get_cmap('RdBu',31)
cmap_edges = plt.get_cmap('Blues',12)
cmap_out = plt.get_cmap('Reds',2)
fig_vel_out = out_path + 'a_velocity_map.png'
fig_edges_out = out_path + 'b_edges_map.png'
fig_outliers = out_path + 'c_outliers_map.png'
fig_vel = out_path + 'd_velocity_map.png'
fig_edges = out_path + 'e_edges_map.png'
fig_hist_out = out_path + 'f_gradient_hist.png'
fig_hist = out_path + 'g_gradient_hist.png'
"""
Explanation: Application to case example (Creu del Vent radar; 06-18-2013 at 14:56UTC; elevation 0.6º):
End of explanation
"""
## DATA ##########################################################################
in_file = in_path + filename
radar = pyart.io.read(in_file)
Ny_vel, Ny_H, Ny_L, f_ratio, N, odd = get_dualPRF_pars(radar)

display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
display.plot('velocity', sw_sel, vmin=-Ny_vel, vmax=Ny_vel, ax=ax, mask_outside=False, cmap=cmap_vel)
display.plot_range_rings(range(25, 125, 25), lw=0.5, ls=':')
display.plot_cross_hair(0.5)
plt.xlim((-75, 75))
plt.ylim((-75, 75))
plt.savefig(fig_vel_out)
"""
Explanation: The dual-PRF data:
End of explanation
"""
grad_mod_out = aliased_edges(radar)
radar.add_field('gradient_module_out', grad_mod_out)

display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
display.plot('gradient_module_out', sw_sel, vmin=0, vmax=3*Ny_vel, ax=ax, mask_outside=False, cmap=cmap_edges)
display.plot_range_rings(range(25, 125, 25), lw=0.5, ls=':')
display.plot_cross_hair(0.5)
plt.xlim((-75, 75))
plt.ylim((-75, 75))
plt.savefig(fig_edges_out)
"""
Explanation: Find edges in non-corrected velocity data (includes dual-PRF errors):
End of explanation
"""
v_field = radar.fields['velocity']
v_data = v_field['data']
outliers = v_data.copy()
f_out = {'sweeps':np.empty([1, radar.nsweeps]), 'data':np.empty([1, radar.nsweeps])}
for nsweep, sweep_slice in enumerate(radar.iter_slice()):
    f_out['sweeps'][0, nsweep] = nsweep
    v_data_sw = v_data[sweep_slice]
    f_out['data'][0,nsweep], outliers[sweep_slice] = dualPRF_outliers(v_data_sw, Ny_vel, Ny_H, f_ratio, k_size=(3,5), Nmin=9, prf_odd=odd)
f_out

outl_f = radar.fields['velocity'].copy()
outl_f['data'] = outliers
outl_f['long_name'] = 'Dual-PRF outliers'
outl_f['standard_name'] = 'dualPRF_outliers'
outl_f['units'] = ''
radar.add_field('dualPRF_outliers', outl_f)

display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
display.plot('dualPRF_outliers', sw_sel, ax=ax, vmin=0, vmax=1, mask_outside=False, cmap=cmap_out)
display.plot_range_rings(range(25, 125, 25), lw=0.5, ls=':')
display.plot_cross_hair(0.5)
plt.xlim((-75, 75))
plt.ylim((-75, 75))
plt.savefig(fig_outliers)
"""
Explanation: Compute the dual-PRF outlier mask (note that this process is unfortunately slow; the median filter is not implemented for masked arrays, so the slower "generic filter" has to be used): <br> <img src="files/output/c_outliers_map.png" style="float: center; width: 45%">
End of explanation
"""
v_field = radar.fields['velocity']
mask_out = ma.mask_or(v_field['data'].mask, outl_f['data'].data.astype('bool'))
v_field_noout = v_field.copy()
v_field_noout['long_name'] = 'Velocity outliers removed'
v_field_noout['standard_name'] = 'velocity_noout'
v_field_noout['data'].mask = mask_out
radar.add_field('velocity_noout', v_field_noout)

display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
display.plot('velocity_noout', sw_sel, vmin=-Ny_vel, vmax=Ny_vel, ax=ax, mask_outside=False, cmap=cmap_vel)
display.plot_range_rings(range(25, 125, 25), lw=0.5, ls=':')
display.plot_cross_hair(0.5)
plt.xlim((-75, 75))
plt.ylim((-75, 75))
plt.savefig(fig_vel)
"""
Explanation: Mask outliers in the velocity data and add the new data as a new field in the radar object:
End of explanation
"""
grad_mod = aliased_edges(radar)
radar.add_field('gradient_module', grad_mod)

display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111)
display.plot('gradient_module', sw_sel, vmin=0, vmax=3*Ny_vel, ax=ax, mask_outside=False, cmap=cmap_edges)
display.plot_range_rings(range(25, 125, 25), lw=0.5, ls=':')
display.plot_cross_hair(0.5)
plt.xlim((-75, 75))
plt.ylim((-75, 75))
plt.savefig(fig_edges)
"""
Explanation: Find edges in masked velocity data (without dual-PRF errors):
End of explanation
"""
plot_data_out = radar.get_field(sw_sel, 'gradient_module_out')
fig = plt.figure()
ax = fig.add_subplot(111)
(n, bins, patches) = ax.hist(plot_data_out.compressed(), bins=round(3.5*Ny_vel), color='grey', alpha=0.8)
ax.set_ylim([0,500])
ax.set_xlim([0, 3.5*Ny_vel])
ax.set_xlabel('Gradient module (m/s)')
ax.set_ylabel('Pixel counts')
plt.savefig(fig_hist_out)

fig_hist = out_path + 'g_gradient_hist.png'
plot_data = radar.get_field(sw_sel, 'gradient_module')
fig = plt.figure()
ax = fig.add_subplot(111)
(n, bins, patches) = ax.hist(plot_data.compressed(), bins=round(3.5*Ny_vel), color='grey', alpha=0.8)
ax.set_ylim([0,500])
ax.set_xlim([0, 3.5*Ny_vel])
ax.set_xlabel('Gradient module (m/s)')
ax.set_ylabel('Pixel counts')
plt.savefig(fig_hist)
"""
Explanation: Plot two histograms of velocity gradients, with and without dual-PRF outliers:
End of explanation
"""
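Stripped of the radar bookkeeping, the outlier test above is: flag a gate whose absolute deviation from the local median is at least the Nyquist velocity of its ray and at most upper_lim_fac times the total Nyquist velocity. A minimal 1-D sketch of that rule, using only NumPy (the function name, the toy values and the default ny_total are made up for illustration):

```python
import numpy as np

def flag_outliers_1d(v, ny_ray, window=3, upper_fac=1.5, ny_total=None):
    # Flag gates whose deviation from the local median (the gate itself
    # excluded) lies in [ny_ray, upper_fac * ny_total].
    if ny_total is None:
        ny_total = 3 * ny_ray  # made-up default for the toy example
    n = len(v)
    half = window // 2
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        neighbours = np.concatenate([v[lo:i], v[i+1:hi]])
        med = np.median(neighbours) if len(neighbours) > 0 else v[i]
        dev = abs(v[i] - med)
        flags[i] = bool(ny_ray <= dev <= upper_fac * ny_total)
    return flags

ray = np.array([1.0, 1.2, 0.9, 9.0, 1.1, 1.0])  # one aliased-looking gate
print(flag_outliers_1d(ray, ny_ray=6.0))  # only index 3 is flagged
```

The 2-D version in dualPRF_outliers additionally splits the rays into odd and even PRF groups, since each group has its own Nyquist velocity.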
karlstroetmann/Formal-Languages
Ply/Mysterious-Conflicts.ipynb
gpl-2.0
import ply.lex as lex

tokens = [ 'X' ]

def t_X(t):
    r'x'
    return t

literals = ['v', 'w', 'y', 'z']

t_ignore = ' \t'

def t_newline(t):
    r'\n+'
    t.lexer.lineno += t.value.count('\n')

def t_error(t):
    print(f"Illegal character '{t.value[0]}'")
    t.lexer.skip(1)

__file__ = 'main'

lexer = lex.lex()
"""
Explanation: Dealing with Mysterious Conflicts
This file shows reduce/reduce conflicts that result from the fact that the set of LR states is compressed into the set of LALR states.
Specification of the Scanner
We implement a minimal scanner that returns the token X for the character x and treats the characters v, w, y, and z as literals.
End of explanation
"""
import ply.yacc as yacc
"""
Explanation: Specification of the Parser
End of explanation
"""
start = 's'
"""
Explanation: The start variable of our grammar is s, but we don't have to specify that. The default start variable is the first variable that is defined.
End of explanation
"""
def p_s(p):
    """
    s : 'v' a 'y'
      | 'w' b 'y'
      | 'v' b 'z'
      | 'w' a 'z'
    a : X
    b : X
    """
    pass

def p_error(p):
    if p:
        print(f'Syntax error at {p.value}.')
    else:
        print('Syntax error at end of input.')
"""
Explanation: We can specify multiple grammar rules in a single function. In this case, we have used the pass statement as we just want to check whether there are reduce/reduce conflicts.
End of explanation
"""
parser = yacc.yacc(write_tables=False, debug=True)
"""
Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
"""
!cat parser.out
"""
Explanation: Interestingly, there are no conflicts. If Ply had indeed calculated LALR tables, there would have been conflicts. However, Ply is smart enough to not merge states if this merger would result in a reduce/reduce conflict. Below, state 6 and state 9 should have been merged since they have the same core, but as the merger would produce a reduce/reduce conflict, Ply does not merge these states.
End of explanation
"""
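To see the source of the near-conflict concretely: the reduction applied to the token X depends jointly on the first input symbol and the lookahead. The little table-driven recogniser below (plain Python, independent of Ply; the function name is made up) enumerates the four cases — a merged LALR state would have to choose between a : X and b : X from the lookahead alone, which is exactly the reduce/reduce conflict that merging states 6 and 9 would create:

```python
def reduce_choice(word):
    """Return the nonterminal ('a' or 'b') that x reduces to, or None."""
    # With first symbol 'v', x reduces to a before 'y' but to b before 'z';
    # with first symbol 'w', the choices are exactly swapped.
    table = {('v', 'y'): 'a', ('w', 'y'): 'b',
             ('v', 'z'): 'b', ('w', 'z'): 'a'}
    if len(word) == 3 and word[1] == 'x':
        return table.get((word[0], word[2]))
    return None

for word in ['vxy', 'wxy', 'vxz', 'wxz']:
    print(word, '->', reduce_choice(word))
```

Knowing only the lookahead ('y' or 'z') leaves both reductions possible, so the left context must be kept, which is why the two states cannot share one LALR state.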
miky-kr5/Presentations
EVI - 2018/EVI 04/Modulo2.ipynb
cc0-1.0
import pandas as pd
pd.Series?
"""
Explanation: Introduction to Pandas in Jupyter notebooks
The Series data structure
A one-dimensional array with labels on the axes (including time series). The parameters of a Series are: data (array, dictionary or scalar), index (array of indices), dtype (numpy.dtype or None) and copy (boolean, False by default)
<br> We import the Pandas library
End of explanation
"""
animales = ['Tigre', 'Oso', 'Camello']
pd.Series(animales)
numeros = [1, 2, 3]
pd.Series(numeros)
animales = ['Tigre', 'Oso', None]
pd.Series(animales)
"""
Explanation: <br> We can convert a list into a series, and pandas immediately assigns a list of indices that starts at 0.
End of explanation
"""
numeros = [1, 2, None]
pd.Series(numeros)
"""
Explanation: <br> It is important to know how NumPy and Pandas handle missing data. In Python we have the None type to indicate missing data. If we have a list of numbers, Pandas automatically converts this None value into a value designated as NaN, which means Not a Number.
End of explanation
"""
import numpy as np
np.nan == None
np.nan == np.nan
"""
Explanation: <br> We import the NumPy library. It is also important to know that NaN is not None. When we test whether NaN equals NaN we will also get False.
End of explanation
"""
print(np.isnan(np.nan))
print(None is None)
print(np.nan is np.nan)
"""
Explanation: <br> The special NumPy function isnan is needed to check for the presence of a non-number in our data.
End of explanation
"""
deportes = {'Capoeira': 'Brasil',
            'Rayuela': 'Chile',
            'Pelota Vasca': 'País Vasco',
            'Béisbol': 'Cuba',
            'Rugby': 'Gales',
            'Golf': 'Escocia',
            'Corrida de Toros': 'España',
            'Sumo': 'Japón'}
s = pd.Series(deportes)
s
"""
Explanation: <br> How do we create a series in Pandas? We can use a dictionary data structure with its keys and convert it into a series, where the indices of the series are the keys of the dictionary.
End of explanation
"""
s.index
"""
Explanation: <br> Then, we can check the list of indices with the .index attribute
End of explanation
"""
s = pd.Series(['Tigre', 'Oso', 'Camello'], index=['India', 'America', 'Africa'])
s
"""
Explanation: <br> In this other example, we directly pass a list together with its set of indices to create the Series.
End of explanation
"""
deportes = {'Capoeira': 'Brasil',
            'Rayuela': 'Chile',
            'Pelota Vasca': 'País Vasco',
            'Béisbol': 'Cuba',
            'Rugby': 'Gales',
            'Golf': 'Escocia',
            'Corrida de Toros': 'España',
            'Sumo': 'Japón'}
s = pd.Series(deportes, index=['Capoeira', 'Sumo', 'Pelota Vasca', 'Natación'])
s
"""
Explanation: <br> Here we have an example of a new element in the list of indices that has no assigned value: there is no country associated with the index Natación, and Pandas represents this missing value with NaN.
End of explanation
"""
deportes = {'Capoeira': 'Brasil',
            'Rayuela': 'Chile',
            'Pelota Vasca': 'País Vasco',
            'Béisbol': 'Cuba',
            'Rugby': 'Gales',
            'Golf': 'Escocia',
            'Corrida de Toros': 'España',
            'Sumo': 'Japón'}
s = pd.Series(deportes)
s
"""
Explanation: Searching in a Series
End of explanation
"""
s.iloc[4]
s.loc['Pelota Vasca']
"""
Explanation: <br> We can look up values in a series by index position or by index label. If we want to look up by numeric position (starting from 0) we use the iloc attribute. If, on the other hand, we look up by index label, then we use the loc attribute.
End of explanation
"""
s[4]
s['Pelota Vasca']
"""
Explanation: <br> Pandas tries to make the code more readable. If we pass a numeric value as a parameter to the Series, it will behave as if the lookup were done with the iloc attribute; if instead we pass an object, it will do the lookup by label, as with the loc attribute.
End of explanation
"""
deportes = {99: 'Brasil',
            100: 'Chile',
            101: 'País Vasco',
            102: 'Cuba',
            103: 'Gales',
            104: 'Escocia',
            105: 'España',
            106: 'Japón'}
s = pd.Series(deportes)
s
"""
Explanation: <br> What happens when we have a list of indices that are integers?
End of explanation
"""
s[0] # This statement will not call s.iloc[0] as we would expect and will raise an error
s.iloc[0]
s.loc[99]
"""
Explanation: <br> When we have a case like this, it is safer to use the iloc or loc attributes as appropriate.
End of explanation
"""
s = pd.Series([105.00, 223.00, 5, 102.00, 27, -126])
s
"""
Explanation: <br> Now that we know how to look up values in a Series, we are going to work with the data (finding values, summarizing the data or transforming it).
End of explanation
"""
total = 0
for elemento in s:
    total+=elemento
print(total)
"""
Explanation: <br> One way of working is to iterate over a dataset and invoke an operation of interest
End of explanation
"""
import numpy as np
total = np.sum(s)
print(total)
"""
Explanation: <br> With NumPy we have access to the universal functions, binary or unary (vectorized, faster computations). In this example, np.sum will add all the elements in the series.
End of explanation
"""
s = pd.Series(np.random.randint(0,1000,10000))
print(s.head())
print(len(s))
"""
Explanation: <br> We can also generate a large series of random numbers; with the .head() method we can display the first 5 elements of the series, and with len we can check its length.
End of explanation
"""
%%timeit -n 100
sumar = 0
for elemento in s:
    sumar+=elemento
%%timeit -n 100
sumar = np.sum(s)
"""
Explanation: <br> Jupyter notebooks have magic functions that can be useful. One of them is %%timeit, which will let us see which of the two methods for adding the elements of a series is faster. Just type the % symbol and then the Tab key to get a list of Jupyter's magic functions.
End of explanation
"""
s+=2 # Adds 2 to each element of the series using broadcasting
s.head()
"""
Explanation: <br> NumPy and Pandas support broadcasting: an operation can be applied to every value of the series to modify it.
End of explanation
"""
for etiqueta, valor in s.items():
    s.loc[etiqueta] = valor+2
s.head()
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
for etiqueta, valor in s.items():
    s.loc[etiqueta] = valor+2
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
s+=2
"""
Explanation: <br> An inefficient way of doing this is to iterate over each element of the series to do the addition. The .items() method returns an iterator over the (key, value) pairs of a dictionary, in this case of our series s.
End of explanation
"""
import pandas as pd
s = pd.Series([1, 2, 3])
s.loc['Animal'] = 'Oso'
s
"""
Explanation: <br> We can add elements to a series as follows:
End of explanation
"""
deportes_originales = pd.Series({'Capoeira': 'Brasil',
                                 'Rayuela': 'Chile',
                                 'Pelota Vasca': 'País Vasco',
                                 'Béisbol': 'Cuba',
                                 'Rugby': 'Gales',
                                 'Golf': 'Escocia',
                                 'Corrida de Toros': 'España',
                                 'Sumo': 'Japón'})
paises_que_aman_el_beisbol = pd.Series(['Venezuela', 'USA', 'Cuba', 'Puerto Rico', 'Dominicana'],
                                       index=['Béisbol', 'Béisbol', 'Béisbol', 'Béisbol', 'Béisbol'])
todos_los_paises = deportes_originales.append(paises_que_aman_el_beisbol)
deportes_originales
paises_que_aman_el_beisbol
todos_los_paises
todos_los_paises.loc['Béisbol']
"""
Explanation: <br> This is an example of a series where the values of the index set are not unique. This makes the data tables work differently, and that is why adding new elements must be done with the append method, which, in the first instance, will not modify the series but rather returns a new series with the added elements.
End of explanation
"""
import pandas as pd
compra_1 = pd.Series({'Nombre': 'Adelis',
                      'Artículo comprado': 'Libro',
                      'Costo': 1200})
compra_2 = pd.Series({'Nombre': 'Miguel',
                      'Artículo comprado': 'Raspberry pi 3',
                      'Costo': 15000})
compra_3 = pd.Series({'Nombre': 'Jaime',
                      'Artículo comprado': 'Balón',
                      'Costo': 5000})
df = pd.DataFrame([compra_1, compra_2, compra_3], index=['Tienda 1', 'Tienda 1', 'Tienda 2'])
df.head()
"""
Explanation: The DataFrame data structure
<br> The DataFrame, or data table, is the heart of the Pandas library. It is the primary object for data analysis. It is a kind of two-dimensional array with labels on the axes. In this example, we will create three dictionaries that will then become the rows of our DataFrame.
End of explanation
"""
df.loc['Tienda 2']
"""
Explanation: <br> In a DataFrame we can also extract information using the loc and iloc attributes.
End of explanation
"""
type(df.loc['Tienda 2'])
"""
Explanation: <br> We can also check the data type using Python's type function.
End of explanation
"""
df.loc['Tienda 1']
"""
Explanation: <br> DataFrames can also have non-unique index lists. In the example, there are two indices with the same name, Tienda 1.
End of explanation
"""
df.loc['Tienda 1', 'Costo']
"""
Explanation: <br> We can also select columns by adding an extra parameter to the loc attribute.
End of explanation
"""
df.T
"""
Explanation: <br> Use the .T attribute to get the transpose of the DataFrame, or data table.
End of explanation
"""
df.T.loc['Costo']
df['Costo']
df.loc['Tienda 1']['Costo']
"""
Explanation: <br> Using .T.loc[] we can select a column, passing its name label as a parameter.
End of explanation
"""
df.loc[:,['Nombre', 'Costo']]
"""
Explanation: <br> loc also supports slicing or selecting from the DataFrame with the [] notation
End of explanation
"""
df.drop('Tienda 1')
"""
Explanation: <br> We can also remove data from the DataFrame with the drop() function. This function takes a single parameter, which is the index of the data we want to remove.
End of explanation
"""
df
"""
Explanation: <br> We can see that our original DataFrame is still intact. We only extracted information.
End of explanation
"""
copiar_df = df.copy()
copiar_df = copiar_df.drop('Tienda 1')
copiar_df
copiar_df.drop?
"""
Explanation: <br> We can also make a copy of the DataFrame with the copy() function to save the extracted information.
End of explanation
"""
del copiar_df['Costo']
copiar_df
"""
Explanation: <br> We can remove a column very simply, just by using the del keyword and the index or name of the column.
End of explanation
"""
df['Ubicación'] = ['Venezuela', 'Chile', 'Argentina']
df
"""
Explanation: <br> Finally, it is very easy to add a column to the DataFrame.
End of explanation
"""
!cat olympics.csv
"""
Explanation: Reading a DataFrame
<br> Let's use !cat to read a CSV file. Note: !cat works on Linux and Mac but may not work on Windows :(
End of explanation
"""
import pandas as pd
df = pd.read_csv('olympics.csv')
df.head()
"""
Explanation: <br> But... no need to worry too much about that! We can read this CSV file into a DataFrame using the read_csv function.
End of explanation
"""
df = pd.read_csv('olympics.csv', index_col = 0, skiprows=1)
df.head()
"""
Explanation: <br> Here we can skip the first row of the file to leave the table cleaner, free of irrelevant information.
End of explanation
"""
df.columns
for col in df.columns:
    if col[:2]=='01':
        df.rename(columns={col:'Gold' + col[4:]}, inplace=True)
    if col[:2]=='02':
        df.rename(columns={col:'Silver' + col[4:]}, inplace=True)
    if col[:2]=='03':
        df.rename(columns={col:'Bronze' + col[4:]}, inplace=True)
    if col[:1]=='№':
        df.rename(columns={col:'#' + col[1:]}, inplace=True)
df.head()
"""
Explanation: <br> The .columns attribute lets us see the names of the DataFrame's columns, and the .rename method lets us modify them.
End of explanation
"""
df['Gold'] > 0
"""
Explanation: Searching in a DataFrame
<br> We can search the DataFrame with a Boolean mask for which countries have (True) or do not have (False) a gold medal.
End of explanation
"""
only_gold = df.where(df['Gold'] > 0)
only_gold.head()
"""
Explanation: <br> The .where() function takes a Boolean mask as its condition argument, applies it to the DataFrame, and returns a DataFrame of the same shape. In our example, it replaces the False cases with NaN and keeps the True cases with their original value.
End of explanation
"""
only_gold['Gold'].count()
"""
Explanation: <br> We can count how many countries have won a gold medal in total with count()
End of explanation
"""
df['Gold'].count()
"""
Explanation: If we count over the original data, we will see that there are 147 countries: it also counts the countries for which the Boolean mask was False >.<
End of explanation
"""
len(df[(df['Gold'] > 0) | (df['Gold.1'] > 0)])
"""
Explanation: <br> We can set other kinds of conditions to make more complex queries. For example, finding the number of countries that have ever won a gold medal.
End of explanation
"""
df[(df['Gold.1'] > 0) & (df['Gold'] == 0)]
"""
Explanation: Find which countries have won gold medals only in Winter and never in Summer.
End of explanation
"""
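The difference between where() and plain Boolean indexing is easy to check on a tiny hand-made DataFrame (the data below is made up and independent of olympics.csv):

```python
import pandas as pd

# Made-up medal table with three countries
df = pd.DataFrame({'Gold': [0, 3, 1], 'Silver': [2, 0, 4]},
                  index=['A', 'B', 'C'])

# where() keeps the shape and replaces the failing rows with NaN ...
masked = df.where(df['Gold'] > 0)
print(masked['Gold'].count())   # count() skips NaN -> 2

# ... while Boolean indexing drops the failing rows entirely
filtered = df[df['Gold'] > 0]
print(len(filtered))            # -> 2
print(len(df))                  # the original DataFrame is untouched -> 3
```

This is why only_gold['Gold'].count() above gives fewer countries than df['Gold'].count(): count() ignores the NaN rows that where() produced, while the original column still has every row.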
pastas/pasta
examples/notebooks/10_multiple_wells.ipynb
mit
import numpy as np
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt

ps.show_versions()
"""
Explanation: Adding Multiple Wells
This notebook shows how a WellModel can be used to fit multiple wells with one response function. The influence of the individual wells is scaled by the distance to the observation point.
Developed by R.C. Caljé, (Artesia Water 2020), D.A. Brakenhoff, (Artesia Water 2019), and R.A. Collenteur, (Artesia Water 2018)
End of explanation
"""
fname = '../data/MenyanthesTest.men'
meny = ps.read.MenyData(fname)
"""
Explanation: Load data from a Menyanthes file
Menyanthes is timeseries analysis software used by many people in the Netherlands. In this example a Menyanthes-file with one observation-series is imported, and simulated. There are several stresses in the Menyanthes-file, among which are three groundwater extractions with a significant influence on groundwater head.
Import the Menyanthes-file with observations and stresses.
End of explanation
"""
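The distances computed in the next cell are plain Euclidean distances between each extraction and the observation well; as a quick standalone check of that formula (the coordinates below are made up):

```python
import numpy as np

def well_distance(x_obs, y_obs, x_well, y_well):
    # Straight-line (Euclidean) distance between observation point and well
    return np.sqrt((x_obs - x_well)**2 + (y_obs - y_well)**2)

print(well_distance(0.0, 0.0, 3.0, 4.0))  # -> 5.0
```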
End of explanation """ # plot timeseries f1, axarr = plt.subplots(len(meny.IN)+1, sharex=True, figsize=(10,8)) oseries = meny.H['Obsevation well']["values"] oseries.plot(ax=axarr[0], color='k') axarr[0].set_title(meny.H['Obsevation well']["Name"]) for i, (name, data) in enumerate(meny.IN.items(), start=1): data["values"].plot(ax=axarr[i]) axarr[i].set_title(name) plt.tight_layout(pad=0) """ Explanation: Then plot the observations, together with the different stresses in the Menyanthes file. End of explanation """ oseries = ps.TimeSeries(meny.H['Obsevation well']['values'].dropna(), name="heads", settings="oseries") # create model ml = ps.Model(oseries) """ Explanation: Create a model with a separate StressModel for each extraction First we create a model with a separate StressModel for each groundwater extraction. We start by creating a model with the heads timeseries, and then add recharge as a stress. End of explanation """ prec = meny.IN['Precipitation']['values'] prec.index = prec.index.round("D") prec.name = "prec" evap = meny.IN['Evaporation']['values'] evap.index = evap.index.round("D") evap.name = "evap" """ Explanation: Get the precipitation and evaporation timeseries and round the index to remove the hours from the timestamps. End of explanation """ rm = ps.RechargeModel(prec, evap, ps.Exponential, 'Recharge') ml.add_stressmodel(rm) """ Explanation: Create a recharge stressmodel and add it to the model. End of explanation """ stresses = [] for name in extraction_names: # get extraction timeseries s = meny.IN[name.replace("_", " ")]['values'] # convert index to end-of-month timeseries s.index = s.index.to_period("M").to_timestamp("M") # resample to daily values s_daily = ps.utils.timestep_weighted_resample_fast(s, "D") # create pastas.TimeSeries object stress = ps.TimeSeries(s_daily.dropna(), name=name, settings="well") # append to stresses list stresses.append(stress) """ Explanation: Get the extraction timeseries. 
End of explanation """ for stress in stresses: sm = ps.StressModel(stress, ps.Hantush, stress.name, up=False) ml.add_stressmodel(sm) """ Explanation: Add each of the extractions as a separate StressModel. End of explanation """ ml.solve(solver=ps.LmfitSolve) """ Explanation: Solve the model. Note the use of ps.LmfitSolve. This is because of an issue concerning optimization with small parameter values in scipy.least_squares. This is something that may influence models containing a WellModel (which we will be creating later) and since we want to keep the models in this Notebook as similar as possible, we're also using ps.LmfitSolve here. End of explanation """ ml.plots.decomposition(); """ Explanation: Visualize the results Plot the decomposition to see the individual influence of each of the wells. End of explanation """ for i in range(len(extraction_names)): name = extraction_names[i] sm = ml.stressmodels[name] p = ml.get_parameters(name) gain = sm.rfunc.gain(p) * 1e6 / 365.25 print(f"{name}: gain = {gain:.3f} m / Mm^3/year") df.at[name, 'gain StressModel'] = gain """ Explanation: We can calculate the gain of each extraction (quantified as the effect on the groundwater level of a continuous extraction of ~1 Mm$^3$/yr). End of explanation """ ml_wm = ps.Model(oseries, oseries.name + "_wm") rm = ps.RechargeModel(prec, evap, ps.Gamma, 'Recharge') ml_wm.add_stressmodel(rm) """ Explanation: Create a model with a WellModel We can reduce the number of parameters in the model by including the three extractions in a WellModel. This WellModel takes into account the distances from the three extractions to the observation well, and assumes constant geohydrological properties. All of the extractions now share the same response function, scaled by the distance between the extraction well and the observation well. First we create a new model and add recharge. 
End of explanation """ w = ps.WellModel(stresses, ps.HantushWellModel, "Wells", distances, settings="well") ml_wm.add_stressmodel(w) """ Explanation: We have all the information we need to create a WellModel: the timeseries for each of the extractions (passed as a list of stresses), and the distances from each extraction to the observation point (note that the order of these distances must correspond to the order of the stresses). Note: the WellModel only works with a special version of the Hantush response function called HantushWellModel. This is because the response function must support scaling by a distance $r$. The HantushWellModel response function has been modified to support this. The Hantush response normally takes three parameters: the gain $A$, $a$ and $b$. This special version accepts four parameters: it interprets the fourth parameter as the distance $r$, and uses it to scale the parameters accordingly. Create the WellModel and add it to the model. End of explanation """ ml_wm.solve(solver=ps.LmfitSolve) """ Explanation: Solve the model. We are once again using ps.LmfitSolve. The user is notified about the preference for this solver in a WARNING when creating the WellModel (see above). As we can see, the fit with the measurements (EVP) is similar to the result with the previous model, with each well included separately. 
End of explanation """ ml_wm.plots.decomposition(); """ Explanation: Visualize the results Plot the decomposition to see the individual influence of each of the wells. End of explanation """ ml_wm.plots.stacked_results(figsize=(10, 8)); """ Explanation: Plot the stacked influence of each of the individual extraction wells in the results plot. End of explanation """ wm = ml_wm.stressmodels["Wells"] for i in range(len(extraction_names)): # get parameters p = wm.get_parameters(model=ml_wm, istress=i) # calculate gain gain = wm.rfunc.gain(p) * 1e6 / 365.25 name = wm.stress[i].name print(f"{name}: gain = {gain:.3f} m / Mm^3/year") df.at[name, 'gain WellModel'] = gain """ Explanation: Get the parameters for each well (including the distance) and calculate the gain. The WellModel reorders the stresses from closest to furthest from the observation well. We have to take this into account during the post-processing. The gain of extraction 1 is lower than the gain of extractions 2 and 3. This will always be the case in a WellModel when the distance from the observation well to extraction 1 is larger than the distance to extractions 2 and 3. End of explanation """ df.style.format("{:.4f}") """ Explanation: Compare individual StressModels and WellModel Compare the gains that were calculated by the individual StressModels and the WellModel. End of explanation """
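The `* 1e6 / 365.25` factor applied when printing the gains above is just a unit conversion. Assuming, as the print statements indicate, that the raw gain is expressed in metres per (m^3/day), it can be isolated in a small helper (a sketch, independent of pastas):

```python
def gain_per_Mm3_per_year(gain_m_per_m3_per_day):
    """Convert a gain in m per (m^3/day) to m per (Mm^3/year).

    An extraction of 1 Mm^3/year equals 1e6 / 365.25 m^3/day, hence the factor.
    """
    return gain_m_per_m3_per_day * 1e6 / 365.25

# A raw gain of 365.25e-6 m per (m^3/day) is exactly 1 m per (Mm^3/year)
assert abs(gain_per_Mm3_per_year(365.25e-6) - 1.0) < 1e-12
```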
jhjungCode/pytorch-tutorial
06_MINIST_Save_and_Restore.ipynb
mit
%matplotlib inline """ Explanation: Save & Restore with a MNIST example As you will have noticed when running the MNIST example, training takes quite a lot of time. Not only for this reason, it is common practice to save the model's parameters after training and to load those parameters back for evaluation. The functions used here are torch.save, torch.load and model.state_dict(), model.load_state_dict(). In fact, at the end of the chapter 4 tutorial we already saved the model parameters using torch.save. Therefore, in this chapter we will skip the training step and instead restore the model's parameters from the saved file and use them. ```python # training end torch.save(model.state_dict(), checkpoint_filename) # evaluating start checkpoint = torch.load(checkpoint_filename) model.load_state_dict(checkpoint) ``` End of explanation """ import torch import torch.nn as nn import torch.nn.functional as F import torchvision from torchvision import datasets, transforms from torch.autograd import Variable import matplotlib.pyplot as plt is_cuda = torch.cuda.is_available() # True if CUDA is available checkpoint_filename = 'minist.ckpt' test_loader = torch.utils.data.DataLoader( datasets.MNIST('data', train=False, transform=transforms.ToTensor()), batch_size=100, shuffle=False) """ Explanation: 1. Set up the input DataLoader End of explanation """ class MnistModel(nn.Module): def __init__(self): super(MnistModel, self).__init__() # input is 28x28 # padding=2 for same padding self.conv1 = nn.Conv2d(1, 32, 5, padding=2) # feature map size is 14*14 by pooling # padding=2 for same padding self.conv2 = nn.Conv2d(32, 64, 5, padding=2) # feature map size is 7*7 by pooling self.fc1 = nn.Linear(64*7*7, 1024) self.fc2 = nn.Linear(1024, 10) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), 2) x = F.max_pool2d(F.relu(self.conv2(x)), 2) x = x.view(-1, 64*7*7) # reshape Variable x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) model = MnistModel() if is_cuda : model.cuda() """ Explanation: 2. Preliminary setup * model * loss (omitted, since we do not train) * optimizer (omitted, since we do not train) End of explanation """ checkpoint = torch.load(checkpoint_filename) model.load_state_dict(checkpoint) """ Explanation: 3. Restore model parameters from the saved file End of explanation """ model.eval() correct = 0 for image, target in test_loader: if is_cuda : image, target = image.cuda(), target.cuda() image, target = Variable(image, volatile=True), Variable(target) output = model(image) prediction = output.data.max(1)[1] correct += prediction.eq(target.data).sum() print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset))) """ Explanation: 4. Predict & Evaluate Even though we did no training here, restoring the previously trained model parameters gives an accuracy above 98%. End of explanation """ model.state_dict().keys() plt.rcParams["figure.figsize"] = [8, 4] weight = model.state_dict()['conv1.weight'] wmax, wmin = torch.max(weight), torch.min(weight) gridimg = torchvision.utils.make_grid(weight).cpu().numpy().transpose((1,2,0)) plt.imshow(gridimg[:,:,0], vmin = wmin, vmax =wmax, interpolation='nearest', cmap='seismic') # gridimg[:, :, 0] shows a single color channel plt.rcParams["figure.figsize"] = [8, 8] weight = model.state_dict()['conv2.weight'] # 64 x 32 x 5 x 5 weight = weight[:, 0:1, :, :] # 64 x 1 x 5 x 5 wmax, wmin = torch.max(weight), torch.min(weight) gridimg = torchvision.utils.make_grid(weight).cpu().numpy().transpose((1,2,0)) plt.imshow(gridimg[:,:,0], vmin = wmin, vmax =wmax, interpolation='nearest', cmap='seismic') # gridimg[:, :, 0] shows a single color channel """ Explanation: 5. Plot weights Let's plot the model's weights. End of explanation """
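The save-then-restore round trip itself can be sketched without a GPU or the MNIST data. torch.save and torch.load use pickle internally, so a plain dict stands in for the state dict here (an illustration only, not the PyTorch API):

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for model.state_dict(): parameter name -> weights
state_dict = {'conv1.weight': [0.1, -0.2, 0.3], 'fc2.bias': [0.0]}

path = os.path.join(tempfile.mkdtemp(), 'minist.ckpt')
with open(path, 'wb') as f:
    pickle.dump(state_dict, f)    # analogous to torch.save(model.state_dict(), path)
with open(path, 'rb') as f:
    restored = pickle.load(f)     # analogous to checkpoint = torch.load(path)

assert restored == state_dict     # parameters survive the round trip unchanged
```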
fangohr/oommf-python
new/notebooks/standard_problem3.ipynb
bsd-2-clause
!rm -rf standard_problem3/ # Delete old result files (if any). """ Explanation: Micromagnetic standard problem 3 Author: Marijan Beg, Ryan Pepper Date: 11 May 2016 Problem specification This problem is to calculate the single domain limit of a cubic magnetic particle. This is the size $L$ of equal energy for the so-called flower state (which one may also call a splayed state or a modified single-domain state) on the one hand, and the vortex or curling state on the other hand. Geometry: A cube with edge length, $L$, expressed in units of the intrinsic length scale, $l_\text{ex} = \sqrt{A/K_\text{m}}$, where $K_\text{m}$ is a magnetostatic energy density, $K_\text{m} = \frac{1}{2}\mu_{0}M_\text{s}^{2}$. Material parameters: uniaxial anisotropy $K_\text{u}$ with $K_\text{u} = 0.1 K_\text{m}$, and with the easy axis directed parallel to a principal axis of the cube (0, 0, 1); the exchange energy constant is $A = \frac{1}{2}\mu_{0}M_\text{s}^{2}l_\text{ex}^{2}$. More details about the standard problem 3 can be found in Ref. 1. Simulation End of explanation """ import sys sys.path.append('../') from sim import Sim from atlases import BoxAtlas from meshes import RectangularMesh from energies.exchange import UniformExchange from energies.demag import Demag from energies.zeeman import FixedZeeman from energies.anisotropy import UniaxialAnisotropy """ Explanation: Firstly, we import all necessary modules. End of explanation """ # Function for initialising the flower state. def m_init_flower(pos): x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9 mx = 0 my = 2*z - 1 mz = -2*y + 1 norm_squared = mx**2 + my**2 + mz**2 if norm_squared <= 0.05: return (1, 0, 0) else: return (mx, my, mz) # Function for initialising the vortex state. 
def m_init_vortex(pos): x, y, z = pos[0]/1e-9, pos[1]/1e-9, pos[2]/1e-9 mx = 0 my = np.sin(np.pi/2 * (x-0.5)) mz = np.cos(np.pi/2 * (x-0.5)) return (mx, my, mz) """ Explanation: The following two functions are used for initialising the system's magnetisation [1]. End of explanation """ import numpy as np def relaxed_state(L, m_init): mu0 = 4*np.pi*1e-7 # magnetic constant (H/m) N = 16 # discretisation in one dimension cubesize = 100e-9 # cube edge length (m) cellsize = cubesize/N # discretisation in all three dimensions. lex = cubesize/L # exchange length. Km = 1e6 # magnetostatic energy density (J/m**3) Ms = np.sqrt(2*Km/mu0) # magnetisation saturation (A/m) A = 0.5 * mu0 * Ms**2 * lex**2 # exchange energy constant K1 = 0.1*Km # Uniaxial anisotropy constant axis = (0, 0, 1) # Uniaxial anisotropy easy-axis cmin = (0, 0, 0) # Minimum sample coordinate. cmax = (cubesize, cubesize, cubesize) # Maximum sample coordinate. d = (cellsize, cellsize, cellsize) # Discretisation. atlas = BoxAtlas(cmin, cmax) # Create an atlas object. mesh = RectangularMesh(atlas, d) # Create a mesh object. sim = Sim(mesh, Ms, name='standard_problem3') # Create a simulation object. sim.add(UniformExchange(A)) # Add exchange energy. sim.add(Demag()) # Add demagnetisation energy. sim.add(UniaxialAnisotropy(K1, axis)) # Add uniaxial anisotropy energy. sim.set_m(m_init) # Initialise the system. sim.relax() # Relax the magnetisation. return sim """ Explanation: The following function is used for convenience. It takes two arguments: $L$, the cube edge length in units of $l_\text{ex}$, and the function for initialising the system's magnetisation. It returns the relaxed simulation object. End of explanation """ sim_vortex = relaxed_state(8, m_init_vortex) print('The relaxed state energy is {} J'.format(sim_vortex.total_energy())) # Plot the magnetisation in the sample slice. 
%matplotlib inline sim_vortex.m.plot_slice('y', 50e-9, xsize=6) """ Explanation: Relaxed states Vortex state: End of explanation """ sim_flower = relaxed_state(8, m_init_flower) print('The relaxed state energy is {} J'.format(sim_flower.total_energy())) # Plot the magnetisation in the sample slice. sim_flower.m.plot_slice('z', 50e-9, xsize=6) """ Explanation: Flower state: End of explanation """ L_array = np.linspace(8, 9, 11) # values of L for which the system is relaxed. vortex_energies = [] flower_energies = [] for L in L_array: sim_vortex = relaxed_state(L, m_init_vortex) sim_flower = relaxed_state(L, m_init_flower) vortex_energies.append(sim_vortex.total_energy()) flower_energies.append(sim_flower.total_energy()) # Plot the energy dependences. import matplotlib.pyplot as plt plt.plot(L_array, vortex_energies, 'o-', label='vortex') plt.plot(L_array, flower_energies, 'o-', label='flower') plt.xlabel('L (lex)') plt.ylabel('E') plt.grid() plt.legend() """ Explanation: Cross section Now, we can plot the energies of both vortex and flower states as a function of cube edge length. This will give us an idea where the state transition occurs. End of explanation """ from scipy.optimize import bisect def energy_difference(L): sim_vortex = relaxed_state(L, m_init_vortex) sim_flower = relaxed_state(L, m_init_flower) return sim_vortex.total_energy() - sim_flower.total_energy() cross_section = bisect(energy_difference, 8, 9, xtol=0.1) print('The transition between vortex and flower states occurs at {}*lex'.format(cross_section)) """ Explanation: We now know that the energy crossing occurs between $8l_\text{ex}$ and $9l_\text{ex}$, so a bisection algorithm can be used to find the exact crossing. End of explanation """
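scipy.optimize.bisect is doing the root finding for us above; the algorithm itself is short enough to sketch. A toy stand-in for the expensive vortex/flower energy difference is used here, with the crossing placed at L = 8.5 purely by assumption:

```python
def bisect_root(f, lo, hi, xtol=0.1):
    """Minimal bisection: assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > xtol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid                  # the sign change is in [lo, mid]
        else:
            lo, flo = mid, f(mid)     # the sign change is in [mid, hi]
    return 0.5 * (lo + hi)

# Toy energy difference with a crossing at L = 8.5
crossing = bisect_root(lambda L: L - 8.5, 8, 9, xtol=0.01)
assert abs(crossing - 8.5) < 0.01
```

Each iteration halves the bracketing interval, so reaching a tolerance xtol from an interval of width 1 takes about log2(1/xtol) evaluations, which matters when every evaluation relaxes two micromagnetic states.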
NervanaSystems/coach
tutorials/0. Quick Start Guide.ipynb
apache-2.0
# Adding module path to sys path if not there, so rl_coach submodules can be imported import os import sys import tensorflow as tf module_path = os.path.abspath(os.path.join('..')) resources_path = os.path.abspath(os.path.join('Resources')) if module_path not in sys.path: sys.path.append(module_path) if resources_path not in sys.path: sys.path.append(resources_path) from rl_coach.coach import CoachInterface """ Explanation: Getting Started Guide Table of Contents Using Coach from the Command Line Using Coach as a Library Preset based - using CoachInterface Training a preset Running each training or inference iteration manually Non-preset - using GraphManager directly Training an agent with a custom Gym environment Advanced functionality - proprietary exploration policy, checkpoint evaluation Using Coach from the Command Line When running Coach from the command line, we use a Preset module to define the experiment parameters. As its name implies, a preset is a predefined set of parameters to run some agent on some environment. Coach has many predefined presets that follow the algorithm definitions in the published papers, and allows training some of the existing algorithms with essentially no coding at all. These presets can easily be run from the command line. For example: coach -p CartPole_DQN You can find all the predefined presets under the presets directory, or by listing them using the following command: coach -l Coach can also be used with an externally defined preset by passing the absolute path to the module and the name of the graph manager object which is defined in the preset: coach -p /home/my_user/my_agent_dir/my_preset.py:graph_manager Some presets are generic for multiple environment levels, and therefore require defining the specific level through the command line: coach -p Atari_DQN -lvl breakout There are plenty of other command line arguments you can use in order to customize the experiment. 
Full documentation of the available arguments can be found using the following command: coach -h Using Coach as a Library Alternatively, Coach can be used as a library directly from Python. As described above, Coach uses the presets mechanism to define the experiments. A preset is essentially a Python module which instantiates a GraphManager object. The graph manager is a container that holds the agents and the environments, and has some additional parameters for running the experiment, such as visualization parameters. The graph manager acts as the scheduler which orchestrates the experiment. Note: Each one of the examples in this section is independent, so notebook kernels need to be restarted before running it. Make sure you run the next cell before running any of the examples. End of explanation """ coach = CoachInterface(preset='CartPole_ClippedPPO', # The optional custom_parameter enables overriding preset settings custom_parameter='heatup_steps=EnvironmentSteps(5);improve_steps=TrainingSteps(3)', # Other optional parameters enable easy access to advanced functionalities num_workers=1, checkpoint_save_secs=10) coach.run() """ Explanation: Preset based - using CoachInterface The basic method to run Coach directly from Python is through a CoachInterface object, which uses the same arguments as the command line invocation but allows for more flexibility and additional control of the training/inference process. Let's start with some examples. Training a preset In this example, we'll create a very simple graph containing a Clipped PPO agent running with the CartPole-v0 Gym environment. CoachInterface has a few useful parameters such as custom_parameter, which enables overriding preset settings, and other optional parameters enabling control over the training process. 
We'll override the preset's schedule parameters, train with a single rollout worker, and save checkpoints every 10 seconds: End of explanation """ from rl_coach.environments.gym_environment import GymEnvironment, GymVectorEnvironment from rl_coach.base_parameters import VisualizationParameters from rl_coach.core_types import EnvironmentSteps tf.reset_default_graph() coach = CoachInterface(preset='CartPole_ClippedPPO') # registering an iteration signal before starting to run coach.graph_manager.log_signal('iteration', -1) coach.graph_manager.heatup(EnvironmentSteps(100)) # training for it in range(10): # logging the iteration signal during training coach.graph_manager.log_signal('iteration', it) # using the graph manager to train and act a given number of steps coach.graph_manager.train_and_act(EnvironmentSteps(100)) # reading signals during training training_reward = coach.graph_manager.get_signal_value('Training Reward') """ Explanation: Running each training or inference iteration manually The graph manager (which was instantiated in the preset) can be accessed from the CoachInterface object. The graph manager simplifies the scheduling process by encapsulating the calls to each of the training phases. Sometimes, it can be beneficial to have more fine-grained control over the scheduling process. This can be easily done by calling the individual phase functions directly: End of explanation """ # inference env_params = GymVectorEnvironment(level='CartPole-v0') env = GymEnvironment(**env_params.__dict__, visualization_parameters=VisualizationParameters()) response = env.reset_internal_state() for _ in range(10): action_info = coach.graph_manager.get_agent().choose_action(response.next_state) print("State:{}, Action:{}".format(response.next_state,action_info.action)) response = env.step(action_info.action) print("Reward:{}".format(response.reward)) """ Explanation: Sometimes we may want to track the agent's decisions, log them, or maybe even modify them. 
We can access the agent itself through the CoachInterface as follows. Note that we also need an instance of the environment to do so. In this case we instantiate a GymEnvironment object with the CartPole GymVectorEnvironment: End of explanation """ from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters from rl_coach.environments.gym_environment import GymVectorEnvironment from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager from rl_coach.graph_managers.graph_manager import SimpleSchedule from rl_coach.architectures.embedder_parameters import InputEmbedderParameters # Resetting tensorflow graph as the network has changed. tf.reset_default_graph() # define the environment parameters bit_length = 10 env_params = GymVectorEnvironment(level='rl_coach.environments.toy_problems.bit_flip:BitFlip') env_params.additional_simulator_parameters = {'bit_length': bit_length, 'mean_zero': True} # Clipped PPO agent_params = ClippedPPOAgentParameters() agent_params.network_wrappers['main'].input_embedders_parameters = { 'state': InputEmbedderParameters(scheme=[]), 'desired_goal': InputEmbedderParameters(scheme=[]) } graph_manager = BasicRLGraphManager( agent_params=agent_params, env_params=env_params, schedule_params=SimpleSchedule() ) graph_manager.improve() """ Explanation: Non-preset - using GraphManager directly It is also possible to invoke Coach directly in Python code without defining a preset (which is necessary for CoachInterface) by using the GraphManager object directly. Using Coach this way won't allow you to access functionalities such as multi-threading, but it might be convenient if you don't want to define a preset file. Training an agent with a custom Gym environment Here we show an example of how to use the GraphManager to train an agent on a custom Gym environment. We first construct a GymEnvironmentParameters object describing the environment parameters. 
For Gym environments with vector observations, we can use the more specific GymVectorEnvironment object. The path to the custom environment is defined in the level parameter and it can be the absolute path to its class (e.g. '/home/user/my_environment_dir/my_environment_module.py:MyEnvironmentClass') or the relative path to the module as in this example. In any case, we can use the custom gym environment without registering it. Custom parameters for the environment's __init__ function can be passed as additional_simulator_parameters. End of explanation """ from rl_coach.agents.dqn_agent import DQNAgentParameters from rl_coach.base_parameters import VisualizationParameters, TaskParameters from rl_coach.core_types import TrainingSteps, EnvironmentEpisodes, EnvironmentSteps from rl_coach.environments.gym_environment import GymVectorEnvironment from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager from rl_coach.graph_managers.graph_manager import ScheduleParameters from rl_coach.memories.memory import MemoryGranularity #################### # Graph Scheduling # #################### # Resetting tensorflow graph as the network has changed. 
tf.reset_default_graph() schedule_params = ScheduleParameters() schedule_params.improve_steps = TrainingSteps(4000) schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(10) schedule_params.evaluation_steps = EnvironmentEpisodes(1) schedule_params.heatup_steps = EnvironmentSteps(1000) ######### # Agent # ######### agent_params = DQNAgentParameters() # DQN params agent_params.algorithm.num_steps_between_copying_online_weights_to_target = EnvironmentSteps(100) agent_params.algorithm.discount = 0.99 agent_params.algorithm.num_consecutive_playing_steps = EnvironmentSteps(1) # NN configuration agent_params.network_wrappers['main'].learning_rate = 0.00025 agent_params.network_wrappers['main'].replace_mse_with_huber_loss = False # ER size agent_params.memory.max_size = (MemoryGranularity.Transitions, 40000) ################ # Environment # ################ env_params = GymVectorEnvironment(level='CartPole-v0') """ Explanation: Advanced functionality - proprietary exploration policy, checkpoint evaluation Agent modules, such as exploration policy, memory and neural network topology can be replaced with proprietary ones. In this example we'll show how to replace the default exploration policy of the DQN agent with a different one that is defined under the Resources folder. We'll also show how to change the default checkpoint save settings, and how to load a checkpoint for evaluation. 
We'll start with the standard definitions of a DQN agent solving the CartPole environment (taken from the Cartpole_DQN preset). End of explanation """ from exploration import MyExplorationParameters # Overriding the default DQN Agent exploration policy with my exploration policy agent_params.exploration = MyExplorationParameters() # Creating a graph manager to train a DQN agent to solve CartPole graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params, schedule_params=schedule_params, vis_params=VisualizationParameters()) # Resources path was defined at the top of this notebook my_checkpoint_dir = resources_path + '/checkpoints' # Checkpoints will be stored every 5 seconds to the given directory task_parameters1 = TaskParameters() task_parameters1.checkpoint_save_dir = my_checkpoint_dir task_parameters1.checkpoint_save_secs = 5 graph_manager.create_graph(task_parameters1) graph_manager.improve() """ Explanation: Next, we'll override the exploration policy with our own policy defined in Resources/exploration.py. We'll also define the checkpoint save directory and the save interval in seconds. Make sure the first cell at the top of this notebook is run before the following one, so that module_path and resources_path are added to sys.path. End of explanation """ import tensorflow as tf import shutil # Clearing the previous graph before creating the new one to avoid name conflicts tf.reset_default_graph() # Updating the graph manager's task parameters to restore the latest stored checkpoint from the checkpoints directory task_parameters2 = TaskParameters() task_parameters2.checkpoint_restore_path = my_checkpoint_dir graph_manager.create_graph(task_parameters2) graph_manager.evaluate(EnvironmentSteps(5)) # Cleaning up shutil.rmtree(my_checkpoint_dir) """ Explanation: Last, we'll load the latest checkpoint from the checkpoint directory and evaluate it. End of explanation """
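Resources/exploration.py is not shown in this notebook, so the contents of MyExplorationParameters are an assumption here. As an illustration only, the decision rule of a simple exploration policy of this kind often reduces to epsilon-greedy action selection:

```python
import random

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon pick a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                     # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

rng = random.Random(0)
# With epsilon = 0 the policy always exploits the highest Q-value
greedy_actions = [epsilon_greedy([0.1, 0.9], 0.0, rng) for _ in range(5)]
assert greedy_actions == [1, 1, 1, 1, 1]
```

A real Coach exploration policy also schedules epsilon over time; this sketch keeps it fixed to show only the core branching between exploration and exploitation.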
henchc/Rediscovering-Text-as-Data
08-Classification/01-Classification.ipynb
mit
demo_tb = Table() demo_tb['Study_Hours'] = [2.0, 6.9, 1.6, 7.8, 3.1, 5.8, 3.4, 8.5, 6.7, 1.6, 8.6, 3.4, 9.4, 5.6, 9.6, 3.2, 3.5, 5.9, 9.7, 6.5] demo_tb['Grade'] = [67.0, 83.6, 35.4, 79.2, 42.4, 98.2, 67.6, 84.0, 93.8, 64.4, 100.0, 61.6, 100.0, 98.4, 98.4, 41.8, 72.0, 48.6, 90.8, 100.0] demo_tb['Pass'] = [0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1] demo_tb.show() """ Explanation: Intro to Regression A popular classification model is logistic regression. This is what Underwood and Sellers use in their article to classify whether a text was reviewed or randomly selected from HathiTrust. Today we'll look at the difference between regression and classification tasks, and how we can use a logistic regression model to classify text like Underwood and Sellers. We won't have time to go through their full code, but if you're interested I've provided a walk-through in the second notebook. To explore the regression model let's first create some dummy data: End of explanation """ demo_tb.scatter('Study_Hours','Grade') """ Explanation: Intuiting the Linear Regression Model You may have encountered linear regression in previous coursework of yours. Linear regression, in its simple form, tries to model the relationship between two continous variables as a straight line. It interprets one variable as the input, and the other as the output.: End of explanation """ demo_tb.scatter('Study_Hours','Grade', fit_line=True) """ Explanation: In the example above, we're interested in Study_Hours and Grade. This is a natural "input" "output" situation. To plot the regression line, or best-fit, we can feed in fit_line=True to the scatter method: End of explanation """ from sklearn.linear_model import LinearRegression linreg = LinearRegression() """ Explanation: The better this line fits the points, the better we can predict one's Grade based on their Study_Hours, even if we've never seen anyone put in that number of study hours before. 
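One thing worth noticing about the dummy data before we model it: the Pass column is just the Grade column thresholded at 70, which we can verify with plain Python:

```python
grades = [67.0, 83.6, 35.4, 79.2, 42.4, 98.2, 67.6, 84.0, 93.8, 64.4,
          100.0, 61.6, 100.0, 98.4, 98.4, 41.8, 72.0, 48.6, 90.8, 100.0]
passed = [0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1]

# Every Pass label equals 1 exactly when the grade is at least 70
assert passed == [int(g >= 70) for g in grades]
```

This is exactly the kind of categorical outcome derived from a continuous one that the logistic regression later in this notebook is designed to model.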
The regression model above can be expressed as: $GRADE_i= \alpha + \beta STUDYHOURS + \epsilon_i$ The variable we want to predict (or model) is the left side y variable, here GRADE. The variable which we think has an influence on our left side variable is on the right side, the independent variable STUDYHOURS. The $\alpha$ term is the y-intercept and the $\epsilon_i$ describes the randomness. The $\beta$ coefficient on STUDYHOURS gives us the slope, in a univariate regression. That's the factor on STUDYHOURS to get GRADE. If we want to build a model for the regression, we can use the sklearn library. sklearn is by far the most popular machine learning library for Python, and its syntax is really important to learn. In the next cell we'll import the Linear Regression model and assign it to a linreg variable: End of explanation """ X = demo_tb['Study_Hours'].reshape(-1,1) X """ Explanation: Before we go any further, sklearn likes our data in a very specific format. The X must be in an array of arrays, each sub array is an observation. Because we only have one independent variable, we'll have sub arrays of len 1. We can do that with the reshape method: End of explanation """ y = demo_tb['Grade'].reshape(len(demo_tb['Grade']),) y """ Explanation: Your output, or dependent variable, is just one array with no sub arrays. End of explanation """ linreg.fit(X, y) """ Explanation: We then use the fit method to fit the model. This happens in-place, so we don't have to reassign the variable: End of explanation """ B0, B1 = linreg.intercept_, linreg.coef_[0] B0, B1 """ Explanation: We can get back the intercept_ and $\beta$ coef_ with attributes of the linreg object: End of explanation """ y_pred = linreg.predict(X) print(X) print(y_pred) plt.scatter(X, y) plt.plot(X, y_pred) """ Explanation: So this means: $GRADE_i= 42.897229302892598 + 5.9331153718275509 * STUDYHOURS + \epsilon_i$ As a linear regression this is simple to interpret. 
To get our grade score, we take the number of study hours and multipy it by 5.9331153718275509 then we add 42.897229302892598 and that's our prediction. If we look at our chart again but using the model we just made, that looks about right: End of explanation """ linreg.score(X, y) """ Explanation: We can evaluate how great our model is with the score method. We need to give it the X and observed y values, and it will predict its own y values and compare: End of explanation """ linreg.predict([[5]]) """ Explanation: For the Linear Regression, sklearn returns an R-squared from the score method. The R-squared tells us how much of the variation in the data can be explained by our model, .559 isn't that bad, but obviously more goes into your Grade than just Study_Hours. Nevertheless we can still predict a grade just like we did above to create that line, let's say I studied for 5 hours: End of explanation """ linreg.predict([[20]]) """ Explanation: Maybe I should study more? End of explanation """ demo_tb.scatter('Study_Hours','Pass') """ Explanation: Wow! I rocked it. Intuiting the Logistic Regression Model But what happens if one of your variables is categorical, and not continuous? Suppose we don't care about the Grade score, but we just care if you Pass or not: End of explanation """ def logistic(p): return 1 / (1 + np.exp(-p)) """ Explanation: How would we fit a line to that? That's where the logistic function can be handy. 
The general logistic function is: $ f(x) = \frac{1}{1 + e^{-x}} $ We can translate that to Python: End of explanation """ B0, B1 = 0, 1 """ Explanation: We'll also need to assign a couple $\beta$ coefficients for the intercept and variable just like we saw in linear regression: End of explanation """ xmin, xmax = -10,10 xlist = [float(x)/int(1e4) for x in range(xmin*int(1e4), xmax*int(1e4))] # just a lot of points on the x-axis ylist = [logistic(B0 + B1*x) for x in xlist] plt.axis([-10, 10, -0.1,1.1]) plt.plot(xlist,ylist) """ Explanation: Let's plot the logistic curve: End of explanation """ from sklearn.linear_model import LogisticRegression lr = LogisticRegression() """ Explanation: When things get complicated, however, with several independent variables, we don't want to write our own code. Someone has done that for us. We'll go back to sklearn. End of explanation """ X = demo_tb['Study_Hours'].reshape(-1,1) y = demo_tb['Pass'].reshape(len(demo_tb['Pass']),) X, y """ Explanation: We'll reshape our arrays again too, since we know how sklearn likes them: End of explanation """ lr.fit(X, y) """ Explanation: We can use the fit function again on our X and y: End of explanation """ B0, B1 = lr.intercept_[0], lr.coef_[0][0] B0, B1 """ Explanation: We can get those $\beta$ coefficients back out from sklearn for our grade data: End of explanation """ xmin, xmax = 0,10 xlist = [float(x)/int(1e4) for x in range(xmin*int(1e4), xmax*int(1e4))] ylist = [logistic(B0 + B1*x) for x in xlist] plt.plot(xlist,ylist) # add our "observed" data points plt.scatter(demo_tb['Study_Hours'],demo_tb['Pass']) """ Explanation: Then we can plot the curve just like we did earlier, and we'll add our points: End of explanation """ X_train = demo_tb.column('Study_Hours')[:-2] y_train = demo_tb.column('Pass')[:-2] X_test = demo_tb.column('Study_Hours')[-2:] y_test = demo_tb.column('Pass')[-2:] """ Explanation: How might this curve be used for a binary classification task? 
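One way, sketched here with made-up coefficients (a fitted model supplies its own B0 and B1): push an observation through the curve to get a probability, then threshold at 0.5.

```python
import numpy as np

def logistic(p):
    return 1 / (1 + np.exp(-p))

# Hypothetical coefficients for illustration; the fitted values above differ
B0, B1 = -4.0, 1.0

for hours in [1.0, 3.0, 5.0, 8.0]:
    prob = logistic(B0 + B1 * hours)        # probability of passing
    label = 'pass' if prob > 0.5 else 'fail'  # threshold at 0.5
    print(hours, round(float(prob), 3), label)
```

The curve's midpoint (where the probability crosses 0.5) acts as the decision boundary between the two classes.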
Logistic Classification
That's great, so we can begin to see how we might use such a model to conduct binary classification. In this task, we want to get a number of study hours as an observation, and place it in one of two bins: pass or fail. To create the model though, we have to train it on the data we have. In machine learning, we also need to put some data aside as "testing data" so that we don't bias our model by using it in the training process. In Python we often see X_train, y_train and X_test, y_test:
End of explanation
"""
X_train = demo_tb.column('Study_Hours')[:-2]
y_train = demo_tb.column('Pass')[:-2]
X_test = demo_tb.column('Study_Hours')[-2:]
y_test = demo_tb.column('Pass')[-2:]
"""
Explanation: Let's see the observations we're setting aside for later:
End of explanation
"""
print(X_test, y_test)
"""
Explanation: Now we'll fit our model again but only on the _train data, and get out the $\beta$ coefficients:
End of explanation
"""
lr.fit(X_train.reshape(-1,1), y_train.reshape(len(y_train),))
B0, B1 = lr.intercept_[0], lr.coef_[0][0]  # coef_ is 2-D, so index twice to get the scalar slope
"""
Explanation: We can send these coefficients back into the logistic function we wrote earlier to get the probability that a student would pass given our X_test values:
End of explanation
"""
fitted = [logistic(B1*th + B0) for th in X_test]
fitted
"""
Explanation: We can take the probability and change this to a binary outcome based on probability > or < .5:
End of explanation
"""
prediction = [pred > .5 for pred in fitted]
prediction
"""
Explanation: The sklearn built-in methods can make this predict process faster:
End of explanation
"""
lr.predict(X_test.reshape(-1, 1))
"""
Explanation: To see how accurate our model is, we'd predict on the "unseen" _testing data and see how many we got correct.
In this case there are only two, so not a whole lot to test with:
End of explanation
"""
prediction_eval = [prediction[i] == y_test[i] for i in range(len(prediction))]
float(sum(prediction_eval)/len(prediction_eval))
"""
Explanation: We can do this quickly in sklearn too with the score method like in the linear regression example:
End of explanation
"""
lr.score(X_test.reshape(-1, 1), y_test.reshape(len(y_test),))
"""
Explanation: Classification of Textual Data
How can we translate this simple model of binary classification to text? I'm going to leave the more complicated model that Underwood and Sellers use for the next notebook if you're interested; today we're just going to work through the basic classification pipeline. We'll download a pre-made corpus from nltk:
End of explanation
"""
import nltk
nltk.download("movie_reviews")
"""
Explanation: Now we import the movie_reviews object:
End of explanation
"""
from nltk.corpus import movie_reviews
"""
Explanation: As you might expect, this is a corpus of IMDB movie reviews. Someone went through and read each review, labeling it as either "positive" or "negative". The task we have before us is to create a model that can accurately predict whether a never-before-seen review is positive or negative. This is analogous to Underwood and Sellers looking at whether a poem volume was reviewed or randomly selected. From the movie_reviews object let's take out the reviews and the judgement:
End of explanation
"""
reviews = [movie_reviews.raw(fileid) for fileid in movie_reviews.fileids()]
judgements = [movie_reviews.categories(fileid)[0] for fileid in movie_reviews.fileids()]
"""
Explanation: Let's read the first review:
End of explanation
"""
print(reviews[0])
"""
Explanation: Do you consider this a positive or negative review?
Let's see what the human annotator said: End of explanation """ from sklearn.utils import shuffle np.random.seed(1) X, y = shuffle(reviews, judgements, random_state=0) """ Explanation: So right now we have a list of movie reviews in the reviews variable and a list of their corresponding judgements in the judgements variable. Awesome. What does this sound like to you? Independent and dependent variables? You'd be right! reviews is our X array from above. judgements is our y array from above. Let's first reassign our X and y so we're explicit about what's going on. While we're at it, we're going to set the random seed for our computer. This just makes our result reproducible. We'll also shuffle so that we randomize the order of our observations, and when we split the testing and training data it won't be in a biased order: End of explanation """ X[0], y[0] """ Explanation: If you don't believe me that all we did is reassign and shuffle: End of explanation """ LogisticRegression? from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer from sklearn.linear_model import LogisticRegression from sklearn.model_selection import cross_val_score, train_test_split text_clf = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))), ('tfidf', TfidfTransformer()), ('clf', LogisticRegression(random_state=0, penalty='l2', C=1000)) ]) scores = cross_val_score(text_clf, X, y, cv=5) print(scores, np.mean(scores)) """ Explanation: To get meaningful independent variables (words) we have to do some processing too (think DTM!). 
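The DTM idea is easy to sketch by hand on a toy corpus (ignoring the tokenization details a real vectorizer handles, like lowercasing and punctuation): one column per unique word, one row of counts per document.

```python
from collections import Counter

# Two toy "reviews" standing in for the movie_reviews corpus
docs = ['a great great movie', 'a terrible movie']

# One column per unique word across the corpus, alphabetical for readability
vocab = sorted({word for doc in docs for word in doc.split()})

# One row of counts per document: a minimal document term matrix
dtm = [[Counter(doc.split())[word] for word in vocab] for doc in docs]

print(vocab)
print(dtm)
```

Each row is now a numeric feature vector for its document, which is exactly the shape a classifier needs.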
With sklearn's text pipelines, we can quickly build a text classifier in only a few lines of Python:
End of explanation
"""
LogisticRegression?
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer, TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
text_clf = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))),
                     ('tfidf', TfidfTransformer()),
                     ('clf', LogisticRegression(random_state=0, penalty='l2', C=1000))
                     ])
scores = cross_val_score(text_clf, X, y, cv=5)
print(scores, np.mean(scores))
"""
Explanation: Whoa! What just happened?!? The pipeline tells us three things happened: CountVectorizer TfidfTransformer LogisticRegression Let's walk through this step by step. A count vectorizer does exactly what we did last week with tokenization. It changes all the texts to words, and then simply counts the frequency of each word occurring in the corpus for each document. The feature array for each document at this point has one entry per unique word in the corpus, holding that word's count. This is the most basic way to provide features for a classifier---a document term matrix. tfidf (term frequency inverse document frequency) is an algorithm that aims to find words that are important to specific documents. It does this by taking the term frequency (tf) for a specific term in a specific document, and multiplying it by the term's inverse document frequency (idf): the log of the total number of documents divided by the number of documents that contain the term at least once. Thus, idf is defined as: $$idf(t, d, D)= log\left(\frac{\mid D \mid}{\mid {d \subset D : t \subset d } \mid}\right )$$ So tfidf is simply: $$tfidf(t, d, D)= f_{t,d}*log\left(\frac{\mid D \mid}{\mid {d \subset D : t \subset d } \mid}\right )$$ A tfidf value is calculated for each term for each document. The feature array for a document now holds these tfidf values.
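Plugging small made-up numbers into that idf formula makes the weighting concrete (note that sklearn's TfidfTransformer additionally smooths the idf and normalizes each row, so its exact values differ from this bare-formula sketch):

```python
import math

# Toy numbers for illustration: a corpus of 4 documents
n_docs = 4
docs_containing = {'movie': 4, 'flawless': 1}  # document frequency per term

def idf(term):
    # log(|D| / number of documents containing the term)
    return math.log(n_docs / docs_containing[term])

def tfidf(tf, term):
    return tf * idf(term)

# 'movie' appears in every document, so it gets zero weight no matter how
# often it occurs; the rare 'flawless' scores high even with tf = 1
print(tfidf(3, 'movie'))
print(tfidf(1, 'flawless'))
```

This is the whole point of the weighting: words that show up everywhere tell us nothing about any one document, while rare words carry the signal.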
The tfidf matrix has the same shape as our document term matrix, only now the values have been weighted according to their distribution across documents. The pipeline then sends these tfidf feature arrays to step 3, the Logistic Regression we learned above. We add in an l2 penalization parameter because we have many more independent variables from our dtm than observations. The independent variables are the tfidf values of each word. As a simple linear model of the log-odds, that would look like: $$log\left(\frac{p_i}{1-p_i}\right)= \alpha + \beta_{DOG} DOG + \beta_{RABBIT} RABBIT + \beta_{JUMP} JUMP + ... + \epsilon_i$$ where $p_i$ is the probability that document $i$ is in the positive class, each term like $DOG$ is that word's tfidf value, and $\beta_{DOG}$ is the model's coefficient for it. The code below breaks this down by each step, but combines the CountVectorizer and TfidfTransformer in the TfidfVectorizer.
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=50)
# get tfidf values
tfidf = TfidfVectorizer()
tfidf.fit(X)
X_train = tfidf.transform(X_train)
X_test = tfidf.transform(X_test)
# build and test logit
logit_class = LogisticRegression(random_state=0, penalty='l2', C=1000)
model = logit_class.fit(X_train, y_train)
model.score(X_test, y_test)
"""
Explanation: The concise code we first ran actually uses "cross validation", where we split up testing and training data k different times and average our score over all of them. This is a more reliable metric than just testing the accuracy once. It's possible that your random train/test split just didn't provide a good split, so averaging over multiple splits is preferred. You'll also notice the ngram_range parameter in the CountVectorizer in the first cell. This expands our document term matrix's vocabulary by including groups of words together. It's easier to understand an ngram by just seeing one. We'll look at a bigram (bi is for 2):
End of explanation
"""
ngs = ngrams("Text analysis is so cool. I can really see why classification can be a valuable tool.".split(), 2)
list(ngs)
"""
Explanation: Trigram:
End of explanation
"""
ngs = ngrams("Text analysis is so cool. I can really see why classification can be a valuable tool.".split(), 3)
list(ngs)
"""
Explanation: You get the point. This helps us push back against the "bag of words" limitation, but doesn't completely save us. For our purposes here, just as we counted the frequency of individual words, we've added counts for groups of 2 and 3 words.
Important Features
After we train the model we can then index the tfidf matrix for the words with the most significant coefficients (remember independent variables!) to get the most helpful features:
End of explanation
"""
feature_names = tfidf.get_feature_names()
top10pos = np.argsort(model.coef_[0])[-10:]
print("Top features for positive reviews:")
print(list(feature_names[j] for j in top10pos))
print()
print("Top features for negative reviews:")
top10neg = np.argsort(model.coef_[0])[:10]
print(list(feature_names[j] for j in top10neg))
"""
Explanation: Prediction
We can also use our model to classify new reviews; all we have to do is extract the tfidf features from the raw text and send them to the model as our features (independent variables):
End of explanation
"""
new_bad_review = "This movie really sucked. I can't believe how long it dragged on. The actors are absolutely terrible. They should rethink their career paths"
features = tfidf.transform([new_bad_review])
model.predict(features)
new_good_review = "I loved this film! The cinematography was incredible, and Leonardo Dicarpio is flawless. Super cute BTW."
features = tfidf.transform([new_good_review])
model.predict(features)
"""
Explanation: Homework
Let's examine the three objects in the pipeline more closely:
End of explanation
"""
CountVectorizer?
TfidfTransformer?
LogisticRegression?
""" Explanation: Homework Let's examine more the three objects in the pipeline: End of explanation """ text_clf = Pipeline([('vect', CountVectorizer(ngram_range=(1, 2))), ('tfidf', TfidfTransformer()), ('clf', LogisticRegression(random_state=0)) ]) scores = cross_val_score(text_clf, X, y, cv=5) print(scores, np.mean(scores)) """ Explanation: I've copied the cell from above below. Try playing with the parameters to these objects and see if you can improve the cross_val_score for the model. End of explanation """ from sklearn.datasets import fetch_20newsgroups """ Explanation: Why do you think your score improved (or didn't)? BONUS (not assigned) We're going to download the 20 Newsgroups, a widely used corpus for demos of general texts: The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. To the best of my knowledge, it was originally collected by Ken Lang, probably for his Newsweeder: Learning to filter netnews paper, though he does not explicitly mention this collection. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering. First we'll import the data from sklearn: End of explanation """ fetch_20newsgroups(subset="train").target_names """ Explanation: Let's see what categories they have: End of explanation """ train = fetch_20newsgroups(subset="train", categories=['sci.electronics', 'rec.autos']) """ Explanation: The subset parameter will give you training and testing data. You can also use the categories parameter to choose only certain categories. 
If we wanted to get the training data for sci.electronics and rec.autos we would write this: End of explanation """ train.data[0] """ Explanation: The list of documents (strings) is in the .data property, we can access the first one like so: End of explanation """ train.target[0] """ Explanation: And here is the assigment category: End of explanation """ len(train.data) """ Explanation: How many training documents are there? End of explanation """ test = fetch_20newsgroups(subset="test", categories=['sci.electronics', 'rec.autos']) test.data[0] test.target[0] len(test.data) """ Explanation: We can do the same for the testing data: End of explanation """
JoseGuzman/myIPythonNotebooks
Stochastic_systems/Fit_real_histogram.ipynb
gpl-2.0
%pylab inline
from scipy.stats import norm
"""
Explanation: <H1> Fit real histogram</H1>
End of explanation
"""
# fake some data
data = norm.rvs(loc=0.0, scale=1.0, size=150)
plt.hist(data, rwidth=0.85, facecolor='black');
plt.ylabel('Number of events');
plt.xlabel('Value');
"""
Explanation: <H2> Create normally distributed data</H2>
End of explanation
"""
mean, stdev = norm.fit(data)
print('Mean =%f, Stdev=%f'%(mean,stdev))
"""
Explanation: <H2> Obtain the fitting to a normal distribution</H2>
<P> This is simply the mean and the standard deviation of the sample data</P>
End of explanation
"""
histdata = plt.hist(data, bins=10, color='black', rwidth=.85) # we set 10 bins
counts, binedge = np.histogram(data, bins=10)
print(binedge)
# Get bincenters from bin edges
bincenter = [0.5 * (binedge[i] + binedge[i+1]) for i in range(len(binedge)-1)]
bincenter
binwidth = binedge[1] - binedge[0]  # all bins share the same width
print(binwidth)
"""
Explanation: To adapt the normalized PDF of the normal distribution we simply have to multiply every value by the area of the histogram obtained
<H2> Get the histogram data from NumPy</H2>
End of explanation
"""
x = np.linspace( start = -4 , stop = 4, num = 100)
mynorm = norm(loc = mean, scale = stdev)
# Scale Norm PDF to the area (binwidth)*number of samples of the histogram
myfit = mynorm.pdf(x)*binwidth*len(data)
# Plot everything together
plt.hist(data, bins=10, facecolor='white', histtype='stepfilled');
plt.fill(x, myfit, 'r', alpha=.5);
plt.ylabel('Number of observations');
plt.xlabel('Value');
"""
Explanation: <H2> Scale the normal PDF to the area of the histogram</H2>
End of explanation
"""
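An alternative to scaling the PDF up to the histogram's area is to normalize the histogram itself. A sketch with freshly generated data, using NumPy's density option (recent matplotlib versions accept density=True in plt.hist the same way): the bar areas then integrate to one, so mynorm.pdf(x) could be overlaid without any scaling.

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.normal(loc=0.0, scale=1.0, size=150)

# density=True rescales the counts so the bar areas integrate to 1
density, edges = np.histogram(data, bins=10, density=True)
binwidth = edges[1] - edges[0]

print((density * binwidth).sum())  # total bar area
```

Either approach is fine; this direction just moves the normalization from the PDF onto the histogram.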
bhattacharjee/courses
CourseraDeepLearningSpecialization/1.NeuralNetworksAndDeepLearning/Week2/Exercises/.ipynb_checkpoints/Python+Basics+With+Numpy+v3-Copy1-checkpoint.ipynb
mit
### START CODE HERE ### (≈ 1 line of code) test = "Hello World" ### END CODE HERE ### print ("test: " + test) """ Explanation: Python Basics with Numpy (optional assignment) Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. Instructions: - You will be using Python 3. - Avoid using for-loops and while-loops, unless you are explicitly told to do so. - Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function. - After coding your function, run the cell right below it to check if your result is correct. After this assignment you will: - Be able to use iPython Notebooks - Be able to use numpy functions and numpy matrix/vector operations - Understand the concept of "broadcasting" - Be able to vectorize code Let's get started! About iPython Notebooks iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter. Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below. End of explanation """ # GRADED FUNCTION: basic_sigmoid import math import numpy as np def basic_sigmoid(x): """ Compute sigmoid of x. 
Arguments: x -- A scalar Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) s = math.exp(-1 * x) s = 1 / (1 + s) ### END CODE HERE ### return s basic_sigmoid(3) """ Explanation: Expected output: test: Hello World <font color='blue'> What you need to remember: - Run your cells using SHIFT+ENTER (or "Run cell") - Write code in the designated areas using Python 3 only - Do not modify the code outside of the designated areas 1 - Building basic functions with numpy Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. 1.1 - sigmoid function, np.exp() Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp(). Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function. Reminder: $sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning. <img src="images/Sigmoid.png" style="width:500px;height:228px;"> To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp(). End of explanation """ ### One reason why we use "numpy" instead of "math" in Deep Learning ### x = [1, 2, 3] basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector. """ Explanation: Expected Output: <table style = "width:40%"> <tr> <td>** basic_sigmoid(3) **</td> <td>0.9525741268224334 </td> </tr> </table> Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. 
In deep learning we mostly use matrices and vectors. This is why numpy is more useful. End of explanation """ import numpy as np # example of np.exp x = np.array([1, 2, 3]) print(np.exp(x)) # result is (exp(1), exp(2), exp(3)) """ Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$ End of explanation """ # example of vector operation x = np.array([1, 2, 3]) print (x + 3) """ Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x. End of explanation """ # GRADED FUNCTION: sigmoid import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function() def sigmoid(x): """ Compute the sigmoid of x Arguments: x -- A scalar or numpy array of any size Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) #s = np.exp(np.multiply(-1, x)) #s = np.divide(1, np.add(1, s)) s = 1 / (1 + np.exp(-x)) ### END CODE HERE ### return s x = np.array([1, 2, 3]) sigmoid(x) """ Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation. You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation. Exercise: Implement the sigmoid function using numpy. Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now. $$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \ x_2 \ ... \ x_n \ \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \ \frac{1}{1+e^{-x_2}} \ ... 
\ \frac{1}{1+e^{-x_n}} \ \end{pmatrix}\tag{1} $$ End of explanation """ # GRADED FUNCTION: sigmoid_derivative def sigmoid_derivative(x): """ Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x. You can store the output of the sigmoid function into variables and then use it to calculate the gradient. Arguments: x -- A scalar or numpy array Return: ds -- Your computed gradient. """ ### START CODE HERE ### (≈ 2 lines of code) s = sigmoid(x) ds = s * (1 - s) ### END CODE HERE ### return ds x = np.array([1, 2, 3]) print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x))) """ Explanation: Expected Output: <table> <tr> <td> **sigmoid([1,2,3])**</td> <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> </tr> </table> 1.2 - Sigmoid gradient As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function. Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$ You often code this function in two steps: 1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful. 2. 
Compute $\sigma'(x) = s(1-s)$ End of explanation """ # GRADED FUNCTION: image2vector def image2vector(image): """ Argument: image -- a numpy array of shape (length, height, depth) Returns: v -- a vector of shape (length*height*depth, 1) """ ### START CODE HERE ### (≈ 1 line of code) v = None ### END CODE HERE ### return v # This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values image = np.array([[[ 0.67826139, 0.29380381], [ 0.90714982, 0.52835647], [ 0.4215251 , 0.45017551]], [[ 0.92814219, 0.96677647], [ 0.85304703, 0.52351845], [ 0.19981397, 0.27417313]], [[ 0.60659855, 0.00533165], [ 0.10820313, 0.49978937], [ 0.34144279, 0.94630077]]]) print ("image2vector(image) = " + str(image2vector(image))) """ Explanation: Expected Output: <table> <tr> <td> **sigmoid_derivative([1,2,3])**</td> <td> [ 0.19661193 0.10499359 0.04517666] </td> </tr> </table> 1.3 - Reshaping arrays Two common numpy functions used in deep learning are np.shape and np.reshape(). - X.shape is used to get the shape (dimension) of a matrix/vector X. - X.reshape(...) is used to reshape X into some other dimension. For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector. <img src="images/image2vector_kiank.png" style="width:500px;height:300;"> Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do: python v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c - Please don't hardcode the dimensions of image as a constant. 
Instead look up the quantities you need with image.shape[0], etc. End of explanation """ # GRADED FUNCTION: normalizeRows def normalizeRows(x): """ Implement a function that normalizes each row of the matrix x (to have unit length). Argument: x -- A numpy matrix of shape (n, m) Returns: x -- The normalized (by row) numpy matrix. You are allowed to modify x. """ ### START CODE HERE ### (≈ 2 lines of code) # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True) x_norm = None # Divide x by its norm. x = None ### END CODE HERE ### return x x = np.array([ [0, 3, 4], [1, 6, 4]]) print("normalizeRows(x) = " + str(normalizeRows(x))) """ Explanation: Expected Output: <table style="width:100%"> <tr> <td> **image2vector(image)** </td> <td> [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]]</td> </tr> </table> 1.4 - Normalizing rows Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm). For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \ 2 & 6 & 4 \ \end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \ \sqrt{56} \ \end{bmatrix}\tag{4} $$and $$ x_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \ \end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5. Exercise: Implement normalizeRows() to normalize the rows of a matrix. 
After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1). End of explanation """ # GRADED FUNCTION: softmax def softmax(x): """Calculates the softmax for each row of the input x. Your code should work for a row vector and also for matrices of shape (n, m). Argument: x -- A numpy matrix of shape (n,m) Returns: s -- A numpy matrix equal to the softmax of x, of shape (n,m) """ ### START CODE HERE ### (≈ 3 lines of code) # Apply exp() element-wise to x. Use np.exp(...). x_exp = None # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True). x_sum = None # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting. s = None ### END CODE HERE ### return s x = np.array([ [9, 2, 5, 0, 0], [7, 5, 0, 0 ,0]]) print("softmax(x) = " + str(softmax(x))) """ Explanation: Expected Output: <table style="width:60%"> <tr> <td> **normalizeRows(x)** </td> <td> [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]]</td> </tr> </table> Note: In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! 1.5 - Broadcasting and the softmax function A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation. Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization. 
Instructions: - $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $ $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ \vdots \\ softmax\text{(last row of x)} \end{pmatrix} $$ End of explanation """ import time x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0] x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0] ### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ### tic = time.process_time() dot = 0 for i in range(len(x1)): dot+= x1[i]*x2[i] toc = time.process_time() print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC OUTER PRODUCT IMPLEMENTATION ### tic = time.process_time() outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros for i in range(len(x1)): for j in range(len(x2)): outer[i,j] = x1[i]*x2[j] toc = time.process_time() print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC ELEMENTWISE IMPLEMENTATION ### tic = time.process_time() mul = np.zeros(len(x1)) for i in range(len(x1)): mul[i] = x1[i]*x2[i] toc = time.process_time() print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ### W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array tic = time.process_time() gdot = np.zeros(W.shape[0]) for i in range(W.shape[0]): for j in range(len(x1)): gdot[i] += W[i,j]*x1[j] toc = time.process_time() print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0] x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0] ### VECTORIZED DOT PRODUCT OF VECTORS ### tic = time.process_time() dot = np.dot(x1,x2) toc = time.process_time() print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED OUTER PRODUCT ### tic = time.process_time() outer = np.outer(x1,x2) toc = time.process_time() print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED ELEMENTWISE MULTIPLICATION ### tic 
= time.process_time() mul = np.multiply(x1,x2) toc = time.process_time() print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") ### VECTORIZED GENERAL DOT PRODUCT ### tic = time.process_time() dot = np.dot(W,x1) toc = time.process_time() print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms") """ Explanation: Expected Output: <table style="width:60%"> <tr> <td> **softmax(x)** </td> <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]]</td> </tr> </table> Note: - If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting. Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning. <font color='blue'> What you need to remember: - np.exp(x) works for any np.array x and applies the exponential function to every coordinate - the sigmoid function and its gradient - image2vector is commonly used in deep learning - np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. - numpy has efficient built-in functions - broadcasting is extremely useful 2) Vectorization In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product. 
End of explanation """ # GRADED FUNCTION: L1 def L1(yhat, y): """ Arguments: yhat -- vector of size m (predicted labels) y -- vector of size m (true labels) Returns: loss -- the value of the L1 loss function defined above """ ### START CODE HERE ### (≈ 1 line of code) loss = np.sum(np.abs(yhat - y)) ### END CODE HERE ### return loss yhat = np.array([.9, 0.2, 0.1, .4, .9]) y = np.array([1, 0, 0, 1, 1]) print("L1 = " + str(L1(yhat,y))) """ Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication. 2.1 Implement the L1 and L2 loss functions Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful. Reminder: - The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost. 
- L1 loss is defined as: $$\begin{align} & L_1(\hat{y}, y) = \sum_{i=1}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$ End of explanation """ # GRADED FUNCTION: L2 def L2(yhat, y): """ Arguments: yhat -- vector of size m (predicted labels) y -- vector of size m (true labels) Returns: loss -- the value of the L2 loss function defined above """ ### START CODE HERE ### (≈ 1 line of code) loss = np.dot(y - yhat, y - yhat) ### END CODE HERE ### return loss yhat = np.array([.9, 0.2, 0.1, .4, .9]) y = np.array([1, 0, 0, 1, 1]) print("L2 = " + str(L2(yhat,y))) """ Explanation: Expected Output: <table style="width:20%"> <tr> <td> **L1** </td> <td> 1.1 </td> </tr> </table> Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=1}^n x_j^{2}$. L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=1}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$ End of explanation """
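Note: the broadcasting behaviour that makes the row-wise softmax division work (an (n, m) array divided by an (n, 1) array) can be checked in a tiny standalone sketch; the matrix values below are illustrative only, not taken from the graded exercises:

```python
import numpy as np

# A (2, 3) matrix divided by a (2, 1) column: numpy "stretches" the
# column across the 3 columns of the matrix, exactly as in softmax.
m = np.array([[1.0, 2.0, 3.0],
              [4.0, 8.0, 12.0]])
row_sums = np.sum(m, axis=1, keepdims=True)  # shape (2, 1)
normalized = m / row_sums                    # shape (2, 3)
# Each row of `normalized` now sums to 1.
```

Dropping keepdims=True would give row_sums shape (2,), which broadcasts against the columns instead and silently computes the wrong thing for non-square matrices; keeping the trailing axis of length 1 is what makes the division row-wise.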
EdwardDixon/deeplearning
facepaint/main/Image modelling with H2O.ipynb
apache-2.0
data y = "b" x = ["x","y"] train, valid, test = data.split_frame([0.75, 0.15]) from h2o.estimators import H2ODeepLearningEstimator m = H2ODeepLearningEstimator(model_id="DL_defaults", hidden=[20,20,20,20,20,20,20,20,20,20], activation='tanh',epochs=10000) m.train(x,y,train) m """ Explanation: Our Data To use it with H2O modelling, we've converted our raster image (a grid of colour values) to a list of tuples, mapping the pixel coordinates to the colour values. End of explanation """ import numpy as np import pandas as pd from h2o.frame import H2OFrame from PIL import Image IM_SIZE = 256 IM_CHANNELS = 1 def save_pixels(path_to_image_file, image_array, mode): im_out = Image.fromarray(image_array, mode ) im_out.save(path_to_image_file) # Takes care of clipping, casting to uint8, etc. def save_ndarray(path_to_outfile, x, width = IM_SIZE, height = IM_SIZE, channels = IM_CHANNELS): out_arr = np.clip(x, 0, 255) if channels == 3: out_arr = np.reshape(out_arr, (width, height, channels), 1) else: assert(channels == 1) out_arr = np.reshape(out_arr, (width, height), 1) out_arr = np.rot90(out_arr, k=3) out_arr = np.fliplr(out_arr) if channels == 3: save_pixels(path_to_outfile, out_arr.astype(np.uint8), 'RGB') else: save_pixels(path_to_outfile, out_arr.astype(np.uint8), 'L') # Create suitable training matrix def gen_input_tuples(pixels_width, pixels_height, scale, translate_x, translate_y): image_height = pixels_height image_width = pixels_width # One row per pixel X = np.zeros((image_width * image_height, 2)) # Fill in y values X[:,1] = np.repeat(range(0, image_height), image_width, 0) # Fill in x values X[:,0] = np.tile(range(0, image_width), image_height) # Normalize X X = X - X.mean() X = X / X.var() X[:,0] += translate_x X[:,1] += translate_y X = X / scale return (X) def render(mdl, image_size, scale, tx, ty, outfile): pixel_coords = gen_input_tuples(image_size, image_size, scale, tx, ty) df_pixels_to_render = pd.DataFrame({'x':pixel_coords[:,0], 'y':pixel_coords[:,1]}) 
h2o_pixels = H2OFrame(df_pixels_to_render) pixel_intensities = mdl.predict(h2o_pixels) save_ndarray(outfile, pixel_intensities.as_data_frame().as_matrix(), image_size, image_size, 1) render(m, IM_SIZE, 1, 0, 0, "modelled_tower_bridge.png") """ Explanation: Rendering our results To see our model's output, we need to feed it the coordinates of the pixels we want rendered. End of explanation """ render(m, 1024, 1/8, 0, 0, "modelled_tower_bridge_x2.png") """ Explanation: End of explanation """
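The repeat/tile indexing that gen_input_tuples uses to build one (x, y) row per pixel can be sanity-checked on a tiny grid. This standalone sketch (pixel_grid is a hypothetical helper, not part of the notebook) reproduces just that step, without the normalization, translation, or scaling:

```python
import numpy as np

def pixel_grid(width, height):
    """Return one (x, y) row per pixel, row-major, matching the
    repeat/tile pattern used above (no normalization or scaling)."""
    coords = np.zeros((width * height, 2))
    coords[:, 0] = np.tile(np.arange(width), height)    # x cycles fastest
    coords[:, 1] = np.repeat(np.arange(height), width)  # y advances per row
    return coords

grid = pixel_grid(3, 2)
```

For a 3x2 image this enumerates the pixels (0,0), (1,0), (2,0), (0,1), (1,1), (2,1), which is the row-major order the save/render round trip assumes.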
jegibbs/phys202-2015-work
assignments/assignment08/InterpolationEx02.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import numpy as np sns.set_style('white') from scipy.interpolate import griddata """ Explanation: Interpolation Exercise 2 End of explanation """ x = np.hstack((np.arange(-5, 6), np.arange(-5, 6), np.full(9, -5), np.full(9, 5), [0])) y = np.hstack((np.full(11, -5), np.full(11, 5), np.arange(-4, 5), np.arange(-4, 5), [0])) f = np.zeros(41) f[-1] = 1.0 x """ Explanation: Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points. The value of $f$ is known at a single interior point: $f(0,0)=1.0$. The function $f$ is not known at any other points. Create arrays x, y, f: x should be a 1d array of the x coordinates on the boundary and the 1 interior point. y should be a 1d array of the y coordinates on the boundary and the 1 interior point. f should be a 1d array of the values of f at the corresponding x and y coordinates. You might find that np.hstack is helpful. End of explanation """
End of explanation """ # YOUR CODE HERE raise NotImplementedError() assert True # leave this to grade the plot """ Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful. End of explanation """
vinitsamel/udacitydeeplearning
transfer-learning/Transfer_Learning_Solution.ipynb
mit
from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm vgg_dir = 'tensorflow_vgg/' # Make sure vgg exists if not isdir(vgg_dir): raise Exception("VGG directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(vgg_dir + "vgg16.npy"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar: urlretrieve( 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy', vgg_dir + 'vgg16.npy', pbar.hook) else: print("Parameter file already exists!") """ Explanation: Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from the CS231n course notes. Pretrained VGGNet We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. 
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. End of explanation """ import tarfile dataset_folder_path = 'flower_photos' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('flower_photos.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar: urlretrieve( 'http://download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos.tar.gz', pbar.hook) if not isdir(dataset_folder_path): with tarfile.open('flower_photos.tar.gz') as tar: tar.extractall() tar.close() """ Explanation: Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial. End of explanation """ import os import numpy as np import tensorflow as tf from tensorflow_vgg import vgg16 from tensorflow_vgg import utils data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)] """ Explanation: ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $244 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. 
The network implemented here has this structure (copied from the source code: ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) This creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer, feed_dict = {input_: images} codes = sess.run(vgg.relu6, feed_dict=feed_dict) End of explanation """ # Set the batch size higher if you can fit in in your GPU memory batch_size = 10 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) for each in classes: print("Starting {} images".format(each)) class_path = data_dir + each files = os.listdir(class_path) for ii, file in enumerate(files, 1): # Add images to the current batch # utils.load_image crops the input images for us, from the center img = utils.load_image(os.path.join(class_path, file)) batch.append(img.reshape((1, 224, 224, 3))) labels.append(each) # Running the batch through the network to get the codes if ii % batch_size == 0 or ii == len(files): images = np.concatenate(batch) feed_dict = {input_: images} codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict) # Here I'm building an array of the codes if codes is None: codes = codes_batch else: codes = np.concatenate((codes, codes_batch)) # Reset to start building the next batch batch = [] print('{} images processed'.format(ii)) # write codes to file with open('codes', 'w') as f: codes.tofile(f) # write labels to file import csv with open('labels', 'w') as f: writer = csv.writer(f, delimiter='\n') writer.writerow(labels) """ Explanation: Below I'm running images through the VGG network in batches. End of explanation """ # read codes and labels from file import csv with open('labels') as f: reader = csv.reader(f, delimiter='\n') labels = np.array([each for each in reader if len(each) > 0]).squeeze() with open('codes') as f: codes = np.fromfile(f, dtype=np.float32) codes = codes.reshape((len(labels), -1)) """ Explanation: Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. 
The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. End of explanation """ from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() lb.fit(labels) labels_vecs = lb.transform(labels) """ Explanation: Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels. End of explanation """ from sklearn.model_selection import StratifiedShuffleSplit ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) train_idx, val_idx = next(ss.split(codes, labels_vecs)) half_val_len = int(len(val_idx)/2) val_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:] train_x, train_y = codes[train_idx], labels_vecs[train_idx] val_x, val_y = codes[val_idx], labels_vecs[val_idx] test_x, test_y = codes[test_idx], labels_vecs[test_idx] print("Train shapes (x, y):", train_x.shape, train_y.shape) print("Validation shapes (x, y):", val_x.shape, val_y.shape) print("Test shapes (x, y):", test_x.shape, test_y.shape) """ Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn. You can create the splitter like so: ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) Then split the data with splitter = ss.split(x, y) ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. 
The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide. Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. End of explanation """ inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]]) labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]]) fc = tf.contrib.layers.fully_connected(inputs_, 256) logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits) cost = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer().minimize(cost) predicted = tf.nn.softmax(logits) correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) """ Explanation: If you did it right, you should see these sizes for the training sets: Train shapes (x, y): (2936, 4096) (2936, 5) Validation shapes (x, y): (367, 4096) (367, 5) Test shapes (x, y): (367, 4096) (367, 5) Classifier layers Once you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network. Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. End of explanation """ def get_batches(x, y, n_batches=10): """ Return a generator that yields batches from arrays x and y. 
""" batch_size = len(x)//n_batches for ii in range(0, n_batches*batch_size, batch_size): # If we're not on the last batch, grab data with size batch_size if ii != (n_batches-1)*batch_size: X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] # On the last batch, grab the rest of the data else: X, Y = x[ii:], y[ii:] # I love generators yield X, Y """ Explanation: Batches! Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. End of explanation """ epochs = 10 iteration = 0 saver = tf.train.Saver() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in get_batches(train_x, train_y): feed = {inputs_: x, labels_: y} loss, _ = sess.run([cost, optimizer], feed_dict=feed) print("Epoch: {}/{}".format(e+1, epochs), "Iteration: {}".format(iteration), "Training loss: {:.5f}".format(loss)) iteration += 1 if iteration % 5 == 0: feed = {inputs_: val_x, labels_: val_y} val_acc = sess.run(accuracy, feed_dict=feed) print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Validation Acc: {:.4f}".format(val_acc)) saver.save(sess, "checkpoints/flowers.ckpt") """ Explanation: Training Here, we'll train the network. Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. End of explanation """ with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: test_x, labels_: test_y} test_acc = sess.run(accuracy, feed_dict=feed) print("Test accuracy: {:.4f}".format(test_acc)) %matplotlib inline import matplotlib.pyplot as plt from scipy.ndimage import imread """ Explanation: Testing Below you see the test accuracy. 
You can also see the predictions returned for images. End of explanation """ test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg' test_img = imread(test_img_path) plt.imshow(test_img) # Run this cell if you don't have a vgg graph built with tf.Session() as sess: input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) vgg = vgg16.Vgg16() vgg.build(input_) with tf.Session() as sess: img = utils.load_image(test_img_path) img = img.reshape((1, 224, 224, 3)) feed_dict = {input_: img} code = sess.run(vgg.relu6, feed_dict=feed_dict) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: code} prediction = sess.run(predicted, feed_dict=feed).squeeze() plt.imshow(test_img) plt.barh(np.arange(5), prediction) _ = plt.yticks(np.arange(5), lb.classes_) """ Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. End of explanation """
aleph314/K2
EDA/EDA_MTA_Exercises.ipynb
gpl-3.0
import csv import os """ Explanation: Exploratory Data Analysis with Python We will explore the NYC MTA turnstile data set. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles (UNIT) by day in the subway system. Here is an example of what you could do with the data. James Kao investigates how subway ridership is affected by incidence of rain. Exercise 1 Download at least 2 weeks worth of MTA turnstile data (You can do this manually or via Python) Open up a file, use csv reader to read it, make a python dict where there is a key for each (C/A, UNIT, SCP, STATION). These are the first four columns. The value for this key should be a list of lists. Each list in the list is the rest of the columns in a row. For example, one key-value pair should look like{ ('A002','R051','02-00-00','LEXINGTON AVE'): [ ['NQR456', 'BMT', '01/03/2015', '03:00:00', 'REGULAR', '0004945474', '0001675324'], ['NQR456', 'BMT', '01/03/2015', '07:00:00', 'REGULAR', '0004945478', '0001675333'], ['NQR456', 'BMT', '01/03/2015', '11:00:00', 'REGULAR', '0004945515', '0001675364'], ... 
] } Store all the weeks in a data structure of your choosing End of explanation """ turnstile = {} # looping through all files in data dir starting with MTA_Turnstile for filename in os.listdir('data'): if filename.startswith('MTA_Turnstile'): # reading file and writing each row in a dict with open(os.path.join('data', filename), newline='') as csvfile: mtareader = csv.reader(csvfile, delimiter=',') next(mtareader) for row in mtareader: key = (row[0], row[1], row[2], row[3]) value = [row[4], row[5], row[6], row[7], row[8], row[9], row[10].rstrip()] if key in turnstile: turnstile[key].append(value) else: turnstile[key] = [value] # test value for dict test = ('A002','R051','02-00-00','59 ST') turnstile[test]#[:2] """ Explanation: Field Description C/A = Control Area (A002) UNIT = Remote Unit for a station (R051) SCP = Subunit Channel Position represents an specific address for a device (02-00-00) STATION = Represents the station name the device is located at LINENAME = Represents all train lines that can be boarded at this station. Normally lines are represented by one character. LINENAME 456NQR repersents train server for 4, 5, 6, N, Q, and R trains. DIVISION = Represents the Line originally the station belonged to BMT, IRT, or IND DATE = Represents the date (MM-DD-YY) TIME = Represents the time (hh:mm:ss) for a scheduled audit event DESC = Represent the "REGULAR" scheduled audit event (Normally occurs every 4 hours) Audits may occur more that 4 hours due to planning, or troubleshooting activities. Additionally, there may be a "RECOVR AUD" entry: This refers to a missed audit that was recovered. ENTRIES = The comulative entry register value for a device EXIST = The cumulative exit register value for a device End of explanation """ import numpy as np import datetime from dateutil.parser import parse # With respect to the solutions I converted the cumulative entries in the number of entries in the period # That's ok I think since it is required below to do so... 
turnstile_timeseries = {} # looping through each key in dict, parsing the date and calculating the difference between previous and current count for key in turnstile: prev = np.nan value = [] for el in turnstile[key]: value.append([parse(el[2] + ' ' + el[3]), int(el[5]) - prev]) prev = int(el[5]) if key in turnstile_timeseries: turnstile_timeseries[key].append(value) else: turnstile_timeseries[key] = value turnstile_timeseries[test]#[:5] # ('R305', 'R206', '01-00-00','125 ST') """ Explanation: Exercise 2 Let's turn this into a time series. For each key (basically the control area, unit, device address and station of a specific turnstile), have a list again, but let the list be comprised of just the point in time and the cumulative count of entries. This basically means keeping only the date, time, and entries fields in each list. You can convert the date and time into datetime objects -- That is a python class that represents a point in time. You can combine the date and time fields into a string and use the dateutil module to convert it into a datetime object. Your new dict should look something like { ('A002','R051','02-00-00','LEXINGTON AVE'): [ [datetime.datetime(2013, 3, 2, 3, 0), 3788], [datetime.datetime(2013, 3, 2, 7, 0), 2585], [datetime.datetime(2013, 3, 2, 12, 0), 10653], [datetime.datetime(2013, 3, 2, 17, 0), 11016], [datetime.datetime(2013, 3, 2, 23, 0), 10666], [datetime.datetime(2013, 3, 3, 3, 0), 10814], [datetime.datetime(2013, 3, 3, 7, 0), 10229], ... ], .... 
} End of explanation """ # In the solutions there's a check for abnormal values, I added it in the exercises below # because I found out about the problem later in the analysis turnstile_daily = {} # looping through each key in the timeseries, tracking if the date change while cumulating partial counts for key in turnstile_timeseries: value = [] prev_date = '' daily_entries = 0 for el in turnstile_timeseries[key]: curr_date = el[0].date() daily_entries += el[1] # if the current date differs from the previous I write the value in the dict and reset the other data # I check that the date isn't empty to avoid writing the initial values for each key if prev_date != curr_date: if prev_date != '': value.append([prev_date, daily_entries]) daily_entries = 0 prev_date = curr_date # I write the last value of the loop in each case, this is the closing value of the period value.append([prev_date, daily_entries]) if key in turnstile_daily: turnstile_daily[key].append(value) else: turnstile_daily[key] = value turnstile_daily[test] """ Explanation: Exercise 3 These counts are cumulative every n hours. We want total daily entries. Now make it that we again have the same keys, but now we have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day. End of explanation """ import matplotlib.pyplot as plt %matplotlib inline # using list comprehension, there are other ways such as dict.keys() and dict.items() dates = [el[0] for el in turnstile_daily[test]] counts = [el[1] for el in turnstile_daily[test]] fig = plt.figure(figsize=(14, 5)) ax = plt.axes() ax.plot(dates, counts) plt.grid('on'); """ Explanation: Exercise 4 We will plot the daily time series for a turnstile. In ipython notebook, add this to the beginning of your next cell: %matplotlib inline This will make your matplotlib graphs integrate nicely with the notebook. 
To plot the time series, import matplotlib with import matplotlib.pyplot as plt Take the list of [(date1, count1), (date2, count2), ...], for the turnstile and turn it into two lists: dates and counts. This should plot it: plt.figure(figsize=(10,3)) plt.plot(dates,counts) End of explanation """ temp = {} # for each key I form the new key and check if it's already in the new dict # I append the date in this temp dict to make it easier to sum the values # then I create a new dict with the required keys for key in turnstile_daily: new_key = list(key[0:2]) + list(key[-1:]) for el in turnstile_daily[key]: # setting single negative values to 0: # possible causes: # strange things in data such as totals that lessen each hour going forward # also setting single values over 10.000.000 to 0 to avoid integer overflow: # possible causes: # data recovery value = np.int64(el[1]) if value < 0 or value > 10000000: value = 0 # Maybe nan is a better choice... if tuple(new_key + [el[0]]) in temp: temp[tuple(new_key + [el[0]])] += value else: temp[tuple(new_key + [el[0]])] = value ca_unit_station = {} for key in temp: new_key = key[0:3] date = key[-1] if new_key in ca_unit_station: ca_unit_station[new_key].append([date, temp[key]]) else: ca_unit_station[new_key] = [[date, temp[key]]] ca_unit_station[('R305', 'R206', '125 ST')] """ Explanation: Exercise 5 So far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. There are some ControlArea/Unit/Station groups that have a single turnstile, but most have multiple turnstilea-- same value for the C/A, UNIT and STATION columns, different values for the SCP column. We want to combine the numbers together -- for each ControlArea/UNIT/STATION combo, for each day, add the counts from each turnstile belonging to that combo. 
End of explanation """ temp = {} # for each key I form the new key and check if it's already in the new dict # I append the date in this temp dict to make it easier to sum the values # then I create a new dict with the required keys for key in turnstile_daily: new_key = key[-1] for el in turnstile_daily[key]: # setting single negative values to 0: # possible causes: # strange things in data such as totals that lessen each hour going forward # also setting single values over 10.000.000 to 0 to avoid integer overflow: # possible causes: # data recovery value = np.int64(el[1]) if value < 0 or value > 10000000: value = 0 if (new_key, el[0]) in temp: temp[(new_key, el[0])] += value else: temp[(new_key, el[0])] = value station = {} for key in temp: new_key = key[0] date = key[-1] if new_key in station: station[new_key].append([date, temp[key]]) else: station[new_key] = [[date, temp[key]]] station['59 ST'] """ Explanation: Exercise 6 Similarly, combine everything in each station, and come up with a time series of [(date1, count1),(date2,count2),...] type of time series for each STATION, by adding up all the turnstiles in a station. End of explanation """ test_station = '59 ST' dates = [el[0] for el in station[test_station]] counts = [el[1] for el in station[test_station]] fig = plt.figure(figsize=(14, 5)) ax = plt.axes() ax.plot(dates, counts) plt.grid('on'); """ Explanation: Exercise 7 Plot the time series for a station End of explanation """ fig = plt.figure(figsize=(16, 6)) ax = plt.axes() n = len(station[test_station]) # creating a list with all the counts for the station all_counts = [el[1] for el in station[test_station]] # splitting counts every 7 values to get weekly data for i in range(int(np.floor(n/7))): ax.plot(all_counts[i*7: 7 + i*7]) ax.set_xticklabels(['', 'Saturday', 'Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday']) plt.grid('on'); """ Explanation: Exercise 8 Make one list of counts for one week for one station. 
Monday's count, Tuesday's count, etc., so it's a list of 7 counts. Make the same list for another week, and another week, and another week. Call plt.plot(week_count_list) for every week_count_list you created this way. You should get a rainbow plot of weekly commute numbers on top of each other.
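One way to sketch the chunking step, with dummy numbers standing in for a station's real daily counts: slice the flat list into consecutive runs of seven, dropping any incomplete trailing week.

```python
# Split a flat list of daily counts into consecutive 7-day weeks.
all_counts = list(range(1, 32))  # dummy data: 31 "daily" counts

# Stop before the leftover days so every chunk has exactly 7 entries.
weeks = [all_counts[i:i + 7]
         for i in range(0, len(all_counts) - len(all_counts) % 7, 7)]

print(len(weeks))  # 4
print(weeks[0])    # [1, 2, 3, 4, 5, 6, 7]
```

Plotting each element of weeks in the same axes gives the rainbow overlay described above.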
karlstroetmann/Artificial-Intelligence
Python/7 Neural Networks/Neural-Network-Keras.ipynb
gpl-2.0
import gzip import pickle import numpy as np import keras import tensorflow as tf """ Explanation: Building a Neural Network with Keras End of explanation """ %env KMP_DUPLICATE_LIB_OK=TRUE """ Explanation: The following magic command is necessary to prevent the Python kernel to die because of linkage problems. End of explanation """ def vectorized_result(d): e = np.zeros((10, ), dtype=np.float32) e[d] = 1.0 return e """ Explanation: The function $\texttt{vectorized_result}(d)$ converts the digit $d \in {0,\cdots,9}$ and returns a NumPy vector $\mathbf{x}$ of shape $(10, 1)$ such that $$ \mathbf{x}[i] = \left{ \begin{array}{ll} 1 & \mbox{if $i = j$;} \ 0 & \mbox{otherwise.} \end{array} \right. $$ This function is used to convert a digit $d$ into the expected output of a neural network that has an output unit for every digit. End of explanation """ def load_data(): with gzip.open('../mnist.pkl.gz', 'rb') as f: train, validate, test = pickle.load(f, encoding="latin1") X_train = np.array([np.reshape(x, (784, )) for x in train[0]]) X_test = np.array([np.reshape(x, (784, )) for x in test [0]]) Y_train = np.array([vectorized_result(y) for y in train[1]]) Y_test = np.array([vectorized_result(y) for y in test [1]]) return (X_train, X_test, Y_train, Y_test) X_train, X_test, Y_train, Y_test = load_data() """ Explanation: The function $\texttt{load_data}()$ returns a pair of the form $$ (\texttt{training_data}, \texttt{test_data}) $$ where <ul> <li> $\texttt{training_data}$ is a list containing 60,000 pairs $(\textbf{x}, \textbf{y})$ s.t. $\textbf{x}$ is a 784-dimensional `numpy.ndarray` containing the input image and $\textbf{y}$ is a 10-dimensional `numpy.ndarray` corresponding to the correct digit for x.</li> <li> $\texttt{test_data}$ is a list containing 10,000 pairs $(\textbf{x}, y)$. In each case, $\textbf{x}$ is a 784-dimensional `numpy.ndarray` containing the input image, and $y$ is the corresponding digit value. 
</ul> End of explanation """ X_train.shape, X_test.shape, Y_train.shape, Y_test.shape """ Explanation: Let us see what we have read: End of explanation """ model = keras.models.Sequential() model.add(keras.layers.Dense( 80, activation='relu', input_dim=784)) model.add(keras.layers.Dense( 40, activation='relu' )) model.add(keras.layers.Dense( 40, activation='relu' )) model.add(keras.layers.Dense( 10, activation='softmax' )) model.compile(loss = 'categorical_crossentropy', optimizer = tf.keras.optimizers.SGD(lr=0.3), metrics = ['accuracy']) model.summary() %%time history = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=30, batch_size=100, verbose=1) """ Explanation: Below, we create a neural network with two hidden layers. - The first hidden layer has 60 nodes and uses the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">ReLU function</a> as activation function. - The second hidden layer uses 30 nodes and also uses the ReLu function. - The output layer uses the <a href="https://en.wikipedia.org/wiki/Softmax_function">softmax function</a> as activation function. This function is defined as follows: $$ \sigma(\mathbf{z})i := \frac{e^{z_i}}{\sum\limits{d=0}^{9} e^{z_d}} $$ Here, $N$ is the number of output nodes and $z_i$ is the sum of the inputs of the $i$-th output neuron. This function guarantees that the outputs of the 10 output nodes can be interpreted as probabilities, since there sum is equal to $1$. - The <em style="color:blue">loss function</em> used is the <em style="color:blue">cross-entropy</em>. If a neuron outputs the value $a$, when it should output the value $y \in {0,1}$, the cross entropy cost of this neuron is defined as $$ C(a, y) := - y \cdot \ln(a) - (1-y)\cdot \ln(1-a). $$ - The cost function is minimized using stochastic gradient descent with a learning rate of $0.3$. End of explanation """
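As a quick shape check (dummy arrays, not the real MNIST data): an input is a 28×28 image flattened to 784 values, and a training label is a 10-dimensional one-hot vector like the one vectorized_result builds.

```python
import numpy as np

# A dummy 28x28 "image" flattened into a 784-dimensional vector.
image = np.random.rand(28, 28).astype(np.float32).reshape(784,)

# A one-hot label for the digit 7, mirroring vectorized_result(7).
label = np.zeros((10,), dtype=np.float32)
label[7] = 1.0

print(image.shape)  # (784,)
print(label.sum())  # 1.0
```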
BorisPolonsky/LearningTensorFlow
RNN101/Customized RNN.ipynb
mit
import tensorflow as tf import numpy as np """ Explanation: Customized RNN Brief Learning to define operations in rnn cells under TensorFlow API r1.3. End of explanation """ class MyRnnCell(tf.nn.rnn_cell.RNNCell): def __init__(self, state_size, dtype): self._state_size = state_size self._dtype = dtype self._W_xh = tf.get_variable(shape=[self._state_size, self._state_size], dtype=self._dtype, name="W_xh", initializer=tf.truncated_normal_initializer()) self._W_hh = tf.get_variable(shape=[self._state_size, self._state_size], dtype=self._dtype, name="W_hh", initializer=tf.truncated_normal_initializer()) self._W_ho = tf.get_variable(shape=[self._state_size, self._state_size], dtype=self._dtype, name="W_ho", initializer=tf.truncated_normal_initializer()) self._b_o = tf.get_variable(shape=[self._state_size], dtype=self._dtype, name="b_o", initializer=tf.truncated_normal_initializer()) def __call__(self, _input, state, scope=None): new_state = tf.tanh(tf.matmul(_input, self._W_xh)+tf.matmul(state, self._W_hh)) new_output = tf.tanh(tf.matmul(new_state, self._W_ho)+self._b_o) return new_output, new_state @property def output_size(self): return self._state_size @property def state_size(self): return self._state_size """ Explanation: Define MyRnnCell The following property/methods should be correctly defined for an RNN cell. 
* __call__ (method)
* output_size (property)
* state_size (property)
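To see why these members matter: tf.nn.dynamic_rnn repeatedly invokes the cell as output, state = cell(input_t, state), using state_size to build the initial zero state. Below is a framework-free NumPy sketch of the same recurrence that MyRnnCell above implements; the weights are random and the inputs are dummies, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
state_size = 2

# Random weights standing in for W_xh, W_hh, W_ho and b_o above.
W_xh = rng.standard_normal((state_size, state_size))
W_hh = rng.standard_normal((state_size, state_size))
W_ho = rng.standard_normal((state_size, state_size))
b_o = rng.standard_normal(state_size)

def cell(x, state):
    """One step: new state from input + old state, output from new state."""
    new_state = np.tanh(x @ W_xh + state @ W_hh)
    return np.tanh(new_state @ W_ho + b_o), new_state

# dynamic_rnn conceptually performs this loop over the time steps:
state = np.zeros(state_size)
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    output, state = cell(x, state)
print(output.shape)  # (2,)
```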
charlesll/RamPy
examples/Mixing_spectra.ipynb
gpl-2.0
%matplotlib inline import numpy as np import matplotlib.pyplot as plt import rampy as rp """ Explanation: Example of the mixing_sp() function Author: Charles Le Losq This function allows one to mix two endmembers spectra, $ref1$ and $ref2$, to an observed one $obs$: $obs = ref1 * F1 + ref2 * (1-F1)$ . The calculation is done with performing least absolute regression, which presents advantages compared to least squares to fit problems with outliers as well as non-Gaussian character (see wikipedia for instance). End of explanation """ x = np.arange(0,100,1.0) # a dummy x axis ref1 = 50.0*np.exp(-1/2*((x-40)/20)**2) + np.random.randn(len(x)) # a gaussian with added noise ref2 = 70.0*np.exp(-1/2*((x-60)/15)**2) + np.random.randn(len(x)) # a gaussian with added noise plt.figure() plt.plot(x,ref1,label="ref1") plt.plot(x,ref2,label="ref2") plt.xlabel("X") plt.ylabel("Y") plt.legend() """ Explanation: Problem setting We will setup a simple problem in which we mix two Gaussian peaks in different ratios. The code below is going to create those peaks, and to plot them for reference. End of explanation """ F1_true = np.array([0.80,0.60,0.40,0.20]) obs = np.dot(ref1.reshape(-1,1),F1_true.reshape(1,-1)) + np.dot(ref2.reshape(-1,1),(1-F1_true.reshape(1,-1))) plt.figure() plt.plot(x,obs) plt.xlabel("X") plt.ylabel("Y") plt.title("Observed signals") """ Explanation: We now create 4 intermediate $obs$ signals, with $F1$ = 20%,40%,60% and 80% of ref1. End of explanation """ F1_meas = rp.mixing_sp(obs,ref1,ref2) plt.figure() plt.plot(F1_true,F1_meas,'ro',label="Measurements") plt.plot([0,1],[0,1],'k-',label="1:1 line") plt.xlabel("True $F1$ value") plt.ylabel("Determined $F1$ value") plt.legend() """ Explanation: Now we can use rp.mixing_sp() to retrieve $F1$. We suppose here that we have some knowledge of $ref1$ and $ref2$. End of explanation """
InsightSoftwareConsortium/SimpleITK-Notebooks
Python/64_Registration_Memory_Time_Tradeoff.ipynb
apache-2.0
import SimpleITK as sitk import numpy as np %matplotlib inline import matplotlib.pyplot as plt # utility method that either downloads data from the Girder repository or # if already downloaded returns the file name for reading from disk (cached data) %run update_path_to_download_script from downloaddata import fetch_data as fdata import registration_utilities as ru from ipywidgets import interact, fixed def register_images(fixed_image, moving_image, initial_transform, interpolator): registration_method = sitk.ImageRegistrationMethod() registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50) registration_method.SetMetricSamplingStrategy(registration_method.REGULAR) registration_method.SetMetricSamplingPercentage(0.01) registration_method.SetInterpolator(interpolator) registration_method.SetOptimizerAsGradientDescent( learningRate=1.0, numberOfIterations=1000 ) registration_method.SetOptimizerScalesFromPhysicalShift() registration_method.SetInitialTransform(initial_transform, inPlace=False) final_transform = registration_method.Execute(fixed_image, moving_image) return (final_transform, registration_method.GetOptimizerStopConditionDescription()) """ Explanation: Registration: Memory-Time Trade-off <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F64_Registration_Memory_Time_Tradeoff.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a> When developing a registration algorithm or when selecting parameter value settings for an existing algorithm our choices are dictated by two, often opposing, constraints: <ul> <li>Required accuracy.</li> <li>Allotted time.</li> </ul> As the goal of registration is to align multiple data elements into the same coordinate system, it is only natural that the primary focus is on accuracy. In most cases the reported accuracy is obtained without constraining the algorithm's execution time. 
Don't forget to provide the running times even if they are not critical for your particular application as they may be critical for others. With regard to the emphasis on execution time, on one end of the spectrum we have longitudinal studies where time constraints are relatively loose. In this setting a registration taking an hour may be perfectly acceptable. At the other end of the spectrum we have intra-operative registration. In this setting, registration is expected to complete within seconds or minutes. The underlying reasons for the tight timing constraints in this setting have to do with the detrimental effects of prolonged anesthesia and with the increased costs of operating room time. While short execution times are important, simply completing the registration on time without sufficient accuracy is also unacceptable. This notebook illustrates a straightforward approach for reducing the computational complexity of registration for intra-operative use via preprocessing and increased memory usage, a case of the memory-time trade-off. The computational cost of registration is primarily associated with interpolation, required for evaluating the similarity metric. Ideally we would like to use the fastest possible interpolation method, nearest neighbor. Unfortunately, nearest neighbor interpolation most often yields sub-optimal results. A straightforward solution is to pre-operatively create a super-sampled version of the moving-image using higher order interpolation*. We then perform registration using the super-sampled image, with nearest neighbor interpolation. Tallying up time and memory usage we see that: <table> <tr><td></td> <td><b>time</b></td><td><b>memory</b></td></tr> <tr><td><b>pre-operative</b></td> <td>increase</td><td>increase</td></tr> <tr><td><b>intra-operative</b></td> <td>decrease</td><td>increase</td></tr> </table> <br><br> <font size="-1">*A better approach is to use single image super resolution techniques such as the one described in A. 
Rueda, N. Malpica, E. Romero, "Single-image super-resolution of brain MR images using overcomplete dictionaries", <i>Med Image Anal.</i>, 17(1):113-132, 2013.</font>
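The memory side of this trade-off is easy to quantify in plain Python: resampling to a finer isotropic spacing multiplies each axis size by old_spacing / new_spacing, and memory grows by the product of those per-axis factors (same voxel type assumed). The spacings and sizes below are made up for illustration, not read from the RIRE volumes.

```python
# Hypothetical anisotropic volume resampled to 1 mm isotropic voxels.
original_spacing = (1.25, 1.25, 4.0)  # mm
original_size = (256, 256, 26)
new_spacing = (1.0, 1.0, 1.0)

resampled_size = [int(sp / ns * sz)
                  for sp, sz, ns in zip(original_spacing, original_size, new_spacing)]

# Memory ratio is the product of per-axis size ratios.
ratio = 1.0
for old, new in zip(original_size, resampled_size):
    ratio *= new / old

print(resampled_size)   # [320, 320, 104]
print(round(ratio, 2))  # 6.25
```

This is exactly the arithmetic the resampling cell below performs with the real moving image.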
original_size = moving_image.GetSize() original_spacing = moving_image.GetSpacing() resampled_image_size = [ int(spacing / new_s * size) for spacing, size, new_s in zip(original_spacing, original_size, new_spacing) ] resampled_moving_image = sitk.Image(resampled_image_size, moving_image.GetPixelID()) resampled_moving_image.SetSpacing(new_spacing) resampled_moving_image.SetOrigin(moving_image.GetOrigin()) resampled_moving_image.SetDirection(moving_image.GetDirection()) # Resample original image using identity transform and the BSpline interpolator. resample = sitk.ResampleImageFilter() resample.SetReferenceImage(resampled_moving_image) resample.SetInterpolator(sitk.sitkBSpline) resample.SetTransform(sitk.Transform()) resampled_moving_image = resample.Execute(moving_image) print(f"Original image size and spacing: {original_size} {original_spacing}") print( f"Resampled image size and spacing: {resampled_moving_image.GetSize()} {resampled_moving_image.GetSpacing()}" ) print( f"Memory ratio: 1 : {(np.array(resampled_image_size)/np.array(original_size).astype(float)).prod()}" ) """ Explanation: Invest time and memory in exchange for future time savings We now resample our moving image to a finer spatial resolution. End of explanation """ initial_transform = sitk.CenteredTransformInitializer( fixed_image, moving_image, sitk.Euler3DTransform(), sitk.CenteredTransformInitializerFilter.GEOMETRY, ) """ Explanation: Another option for resampling an image, without any transformation, is to use the ExpandImageFilter or in its functional form SimpleITK::Expand. This filter accepts the interpolation method and an integral expansion factor. This is less flexible than the resample filter as we have less control over the resulting image's spacing. 
On the other hand this requires less effort from the developer, a single line of code as compared to the cell above: resampled_moving_image = sitk.Expand(moving_image, [int(original_s/new_s + 0.5) for original_s, new_s in zip(original_spacing, new_spacing)], sitk.sitkBSpline) Registration Initial Alignment We will use the same initial alignment for both registrations. End of explanation """ %%timeit -r1 -n1 # The arguments to the timeit magic specify that this cell should only be run once. # We define this variable as global so that it is accessible outside of the cell (timeit wraps the code in the cell # making all variables local, unless explicitly declared global). global original_resolution_errors final_transform, optimizer_termination = register_images( fixed_image, moving_image, initial_transform, sitk.sitkLinear ) ( final_errors_mean, final_errors_std, _, final_errors_max, original_resolution_errors, ) = ru.registration_errors(final_transform, fixed_points, moving_points) print(optimizer_termination) print( f"After registration, errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}" ) """ Explanation: Original Resolution For this registration we use the original resolution and linear interpolation. End of explanation """ %%timeit -r1 -n1 # The arguments to the timeit magic specify that this cell should only be run once. # We define this variable as global so that it is accessible outside of the cell (timeit wraps the code in the cell # making all variables local, unless explicitly declared global). 
global resampled_resolution_errors final_transform, optimizer_termination = register_images( fixed_image, resampled_moving_image, initial_transform, sitk.sitkNearestNeighbor ) ( final_errors_mean, final_errors_std, _, final_errors_max, resampled_resolution_errors, ) = ru.registration_errors(final_transform, fixed_points, moving_points) print(optimizer_termination) print( f"After registration, errors in millimeters, mean(std): {final_errors_mean:.2f}({final_errors_std:.2f}), max: {final_errors_max:.2f}" ) """ Explanation: Higher Resolution For this registration we use the higher resolution image and nearest neighbor interpolation. End of explanation """ plt.hist( original_resolution_errors, bins=20, alpha=0.5, label="original resolution", color="blue", ) plt.hist( resampled_resolution_errors, bins=20, alpha=0.5, label="higher resolution", color="green", ) plt.legend() plt.title("TRE histogram"); """ Explanation: Compare the error distributions To fairly compare the two registration above we look at their running times (see results above) and their error distributions (plotted below). End of explanation """
ggljzr/mi-ddw
Task 3 - Text Mining/task3.ipynb
mit
import nltk import numpy as np import wikipedia import re """ Explanation: Text mining In this task we will use nltk package to recognize named entities and classify in a given text (in this case article about American Revolution from Wikipedia). nltk.ne_chunk function can be used for both recognition and classification of named entities. We will aslo implement custom NER function to recognize entities, and custom function to classify named entities using their Wikipedia articles. End of explanation """ import warnings warnings.filterwarnings('ignore') """ Explanation: Suppress wikipedia package warnings. End of explanation """ def count_entites(entity, text): s = entity if type(entity) is tuple: s = entity[0] return len(re.findall(s, text)) def get_top_n(entities, text, n): a = [ (e, count_entites(e, text)) for e in entities] a.sort(key=lambda x: x[1], reverse=True) return a[0:n] # For a list of entities found by nltk.ne_chunks: # returns (entity, label) if it is a single word or # concatenates multiple word named entities into single string def get_entity(entity): if isinstance(entity, tuple) and entity[1][:2] == 'NE': return entity if isinstance(entity, nltk.tree.Tree): text = ' '.join([word for word, tag in entity.leaves()]) return (text, entity.label()) return None """ Explanation: Helper functions to process output of nltk.ne_chunk and to count frequency of named entities in a given text. End of explanation """ # returns list of named entities in a form [(entity_text, entity_label), ...] def extract_entities(chunk): data = [] for entity in chunk: d = get_entity(entity) if d is not None and d[0] not in [e[0] for e in data]: data.append(d) return data """ Explanation: Since nltk.ne_chunks tends to put same named entities into more classes (like 'American' : 'ORGANIZATION' and 'American' : 'GPE'), we would want to filter these duplicities. 
End of explanation """ def custom_NER(tagged): entities = [] entity = [] for word in tagged: if word[1][:2] == 'NN' or (entity and word[1][:2] == 'IN'): entity.append(word) else: if entity and entity[-1][1].startswith('IN'): entity.pop() if entity: s = ' '.join(e[0] for e in entity) if s not in entities and s[0].isupper() and len(s) > 1: entities.append(s) entity = [] return entities """ Explanation: Our custom NER functio from example here. End of explanation """ text = None with open('text', 'r') as f: text = f.read() text = re.sub(r'\[[0-9]*\]', '', text) """ Explanation: Loading processed article, approximately 500 sentences. Regex substitution removes reference links (e.g. [12]) End of explanation """ tokens = nltk.word_tokenize(text) tagged = nltk.pos_tag(tokens) ne_chunked = nltk.ne_chunk(tagged, binary=False) ex = extract_entities(ne_chunked) ex_custom = custom_NER(tagged) top_ex = get_top_n(ex, text, 20) top_ex_custom = get_top_n(ex_custom, text, 20) print('ne_chunked:') for e in top_ex: print('{} count: {}'.format(e[0], e[1])) print() print('custom NER:') for e in top_ex_custom: print('{} count: {}'.format(e[0], e[1])) """ Explanation: Now we try to recognize entities with both nltk.ne_chunk and our custom_NER function and print 10 most frequent entities. Yielded results seem to be fairly similar. nltk.ne_chunk function also added basic classification tags. 
End of explanation """ def get_noun_phrase(entity, sentence): t = nltk.pos_tag([word for word in nltk.word_tokenize(sentence)]) phrase = [] stage = 0 for word in t: if word[0] in ('is', 'was', 'were', 'are', 'refers') and stage == 0: stage = 1 continue elif stage == 1: if word[1] in ('NN', 'JJ', 'VBD', 'CD', 'NNP', 'NNPS', 'RBS', 'IN', 'NNS'): phrase.append(word) elif word[1] in ('DT', ',', 'CC', 'TO', 'POS'): continue else: break if len(phrase) > 1 and phrase[-1][1] == 'IN': phrase.pop() phrase = ' '.join([ word[0] for word in phrase ]) if phrase == '': phrase = 'Thing' return {entity : phrase} def get_wiki_desc(entity, wiki='en'): wikipedia.set_lang(wiki) try: fs = wikipedia.summary(entity, sentences=1) except wikipedia.DisambiguationError as e: fs = wikipedia.summary(e.options[0], sentences=1) except wikipedia.PageError: return {entity : 'Thing'} #fs = nltk.sent_tokenize(page.summary)[0] return get_noun_phrase(entity, fs) """ Explanation: Next we would want to do our own classification, using Wikipedia articles for each named entity. Idea is to find article matching entity string (for example 'America') and then create a noun phrase from its first sentence. When no suitable article or description is found, entity classification will be 'Thing'. End of explanation """ for entity in top_ex: print(get_wiki_desc(entity[0][0])) for entity in top_ex_custom: print(get_wiki_desc(entity[0])) """ Explanation: Obivously this classification is way more specific than tags used by nltk.ne_chunk. We can also see that both NER methods mistook common words for entities unrelated to the article (for example 'New'). Since custom_NER function relies on uppercase letters to recognize entities, this can be commonly caused by first words in sentences. The lack of description for entity 'America' is caused by simple way get_noun_phrase function constructs description. It looks for basic words like 'is', so more advanced language can throw it off. 
This could be fixed by searching the Simple English Wikipedia, or using it as a fallback when no suitable phrase is found on the normal English Wikipedia (for example, compare the article about the Americas on the simple and normal wiki). I also tried to search for a more general verb (present tense verb, tag 'VBZ'), but this yielded worse results. Another improvement could be simply expanding the verb list in get_noun_phrase with other suitable verbs.
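The fallback idea could be sketched like this. The lookup callables are stubbed out here because the real wikipedia calls hit the network, and classify_with_fallback is a made-up name for illustration.

```python
def classify_with_fallback(entity, lookup_en, lookup_simple):
    """Try the English wiki first, then Simple English, then give up with 'Thing'."""
    for lookup in (lookup_en, lookup_simple):
        try:
            phrase = lookup(entity)
        except Exception:  # stands in for DisambiguationError / PageError handling
            continue
        if phrase and phrase != "Thing":
            return phrase
    return "Thing"

# Stub lookups: the English wiki yields nothing useful for 'America',
# while the Simple English stub produces a usable noun phrase.
en = {"America": "Thing"}.get
simple = {"America": "continent of North and South America"}.get

print(classify_with_fallback("America", en, simple))
# continent of North and South America
```

Wired into get_wiki_desc, the two lookups would be the wiki='en' and wiki='simple' searches shown above.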
saudijack/unfpyboot
TestInstall/BootCampTestInstall.ipynb
mit
success = True # We'll use this to keep track of the various tests failures = [] try: import numpy as np import scipy print "numpy and scipy imported -- success!" except: success = False msg = "* There was a problem importing numpy or scipy. You will definitely need these!" print msg failures.append(msg) try: import matplotlib import matplotlib.pyplot as plt %matplotlib inline except: success = False msg = "* There was a problem importing matplotlib. You will definitely need this" failures.append(msg) """ Explanation: Welcome to the 2015 UNF-Physics Python BootCamp! Our objective here is to make sure that python is installed properly on your system, in particular all of the various modules that we will be using for our lectures. We therefore strongly recommend that you attempt to run this notebook well before the start of the BootCamp. If you see a "Success" message at the bottom, then you should be good to go! If not, please contact us at pythonbootcamp@bigbang.gsfc.nasa.gov (including the error message) and we will try to help you track down any problems. In order to "run" this notebook, click on the "Cell" tab along the top of this window, and select "Run All" End of explanation """ plt.plot([1, 2, 3], [1, 4, 9], "ro--") try: import pandas, PyQt4, enaml print "pandas, PyQt4, and enaml imported -- success!" except: success = False msg = "* There was a problem importing pandas, pyqt, or enaml. You will need these for Days 2 and 3." print msg failures.append(msg) try: import h5py from mpl_toolkits.basemap import Basemap print "h5py and Basemap imported -- success!" except: success = False msg = "* There was a problem with h5py and/or Basemap. You will need these for Day 2." failures.append(msg) """ Explanation: You should see a simple plot below the next cell. 
End of explanation """ # Basemap Test try: f = plt.figure(1, figsize=(14.0, 10.0)) f.suptitle("Basemap - First Map") f.text(0.05, 0.95, "Mollewide") f.subplots_adjust(left=0.05, right=0.95, top=0.80, bottom=0.05, wspace=0.2, hspace=0.4) f.add_subplot(1, 1, 1) b = Basemap(projection="moll", lon_0=0, resolution='c') b.drawcoastlines() b.drawparallels(np.arange( -90.0, 90.0, 20.0)) b.drawmeridians(np.arange(-180.0, 181.0, 20.0)) except: success = False msg = "* There was a problem creating a Basemap plot. You will need this for Day 2." failures.append(msg) if success: print """Congratulations! Your python environment seems to be working properly. We look forward to seeing you at the Boot Camp!""" elif failures: print """The following problems occurred: %s. Please contact us and we will try to help you fix things.""" % ("\n".join(failures)) else: print """There was a problem with your python environment -- please contact us and we will try to help you figure out what happened.""" """ Explanation: There should be a Basemap plot displayed below this cell. End of explanation """
dsevilla/jisbd17-nosql
talk.ipynb
mit
%load extra/utils/functions.py
ds(1,2)
ds(3)
yoda(u"A SQL vs. NoSQL war you must not start")
"""
Explanation: NoSQL Technologies -- Tutorial at JISBD 2017
All the material for this tutorial is available at https://github.com/dsevilla/jisbd17-nosql.
Diego Sevilla Ruiz, dsevilla@um.es.
End of explanation
"""
ds(4)
%%bash
sudo docker pull mongo
pip install --upgrade pymongo
!sudo docker run --rm -d --name mongo -p 27017:27017 mongo
import pymongo
from pymongo import MongoClient
client = MongoClient("localhost", 27017)
client
"""
Explanation: http://www.nosql-vs-sql.com/
End of explanation
"""
db = client.presentations
"""
Explanation: We create a presentations database:
End of explanation
"""
jisbd17 = db.jisbd17
jisbd17
jisbd17.insert_one({'_id' : 'jisbd17-000',
                    'title': 'blah',
                    'text' : '',
                    'image': None,
                    'references' : [{'type' : 'web', 'ref' : 'http://nosql-database.org'},
                                    {'type' : 'book', 'ref' : 'Sadalage, Fowler. NoSQL Distilled'}
                                   ],
                    'xref' : ['jisbd17-010', 'jisbd17-002'],
                    'notes': 'blah blah'
                   })
client.database_names()
DictTable(jisbd17.find_one())
"""
Explanation: And the jisbd17 collection:
End of explanation
"""
import os
import os.path
import glob
files = glob.glob(os.path.join('slides','slides-dir','*.png'))
"""
Explanation: I am going to add all the presentation's images to the database
First we look up all the files, and then we use the update_one() function to add or update the values in the database (we had already inserted partial information for jisbd17-000).
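The upsert behaviour of update_one(..., upsert=True) can be sketched without a running MongoDB — a plain-dict toy, where the upsert helper is a hypothetical name, not part of pymongo:

```python
# Toy model of update_one(filter, {'$set': fields}, upsert=True):
# update the matching document if it exists, otherwise insert a new one.
collection = {'jisbd17-000': {'title': 'blah'}}

def upsert(coll, _id, fields):
    """Merge `fields` into the document with key `_id`, creating it if needed."""
    coll.setdefault(_id, {}).update(fields)

upsert(collection, 'jisbd17-000', {'image': b'png-bytes'})   # updates the existing doc
upsert(collection, 'jisbd17-001', {'image': b'more-bytes'})  # inserts a new doc
print(sorted(collection))   # ['jisbd17-000', 'jisbd17-001']
```

This is exactly the behaviour we rely on below: the image fields are merged into documents that may or may not already exist.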
End of explanation
"""
from bson.binary import Binary
for file in files:
    img = load_img(file)
    img_to_thumbnail(img)
    slidename = os.path.basename(os.path.splitext(file)[0])
    jisbd17.update_one({'_id': slidename},
                       {'$set' : {'image': Binary(img_to_bytebuffer(img))}},
                       True)
for slide in jisbd17.find():
    print(slide['_id'], slide.get('title'))
slide0 = jisbd17.find_one({'_id': 'jisbd17-000'})
img_from_bytebuffer(slide0['image'])
"""
Explanation: Adding the images to the database...
End of explanation
"""
presentations = db.presentations
slides = [r['_id'] for r in jisbd17.find({'_id' : {'$regex' : '^jisbd17-'}},projection={'_id' : True}).sort('_id', 1)]
presentations.insert_one({'name' : 'Tecnologías NoSQL. JISBD 2017',
                          'slides' : slides
                         })
presentations.find_one()
yoda(u'Data modeling you must not do...')
inciso_slide = 9
ds(inciso_slide,3)
jisbd17.find_one({'_id': 'jisbd17-000'})
"""
Explanation: I add the JISBD 2017 presentation to the presentations collection.
End of explanation
"""
ds(13,7)
ds(20,5)
"""
Explanation: Introduction to NoSQL
End of explanation
"""
ds(25,6)
"""
Explanation: Scalability!
End of explanation
"""
ds(31,6)
ds(37)
"""
Explanation: Schemaless
End of explanation
"""
ds(38,11)
"""
Explanation: Data modeling in NoSQL
End of explanation
"""
ds(49,6)
"""
Explanation: Raw efficiency
End of explanation
"""
import re
def read_slides():
    in_slide = False
    slidetitle = ''
    slidetext = ''
    slidenum = 0
    with open('slides/slides.tex', 'r') as f:
        for line in f:
            # Remove comments
            line = line.split('%')[0]
            if not in_slide:
                if '\\begin{frame}' in line:
                    in_slide = True
            elif '\\frametitle' in line:
                q = re.search('\\\\frametitle{([^}]+)',line)
                slidetitle = q.group(1)
                continue
            elif '\\framebreak' in line or re.match('\\\\only<[^1]',line) or '\\end{frame}' in line:
                # Add the slide to the collection
                slideid = 'jisbd17-{:03d}'.format(slidenum)
                print(slideid)
                jisbd17.update_one({'_id': slideid},
                                   {'$set' : {'title': slidetitle,
                                              'text' : slidetext
                                             }},
                                   True)
                # Next
                slidetext = ''
                slidenum += 1
                if '\\end{frame}' in line:
                    in_slide = False
                    slidetitle = ''
            else:
                slidetext += line
# Call the function
read_slides()
"""
Explanation: Types of NoSQL Systems
MongoDB (documents)
The document database we will use as an example. One of the most widely used:
JSON document model (BSON, a binary encoding, used for efficiency)
Map-Reduce for database transformations and queries
Its own database-manipulation language, the so-called "aggregation" language (aggregate)
Supports sharding (distributing parts of the database across different nodes)
Supports replication (synchronized master-slave copies on different nodes)
Does not support ACID
Transactions happen at the DOCUMENT level
We will use pymongo from Python. To install it:
sudo pip install --upgrade pymongo
Text and title of the slides
Since the jisbd17 collection is already populated, we can update the documents to add the title and text of each slide. We will extract them from the slides.tex file.
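The "schemaless" point is easy to demonstrate without a server. Below is a plain-Python sketch — matches and find are hypothetical helpers, not pymongo API — showing that documents in one collection can have different fields and still be queried uniformly:

```python
def matches(doc, query):
    """True if every key/value pair of `query` appears in `doc`."""
    return all(doc.get(k) == v for k, v in query.items())

def find(collection, query):
    """A tiny imitation of MongoDB's find(): filter by partial match."""
    return [doc for doc in collection if matches(doc, query)]

# Three documents, three different shapes -- no schema constrains their fields.
collection = [
    {'_id': 'jisbd17-000', 'title': 'Intro', 'notes': 'blah'},
    {'_id': 'jisbd17-001', 'title': 'Intro', 'image': b'...'},
    {'_id': 'jisbd17-002', 'xref': ['jisbd17-000']},
]

print(find(collection, {'title': 'Intro'}))   # both 'Intro' slides match
```

A document missing the queried field simply fails to match; no error, no migration.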
End of explanation
"""
slides = jisbd17.find(filter={},projection={'text': True})
df = pd.DataFrame([len(s.get('text','')) for s in slides])
df.plot()
"""
Explanation: To use the mongo shell (JavaScript):
docker exec -it mongo mongo
Simple queries
Distribution of the text length of the slides.
End of explanation
"""
jisbd17.find_one({'text': {'$regex' : '[Mm]ongo'}})['_id']
"""
Explanation: The find() function offers a large number of ways to specify a query. Complex qualifiers can be used, such as:
$and
$or
$not
These qualifiers combine "objects", not values. On the other hand, there are other qualifiers that refer to values:
$lt (less than)
$lte (less than or equal)
$gt (greater than)
$gte (greater than or equal)
$regex (regular expression)
End of explanation
"""
jisbd17.find({'title' : 'jisbd17-001'}).explain()
"""
Explanation: It can also show the execution plan:
End of explanation
"""
jisbd17.create_index([('title', pymongo.HASHED)])
jisbd17.find({'title' : 'jisbd17-001'}).explain()
"""
Explanation: An index can be created if searching by that field is going to be critical. More indexes can be created, of types ASCENDING, DESCENDING, HASHED, and other geospatial ones. https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.create_index
End of explanation
"""
ds(59,9)
"""
Explanation: Map-Reduce
MongoDB includes two APIs to process and query documents: the Map-Reduce API and the aggregation API. We will look at Map-Reduce first.
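The Map-Reduce model itself can be sketched in a few lines of plain Python before using Mongo's JavaScript version — map_reduce here is a hypothetical in-memory helper, not the pymongo call:

```python
from collections import defaultdict

def map_reduce(docs, map_fn, reduce_fn):
    """Tiny in-memory Map-Reduce: group emitted (key, value) pairs, then reduce."""
    groups = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):        # "map" phase: emit pairs
            groups[key].append(value)
    return {key: reduce_fn(key, values)       # "reduce" phase: fold per key
            for key, values in groups.items()}

docs = [{'_id': 's0', 'text': 'abc'}, {'_id': 's1', 'text': 'de'}, {'_id': 's2'}]

# Same logic as the Mongo example that follows: key = text length, value = 1
result = map_reduce(docs,
                    map_fn=lambda d: [(len(d.get('text', '')), 1)],
                    reduce_fn=lambda key, values: sum(values))
print(result)   # {3: 1, 2: 1, 0: 1}
```

MongoDB runs the same two phases, but distributed across the collection and expressed as JavaScript functions.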
Manual: https://docs.mongodb.com/manual/aggregation/#map-reduce
Histogram of slide text length
With Map-Reduce we compute the text length of each slide, and the number of slides that have that text length.
End of explanation
"""
from bson.code import Code
map = Code(
'''function () {
       if ('text' in this)
           emit(this.text.length, 1)
       else
           emit(0,1)
   }''')
reduce = Code(
'''function (key, values) {
       return Array.sum(values);
   }''')
results = jisbd17.map_reduce(map, reduce, "myresults")
results = list(results.find())
results
"""
Explanation: As a plot:
End of explanation
"""
df = pd.DataFrame(data = [int(r['value']) for r in results],
                  index = [int(r['_id']) for r in results],
                  columns=['posts per length'])
df.plot(kind='bar',figsize=(30,10))
"""
Explanation: Or a histogram:
End of explanation
"""
df.hist()
"""
Explanation: Aggregation Framework
Aggregation framework: https://docs.mongodb.com/manual/reference/operator/aggregation/. And here is an interesting presentation on the topic: https://www.mongodb.com/presentations/aggregation-framework-0?jmp=docs&_ga=1.223708571.1466850754.1477658152
End of explanation
"""
list(jisbd17.aggregate(
    [
        {'$project' : { 'Id' : 1 }},
        {'$limit': 20}
    ]))
nposts_by_length = jisbd17.aggregate(
    [
        #{'$match': { 'text' : {'$regex': 'HBase'}}},
        {'$project': { 'text' : {'$ifNull' : ['$text', '']} }},
        {'$project' : {
            'id' : {'$strLenBytes': '$text'},
            'value' : {'$literal' : 1}
        }
        },
        {'$group' : {
            '_id' : '$id',
            'count' : {'$sum' : '$value'}
        }
        },
        {'$sort' : { '_id' : 1}}
    ])
list(nposts_by_length)
"""
Explanation: Simulating a JOIN: $lookup
The aggregation framework also introduced a construct equivalent to SQL's JOIN.
For example, we can show the titles of the referenced slides in addition to their identifiers:
End of explanation
"""
list(jisbd17.aggregate(
    [
        {'$lookup' : {
            "from": "jisbd17",
            "localField": "xref",
            "foreignField": "_id",
            "as": "xrefTitles"
        }},
        {'$project' : {
            '_id' : True,
            'xref' : True,
            'xrefTitles.title' : True
        }}
    ]))
"""
Explanation: HBase (wide-column)
I will use the HBase docker image from here: https://github.com/krejcmat/hadoop-hbase-docker, slightly modified.
To start the containers (one master and two "slave" nodes):
git clone https://github.com/dsevilla/hadoop-hbase-docker.git
cd hadoop-hbase-docker
./start-container.sh latest 2 # One master container and 2 slaves, simulating a three-node distributed cluster
# The containers start, and the shell logs into the master:
./configure-slaves.sh
./start-hadoop.sh
hbase-daemon.sh start thrift # Thrift server for external connections
./start-hbase.sh
Now we can connect to the database. Inside the container, running hbase shell gives us the shell again. In it, we can run queries, create tables, etc.:
status
# Create a table
# Put
# Simple queries
End of explanation
"""
%%bash
cd /tmp && git clone https://github.com/dsevilla/hadoop-hbase-docker.git
ds(84,11)
ds(97,2)
"""
Explanation: You can also connect remotely.
From Python, we will use the happybase package:
End of explanation
"""
!pip install --upgrade happybase
import happybase
happybase.__version__
host = '127.0.0.1'
hbasecon = happybase.Connection(host)
hbasecon.tables()
ds(103,3)
try:
    hbasecon.create_table(
        "jisbd17",
        {
            'slide': dict(bloom_filter_type='ROW',max_versions=1),
            'image' : dict(compression='GZ',max_versions=1),
            'text' : dict(compression='GZ',max_versions=1),
            'xref' : dict(bloom_filter_type='ROWCOL',max_versions=1)
        })
except:
    print ("Table jisbd17 already exists.")
    pass
hbasecon.tables()
"""
Explanation: Copying the jisbd17 table from Mongo
We will do it respecting the column families we created. In particular, the xref field is left aside for now; we will see an optimization for it later.
End of explanation
"""
h_jisbd17 = hbasecon.table('jisbd17')
with h_jisbd17.batch(batch_size=100) as b:
    for doc in jisbd17.find():
        b.put(doc['_id'], {
            'slide:title' : doc.get('title',''),
            'slide:notes' : doc.get('notes',''),
            'text:' : doc.get('text', ''),
            'image:' : str(doc.get('image',''))
        })
"""
Explanation: For the xref field we will use an optimization that HBase makes possible:
Rows can grow as much as needed, also in their number of columns
The ROWCOL Bloom filter makes searching for a particular column very efficient
IDEA: use the elements of the array as column names. This automatically turns that column family into an inverted index:
End of explanation
"""
with h_jisbd17.batch(batch_size=100) as b:
    for doc in jisbd17.find():
        if 'xref' in doc:
            for ref in doc['xref']:
                b.put(doc['_id'], { 'xref:'+ref : '' })
list(h_jisbd17.scan(columns=['xref']))
"""
Explanation: And finally the inverted index. It is very efficient because the ROWCOL Bloom filter was used for the xref column family.
End of explanation
"""
list(h_jisbd17.scan(columns=['xref:jisbd17-002']))
"""
Explanation: Finally, note that in HBase a scan is a waste of time. The inverse reference should be precomputed and stored in each slide; the lookup then becomes O(1).
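The precomputed inverse reference can be sketched in plain Python, no HBase involved — slides and add_ref are hypothetical names used only for illustration:

```python
# Keep forward and inverse references together at write time, so that
# "which slides reference X?" is a dictionary lookup, never a scan.
slides = {}

def add_ref(src, dst):
    slides.setdefault(src, {'refs': set(), 'ref_by': set()})['refs'].add(dst)
    slides.setdefault(dst, {'refs': set(), 'ref_by': set()})['ref_by'].add(src)

add_ref('jisbd17-001', 'jisbd17-002')
add_ref('jisbd17-005', 'jisbd17-002')

# O(1) lookup instead of scanning every row:
print(sorted(slides['jisbd17-002']['ref_by']))   # ['jisbd17-001', 'jisbd17-005']
```

The extra write per reference is the price paid for constant-time reverse lookups — the same trade-off the HBase xref column family makes.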
Fetching a row with happybase
End of explanation
"""
h_jisbd17.row('jisbd17-001')
"""
Explanation: Filter examples with happybase
End of explanation
"""
ds(114)
list(h_jisbd17.scan(filter="KeyOnlyFilter()"))
list(h_jisbd17.scan(filter="PrefixFilter('jisbd17-0')",limit=5))
list(h_jisbd17.scan(filter="ColumnPrefixFilter('t')"))
list(h_jisbd17.scan(filter="RowFilter(<,'binary:jisbd17-1')",limit=5))
list(h_jisbd17.scan(filter="SingleColumnValueFilter('slide', 'title', =,'binary:HBase')"))
"""
Explanation: Neo4j (graphs)
Neo4j's own web interface can also be used, at http://127.0.0.1:7474.
End of explanation
"""
%%bash
sudo docker pull neo4j
sudo docker run -d --rm --name neo4j -p 7474:7474 -p 7687:7687 --env NEO4J_AUTH=none neo4j
"""
Explanation: We are going to load the ipython-cypher extension so that we can send Cypher queries to Neo4j directly from the notebook. I started the Neo4j image without authentication, for local testing. We will use a Jupyter Notebook extension called ipython-cypher. It is already installed in the virtual machine; otherwise, it could be installed with:
pip install ipython-cypher
After that, every cell starting with %%cypher and every Python statement starting with %cypher will be sent to Neo4j for interpretation.
We will also use the py2neo library to create the graph:
pip install py2neo
End of explanation
"""
%%bash
pip install ipython-cypher
pip install py2neo
ds(135,3)
ds(139,2)
ds(148)
ds(150,4)
from py2neo import Graph
graph = Graph('http://localhost:7474/db/data/')
graph.delete_all()
"""
Explanation: We import all the slides from MongoDB:
End of explanation
"""
from py2neo import Node
for doc in jisbd17.find():
    node = Node("Slide",
                name = doc.get('_id'),
                title = doc.get('title',''),
                notes = doc.get('notes',''),
                text = doc.get('text', ''))
    graph.create(node)
graph.find_one('Slide', property_key='name', property_value='jisbd17-001')
from py2neo import NodeSelection
NodeSelection(graph,conditions=["_.name='jisbd17-001'"]).first()['title']
"""
Explanation: We will create the :NEXT relationship to indicate the following slide. It will be done now with py2neo, and later with ipython-cypher.
End of explanation
"""
from py2neo import Relationship
for i in range(jisbd17.count() - 1):
    slide_pre = NodeSelection(graph,conditions=[
        '_.name = \'jisbd17-{:03d}\''.format(i)]).first()
    slide_next = NodeSelection(graph).where(
        '_.name = \'jisbd17-{:03d}\''.format(i+1)).first()
    graph.create(Relationship(slide_pre, "NEXT", slide_next))
"""
Explanation: The Cypher language
End of explanation
"""
ds(154,7)
"""
Explanation: Ipython-cypher
End of explanation
"""
%load_ext cypher
%config CypherMagic.auto_html=False
%config CypherMagic.auto_pandas=True
%%cypher
match (n) return n;
%%cypher
match (n) return n.name;
"""
Explanation: We are going to add the xref relationships present in the slides. So far only a few had been added by hand. For the slides that have no references, I add one at random.
End of explanation
"""
import random
nslides = jisbd17.count()
for doc in jisbd17.find():
    for ref in doc.get('xref',['jisbd17-{:03d}'.format(random.randint(1,nslides))]):
        slide_from = doc['_id']
        slide_to = ref
        %cypher MATCH (f:Slide {name: {slide_from}}), (t:Slide {name: {slide_to}}) MERGE (f)-[:REF]->(t)
%config CypherMagic.auto_networkx=False
%config CypherMagic.auto_pandas=False
%%cypher
MATCH p=shortestPath(
  (s:Slide {name:"jisbd17-004"})-[*]->(r:Slide {name:"jisbd17-025"})
)
RETURN p
# Slide topics with regular expressions
import cypher
cypher.run("MATCH (n) RETURN n")
!sudo docker stop neo4j
!sudo docker stop mongo
"""
Explanation: Our research work
End of explanation
"""
ds(164,6)
"""
Explanation: Our research work
End of explanation
"""
smorton2/think-stats
code/chap11ex.ipynb
gpl-3.0
from __future__ import print_function, division %matplotlib inline import numpy as np import pandas as pd import random import thinkstats2 import thinkplot """ Explanation: Examples and Exercises from Think Stats, 2nd Edition http://thinkstats2.com Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation """ import first live, firsts, others = first.MakeFrames() """ Explanation: Multiple regression Let's load up the NSFG data again. End of explanation """ import statsmodels.formula.api as smf formula = 'totalwgt_lb ~ agepreg' model = smf.ols(formula, data=live) results = model.fit() results.summary() """ Explanation: Here's birth weight as a function of mother's age (which we saw in the previous chapter). End of explanation """ inter = results.params['Intercept'] slope = results.params['agepreg'] inter, slope """ Explanation: We can extract the parameters. End of explanation """ slope_pvalue = results.pvalues['agepreg'] slope_pvalue """ Explanation: And the p-value of the slope estimate. End of explanation """ results.rsquared """ Explanation: And the coefficient of determination. End of explanation """ diff_weight = firsts.totalwgt_lb.mean() - others.totalwgt_lb.mean() diff_weight """ Explanation: The difference in birth weight between first babies and others. End of explanation """ diff_age = firsts.agepreg.mean() - others.agepreg.mean() diff_age """ Explanation: The difference in age between mothers of first babies and others. End of explanation """ slope * diff_age """ Explanation: The age difference plausibly explains about half of the difference in weight. 
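That decomposition — OLS slope times the age gap approximating the weight gap — can be sanity-checked on simulated data. The sketch below uses made-up numbers, not the NSFG sample:

```python
import random
random.seed(0)

# Simulate: weight depends linearly on age; first-time mothers are younger.
ages_first = [random.gauss(23, 3) for _ in range(5000)]
ages_other = [random.gauss(27, 3) for _ in range(5000)]

def weight(age):
    return 6.0 + 0.02 * age + random.gauss(0, 0.5)

w_first = [weight(a) for a in ages_first]
w_other = [weight(a) for a in ages_other]

# OLS slope of weight on age over the pooled sample: cov(x, y) / var(x)
xs = ages_first + ages_other
ys = w_first + w_other
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))

diff_age = sum(ages_other) / len(ages_other) - sum(ages_first) / len(ages_first)
diff_weight = sum(w_other) / len(w_other) - sum(w_first) / len(w_first)
print(slope * diff_age, diff_weight)   # the two should be close
```

Because age is the only channel linking group and weight here, slope * diff_age recovers essentially all of diff_weight; in the NSFG data it recovers only about half, which is why age is a partial explanation.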
End of explanation
"""
live['isfirst'] = live.birthord == 1
formula = 'totalwgt_lb ~ isfirst'
results = smf.ols(formula, data=live).fit()
results.summary()
"""
Explanation: Running a single regression with a categorical variable, isfirst:
End of explanation
"""
formula = 'totalwgt_lb ~ isfirst + agepreg'
results = smf.ols(formula, data=live).fit()
results.summary()
"""
Explanation: Now finally running a multiple regression:
End of explanation
"""
live['agepreg2'] = live.agepreg**2
formula = 'totalwgt_lb ~ isfirst + agepreg + agepreg2'
results = smf.ols(formula, data=live).fit()
results.summary()
"""
Explanation: When we do that, the apparent effect of isfirst gets even smaller, and is no longer statistically significant. These results suggest that the apparent difference in weight between first babies and others might be explained by difference in mothers' ages, at least in part.
Data Mining
We can use join to combine variables from the pregnancy and respondent tables.
End of explanation
"""
import nsfg
live = live[live.prglngth>30]
resp = nsfg.ReadFemResp()
resp.index = resp.caseid
join = live.join(resp, on='caseid', rsuffix='_r')
"""
Explanation: And we can search for variables with explanatory power. Because we don't clean most of the variables, we are probably missing some good ones.
End of explanation
"""
def GoMining(df):
    """Searches for variables that predict birth weight.
df: DataFrame of pregnancy records
    returns: list of (rsquared, variable name) pairs
    """
    import patsy
    variables = []
    for name in df.columns:
        try:
            if df[name].var() < 1e-7:
                continue
            formula = 'totalwgt_lb ~ agepreg + ' + name
            model = smf.ols(formula, data=df)
            if model.nobs < len(df)/2:
                continue
            results = model.fit()
        except (ValueError, TypeError):
            continue
        except patsy.PatsyError:
            raise ValueError('Unable to parse formula: %s' % formula)
        variables.append((results.rsquared, name))
    return variables
variables = GoMining(join)
"""
Explanation: The following functions report the variables with the highest values of $R^2$.
End of explanation
"""
import re
def ReadVariables():
    """Reads Stata dictionary files for NSFG data.
    returns: DataFrame that maps variable names to descriptions
    """
    vars1 = thinkstats2.ReadStataDct('2002FemPreg.dct').variables
    vars2 = thinkstats2.ReadStataDct('2002FemResp.dct').variables
    all_vars = vars1.append(vars2)
    all_vars.index = all_vars.name
    return all_vars
def MiningReport(variables, n=30):
    """Prints variables with the highest R^2.
    variables: list of (R^2, variable name) pairs
    n: number of pairs to print
    """
    all_vars = ReadVariables()
    variables.sort(reverse=True)
    for r2, name in variables[:n]:
        key = re.sub('_r$', '', name)
        try:
            desc = all_vars.loc[key].desc
            if isinstance(desc, pd.Series):
                desc = desc[0]
            print(name, r2, desc)
        except KeyError:
            print(name, r2)
"""
Explanation: Some of the variables that do well are not useful for prediction because they are not known ahead of time.
End of explanation
"""
MiningReport(variables)
"""
Explanation: Combining the variables that seem to have the most explanatory power.
End of explanation """ y = np.array([0, 1, 0, 1]) x1 = np.array([0, 0, 0, 1]) x2 = np.array([0, 1, 1, 1]) """ Explanation: Logistic regression Example: suppose we are trying to predict y using explanatory variables x1 and x2. End of explanation """ beta = [-1.5, 2.8, 1.1] """ Explanation: According to the logit model the log odds for the $i$th element of $y$ is $\log o = \beta_0 + \beta_1 x_1 + \beta_2 x_2 $ So let's start with an arbitrary guess about the elements of $\beta$: End of explanation """ log_o = beta[0] + beta[1] * x1 + beta[2] * x2 log_o """ Explanation: Plugging in the model, we get log odds. End of explanation """ o = np.exp(log_o) o """ Explanation: Which we can convert to odds. End of explanation """ p = o / (o+1) p """ Explanation: And then convert to probabilities. End of explanation """ likes = np.where(y, p, 1-p) likes """ Explanation: The likelihoods of the actual outcomes are $p$ where $y$ is 1 and $1-p$ where $y$ is 0. End of explanation """ like = np.prod(likes) like """ Explanation: The likelihood of $y$ given $\beta$ is the product of likes: End of explanation """ import first live, firsts, others = first.MakeFrames() live = live[live.prglngth>30] live['boy'] = (live.babysex==1).astype(int) """ Explanation: Logistic regression works by searching for the values in $\beta$ that maximize like. Here's an example using variables in the NSFG respondent file to predict whether a baby will be a boy or a girl. End of explanation """ model = smf.logit('boy ~ agepreg', data=live) results = model.fit() results.summary() """ Explanation: The mother's age seems to have a small effect. End of explanation """ formula = 'boy ~ agepreg + hpagelb + birthord + C(race)' model = smf.logit(formula, data=live) results = model.fit() results.summary() """ Explanation: Here are the variables that seemed most promising. 
End of explanation """ endog = pd.DataFrame(model.endog, columns=[model.endog_names]) exog = pd.DataFrame(model.exog, columns=model.exog_names) """ Explanation: To make a prediction, we have to extract the exogenous and endogenous variables. End of explanation """ actual = endog['boy'] baseline = actual.mean() baseline """ Explanation: The baseline prediction strategy is to guess "boy". In that case, we're right almost 51% of the time. End of explanation """ predict = (results.predict() >= 0.5) true_pos = predict * actual true_neg = (1 - predict) * (1 - actual) sum(true_pos), sum(true_neg) """ Explanation: If we use the previous model, we can compute the number of predictions we get right. End of explanation """ acc = (sum(true_pos) + sum(true_neg)) / len(actual) acc """ Explanation: And the accuracy, which is slightly higher than the baseline. End of explanation """ columns = ['agepreg', 'hpagelb', 'birthord', 'race'] new = pd.DataFrame([[35, 39, 3, 2]], columns=columns) y = results.predict(new) y """ Explanation: To make a prediction for an individual, we have to get their information into a DataFrame. End of explanation """ import first live, firsts, others = first.MakeFrames() live = live[live.prglngth>30] """ Explanation: This person has a 51% chance of having a boy (according to the model). Exercises Exercise: Suppose one of your co-workers is expecting a baby and you are participating in an office pool to predict the date of birth. Assuming that bets are placed during the 30th week of pregnancy, what variables could you use to make the best prediction? You should limit yourself to variables that are known before the birth, and likely to be available to the people in the pool. 
End of explanation """ import statsmodels.formula.api as smf model = smf.ols('prglngth ~ birthord==1 + race==2 + nbrnaliv>1', data=live) results = model.fit() results.summary() """ Explanation: The following are the only variables I found that have a statistically significant effect on pregnancy length. End of explanation """ import regression join = regression.JoinFemResp(live) # Solution goes here # Solution goes here # Solution goes here """ Explanation: Exercise: The Trivers-Willard hypothesis suggests that for many mammals the sex ratio depends on “maternal condition”; that is, factors like the mother’s age, size, health, and social status. See https://en.wikipedia.org/wiki/Trivers-Willard_hypothesis Some studies have shown this effect among humans, but results are mixed. In this chapter we tested some variables related to these factors, but didn’t find any with a statistically significant effect on sex ratio. As an exercise, use a data mining approach to test the other variables in the pregnancy and respondent files. Can you find any factors with a substantial effect? End of explanation """ # Solution goes here # Solution goes here """ Explanation: Exercise: If the quantity you want to predict is a count, you can use Poisson regression, which is implemented in StatsModels with a function called poisson. It works the same way as ols and logit. As an exercise, let’s use it to predict how many children a woman has born; in the NSFG dataset, this variable is called numbabes. Suppose you meet a woman who is 35 years old, black, and a college graduate whose annual household income exceeds $75,000. How many children would you predict she has born? 
End of explanation """ # Solution goes here """ Explanation: Now we can predict the number of children for a woman who is 35 years old, black, and a college graduate whose annual household income exceeds $75,000 End of explanation """ # Solution goes here """ Explanation: Exercise: If the quantity you want to predict is categorical, you can use multinomial logistic regression, which is implemented in StatsModels with a function called mnlogit. As an exercise, let’s use it to guess whether a woman is married, cohabitating, widowed, divorced, separated, or never married; in the NSFG dataset, marital status is encoded in a variable called rmarital. Suppose you meet a woman who is 25 years old, white, and a high school graduate whose annual household income is about $45,000. What is the probability that she is married, cohabitating, etc? End of explanation """ # Solution goes here """ Explanation: Make a prediction for a woman who is 25 years old, white, and a high school graduate whose annual household income is about $45,000. End of explanation """
ozak/CompEcon
notebooks/IntroPython.ipynb
gpl-3.0
1+1-2 3*2 3**2 -1**2 3*(3-2) 3*3-2 """ Explanation: Introduction to <img src="https://www.python.org/static/community_logos/python-logo-inkscape.svg" alt="Python" width=200/> and <img src="https://ipython.org/_static/IPy_header.png" alt="IPython" width=250/> using <img src="https://raw.githubusercontent.com/adebar/awesome-jupyter/master/logo.png" alt="Jupyter" width=200/> Python is a powerful and easy to use programming language. It has a large community of developers and given its open source nature, you can find many solutions, scripts, and help all over the web. It is easy to learn and code, and faster than other high-level programming languages...and did I mention it is free because it is open-source IPython is a very powerful extension to Python that provides: Powerful interactive shells (terminal, Qt-based and Notebooks based on Jupyter). A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media. Support for interactive data visualization and use of GUI toolkits. Flexible, embeddable interpreters to load into your own projects. Easy to use, high performance tools for parallel computing. Jupyter is an open-source project that provides open-standards, and services for interactive computing across dozens of programming languages, including Python, R, Stata and many others used by economists. Getting Python, IPython, R, and Jupyter You can download and install Python and its packages for free for your computer from Python.org. While this is the official site, which offers the basic installer and you can try do add any packages you require yourself, a much easier approach, which is almost foolproof is to use Continuum Anaconda or Enthought Canopy. Both of these distributions offer academic licenses (Canopy), which allow you to use a larger set of packages. Similarly, you can download R from the r-project website. 
I personally have switched to using Continuum Anaconda since it makes installing all the packages and software I use much easier. You can follow the instructions below or, better yet, follow the instructions on the Computation Page of my Economic Growth and Comparative Development Course.
Installing (I)Python & Jupyter
The easiest and most convenient way to install a working version of IPython with all the required packages and tools is using Continuum's Anaconda Distribution. You can install following the instructions on that website, or you can just run this script (Mac/Linux). After installing the latest version of Anaconda, add the Anaconda/bin directory to your PATH variable.
To create an environment useful for these notebooks, in your terminal execute
bash
conda create --name GeoPython3env -c conda-forge -c r -c mro --override-channels python=3.9 georasters geopandas pandas spatialpandas statsmodels xlrd networkx ipykernel ipyparallel ipython ipython_genutils ipywidgets jupyter jupyterlab kiwisolver matplotlib-base matplotlib scikit-image scikit-learn scipy seaborn geoplot geopy geotiff pycountry nb_conda_kernels stata_kernel nltk
This should create an environment with most of the packages we need. We can always install others down the road. To start using one of the environments you will need to execute the following command
bash
source activate GeoPython3env
Note
I assume you have followed the steps above and have installed Anaconda. Everything that is done should work on any distribution that has the required packages, since the Python scripts should run (in principle) on any of these distributions. We will use IPython as our computing environment.
Let's get started
Once you have your Python distribution installed you'll be ready to start working.
You have various options:
Open the Canopy program and work there
Open Anaconda Navigator and open one of the apps from there (python, ipython, jupyter console, jupyter notebook, jupyter lab, R, Stata)
From the Terminal prompt (command-line in Windows) execute one of the following commands:
ipython
jupyter console
jupyter qtconsole
jupyter notebook
jupyter lab
While these last are all using IPython, each has its advantages and disadvantages. You should play with them to get a better feeling of which you want to use for which purpose. In my own research I usually use a text editor (TextMate, Atom, Sublime) and the jupyter qtconsole or the jupyter notebook. To see the power of Jupyter notebooks, see this excellent and in-depth presentation by its creators. As you will see, this might prove an excellent environment to do research, homework, replicate papers, etc.
Note
You can pass some additional commands to ipython in order to change colors and rendering of plots. I usually use jupyter qtconsole --color=linux --pylab=inline. You can create profiles to manage many options within IPython and Jupyter.
First steps
Let's start by running some simple commands at the prompt to do some simple computations.
End of explanation
"""
"""
Explanation: Notice that Python obeys the usual orders for operators, so exponentiation before multiplication/division, etc.
End of explanation
"""
1/2
"""
Explanation: If you are in Python 2.7 you will notice that this answer is wrong if $1,2\in\mathbb{R}$, but Python thinks they are integers, so it forces an integer. In order to have a more natural behavior of division we need
End of explanation
"""
from __future__ import division
1/2
"""
Explanation: Note
It is a good idea to include this among the packages to be imported by default
Getting help
So what else can we do? Where do we start if we are new? You can use ? or help() to get help.
End of explanation
"""
?
help()
"""
Explanation: If you want information about a command, say mycommand, you can use help(mycommand), mycommand? or mycommand?? to get information about how it is used or even see its code.
End of explanation
"""
help(sum)
sum?
sum??
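The division point above is worth checking explicitly — in Python 3 both operators exist, with different meanings:

```python
# In Python 3, / is true division and // is floor division.
print(1 / 2)    # 0.5
print(1 // 2)   # 0
print(-7 // 2)  # -4  (floors toward negative infinity, not toward zero)
```

So the `from __future__ import division` line only matters in Python 2, where `/` on integers would otherwise truncate.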
""" Explanation: If you want information about a command, say mycommand you can use help(mycommand), mycommand? or mycommand?? to get information about how it is used or even see its code. End of explanation """ print('Hello World!') """ Explanation: Variables, strings, and other objects We can print information End of explanation """ a = 1 b = 2 a+b """ Explanation: We can also create variables, which can be of various types End of explanation """ c = [1, 2] d = [[1, 2], [3, 4]] print('c=%s' % c) print('d=%s' % d) """ Explanation: a and b now hold numerical values we can use for computing End of explanation """ print(' a * c = %s' % (a * c)) print(' b * d = %s' % (b * d)) c*d """ Explanation: Notice that we have used %s and % to let Python know we are passing a string to the print function. What kind of variables are c and d? They look like vectors and matrices, but... End of explanation """ type(c) type(d) type(a) """ Explanation: Actually, Python does not have vectors or matrices directly available. Instead it has lists, sets, arrays, etc., each with its own set of operations. We defined c and d as list objects End of explanation """ %pylab? """ Explanation: Luckily Python has a powerful package for numerical computing called Numpy. Extending Python's Functionality with Packages In order to use a package in Python or IPython, say mypackage, you need to import it, by executing import mypackage After executing this command, you will have access to the functions and objects defined in mypackage. For example, if mypackage has a function squared that takes a real number x and computes its square, we can use this function by calling mypackage.squared(x). Since the name of some packages might be too long, your can give them a nickname by importing them instead as import mypackage as myp so now we could compute the square of x by calling myp.squared(x). We will see various packages that will be useful to do computations, statistics, plots, etc. 
IPython has a command that imports Numpy and Matplotlib (Python's main plotting package). Numpy is imported as np and Matplotlib's pyplot as plt. One could import these by hand by executing

    import numpy as np
    import matplotlib.pyplot as plt

but the creators of IPython have optimized the interaction between these packages by running the following command:

    %pylab
End of explanation """

%matplotlib?
%pylab --no-import-all
%matplotlib inline
np?

""" Explanation: I do recommend using the --no-import-all option in order to ensure you do not contaminate the namespace. Instead it might be best to use

    %pylab --no-import-all
    %matplotlib
End of explanation """

ca = np.array(c)
da = np.array(d)
print('c = %s' % c)
print('d = %s' % d)
print('ca = %s' % ca)
print('da = %s' % da)

""" Explanation: Let us now recreate c and d, but as Numpy arrays instead.
End of explanation """

cm = np.matrix(c)
dm = np.matrix(d)
print('cm = %s' % cm)
print('dm = %s' % dm)

""" Explanation: We could have created them as matrices instead. Again, how you want to create them depends on what you will be doing with them. See here for an explanation of the differences between Numpy arrays and matrices.
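To make the difference concrete, here is a small sketch (it assumes numpy is imported as np): on arrays, * is elementwise, while matrix products use @ or .dot — whereas for np.matrix objects * already means matrix multiplication.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])

# On arrays, * multiplies element by element
print(A * A)
# [[ 1  4]
#  [ 9 16]]

# Matrix multiplication uses @ (or the .dot method)
print(A @ A)
# [[ 7 10]
#  [15 22]]
```

This ambiguity of * is one reason most Numpy code today sticks to arrays and the @ operator.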
End of explanation """

cm.shape
ca.shape
dm.diagonal()
da.cumsum()

""" Explanation: Let's see some information about these... (this is a good moment to try tab completion... a wonderful feature of IPython, which is not available in plain Python).
End of explanation """

cm*dm
ca
da
ca*da
ca.dot(da)

""" Explanation: Let's try again some operations on our new arrays and matrices.
End of explanation """

print(np.ones((3,4)))
print(np.zeros((2,2)))
print(np.eye(2))
print(np.ones_like(cm))
np.random.uniform(-1, 1, 10)

#np.random.seed(123456)
x0 = 0
x = [x0]
[x.append(x[-1] + np.random.normal()) for i in range(500)]
plt.plot(x)
plt.title('A simple random walk')
plt.xlabel('Period')
plt.ylabel('Log Income')
plt.show()

""" Explanation: We can create special matrices using Numpy's functions and classes.

Extending Capabilities with Functions

We have used some of the functions in Python, Numpy and Matplotlib. But what if we wanted to create our own functions? It is very easy to do so in Python. There are two ways to define functions. Let's use them to define the CRRA utility function $u(c)=\frac{c^{1-\sigma}-1}{1-\sigma}$ and the production function $f(k)=Ak^\alpha$.
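A quick sanity check before writing the code: log utility is exactly the $\sigma\to 1$ limit of the CRRA form. Writing $c^{1-\sigma}=e^{(1-\sigma)\ln c}$ and applying L'Hôpital's rule in $\sigma$,

$$\lim_{\sigma\to 1}\frac{c^{1-\sigma}-1}{1-\sigma}=\lim_{\sigma\to 1}\frac{-\ln c\, e^{(1-\sigma)\ln c}}{-1}=\ln c,$$

which is why it makes sense to treat $\sigma=1$ as a separate case returning $\ln c$.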
The first method is as follows:
End of explanation """

def u(c, sigma):
    '''This function returns the value of utility when the CRRA coefficient is sigma.
    I.e. u(c,sigma)=(c**(1-sigma)-1)/(1-sigma) if sigma!=1 and u(c,sigma)=ln(c) if sigma==1
    Usage: u(c,sigma)
    '''
    if sigma!=1:
        u = (c**(1-sigma) - 1) / (1-sigma)
    else:
        u = np.log(c)
    return u

""" Explanation: This defined the utility function. Let's plot it for $0< c\le 5$ and $\sigma\in\{0.5, 1, 1.5\}$.
End of explanation """

# Create vector
c = np.linspace(0.1, 5, 100)
# Evaluate utilities for different CRRA parameters
u1 = u(c, .5)
u2 = u(c, 1)
u3 = u(c, 1.5)
# Plot
plt.plot(c, u1, label=r'$\sigma=.5$')
plt.plot(c, u2, label=r'$\sigma=1$')
plt.plot(c, u3, label=r'$\sigma=1.5$')
plt.xlabel(r'$c_t$')
plt.ylabel(r'$u(c_t)$')
plt.title('CRRA Utility function')
plt.legend(loc=4)
plt.savefig('./CRRA.jpg', dpi=150)
plt.savefig('./CRRA.pdf', dpi=150)
plt.savefig('./CRRA.png', dpi=150)
plt.show()

""" Explanation: While this is nice, it requires us to always supply a value for the CRRA coefficient. Furthermore, we need to remember whether $c$ is the first or second argument. Since we tend to use log-utilities a lot, let us change the definition of the utility function so that it has a default value for $\sigma$ equal to 1.
End of explanation """

def u(c, sigma=1):
    '''This function returns the value of utility when the CRRA coefficient is sigma.
    I.e. u(c,sigma)=(c**(1-sigma)-1)/(1-sigma) if sigma!=1 and u(c,sigma)=ln(c) if sigma==1
    Usage: u(c,sigma=value), where sigma=1 is the default
    '''
    if sigma!=1:
        u = (c**(1-sigma) - 1) / (1-sigma)
    else:
        u = np.log(c)
    return u

sigma1 = .25
sigma3 = 1.25
u1 = u(c, sigma=sigma1)
u2 = u(c)
u3 = u(c, sigma=sigma3)
plt.plot(c, u1, label=r'$\sigma='+str(sigma1)+'$')
plt.plot(c, u2, label=r'$\sigma=1$')
plt.plot(c, u3, label=r'$\sigma='+str(sigma3)+'$')
plt.xlabel(r'$c_t$')
plt.ylabel(r'$u(c_t)$')
plt.title('CRRA Utility function')
plt.legend(loc=4)
plt.show()

""" Explanation: Exercise

Write the function for the Cobb-Douglas production function. Can you generalize it so that we can use it for aggregate, per capita, and per efficiency units without having to write a function for each?
Remember aggregate production is
$$ Y = F(K, AL) = K^\alpha (A L)^{1-\alpha}, $$
output per capita is
$$ \hat y=\frac{F(K, AL)}{L} = \frac{K^\alpha (A L)^{1-\alpha}}{L} = Ak^\alpha, $$
and output per effective worker is
$$ y = \frac{F(K, AL)}{AL} = \frac{K^\alpha (A L)^{1-\alpha}}{AL} = k^\alpha, $$
where $k=K/AL$.

The second method is to use the lambda notation, which allows you to define functions in one line or without giving the function a name.
End of explanation """

squared = lambda x: x**2
squared(2)

""" Explanation: Our first script

Let's write a script that prints "Hello World!"
End of explanation """

%%file?

%%file helloworld.py
#!/usr/bin/env python
# coding=utf-8
'''
My First script in Python
Author: Me
E-mail: me@me.com
Website: http://me.com
GitHub: https://github.com/me
Date: Today

This code computes Random Walks and graphs them
'''
'''
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
'''
print('Hello World!')

""" Explanation: Let's run that script
End of explanation """

%run helloworld.py

""" Explanation: Exercise

Write a simple script randomwalk.py that simulates and plots random walks. In particular, create a function randomwalk(x0, T, mu, sigma) that simulates the random walk starting at $x_0=x0$ until $t=T$, where the shock is distributed $\mathcal{N}(\mu,\sigma^2)$.
End of explanation """

import randomwalk as rw
rw.randomwalk(0, 500, 0, 1)

from randomwalk import randomwalk
randomwalk??

import time
time.sleep(10)
print("It's time")

""" Explanation:
End of explanation """
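As a starting point for the exercise, here is a minimal sketch of what randomwalk.py could contain. The signature randomwalk(x0, T, mu, sigma) follows the exercise statement; the extra plot flag and the vectorized cumulative sum are my own design choices, not part of the exercise:

```python
import numpy as np

def randomwalk(x0, T, mu, sigma, plot=True):
    """Simulate x_t = x_{t-1} + e_t with e_t ~ N(mu, sigma^2) for t = 1, ..., T.

    Returns the array [x_0, x_1, ..., x_T] of length T + 1.
    """
    shocks = np.random.normal(mu, sigma, T)
    x = np.empty(T + 1)
    x[0] = x0
    # Each x_t is x0 plus the cumulative sum of the shocks up to t
    x[1:] = x0 + shocks.cumsum()
    if plot:
        import matplotlib.pyplot as plt
        plt.plot(x)
        plt.title('A simple random walk')
        plt.xlabel('Period')
        plt.ylabel(r'$x_t$')
        plt.show()
    return x

# Simulate without plotting and inspect the result
path = randomwalk(0, 500, 0, 1, plot=False)
print(path.shape)  # (501,)
```

Saving this as randomwalk.py makes both import randomwalk as rw and from randomwalk import randomwalk work as in the cells above.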