repo_name
stringlengths
6
77
path
stringlengths
8
215
license
stringclasses
15 values
cells
list
types
list
choderalab/yank
Yank/reports/YANK_Health_Report_Template.ipynb
mit
[ "YANK Simulation Health Report\nGeneral Settings\nMandatory Settings\n\nstore_directory: Location where the experiment was run. This has an analysis.yaml file and two .nc files.\n\nOptional Settings\n\ndecorrelation_threshold: When number of decorrelated samples is less than this percent of the total number of samples, raise a warning. Default: 0.1.\nmixing_cutoff: Minimal level of mixing percent from state i to j that will be plotted. Default: 0.05.\nmixing_warning_threshold: Level of mixing where transition from state i to j generates a warning based on percent of total swaps. Default: 0.90.\nphase_stacked_replica_plots: Boolean to set if the two phases' replica mixing plots should be stacked one on top of the other or side by side. If True, every replica will span the whole notebook, but the notebook will be longer. If False, the two phases' plots will be next to each other for a shorter notebook, but a more compressed view. Default False.", "# Mandatory Settings\nstore_directory = 'STOREDIRBLANK'\nanalyzer_kwargs = ANALYZERKWARGSBLANK\n\n# Optional Settings\ndecorrelation_threshold = 0.1\nmixing_cutoff = 0.05\nmixing_warning_threshold = 0.90\nphase_stacked_replica_plots = False", "Data Imports\nThese are the imports and files which will be referenced for the report", "from matplotlib import pyplot as plt\nfrom yank.reports import notebook\n%matplotlib inline\nreport = notebook.HealthReportData(store_directory, **analyzer_kwargs)\nreport.report_version()", "General Simulation Data:\nReports the number of iterations, states, and atoms in each phase. If no checkpoint file is found, the number of atoms is reported as No Cpt. as this information is inferred from the checkpoint file. All other information comes from the analysis file.", "report.general_simulation_data()", "Equilibration\nHow to interpret these plots\nShown is the potential energy added up across all replicas (black dots), the moving average (red line), and where we have auto-detected the equilibration (blue line) for each phase. Finally, the total number of decorrelated samples for each phase is attached to each plot.\nYou want to see a majority of samples to the right of the blue line and the red line converging to a constant value. If you do not see these trends or you think there are insufficient samples, please consider running for longer. \nFor additional information on the theory of these charts, please see the Equilibration Primer at the Appendix of the report\nSee Something Odd?\n\nThe scatter plot Y scale looks large and the equilibrium line is at Iteration 0\n\nThis normally happens when the energy from index 0 comes from the energy minimized configuration. Because this configuration is technically not from the equilibrium distribution, it can have large energies far from the true mean equilibrium energy. This can cause the detectEquilibration algorithm we use to think that the jump in energy from the minimized to the equilibrated is the scale of the energy fluctuations, and therefore all other fluctuations appear as though they are equilibrated. Look close at the first few points: is there are there a few points which are a large shift on the first few steps? If so, consider removing those first few points from the timeseries.\nSolution: Increase discard_from_start\nWarning: Some simulations (frequently solvent simulations) are often equilibrated starting at iteration 0. These simulations are usually scattered over the entire height of the figure. 
You should only consider discarding samples if the samples are not distributed over the height of the figure. \nOptions\n\ndiscard_from_start: Integer. Number of samples to discard from the start of the data. This is helpful for simulations where the minimized energy configuration throws off the equilibration detection.", "sams_weights_figure = report.generate_sams_weights_plots()\n\nequilibration_figure = report.generate_equilibration_plots(discard_from_start=1)", "Additional Decorrelation Analysis\nThe following Pie Charts show you the breakdown of how many samples were kept, and how many were lost to either equilibration or decorrelation. Warnings are shown when below a threshold (originally written to be 10%)", "decorrelation_figure = report.generate_decorrelation_plots(decorrelation_threshold=0.1)", "RMSD Analysis\nTrace the RMSD from the initial frame to the end of the simulation for both the ligand and receptor.\nThis is an experimental feature and has been commented out due to instability", "#rmsd_figure = report.compute_rmsds()", "Mixing statistics\nWe can analyze the \"mixing statistics\" of the equilibrated part of the simulation to ensure that the $(X,S)$ chain is mixing reasonably well among the various alchemical states.\nFor information on how this is computed, including how to interpret the Perron Eigenvalue, please see the Mixing Statistics Primer at the end of the report.\nWhat do you want to see?\nYou want a replica to mix into other replicas, so you want a diffusion of configurations shown by a spread out color map in the figure. What you don't want to see is highly concentrated replicas that do not mix at all. The graphs will show red and generate a warning if there are replicas that do not mix well.\nFor the Perron/subdominant eigenvalue, you want to see a value smaller than 1. The further away, the better. This number gives you an estimate of how many iterations it will take to equilibrate the current data. Keep in mind that this analysis only runs on the already equilibrated data and is therefore an estimate of how long it takes the system to relax in state and configuration space from this point.\nSeeing something odd?\n\nThe diagonal is very dark, but everything else is white\n\nYou probably have poor mixing between states. This happens when there is insufficient phase space overlap between states and the probability of two replicas at different states swapping configurations approaches zero. If you have set the mixing_warning_cutoff, many of these states will be highlighted as warnings.\nSolution: Add additional states to your simulation near the states which are not mixing well. Provide a more gradual change of energy from the state to improve replica exchange from that state.\n\nGraph is mostly white!\n\nThis can happen if you have too good of mixing alongside too many states. In this case, mixing between all states is happening so regularly that there is no concentration of configurations in one state.\nSolution: Reduce mixing_cutoff.\n\nIt's still way too white\n\nThat is a limitation of the custom colormap. You can try un-commenting the line cmap = plt.get_cmap(\"Blues\") below to get a blue-scale colormap which has a far smaller white level so you can better see the diffusion in blue. You will lose the red warning color of states with too low a swap rate, but you can always comment the line back out to see those.
The warning message will still be generated.\nSolution: Override the custom colormap that the function uses by setting cmap_override=\"Blues\" or any other registered matplotlib colormap name.\nOptions\nYou can adjust the mixing_cutoff options to control what threshold to display mixing. Anything below the cutoff will be shown as a blank. Defaults to 0.05. Accepts either a float from [0,1] or None (None and 0 yield the same result)\nThe mixing_warning_threshold is the level at which you determine there were insufficient number of swaps between states. Consider adding additional states between the warnings and adjacent states to improve mixing. Accepts a float between (mixing_cutoff,1] (must be larger than mixing_cutoff). Defaults to 0.9 but this should be tuned based on the number of states.", "mixing_figure = report.generate_mixing_plot(mixing_cutoff=mixing_cutoff, \n mixing_warning_threshold=mixing_warning_threshold, \n cmap_override=None)", "Replica Pseudorandom Walk Examination\nThis section checks to see if all the replicas are exchanging states over the whole thermodynamic state space. This is different from tracking states as any replica is a continuous trajectory of configurations, just undergoing different forces at different times.\nWhat do I want to see here?\nEach plot is its own replica, the line in each plot shows which state a given replica is in at time. The ideal scenario is that all replicas visit all states numerous times. If you see a line that is relatively flat, then you can surmise that very little mixing is occurring from that replica and you may wish to consider adding more states around the stuck region to \"un-stick\" it.\nSomething seem odd?\n\nAll I see is black with some white dots mixed in (uncommon)\n\nThis is a good thing! It means the replicas are well mixed and are rapidly changing states. There may be some phases which were redundant though, which is not necessarily a bad thing since it just adds more samples at the given state, but it may mean you did extra work. An example of this is decoupling the steric forces of a ligand once electrostatics have been annihilated in implicit solvent. Since there is no change to the intra-molecular interactions at this point and the most solvent models are based on partial charges (which are now 0), all changes to the sterics are the same state.\n\nSome or All of my replicas stayed in the same state\n\nA sign of very poor mixing. Consider adding additional states (see the Mixing Statistics section above for ideas on where). There may be other factors such as a low number of attempted replica swaps between each iteration.", "replica_mixing_figure = report.generate_replica_mixing_plot(phase_stacked_replica_plots=phase_stacked_replica_plots)", "Free Energy Difference\nThe free energy difference is shown last as the quality of this estimate should be gauged with the earlier sections. Although MBAR provides an estimate of the free energy difference and its error, it is still only an estimate. You should consider if you have a sufficient number of decorrelated samples, sufficient mixing/phase space overlap between states, and sufficient replica random walk to gauge the quality of this estimate.", "report.generate_free_energy()", "Free Energy Trace for Equilibrium Stability\nThe free energy difference alone, even with all the additional information previously, may still be an underestimate of the true free energy. 
One way to check this is to drop samples from the start and end of the simulation, and re-run the free energy estimate. Ideally, you would want to see the forward and reverse analysis be roughly converged when more than 80% of the samples are kept; divergence when only 10-30% of the samples are kept is expected behavior. \nImportant: The 100% kept samples free energy WILL be different from the free energy difference above. The data analyzed here is not subsampled as this is an equilibrium test only. This is also only for sampled states, whereas the free energy difference from above includes the unsampled states.\nSee Klimovich, Shirts, and Mobley (J Comput Aided Mol Des., 29(5) https://dx.doi.org/10.1007%2Fs10822-015-9840-9) for more information on this analysis.\nWhat do I want to see here?\nThere are three plots: one for each phase, and the combination. You want the two traces to be on top of each other for at least some of the larger kept samples. The horizontal band is the 2 standard deviations of the free energy estimate when all 100% of the samples are kept and can be used as reference as the estimate diverges at smaller numbers of kept samples. Error bars are shown as 2 standard deviations.", "free_energy_trace_figure = report.free_energy_trace(discard_from_start=1, n_trace=10)", "Radially-symmetric restraint energy and distance distributions\nThis plot is generated only if the simulation employs a radially-symmetric restraint (e.g. harmonic, flat-bottom), and the unbias_restraint option of the analyzer was set.\nWhat do I want to see here?\nWhen unbiasing the restraint, it is important to verify that the cutoffs do not remove too many configurations sampled from the bound state. Almost all the density of the bound state should be on the left of an eventual cutoff (red line).\nIn general, we expect the distribution in the bound state to be narrower than in the non-interacting state. If this is not the case, then either the binder is weak and it has left the binding site during the simulation, or the restraint might be too tight and limiting the conformational space explored by the ligand.", "restraint_distribution_figure = report.restraint_distributions_plot()", "Execute this block to write out serialized data\nThis is left commented out in the template to prevent it from auto-running with everything else", "#report.dump_serial_data('SERIALOUTPUT')", "Primers\nEquilibration Primer\nIs equilibration necessary?\nIn principle, we don't need to discard initial \"unequilibrated\" data; the estimate over a very long trajectory will converge to the correct free energy estimate no matter what---we simply need to run long enough. Some MCMC practitioners, like Geyer, feel strongly enough about this to throw up a webpage in defense of this position:\nhttp://users.stat.umn.edu/~geyer/mcmc/burn.html\nIn practice, if the initial conditions are very atypical of equilibrium (which is often the case in molecular simulation), it helps a great deal to discard an initial part of the simulation to equilibration. But how much? How do we decide?\nDetermining equilibration in a replica-exchange simulation\nFor a standard molecular dynamics simulation producing a trajectory $x_t$, it's reasonably straightforward to decide approximately how much to discard if human intervention is allowed.
We simply look at some property $A_t = A(x_t)$ over the course of the simulation---ideally, a property that we know has some slow behavior that may affect the quantities we are interested in computing ($A(x)$ is a good choice if we're interested in the expectation $<A>$) and find the point where $A_t$ seems to have \"settled in\" to typical equilibrium behavior.\nIf we're interested in a free energy, which is computed from the potential energy differences, let's suppose the potential energy $U(x)$ may be a good quantity to examine.\nBut in a replica-exchange simulation, there are K replicas that execute nonphysical walks on many potential energy functions $U_k(x)$. What quantity do we look at here?\nLet's work by analogy. In a single simulation, we would plot some quantity related to the potential energy $U(x)$, or its reduced version $u(x) = \\beta U(x)$. This is actually the negative logarithm of the probability density $\\pi(x)$ sampled, up to an additive constant:\n$$\\pi(x) = Z^{-1} e^{-u(x)}$$\n$$u(x) = -\\ln \\pi(x) + c$$\nFor a replica-exchange simulation, the sampler state is given by the pair $(X,S)$, where $X = {x_1, x_2, \\ldots, x_K }$ are the replica configurations and $S = {s_1, s_2, \\ldots, s_K}$ is the vector of state index permutations associated with the replicas. The total probability sampled is\n$$\\Pi(X,S) = \\prod_{k=1}^K \\pi_{s_k}(x_k) = (Z_1 \\cdots Z_K) \\exp\\left[-\\sum_{k=1}^K u_{s_k}(x_k)\\right] = Q^{-1} e^{-u_*(X)}$$\nwhere the pseudoenergy $u_*(X)$ for the replica-exchange simulation is defined as\n$$u_*(X) \\equiv \\sum_{k=1}^K u_{s_k}(x_k)$$\nThat is, $u_*(X)$ is the sum of the reduced potential energies of each replica configuration at the current thermodynamic state it is visiting.\nMixing Statistics Primer\nHow we compute the mixing ratios\nIn practice, this is done by recording the number of times a replica transitions from alchemical state $i$ to state $j$ in a single iteration. Because the overall chain must obey detailed balance, we count each transition as contributing 0.5 counts toward the $i \\rightarrow j$ direction and 0.5 toward the $j \\rightarrow i$ direction. This has the advantage of ensuring that the eigenvalues of the resulting transition matrix among alchemical states are purely real.\nInterpreting the Perron (subdominant/second) Eigenvalue\nIf the subdominant eigenvalue would have been unity, then the chain would be decomposable, meaning that it completely separated into two separate sets of alchemical states that did not mix. This would have been an indication of poor phase space overlap between some alchemical states.\nIn practice, it's a great idea to monitor these statistics as the simulation is running, even if no data is discarded to equilibration at that point. 
They not only give a good idea of whether sufficient mixing is occurring, but also provide a lower bound on the mixing time in configuration space.\nIf the configuration $x$ sampling is infinitely fast so that $x$ can be considered to be at equilibrium given the instantaneous permutation $S$ of alchemical state assignments, the subdominant eigenvalue $\\lambda_2 \\in [0,1]$ gives an estimate of the mixing time of the overall $(X,S)$ chain:\n$$\\tau = \\frac{1}{1 - \\lambda_2}$$\nNow, in most cases, the configuration $x$ sampling is not infinitely fast, but at least we can use $\\tau$ to get a very crude estimate of how quickly each replica relaxes in $(X,S)$ space.\nGelman-Rubin Convergence Primer\nIn 1992, Gelman and Rubin proposed a very clever idea for a convergence diagnostic in the case that multiple MCMC samplers are run from different initial sampler states:\nhttp://dx.doi.org/10.1214/ss/1177011136\nThe idea is simple: Each chain gives an individual estimate for some computed expectation or property, and the whole collection of chains gives a (presumably more precise) estimate. We can simply compare the individual estimates to the overall estimate to determine whether the chains have been run long enough to see concordance between the individual and global estimates, to within appropriate statistical error. If not, then the samplers have not yet run long enough to sample all of the important modes of the density.\nWe can apply a similar idea here, especially if we have initialized our replicas with different configurations (e.g. different docked ligand conformations, and potentially different protein conformations as well).\nNote: This feature has not yet been added." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CivicKnowledge/metatab-packages
census.gov/census.gov-pums-20165/notebooks/Extract.ipynb
mit
[ "import seaborn as sns\nimport metapack as mp\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import display \n\n%matplotlib inline\nsns.set_context('notebook')\n\n\npkg = mp.jupyter.open_package()\n#pkg = mp.jupyter.open_source_package()\npkg\n\nhs = pkg.resource('housing').dataframe()\npop = pkg.resource('population').dataframe()\n\ncols = [e.lower() for e in ['SERIALNO', 'SPORDER', 'PUMA','SEX', 'AGEP', \n 'RAC1P', 'HISP', 'POVPIP', 'PINCP', 'SCHL']]\nweight_cols = [ c for c in pop.columns if c.startswith('pwgtp')]\n\ndfx = pop[cols+weight_cols]", "Recode Race and Ethnicity\nRAC1P \nRecoded detailed race code \n1 .White alone \n2 .Black or African American alone \n3 .American Indian alone \n4 .Alaska Native alone \n5 .American Indian and Alaska Native tribes specified; or \n .American Indian or Alaska Native, not specified and no \n .other races \n6 .Asian alone \n7 .Native Hawaiian and Other Pacific Islander alone \n8 .Some Other Race alone \n9 .Two or More Races", "rac1p_map = {\n 1: 'white',\n 2: 'black',\n 3: 'amind',\n 4: 'alaskanat',\n 5: 'aian',\n 6: 'asian',\n 7: 'nhopi',\n 8: 'other',\n 9: 'many'\n}\npop['race'] = pop.rac1p.astype('category')\npop['race'] = pop.race.cat.rename_categories(rac1p_map)\n\n\n# The raceeth variable is the race varaiable, but with 'white' replaced\n# with 'hisp' for records that have both is_hsip and white set. So, for \n# raceeth, 'white' means 'non-hispanic white'\npop['is_hisp'] = pop.hisp != 1\npop['raceeth'] = pop['race'].mask(((pop.is_hisp == True) & (pop.race == 'white')), 'hisp')\n\npop[['rac1p','race','is_hisp','raceeth']].head()\n\npop[pop.raceeth == 'white'].agep.hist()\n\npop[pop.raceeth == 'hisp'].agep.hist()", "Recode Age\nAge groups from CHIS:\n18-25 YEARS 1906\n26-29 YEARS 867\n30-34 YEARS 1060\n35-39 YEARS 1074\n40-44 YEARS 1062\n45-49 YEARS 1302\n50-54 YEARS 1621\n55-59 YEARS 1978\n60-64 YEARS 2343\n65-69 YEARS 2170\n70-74 YEARS 1959\n75-79 YEARS 1525\n80-84 YEARS 1125\n85+ YEARS 1161", "ages = ['18-25 YEARS',\n '26-29 YEARS',\n '30-34 YEARS',\n '35-39 YEARS',\n '40-44 YEARS',\n '45-49 YEARS',\n '50-54 YEARS',\n '55-59 YEARS',\n '60-64 YEARS',\n '65-69 YEARS',\n '70-74 YEARS',\n '75-79 YEARS',\n '80-84 YEARS',\n '85+ YEARS']\n\ndef extract_age(v):\n if v.startswith('85'):\n return pd.Interval(left=85, right=120, closed='both')\n else:\n l,h,_ = v.replace('-',' ').split()\n return pd.Interval(left=int(l), right=int(h), closed='both')\n \nage_ranges = [ (extract_age(v), v) for v in ages]\n\nage_index = pd.IntervalIndex(list(ar[0] for ar in age_ranges))\n \npop['age_group'] = pd.cut(pop.agep,age_index).astype('category')\npop['age_group'].cat.rename_categories(dict(age_ranges), inplace=True)\npop[['agep','age_group']].head()", "Recode Poverty Level", "povlvls = ['0-99% FPL', '100-199% FPL', '200-299% FPL', '300% FPL AND ABOVE']\npov_index = pd.IntervalIndex(\n [pd.Interval(left=0, right=99, closed='both'),\n pd.Interval(left=100, right=199, closed='both'),\n pd.Interval(left=200, right=299, closed='both'),\n pd.Interval(left=300, right=501, closed='both')]\n)\n\npop.povpip.describe()\n\npop['pov_group'] = pd.cut(pop.povpip,pov_index).astype('category')\npop['pov_group'].cat.rename_categories(dict(zip(pov_index, povlvls)), inplace=True)\npop[['povpip','pov_group']].head()\n\npop.groupby('puma').pwgtp5.sum().sum()\n\ndfx = pop[cols+['age_group','pov_group','race','is_hisp','raceeth']+weight_cols]\ndfx.head(20).T\nlen(dfx)", "Build the full population set", "def build_set(df, rep_no):\n \n 
new_rows = []\n for row in df.iterrows():\n repl = row[1].at['pwgtp'+str(rep_no)]\n if repl > 1:\n new_rows.extend([row]*(repl-1))\n \n return new_rows\n\n\n%time new_rows = build_set(dfx, 1)\n\n%time t = dfx.copy().append(new_rows, ignore_index = True)\n\nlen(t)\n\nt\n\nfrom publicdata import parse_app_url\n\nurl = parse_app_url('census://2015/5/CA/140/B17001')\ndfc = url.geoframe()\n\ndfc.plot()\n\n# The puma files moved, so the publicdata package is wrong. \nurl = parse_app_url('shape+ftp://ftp2.census.gov/geo/tiger/TIGER2018/PUMA/tl_2018_06_puma10.zip')\npumas = url.get_resource().geoframe()\n\npumas.plot()\n\nurl = parse_app_url('census://2015/5/CA/county/B17001')\nurl.geo_url.shape_url\n\ncounties_pkg = mp.open_package('http://library.metatab.org/census.gov-counties-2017-2.csv')\ncounties = counties_pkg.resource('counties').geoframe()\n\n\nsd = counties[counties.name == 'San Diego']\n\n#import geopandas as gpd\n#gpd.sjoin(pumas, sd)\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
pombredanne/gensim
docs/notebooks/doc2vec-lee.ipynb
lgpl-2.1
[ "Doc2Vec Tutorial on the Lee Dataset", "import gensim\nimport os\nimport collections\nimport random", "What is it?\nDoc2Vec is an NLP tool for representing documents as a vector and is a generalizing of the Word2Vec method. This tutorial will serve as an introduction to Doc2Vec and present ways to train and assess a Doc2Vec model.\nResources\n\nWord2Vec Paper\nDoc2Vec Paper\nDr. Michael D. Lee's Website\nLee Corpus\nIMDB Doc2Vec Tutorial\n\nGetting Started\nTo get going, we'll need to have a set of documents to train our doc2vec model. In theory, a document could be anything from a short 140 character tweet, a single paragraph (i.e., journal article abstract), a news article, or a book. In NLP parlance a collection or set of documents is often referred to as a <b>corpus</b>. \nFor this tutorial, we'll be training our model using the Lee Background Corpus included in gensim. This corpus contains 314 documents selected from the Australian Broadcasting\nCorporation’s news mail service, which provides text e-mails of headline stories and covers a number of broad topics.\nAnd we'll test our model by eye using the much shorter Lee Corpus which contains 50 documents.", "# Set file names for train and test data\ntest_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data'])\nlee_train_file = test_data_dir + os.sep + 'lee_background.cor'\nlee_test_file = test_data_dir + os.sep + 'lee.cor'", "Define a Function to Read and Preprocess Text\nBelow, we define a function to open the train/test file (with latin encoding), read the file line-by-line, pre-process each line using a simple gensim pre-processing tool (i.e., tokenize text into individual words, remove punctuation, set to lowercase, etc), and return a list of words. Note that, for a given file (aka corpus), each continuous line constitutes a single document and the length of each line (i.e., document) can vary. Also, to train the model, we'll need to associate a tag/number with each document of the training corpus. In our case, the tag is simply the zero-based line number.", "def read_corpus(fname, tokens_only=False):\n with open(fname, encoding=\"iso-8859-1\") as f:\n for i, line in enumerate(f):\n if tokens_only:\n yield gensim.utils.simple_preprocess(line)\n else:\n # For training data, add tags\n yield gensim.models.doc2vec.TaggedDocument(gensim.utils.simple_preprocess(line), [i])\n\ntrain_corpus = list(read_corpus(lee_train_file))\ntest_corpus = list(read_corpus(lee_test_file, tokens_only=True))", "Let's take a look at the training corpus", "train_corpus[:2]", "And the testing corpus looks like this:", "print(test_corpus[:2])", "Notice that the testing corpus is just a list of lists and does not contain any tags.\nTraining the Model\nInstantiate a Doc2Vec Object\nNow, we'll instantiate a Doc2Vec model with a vector size with 50 words and iterating over the training corpus 10 times. We set the minimum word count to 2 in order to give higher frequency words more weighting. 
Model accuracy can be improved by increasing the number of iterations but this generally increases the training time.", "model = gensim.models.doc2vec.Doc2Vec(size=50, min_count=2, iter=10)", "Build a Vocabulary", "model.build_vocab(train_corpus)", "Essentially, the vocabulary is a dictionary (accessible via model.vocab) of all of the unique words extracted from the training corpus along with the count (e.g., model.vocab['penalty'].count for the count of the word penalty).\nTime to Train\nThis should take no more than 2 minutes", "%time model.train(train_corpus)", "Inferring a Vector\nOne important thing to note is that you can now infer a vector for any piece of text without having to re-train the model by passing a list of words to the model.infer_vector function. This vector can then be compared with other vectors via cosine similarity.", "model.infer_vector(['only', 'you', 'can', 'prevent', 'forrest', 'fires'])", "Assessing Model\nTo assess our new model, we'll first infer new vectors for each document of the training corpus, compare the inferred vectors with the training corpus, and then return the rank of the document based on self-similarity. Basically, we're pretending as if the training corpus is some new unseen data and then seeing how they compare with the trained model. The expectation is that we've likely overfit our model (i.e., all of the ranks will be less than 2) and so we should be able to find similar documents very easily. Additionally, we'll keep track of the second ranks for a comparison of less similar documents.", "ranks = []\nsecond_ranks = []\nfor doc_id in range(len(train_corpus)):\n inferred_vector = model.infer_vector(train_corpus[doc_id].words)\n sims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))\n rank = [docid for docid, sim in sims].index(doc_id)\n ranks.append(rank)\n \n second_ranks.append(sims[1])", "Let's count how each document ranks with respect to the training corpus", "collections.Counter(ranks) #96% accuracy", "Basically, greater than 95% of the inferred documents are found to be most similar to themselves, and about 5% of the time a document is mistakenly most similar to another document. This is great and not entirely surprising. We can take a look at an example:", "print('Document ({}): «{}»\\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))\nprint(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\\n' % model)\nfor label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n print(u'%s %s: «%s»\\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))", "Notice above that the most similar document has a similarity score of ~80% (or higher).
However, the similarity score for the second ranked documents should be significantly lower (assuming the documents are in fact different) and the reasoning becomes obvious when we examine the text itself", "# Pick a random document from the test corpus and infer a vector from the model\ndoc_id = random.randint(0, len(train_corpus))\n\n# Compare and print the most/median/least similar documents from the train corpus\nprint('Train Document ({}): «{}»\\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))\nsim_id = second_ranks[doc_id]\nprint('Similar Document {}: «{}»\\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))", "Testing the Model\nUsing the same approach above, we'll infer the vector for a randomly chosen test document, and compare the document to our model by eye.", "# Pick a random document from the test corpus and infer a vector from the model\ndoc_id = random.randint(0, len(test_corpus))\ninferred_vector = model.infer_vector(test_corpus[doc_id])\nsims = model.docvecs.most_similar([inferred_vector], topn=len(model.docvecs))\n\n# Compare and print the most/median/least similar documents from the train corpus\nprint('Test Document ({}): «{}»\\n'.format(doc_id, ' '.join(test_corpus[doc_id])))\nprint(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\\n' % model)\nfor label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:\n print(u'%s %s: «%s»\\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))", "Wrapping Up\nThat's it! Doc2Vec is a great way to explore relationships between documents." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
liufuyang/deep_learning_tutorial
course-deeplearning.ai/course5-rnn/Week 1/Dinosaur Island -- Character-level language model/Dinosaurus Island -- Character level language model final - v3.ipynb
mit
[ "Character level language model - Dinosaurus land\nWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go beserk, so choose wisely! \n<table>\n<td>\n<img src=\"images/dino.jpg\" style=\"width:250;height:300px;\">\n\n</td>\n\n</table>\n\nLuckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this dataset. (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! \nBy completing this assignment you will learn:\n\nHow to store text data for processing using an RNN \nHow to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit\nHow to build a character-level text generation recurrent neural network\nWhy clipping the gradients is important\n\nWe will begin by loading in some functions that we have provided for you in rnn_utils. Specifically, you have access to functions such as rnn_forward and rnn_backward which are equivalent to those you've implemented in the previous assignment.", "import numpy as np\nfrom utils import *\nimport random", "1 - Problem Statement\n1.1 - Dataset and Preprocessing\nRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.", "data = open('dinos.txt', 'r').read()\ndata= data.lower()\nchars = list(set(data))\ndata_size, vocab_size = len(data), len(chars)\nprint('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))", "The characters are a-z (26 characters) plus the \"\\n\" (or newline character), which in this assignment plays a role similar to the &lt;EOS&gt; (or \"End of sentence\") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. 
Below, char_to_ix and ix_to_char are the python dictionaries.", "char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }\nix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }\nprint(ix_to_char)", "1.2 - Overview of the model\nYour model will have the following structure: \n\nInitialize parameters \nRun the optimization loop\nForward propagation to compute the loss function\nBackward propagation to compute the gradients with respect to the loss function\nClip the gradients to avoid exploding gradients\nUsing the gradients, update your parameter with the gradient descent update rule.\n\n\nReturn the learned parameters \n\n<img src=\"images/rnn.png\" style=\"width:450;height:300px;\">\n<caption><center> Figure 1: Recurrent Neural Network, similar to what you had built in the previous notebook \"Building a RNN - Step by Step\". </center></caption>\nAt each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ is a list of characters in the training set, while $Y = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$ is such that at every time-step $t$, we have $y^{\\langle t \\rangle} = x^{\\langle t+1 \\rangle}$. \n2 - Building blocks of the model\nIn this part, you will build two important blocks of the overall model:\n- Gradient clipping: to avoid exploding gradients\n- Sampling: a technique used to generate characters\nYou will then apply these two functions to build the model.\n2.1 - Clipping the gradients in the optimization loop\nIn this section you will implement the clip function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not \"exploding,\" meaning taking on overly large values. \nIn the exercise below, you will implement a function clip that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a maxValue (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. \n<img src=\"images/clip.png\" style=\"width:400;height:150px;\">\n<caption><center> Figure 2: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight \"exploding gradient\" problems. </center></caption>\nExercise: Implement the function below to return the clipped gradients of your dictionary gradients. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this hint for examples of how to clip in numpy. 
You will need to use the argument out = ....", "### GRADED FUNCTION: clip\n\ndef clip(gradients, maxValue):\n '''\n Clips the gradients' values between minimum and maximum.\n \n Arguments:\n gradients -- a dictionary containing the gradients \"dWaa\", \"dWax\", \"dWya\", \"db\", \"dby\"\n maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue\n \n Returns: \n gradients -- a dictionary with the clipped gradients.\n '''\n \n dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']\n \n ### START CODE HERE ###\n # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)\n for gradient in [dWax, dWaa, dWya, db, dby]:\n np.clip(gradient, a_max=maxValue, a_min=-maxValue, out=gradient)\n ### END CODE HERE ###\n \n gradients = {\"dWaa\": dWaa, \"dWax\": dWax, \"dWya\": dWya, \"db\": db, \"dby\": dby}\n \n return gradients\n\nnp.random.seed(3)\ndWax = np.random.randn(5,3)*10\ndWaa = np.random.randn(5,5)*10\ndWya = np.random.randn(2,5)*10\ndb = np.random.randn(5,1)*10\ndby = np.random.randn(2,1)*10\ngradients = {\"dWax\": dWax, \"dWaa\": dWaa, \"dWya\": dWya, \"db\": db, \"dby\": dby}\ngradients = clip(gradients, 10)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])", "Expected output:\n<table>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2] **\n </td>\n <td> \n 10.0\n </td>\n</tr>\n\n<tr>\n <td> \n **gradients[\"dWax\"][3][1]**\n </td>\n <td> \n -10.0\n </td>\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> \n0.29713815361\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> \n[ 10.]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td> \n[ 8.45833407]\n </td>\n</tr>\n\n</table>\n\n2.2 - Sampling\nNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:\n<img src=\"images/dinos3.png\" style=\"width:500;height:300px;\">\n<caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\\langle 1\\rangle} = \\vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>\nExercise: Implement the sample function below to sample characters. You need to carry out 4 steps:\n\n\nStep 1: Pass the network the first \"dummy\" input $x^{\\langle 1 \\rangle} = \\vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\\langle 0 \\rangle} = \\vec{0}$\n\n\nStep 2: Run one step of forward propagation to get $a^{\\langle 1 \\rangle}$ and $\\hat{y}^{\\langle 1 \\rangle}$. Here are the equations:\n\n\n$$ a^{\\langle t+1 \\rangle} = \\tanh(W_{ax} x^{\\langle t \\rangle } + W_{aa} a^{\\langle t \\rangle } + b)\\tag{1}$$\n$$ z^{\\langle t + 1 \\rangle } = W_{ya} a^{\\langle t + 1 \\rangle } + b_y \\tag{2}$$\n$$ \\hat{y}^{\\langle t+1 \\rangle } = softmax(z^{\\langle t + 1 \\rangle })\\tag{3}$$\nNote that $\\hat{y}^{\\langle t+1 \\rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). 
$\\hat{y}^{\\langle t+1 \\rangle}_i$ represents the probability that the character indexed by \"i\" is the next character. We have provided a softmax() function that you can use.\n\nStep 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\\hat{y}^{\\langle t+1 \\rangle }$. This means that if $\\hat{y}^{\\langle t+1 \\rangle }_i = 0.16$, you will pick the index \"i\" with 16% probability. To implement it, you can use np.random.choice.\n\nHere is an example of how to use np.random.choice():\npython\nnp.random.seed(0)\np = np.array([0.1, 0.0, 0.7, 0.2])\nindex = np.random.choice([0, 1, 2, 3], p = p.ravel())\nThis means that you will pick the index according to the distribution: \n$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.\n\nStep 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\\langle t \\rangle }$, with the value of $x^{\\langle t + 1 \\rangle }$. You will represent $x^{\\langle t + 1 \\rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\\langle t + 1 \\rangle }$ in Step 1 and keep repeating the process until you get a \"\\n\" character, indicating you've reached the end of the dinosaur name.", "# GRADED FUNCTION: sample\n\ndef sample(parameters, char_to_ix, seed):\n \"\"\"\n Sample a sequence of characters according to a sequence of probability distributions output of the RNN\n\n Arguments:\n parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. \n char_to_ix -- python dictionary mapping each character to an index.\n seed -- used for grading purposes. Do not worry about it.\n\n Returns:\n indices -- a list of length n containing the indices of the sampled characters.\n \"\"\"\n \n # Retrieve parameters and relevant shapes from \"parameters\" dictionary\n Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']\n vocab_size = by.shape[0]\n n_a = Waa.shape[1]\n \n ### START CODE HERE ###\n # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)\n x = np.zeros((vocab_size, 1))\n # Step 1': Initialize a_prev as zeros (≈1 line)\n a_prev = np.zeros((n_a, 1))\n \n # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)\n indices = []\n \n # Idx is a flag to detect a newline character, we initialize it to -1\n idx = -1 \n \n # Loop over time-steps t. At each time-step, sample a character from a probability distribution and append \n # its index to \"indices\". We'll stop if we reach 50 characters (which should be very unlikely with a well \n # trained model), which helps debugging and prevents entering an infinite loop. 
\n counter = 0\n newline_character = char_to_ix['\\n']\n \n while (idx != newline_character and counter != 50):\n \n # Step 2: Forward propagate x using the equations (1), (2) and (3)\n a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)\n z = np.dot(Wya, a) + by\n y = softmax(z)\n \n # for grading purposes\n np.random.seed(counter+seed) \n \n # Step 3: Sample the index of a character within the vocabulary from the probability distribution y\n idx = np.random.choice(range(vocab_size), p = y.ravel())\n\n # Append the index to \"indices\"\n indices.append(idx)\n \n # Step 4: Overwrite the input character as the one corresponding to the sampled index.\n x = np.zeros((vocab_size, 1))\n x[idx] = 1\n \n # Update \"a_prev\" to be \"a\"\n a_prev = a\n \n # for grading purposes\n seed += 1\n counter +=1\n \n ### END CODE HERE ###\n\n if (counter == 50):\n indices.append(char_to_ix['\\n'])\n \n return indices\n\nnp.random.seed(2)\n_, n_a = 20, 100\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\n\n\nindices = sample(parameters, char_to_ix, 0)\nprint(\"Sampling:\")\nprint(\"list of sampled indices:\", indices)\nprint(\"list of sampled characters:\", [ix_to_char[i] for i in indices])", "Expected output:\n<table>\n<tr>\n <td> \n **list of sampled indices:**\n </td>\n <td> \n [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br>\n 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]\n </td>\n </tr><tr>\n <td> \n **list of sampled characters:**\n </td>\n <td> \n ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br>\n 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br>\n 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\\n', '\\n']\n </td>\n\n\n\n</tr>\n</table>\n\n3 - Building the language model\nIt is time to build the character-level language model for text generation. \n3.1 - Gradient descent\nIn this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:\n\nForward propagate through the RNN to compute the loss\nBackward propagate through time to compute the gradients of the loss with respect to the parameters\nClip the gradients if necessary \nUpdate your parameters using gradient descent \n\nExercise: Implement this optimization process (one step of stochastic gradient descent). \nWe provide you with the following functions: \n```python\ndef rnn_forward(X, Y, a_prev, parameters):\n \"\"\" Performs the forward propagation through the RNN and computes the cross-entropy loss.\n It returns the loss' value as well as a \"cache\" storing values to be used in the backpropagation.\"\"\"\n ....\n return loss, cache\ndef rnn_backward(X, Y, parameters, cache):\n \"\"\" Performs the backward propagation through time to compute the gradients of the loss with respect\n to the parameters. 
It returns also all the hidden states.\"\"\"\n ...\n return gradients, a\ndef update_parameters(parameters, gradients, learning_rate):\n \"\"\" Updates parameters using the Gradient Descent Update Rule.\"\"\"\n ...\n return parameters\n```", "# GRADED FUNCTION: optimize\n\ndef optimize(X, Y, a_prev, parameters, learning_rate = 0.01):\n \"\"\"\n Execute one step of the optimization to train the model.\n \n Arguments:\n X -- list of integers, where each integer is a number that maps to a character in the vocabulary.\n Y -- list of integers, exactly the same as X but shifted one index to the left.\n a_prev -- previous hidden state.\n parameters -- python dictionary containing:\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n b -- Bias, numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n learning_rate -- learning rate for the model.\n \n Returns:\n loss -- value of the loss function (cross-entropy)\n gradients -- python dictionary containing:\n dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)\n db -- Gradients of bias vector, of shape (n_a, 1)\n dby -- Gradients of output bias vector, of shape (n_y, 1)\n a[len(X)-1] -- the last hidden state, of shape (n_a, 1)\n \"\"\"\n \n ### START CODE HERE ###\n \n # Forward propagate through time (≈1 line)\n loss, cache = rnn_forward(X, Y, a_prev, parameters)\n \n # Backpropagate through time (≈1 line)\n gradients, a = rnn_backward(X, Y, parameters, cache)\n \n # Clip your gradients between -5 (min) and 5 (max) (≈1 line)\n gradients = clip(gradients, 5)\n \n # Update parameters (≈1 line)\n parameters = update_parameters(parameters, gradients, learning_rate)\n \n ### END CODE HERE ###\n \n return loss, gradients, a[len(X)-1]\n\nnp.random.seed(1)\nvocab_size, n_a = 27, 100\na_prev = np.random.randn(n_a, 1)\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\nX = [12,3,5,11,22,3]\nY = [4,14,11,22,25, 26]\n\nloss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\nprint(\"Loss =\", loss)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"np.argmax(gradients[\\\"dWax\\\"]) =\", np.argmax(gradients[\"dWax\"]))\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])\nprint(\"a_last[4] =\", a_last[4])", "Expected output:\n<table>\n\n\n<tr>\n <td> \n **Loss **\n </td>\n <td> \n 126.503975722\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2]**\n </td>\n <td> \n 0.194709315347\n </td>\n<tr>\n <td> \n **np.argmax(gradients[\"dWax\"])**\n </td>\n <td> 93\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> -0.007773876032\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> [-0.06809825]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td>[ 0.01538192]\n </td>\n</tr>\n<tr>\n <td> \n 
**a_last[4]**\n </td>\n <td> [-1.]\n </td>\n</tr>\n\n</table>\n\n3.2 - Training the model\nGiven the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. \nExercise: Follow the instructions and implement model(). When examples[index] contains one dinosaur name (string), to create an example (X, Y), you can use this:\npython\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]] \n Y = X[1:] + [char_to_ix[\"\\n\"]]\nNote that we use: index= j % len(examples), where j = 1....num_iterations, to make sure that examples[index] is always a valid statement (index is smaller than len(examples)).\nThe first entry of X being None will be interpreted by rnn_forward() as setting $x^{\\langle 0 \\rangle} = \\vec{0}$. Further, this ensures that Y is equal to X but shifted one step to the left, and with an additional \"\\n\" appended to signify the end of the dinosaur name.", "# GRADED FUNCTION: model\n\ndef model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):\n \"\"\"\n Trains the model and generates dinosaur names. \n \n Arguments:\n data -- text corpus\n ix_to_char -- dictionary that maps the index to a character\n char_to_ix -- dictionary that maps a character to an index\n num_iterations -- number of iterations to train the model for\n n_a -- number of units of the RNN cell\n dino_names -- number of dinosaur names you want to sample at each iteration. \n vocab_size -- number of unique characters found in the text, size of the vocabulary\n \n Returns:\n parameters -- learned parameters\n \"\"\"\n \n # Retrieve n_x and n_y from vocab_size\n n_x, n_y = vocab_size, vocab_size\n \n # Initialize parameters\n parameters = initialize_parameters(n_a, n_x, n_y)\n \n # Initialize loss (this is required because we want to smooth our loss, don't worry about it)\n loss = get_initial_loss(vocab_size, dino_names)\n \n # Build list of all dinosaur names (training examples).\n with open(\"dinos.txt\") as f:\n examples = f.readlines()\n examples = [x.lower().strip() for x in examples]\n \n # Shuffle list of all dinosaur names\n np.random.seed(0)\n np.random.shuffle(examples)\n \n # Initialize the hidden state of your LSTM\n a_prev = np.zeros((n_a, 1))\n \n # Optimization loop\n for j in range(num_iterations):\n \n ### START CODE HERE ###\n \n # Use the hint above to define one training example (X,Y) (≈ 2 lines)\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]]\n Y = X[1:] + [char_to_ix[\"\\n\"]]\n \n # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters\n # Choose a learning rate of 0.01\n curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\n \n ### END CODE HERE ###\n \n # Use a latency trick to keep the loss smooth. 
It happens here to accelerate the training.\n loss = smooth(loss, curr_loss)\n\n # Every 2000 iterations, generate \"n\" characters thanks to sample() to check if the model is learning properly\n if j % 2000 == 0:\n \n print('Iteration: %d, Loss: %f' % (j, loss) + '\\n')\n \n # The number of dinosaur names to print\n seed = 0\n for name in range(dino_names):\n \n # Sample indices and print them\n sampled_indices = sample(parameters, char_to_ix, seed)\n print_sample(sampled_indices, ix_to_char)\n \n seed += 1 # To get the same result for grading purposes, increment the seed by one. \n \n print('\\n')\n \n return parameters", "Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.", "parameters = model(data, ix_to_char, char_to_ix)", "Conclusion\nYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like maconucon, marloralus and macingsersaurus. Your model hopefully also learned that dinosaur names tend to end in saurus, don, aura, tor, etc.\nIf your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, dromaeosauroides is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! \nThis assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!\n<img src=\"images/mangosaurus.jpeg\" style=\"width:250;height:300px;\">\n4 - Writing like Shakespeare\nThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. \nA similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of Dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much, much later in this sequence. These long term dependencies were less important with dinosaur names, since the names were quite short. \n<img src=\"images/shakespeare.jpg\" style=\"width:500;height:400px;\">\n<caption><center> Let's become poets! </center></caption>\nWe have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models.
This may take a few minutes.", "from __future__ import print_function\nfrom keras.callbacks import LambdaCallback\nfrom keras.models import Model, load_model, Sequential\nfrom keras.layers import Dense, Activation, Dropout, Input, Masking\nfrom keras.layers import LSTM\nfrom keras.utils.data_utils import get_file\nfrom keras.preprocessing.sequence import pad_sequences\nfrom shakespeare_utils import *\nimport sys\nimport io", "To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called \"The Sonnets\". \nLet's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run generate_output, which will prompt asking you for an input (&lt;40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try \"Forsooth this maketh no sense \" (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.", "print_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\nmodel.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])\n\n# Run this cell to try with different inputs without having to re-train the model \ngenerate_output()", "The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. The only major differences are:\n- LSTMs instead of the basic RNN to capture longer-range dependencies\n- The model is a deeper, stacked LSTM model (2 layer)\n- Using Keras instead of python to simplify the code \nIf you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.\nCongratulations on finishing this notebook! \nReferences:\n- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's blog post.\n- For the Shakespearian poem generator, our implementation was based on the implementation of an LSTM text generator by the Keras team: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/migration/UJ10 legacy Custom Training Prebuilt Container SKLearn.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex SDK: Train and deploy an SKLearn model with pre-built containers (formerly hosted runtimes)\nInstallation\nInstall the Google cloud-storage library as well.", "! pip3 install google-cloud-storage", "Restart the Kernel\nOnce you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"AUTORUN\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nGoogle Cloud SDK is already installed in Google Cloud Notebooks.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend when possible, to choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see Region support for Vertex services", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your GCP account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. 
Skip this step.\nNote: If you are on an Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Vertex, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.", "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION gs://$BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al gs://$BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex SDK\nImport the Vertex SDK into our Python environment.", "import json\nimport os\nimport sys\nimport time\n\nfrom googleapiclient import discovery", "Vertex constants\nSetup up the following constants for Vertex:\n\nPARENT: The Vertex location root path for dataset, model and endpoint resources.", "# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "Clients\nThe Vertex SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (Vertex).\nYou will use several clients in this tutorial, so set them all up upfront.", "client = discovery.build(\"ml\", \"v1\")", "Prepare a trainer script\nPackage assembly", "# Make folder for python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\\ntag_build =\\n\\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\\nsetuptools.setup(\\n\\\n install_requires=[\\n\\\n ],\\n\\\n packages=setuptools.find_packages())\"\n! 
echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\\nName: Custom Census Income\\n\\\nVersion: 0.0.0\\n\\\nSummary: Demonstration training script\\n\\\nHome-page: www.google.com\\n\\\nAuthor: Google\\n\\\nAuthor-email: aferlitsch@google.com\\n\\\nLicense: Public\\n\\\nDescription: Demo\\n\\\nPlatform: Vertex AI\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py", "Task.py contents", "%%writefile custom/trainer/task.py\n# Single Instance Training for Census Income\n\nfrom sklearn.ensemble import RandomForestClassifier\nimport joblib\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.pipeline import FeatureUnion\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import LabelBinarizer\nimport datetime\nimport pandas as pd\n\nfrom google.cloud import storage\n\nimport numpy as np\nimport argparse\nimport os\nimport sys\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\n\n# Public bucket holding the census data\nbucket = storage.Client().bucket('cloud-samples-data')\n\n# Path to the data inside the public bucket\nblob = bucket.blob('ai-platform/sklearn/census_data/adult.data')\n# Download the data\nblob.download_to_filename('adult.data')\n\n# Define the format of your input data including unused columns (These are the columns from the census data files)\nCOLUMNS = (\n 'age',\n 'workclass',\n 'fnlwgt',\n 'education',\n 'education-num',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'capital-gain',\n 'capital-loss',\n 'hours-per-week',\n 'native-country',\n 'income-level'\n)\n\n# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn\nCATEGORICAL_COLUMNS = (\n 'workclass',\n 'education',\n 'marital-status',\n 'occupation',\n 'relationship',\n 'race',\n 'sex',\n 'native-country'\n)\n\n\n# Load the training census dataset\nwith open('./adult.data', 'r') as train_data:\n raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)\n\n# Remove the column we are trying to predict ('income-level') from our features list\n# Convert the Dataframe to a lists of lists\ntrain_features = raw_training_data.drop('income-level', axis=1).values.tolist()\n# Create our training labels list, convert the Dataframe to a lists of lists\ntrain_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()\n\n# Since the census data set has categorical features, we need to convert\n# them to numerical values. We'll use a list of pipelines to convert each\n# categorical column and then use FeatureUnion to combine them before calling\n# the RandomForestClassifier.\ncategorical_pipelines = []\n\n# Each categorical column needs to be extracted individually and converted to a numerical value.\n# To do this, each categorical column will use a pipeline that extracts one feature column via\n# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.\n# A scores array (created below) will select and extract the feature column. 
The scores array is\n# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.\nfor i, col in enumerate(COLUMNS[:-1]):\n if col in CATEGORICAL_COLUMNS:\n # Create a scores array to get the individual categorical column.\n # Example:\n # data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',\n # 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']\n # scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\n #\n # Returns: [['State-gov']]\n # Build the scores array.\n scores = [0] * len(COLUMNS[:-1])\n # This column is the categorical column we want to extract.\n scores[i] = 1\n skb = SelectKBest(k=1)\n skb.scores_ = scores\n # Convert the categorical column to a numerical value\n lbn = LabelBinarizer()\n r = skb.transform(train_features)\n lbn.fit(r)\n # Create the pipeline to extract the categorical feature\n categorical_pipelines.append(\n ('categorical-{}'.format(i), Pipeline([\n ('SKB-{}'.format(i), skb),\n ('LBN-{}'.format(i), lbn)])))\n \n# Create pipeline to extract the numerical features\nskb = SelectKBest(k=6)\n# From COLUMNS use the features that are numerical\nskb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]\ncategorical_pipelines.append(('numerical', skb))\n\n# Combine all the features using FeatureUnion\npreprocess = FeatureUnion(categorical_pipelines)\n\n# Create the classifier\nclassifier = RandomForestClassifier()\n\n# Transform the features and fit them to the classifier\nclassifier.fit(preprocess.transform(train_features), train_labels)\n\n# Create the overall model as a single pipeline\npipeline = Pipeline([\n ('union', preprocess),\n ('classifier', classifier)\n])\n\n# Split path into bucket and subdirectory\nbucket = args.model_dir.split('/')[2]\nsubdir = args.model_dir.split('/')[-1]\n\n# Write model to a local file\njoblib.dump(pipeline, 'model.joblib')\n\n# Upload the model to GCS\nbucket = storage.Client().bucket(bucket)\nblob = bucket.blob(subdir + '/model.joblib')\nblob.upload_from_filename('model.joblib')\n", "Store training script on your Cloud Storage bucket", "! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! 
gsutil cp custom.tar.gz gs://$BUCKET_NAME/census.tar.gz", "Train a model\nprojects.jobs.create\nRequest", "JOB_NAME = \"custom_job_SKL\" + TIMESTAMP\n\ntraining_input = {\n \"scaleTier\": \"BASIC\",\n \"packageUris\": [\"gs://\" + BUCKET_NAME + \"/census.tar.gz\"],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\"--model-dir=\" + \"gs://{}/{}\".format(BUCKET_NAME, JOB_NAME)],\n \"region\": REGION,\n \"runtimeVersion\": \"2.4\",\n \"pythonVersion\": \"3.7\",\n}\n\nbody = {\"jobId\": JOB_NAME, \"trainingInput\": training_input}\n\nrequest = client.projects().jobs().create(parent=\"projects/\" + PROJECT_ID)\nrequest.body = body\n\nprint(json.dumps(json.loads(request.to_json()), indent=2))\n\nrequest = client.projects().jobs().create(parent=\"projects/\" + PROJECT_ID, body=body)", "Example output:\n{\n \"uri\": \"https://ml.googleapis.com/v1/projects/migration-ucaip-training/jobs?alt=json\",\n \"method\": \"POST\",\n \"body\": {\n \"jobId\": \"custom_job_SKL20210302140139\",\n \"trainingInput\": {\n \"scaleTier\": \"BASIC\",\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210302140139/census.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL20210302140139\"\n ],\n \"region\": \"us-central1\",\n \"runtimeVersion\": \"2.4\",\n \"pythonVersion\": \"3.7\"\n }\n },\n \"headers\": {\n \"accept\": \"application/json\",\n \"accept-encoding\": \"gzip, deflate\",\n \"user-agent\": \"(gzip)\",\n \"x-goog-api-client\": \"gdcl/1.12.8 gl-python/3.7.8\"\n },\n \"methodId\": \"ml.projects.jobs.create\",\n \"resumable\": null,\n \"response_callbacks\": [],\n \"_in_error_state\": false,\n \"body_size\": 0,\n \"resumable_uri\": null,\n \"resumable_progress\": 0\n}\nCall", "result = request.execute()", "Response", "print(json.dumps(result, indent=2))", "Example output:\n{\n \"jobId\": \"custom_job_SKL20210302140139\",\n \"trainingInput\": {\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210302140139/census.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL20210302140139\"\n ],\n \"region\": \"us-central1\",\n \"runtimeVersion\": \"2.4\",\n \"pythonVersion\": \"3.7\"\n },\n \"createTime\": \"2021-03-02T14:09:33Z\",\n \"state\": \"QUEUED\",\n \"trainingOutput\": {},\n \"etag\": \"YQ/mo0C8EUg=\"\n}", "# The short numeric ID for the custom training job\ncustom_training_short_id = result[\"jobId\"]\n# The full unique ID for the custom training job\ncustom_training_id = \"projects/\" + PROJECT_ID + \"/jobs/\" + result[\"jobId\"]\n\nprint(custom_training_id)", "projects.jobs.get\nCall", "request = client.projects().jobs().get(name=custom_training_id)\n\nresult = request.execute()", "Response", "print(json.dumps(result, indent=2))", "Example output:\n{\n \"jobId\": \"custom_job_SKL20210302140139\",\n \"trainingInput\": {\n \"packageUris\": [\n \"gs://migration-ucaip-trainingaip-20210302140139/census.tar.gz\"\n ],\n \"pythonModule\": \"trainer.task\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL20210302140139\"\n ],\n \"region\": \"us-central1\",\n \"runtimeVersion\": \"2.4\",\n \"pythonVersion\": \"3.7\"\n },\n \"createTime\": \"2021-03-02T14:09:33Z\",\n \"state\": \"PREPARING\",\n \"trainingOutput\": {},\n \"etag\": \"/X2Bt4OWbWU=\"\n}", "while True:\n response = client.projects().jobs().get(name=custom_training_id).execute()\n\n if response[\"state\"] != 
\"SUCCEEDED\":\n print(\"Training job has not completed:\", response[\"state\"])\n if response[\"state\"] == \"FAILED\":\n break\n else:\n break\n time.sleep(60)\n\n# model artifact output directory on Google Cloud Storage\nmodel_artifact_dir = response[\"trainingInput\"][\"args\"][0].split(\"=\")[-1]\nprint(\"artifact location \" + model_artifact_dir)", "Deploy the model\nprojects.models.create\nRequest", "body = {\"name\": \"custom_job_SKL\" + TIMESTAMP}\n\nrequest = client.projects().models().create(parent=\"projects/\" + PROJECT_ID)\nrequest.body = json.loads(json.dumps(body, indent=2))\n\nprint(json.dumps(json.loads(request.to_json()), indent=2))\n\nrequest = client.projects().models().create(parent=\"projects/\" + PROJECT_ID, body=body)", "Example output:\n{\n \"uri\": \"https://ml.googleapis.com/v1/projects/migration-ucaip-training/models?alt=json\",\n \"method\": \"POST\",\n \"body\": {\n \"name\": \"custom_job_SKL20210302140139\"\n },\n \"headers\": {\n \"accept\": \"application/json\",\n \"accept-encoding\": \"gzip, deflate\",\n \"user-agent\": \"(gzip)\",\n \"x-goog-api-client\": \"gdcl/1.12.8 gl-python/3.7.8\"\n },\n \"methodId\": \"ml.projects.models.create\",\n \"resumable\": null,\n \"response_callbacks\": [],\n \"_in_error_state\": false,\n \"body_size\": 0,\n \"resumable_uri\": null,\n \"resumable_progress\": 0\n}\nCall", "result = request.execute()", "Response", "print(json.dumps(result, indent=2))", "Example output:\n{\n \"name\": \"projects/migration-ucaip-training/models/custom_job_SKL20210302140139\",\n \"regions\": [\n \"us-central1\"\n ],\n \"etag\": \"Lmd8u9MSSIA=\"\n}", "model_id = result[\"name\"]", "projects.models.versions.create\nRequest", "version = {\n \"name\": \"custom_job_SKL\" + TIMESTAMP,\n \"deploymentUri\": model_artifact_dir,\n \"runtimeVersion\": \"2.1\",\n \"framework\": \"SCIKIT_LEARN\",\n \"pythonVersion\": \"3.7\",\n \"machineType\": \"mls1-c1-m2\",\n}\n\nrequest = client.projects().models().versions().create(parent=model_id)\nrequest.body = version\n\nprint(json.dumps(json.loads(request.to_json()), indent=2))\n\nrequest = client.projects().models().versions().create(parent=model_id, body=version)", "Example output:\n{\n \"uri\": \"https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_SKL20210302140139/versions?alt=json\",\n \"method\": \"POST\",\n \"body\": {\n \"name\": \"custom_job_SKL20210302140139\",\n \"deploymentUri\": \"gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL20210302140139\",\n \"runtimeVersion\": \"2.1\",\n \"framework\": \"SCIKIT_LEARN\",\n \"pythonVersion\": \"3.7\",\n \"machineType\": \"mls1-c1-m2\"\n },\n \"headers\": {\n \"accept\": \"application/json\",\n \"accept-encoding\": \"gzip, deflate\",\n \"user-agent\": \"(gzip)\",\n \"x-goog-api-client\": \"gdcl/1.12.8 gl-python/3.7.8\"\n },\n \"methodId\": \"ml.projects.models.versions.create\",\n \"resumable\": null,\n \"response_callbacks\": [],\n \"_in_error_state\": false,\n \"body_size\": 0,\n \"resumable_uri\": null,\n \"resumable_progress\": 0\n}\nCall", "result = request.execute()", "Response", "print(json.dumps(result, indent=2))", "Example output:\n{\n \"name\": \"projects/migration-ucaip-training/operations/create_custom_job_SKL20210302140139_custom_job_SKL20210302140139-1614695138432\",\n \"metadata\": {\n \"@type\": \"type.googleapis.com/google.cloud.ml.v1.OperationMetadata\",\n \"createTime\": \"2021-03-02T14:25:38Z\",\n \"operationType\": \"CREATE_VERSION\",\n \"modelName\": 
\"projects/migration-ucaip-training/models/custom_job_SKL20210302140139\",\n \"version\": {\n \"name\": \"projects/migration-ucaip-training/models/custom_job_SKL20210302140139/versions/custom_job_SKL20210302140139\",\n \"deploymentUri\": \"gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL20210302140139\",\n \"createTime\": \"2021-03-02T14:25:38Z\",\n \"runtimeVersion\": \"2.1\",\n \"etag\": \"ilPQVTiR+IM=\",\n \"framework\": \"SCIKIT_LEARN\",\n \"machineType\": \"mls1-c1-m2\",\n \"pythonVersion\": \"3.7\"\n }\n }\n}", "# The full unique ID for the model version\nmodel_version_name = result[\"metadata\"][\"version\"][\"name\"]\n\nprint(model_version_name)\n\nwhile True:\n response = (\n client.projects().models().versions().get(name=model_version_name).execute()\n )\n if response[\"state\"] == \"READY\":\n print(\"Model version created.\")\n break\n time.sleep(60)", "Make batch predictions\nBatch prediction only supports Tensorflow. FRAMEWORK_SCIKIT_LEARN is not currently available.\nMake online predictions\nPrepare data item for online prediction", "INSTANCES = [\n [\n 25,\n \"Private\",\n 226802,\n \"11th\",\n 7,\n \"Never-married\",\n \"Machine-op-inspct\",\n \"Own-child\",\n \"Black\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\",\n ],\n [\n 38,\n \"Private\",\n 89814,\n \"HS-grad\",\n 9,\n \"Married-civ-spouse\",\n \"Farming-fishing\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 50,\n \"United-States\",\n ],\n [\n 28,\n \"Local-gov\",\n 336951,\n \"Assoc-acdm\",\n 12,\n \"Married-civ-spouse\",\n \"Protective-serv\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\",\n ],\n [\n 44,\n \"Private\",\n 160323,\n \"Some-college\",\n 10,\n \"Married-civ-spouse\",\n \"Machine-op-inspct\",\n \"Husband\",\n \"Black\",\n \"Male\",\n 7688,\n 0,\n 40,\n \"United-States\",\n ],\n [\n 18,\n \"?\",\n 103497,\n \"Some-college\",\n 10,\n \"Never-married\",\n \"?\",\n \"Own-child\",\n \"White\",\n \"Female\",\n 0,\n 0,\n 30,\n \"United-States\",\n ],\n [\n 34,\n \"Private\",\n 198693,\n \"10th\",\n 6,\n \"Never-married\",\n \"Other-service\",\n \"Not-in-family\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 30,\n \"United-States\",\n ],\n [\n 29,\n \"?\",\n 227026,\n \"HS-grad\",\n 9,\n \"Never-married\",\n \"?\",\n \"Unmarried\",\n \"Black\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\",\n ],\n [\n 63,\n \"Self-emp-not-inc\",\n 104626,\n \"Prof-school\",\n 15,\n \"Married-civ-spouse\",\n \"Prof-specialty\",\n \"Husband\",\n \"White\",\n \"Male\",\n 3103,\n 0,\n 32,\n \"United-States\",\n ],\n [\n 24,\n \"Private\",\n 369667,\n \"Some-college\",\n 10,\n \"Never-married\",\n \"Other-service\",\n \"Unmarried\",\n \"White\",\n \"Female\",\n 0,\n 0,\n 40,\n \"United-States\",\n ],\n [\n 55,\n \"Private\",\n 104996,\n \"7th-8th\",\n 4,\n \"Married-civ-spouse\",\n \"Craft-repair\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 10,\n \"United-States\",\n ],\n]", "projects.predict\nRequest", "request = client.projects().predict(name=model_version_name)\nrequest.body = json.loads(json.dumps({\"instances\": INSTANCES}, indent=2))\n\nprint(json.dumps(json.loads(request.to_json()), indent=2))\n\nrequest = client.projects().predict(\n name=model_version_name, body={\"instances\": INSTANCES}\n)", "Example output:\n{\n \"uri\": \"https://ml.googleapis.com/v1/projects/migration-ucaip-training/models/custom_job_SKL20210302140139/versions/custom_job_SKL20210302140139:predict?alt=json\",\n \"method\": \"POST\",\n \"body\": {\n \"instances\": [\n [\n 25,\n \"Private\",\n 
226802,\n \"11th\",\n 7,\n \"Never-married\",\n \"Machine-op-inspct\",\n \"Own-child\",\n \"Black\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\"\n ],\n [\n 38,\n \"Private\",\n 89814,\n \"HS-grad\",\n 9,\n \"Married-civ-spouse\",\n \"Farming-fishing\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 50,\n \"United-States\"\n ],\n [\n 28,\n \"Local-gov\",\n 336951,\n \"Assoc-acdm\",\n 12,\n \"Married-civ-spouse\",\n \"Protective-serv\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\"\n ],\n [\n 44,\n \"Private\",\n 160323,\n \"Some-college\",\n 10,\n \"Married-civ-spouse\",\n \"Machine-op-inspct\",\n \"Husband\",\n \"Black\",\n \"Male\",\n 7688,\n 0,\n 40,\n \"United-States\"\n ],\n [\n 18,\n \"?\",\n 103497,\n \"Some-college\",\n 10,\n \"Never-married\",\n \"?\",\n \"Own-child\",\n \"White\",\n \"Female\",\n 0,\n 0,\n 30,\n \"United-States\"\n ],\n [\n 34,\n \"Private\",\n 198693,\n \"10th\",\n 6,\n \"Never-married\",\n \"Other-service\",\n \"Not-in-family\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 30,\n \"United-States\"\n ],\n [\n 29,\n \"?\",\n 227026,\n \"HS-grad\",\n 9,\n \"Never-married\",\n \"?\",\n \"Unmarried\",\n \"Black\",\n \"Male\",\n 0,\n 0,\n 40,\n \"United-States\"\n ],\n [\n 63,\n \"Self-emp-not-inc\",\n 104626,\n \"Prof-school\",\n 15,\n \"Married-civ-spouse\",\n \"Prof-specialty\",\n \"Husband\",\n \"White\",\n \"Male\",\n 3103,\n 0,\n 32,\n \"United-States\"\n ],\n [\n 24,\n \"Private\",\n 369667,\n \"Some-college\",\n 10,\n \"Never-married\",\n \"Other-service\",\n \"Unmarried\",\n \"White\",\n \"Female\",\n 0,\n 0,\n 40,\n \"United-States\"\n ],\n [\n 55,\n \"Private\",\n 104996,\n \"7th-8th\",\n 4,\n \"Married-civ-spouse\",\n \"Craft-repair\",\n \"Husband\",\n \"White\",\n \"Male\",\n 0,\n 0,\n 10,\n \"United-States\"\n ]\n ]\n },\n \"headers\": {\n \"accept\": \"application/json\",\n \"accept-encoding\": \"gzip, deflate\",\n \"user-agent\": \"(gzip)\",\n \"x-goog-api-client\": \"gdcl/1.12.8 gl-python/3.7.8\"\n },\n \"methodId\": \"ml.projects.predict\",\n \"resumable\": null,\n \"response_callbacks\": [],\n \"_in_error_state\": false,\n \"body_size\": 0,\n \"resumable_uri\": null,\n \"resumable_progress\": 0\n}\nCall", "result = request.execute()", "Response", "print(json.dumps(result, indent=2))", "Example output:\n{\n \"predictions\": [\n false,\n false,\n false,\n false,\n false,\n false,\n false,\n false,\n false,\n false\n ]\n}\nprojects.models.versions.delete\nRequest", "request = client.projects().models().versions().delete(name=model_version_name)", "Call", "response = request.execute()", "Response", "print(json.dumps(response, indent=2))", "Example output:\n{\n \"name\": \"projects/migration-ucaip-training/operations/delete_custom_job_SKL20210302140139_custom_job_SKL20210302140139-1614695211809\",\n \"metadata\": {\n \"@type\": \"type.googleapis.com/google.cloud.ml.v1.OperationMetadata\",\n \"createTime\": \"2021-03-02T14:26:51Z\",\n \"operationType\": \"DELETE_VERSION\",\n \"modelName\": \"projects/migration-ucaip-training/models/custom_job_SKL20210302140139\",\n \"version\": {\n \"name\": \"projects/migration-ucaip-training/models/custom_job_SKL20210302140139/versions/custom_job_SKL20210302140139\",\n \"deploymentUri\": \"gs://migration-ucaip-trainingaip-20210302140139/custom_job_SKL20210302140139\",\n \"createTime\": \"2021-03-02T14:25:38Z\",\n \"runtimeVersion\": \"2.1\",\n \"state\": \"READY\",\n \"etag\": \"5R4YqeqWMk8=\",\n \"framework\": \"SCIKIT_LEARN\",\n \"machineType\": \"mls1-c1-m2\",\n \"pythonVersion\": 
\"3.7\"\n }\n }\n}\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.", "delete_model = True\ndelete_bucket = True\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model:\n client.projects().models().delete(name=model_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r gs://$BUCKET_NAME" ]
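"As an optional sanity check that is not part of the original notebook, the exported model.joblib can be pulled back from Cloud Storage and exercised locally; this sketch assumes BUCKET_NAME, JOB_NAME and INSTANCES are still defined from the cells above and that the cleanup cell has not yet deleted the bucket.", "# Optional local check of the exported artifact -- run before the cleanup above removes the bucket\nimport joblib\nfrom google.cloud import storage\n\n# task.py uploaded the fitted pipeline to gs://BUCKET_NAME/JOB_NAME/model.joblib\nbucket = storage.Client().bucket(BUCKET_NAME)\nbucket.blob(JOB_NAME + '/model.joblib').download_to_filename('model.joblib')\n\npipeline = joblib.load('model.joblib')\n# Predict on one of the online-prediction instances; True corresponds to income >50K\nprint(pipeline.predict([INSTANCES[0]]))"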
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/ipsl/cmip6/models/sandbox-1/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: IPSL\nSource ID: SANDBOX-1\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
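The ES-DOC record above is a fill-in template: each cell selects a property with DOC.set_id() and records an answer with DOC.set_value(), picking from the listed valid choices when the property is an ENUM or BOOLEAN. Below is a minimal sketch of how a few of the single-valued snow and vegetation cells might be completed. It assumes the DOC document object initialised at the top of that notebook, and the chosen answers (three snow layers, "prognostic" density, dynamic vegetation switched on) are purely illustrative rather than taken from any real land-surface model.

```python
# Illustrative answers only; the ENUM strings come from the template's
# "Valid Choices" lists, but which ones apply to a given model is an
# assumption made for this sketch. DOC is the es-doc document object
# created earlier in that notebook.
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
DOC.set_value(3)             # INTEGER, cardinality 1.1

DOC.set_id('cmip6.land.snow.density')
DOC.set_value("prognostic")  # ENUM, one of the listed valid choices

DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
DOC.set_value(True)          # BOOLEAN
```

Multi-valued (cardinality 1.N) properties such as cmip6.land.snow.processes follow the same set_id/set_value pattern in the template; how several choices are supplied at once is not shown above, so they are left out of this sketch.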
OxES/k2sc
notebooks/lightkurve.ipynb
gpl-3.0
[ "K2SC Lightkurve Interface\nLet's go through how you would use k2sc with its lightkurve interface to detrend the light curve of WASP-55. First, we import some things!", "import numpy as np\nimport matplotlib \nimport matplotlib as mpl\n\nimport lightkurve as lk\n\nimport k2sc\nfrom k2sc.standalone import k2sc_lc\n\nfrom astropy.io import fits\n\n%pylab inline --no-import-all\nmatplotlib.rcParams['image.origin'] = 'lower'\nmatplotlib.rcParams['figure.figsize']=(10.0,10.0) #(6.0,4.0)\nmatplotlib.rcParams['font.size']=16 #10 \nmatplotlib.rcParams['savefig.dpi']= 300 #72 \ncolours = mpl.rcParams['axes.prop_cycle'].by_key()['color']\n\nimport warnings\nwarnings.filterwarnings('ignore')\n\nprint(lk.__version__)\nprint(k2sc.__version__)", "Reading in data.\nLet's search MAST for the long-cadence light curve file of WASP-55 using the lightkurve API, and do some very basic filtering for data quality.", "lc = lk.search_lightcurve('EPIC 212300977')[1].download()\nlc = lc.remove_nans()\nlc = lc[lc.quality==0]", "Let's now try K2SC!\nAs a quick hack for now, let's just clobber the lightkurve object class to our k2sc standalone.", "lc.__class__ = k2sc_lc", "Now we run with default values!\nThe tqdm progress bar will show a percentage of the maximum iterations of the differential evolution optimizer, but it will usually finish early.", "lc.k2sc()", "Now we plot! See how the k2sc lightcurve has such better quality than the uncorrected data.\nCareful with astropy units - flux and time are dimensionful quantities in lightkurve 2.0, so we have to use .value to render them as numbers.", "fig = plt.figure(figsize=(12.0,8.0))\nplt.plot(lc.time.value,lc.flux.value,'.',label=\"Uncorrected\")\ndetrended = lc.corr_flux-lc.tr_time + np.nanmedian(lc.tr_time)\nplt.plot(lc.time.value,detrended.value,'.',label=\"K2SC\")\nplt.legend()\nplt.xlabel('BJD')\nplt.ylabel('Flux')\nplt.title('WASP-55',y=1.01)", "Now we save the data.", "extras = {'CORR_FLUX':lc.corr_flux.value,\n 'TR_TIME':lc.tr_time.value,\n 'TR_POSITION':lc.tr_position.value}\nout = lc.to_fits(extra_data=extras,path='test.fits',overwrite=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
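The k2sc notebook above finishes by writing the detrended products (CORR_FLUX, TR_TIME, TR_POSITION) into test.fits with lc.to_fits(extra_data=...). The sketch below shows how those columns could be read back and the detrended series rebuilt with plain astropy. It assumes, which the notebook does not state, that to_fits() places the extra columns next to TIME in the first table extension of the file.

```python
from astropy.io import fits
import numpy as np

# Read back the detrending products written by lc.to_fits() above.
# Column names match the keys of the `extras` dict; the extension index (1)
# is an assumption about where lightkurve stores the extra columns.
with fits.open('test.fits') as hdul:
    tab = hdul[1].data
    time = tab['TIME']
    corr_flux = tab['CORR_FLUX']
    tr_time = tab['TR_TIME']

# Same detrending arithmetic as in the plotting cell of the notebook.
detrended = corr_flux - tr_time + np.nanmedian(tr_time)
```

If the columns turn out to live in a different extension, hdul.info() lists what each HDU of the file contains.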
excelsimon/AI
DeepLearning/3-ConvolutionalNeuralNetwork/2-DeepCNN_CaseStudy/KerasTutorial/Keras+-+Tutorial+-+Happy+House+v1.ipynb
mit
[ "Keras tutorial - the Happy House\nWelcome to the first assignment of week 2. In this assignment, you will:\n1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. \n2. See how you can in a couple of hours build a deep learning algorithm.\nWhy are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models. \nIn this exercise, you'll work on the \"Happy House\" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!", "import numpy as np\nfrom keras import layers\nfrom keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D\nfrom keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D\nfrom keras.models import Model\nfrom keras.preprocessing import image\nfrom keras.utils import layer_utils\nfrom keras.utils.data_utils import get_file\nfrom keras.applications.imagenet_utils import preprocess_input\nimport pydot\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.utils import plot_model\nfrom kt_utils import *\n\nimport keras.backend as K\nK.set_image_data_format('channels_last')\nimport matplotlib.pyplot as plt\nfrom matplotlib.pyplot import imshow\n\n%matplotlib inline", "Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...).\n1 - The Happy House\nFor your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.\n<img src=\"images/happy-house.jpg\" style=\"width:350px;height:270px;\">\n<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : the Happy House</center></caption>\nAs a deep learning expert, to make sure the \"Happy\" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy. \nYou have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled. 
\n<img src=\"images/house-members.png\" style=\"width:550px;height:250px;\">\nRun the following code to normalize the dataset and learn about its shapes.", "X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()\n\n# Normalize image vectors\nX_train = X_train_orig/255.\nX_test = X_test_orig/255.\n\n# Reshape\nY_train = Y_train_orig.T\nY_test = Y_test_orig.T\n\nprint (\"number of training examples = \" + str(X_train.shape[0]))\nprint (\"number of test examples = \" + str(X_test.shape[0]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))", "Details of the \"Happy\" dataset:\n- Images are of shape (64,64,3)\n- Training: 600 pictures\n- Test: 150 pictures\nIt is now time to solve the \"Happy\" Challenge.\n2 - Building a model in Keras\nKeras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.\nHere is an example of a model in Keras:\n```python\ndef model(input_shape):\n # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!\n X_input = Input(input_shape)\n# Zero-Padding: pads the border of X_input with zeroes\nX = ZeroPadding2D((3, 3))(X_input)\n\n# CONV -&gt; BN -&gt; RELU Block applied to X\nX = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)\nX = BatchNormalization(axis = 3, name = 'bn0')(X)\nX = Activation('relu')(X)\n\n# MAXPOOL\nX = MaxPooling2D((2, 2), name='max_pool')(X)\n\n# FLATTEN X (means convert it to a vector) + FULLYCONNECTED\nX = Flatten()(X)\nX = Dense(1, activation='sigmoid', name='fc')(X)\n\n# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.\nmodel = Model(inputs = X_input, outputs = X, name='HappyModel')\n\nreturn model\n\n```\nNote that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the commputation into the same variable X. The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above). \nExercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout(). \nNote: You have to be careful with your data's shapes. 
Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.", "# GRADED FUNCTION: HappyModel\n\ndef HappyModel(input_shape):\n \"\"\"\n Implementation of the HappyModel.\n \n Arguments:\n input_shape -- shape of the images of the dataset\n\n Returns:\n model -- a Model() instance in Keras\n \"\"\"\n \n ### START CODE HERE ###\n # Feel free to use the suggested outline in the text above to get started, and run through the whole\n # exercise (including the later portions of this notebook) once. The come back also try out other\n # network architectures as well. \n \n \n ### END CODE HERE ###\n \n return model", "You have now built a function to describe your model. To train and test this model, there are four steps in Keras:\n1. Create the model by calling the function above\n2. Compile the model by calling model.compile(optimizer = \"...\", loss = \"...\", metrics = [\"accuracy\"])\n3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)\n4. Test the model on test data by calling model.evaluate(x = ..., y = ...)\nIf you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation.\nExercise: Implement step 1, i.e. create the model.", "### START CODE HERE ### (1 line)\nhappyModel = None\n### END CODE HERE ###", "Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem.", "### START CODE HERE ### (1 line)\nNone\n### END CODE HERE ###", "Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.", "### START CODE HERE ### (1 line)\nNone\n### END CODE HERE ###", "Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them.\nExercise: Implement step 4, i.e. test/evaluate the model.", "### START CODE HERE ### (1 line)\npreds = None\n### END CODE HERE ###\nprint()\nprint (\"Loss = \" + str(preds[0]))\nprint (\"Test Accuracy = \" + str(preds[1]))", "If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets. To pass this assignment, you have to get at least 75% accuracy. \nTo give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and \"adam\" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare. \nIf you have not yet achieved 75% accuracy, here're some things you can play around with to try to achieve it:\n\nTry using blocks of CONV->BATCHNORM->RELU such as:\npython\nX = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)\nX = BatchNormalization(axis = 3, name = 'bn0')(X)\nX = Activation('relu')(X)\nuntil your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.\nYou can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.\nChange your optimizer. We find Adam works well. 
\nIf the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)\nRun on more epochs, until you see the train accuracy plateauing. \n\nEven if you have achieved 75% accuracy, please feel free to keep playing with your model to try to get even better results. \nNote: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.\n3 - Conclusion\nCongratulations, you have solved the Happy House challenge! \nNow, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here. \n<font color='blue'>\nWhat we would like you to remember from this assignment:\n- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras? \n- Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.\n4 - Test with your own image (Optional)\nCongratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the following code\n 4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!\nThe training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!", "### START CODE HERE ###\nimg_path = 'images/my_image.jpg'\n### END CODE HERE ###\nimg = image.load_img(img_path, target_size=(64, 64))\nimshow(img)\n\nx = image.img_to_array(img)\nx = np.expand_dims(x, axis=0)\nx = preprocess_input(x)\n\nprint(happyModel.predict(x))", "5 - Other useful functions in Keras (Optional)\nTwo other basic features of Keras that you'll find useful are:\n- model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs\n- plot_model(): plots your graph in a nice layout. You can even save it as \".png\" using SVG() if you'd like to share it on social media ;). It is saved in \"File\" then \"Open...\" in the upper bar of the notebook.\nRun the following code.", "happyModel.summary()\n\nplot_model(happyModel, to_file='HappyModel.png')\nSVG(model_to_dot(happyModel).create(prog='dot', format='svg'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
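In the Keras notebook above, HappyModel() and the compile/fit/evaluate steps are left blank as graded exercises. The sketch below is one possible completion, not the assignment's official solution. It reuses the CONV -> BATCHNORM -> RELU -> MAXPOOL architecture from the notebook's own example model and the hyperparameters its text mentions (the "adam" optimizer, binary cross-entropy for a binary classification task, roughly 40 epochs, batch size 16), and it assumes the imports and the X_train/Y_train/X_test/Y_test arrays defined earlier in that notebook.

```python
from keras.layers import (Input, ZeroPadding2D, Conv2D, BatchNormalization,
                          Activation, MaxPooling2D, Flatten, Dense)
from keras.models import Model

def HappyModel(input_shape):
    """One possible HappyModel, mirroring the example architecture in the notebook."""
    X_input = Input(input_shape)
    X = ZeroPadding2D((3, 3))(X_input)
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)
    return Model(inputs=X_input, outputs=X, name='HappyModel')

# Steps 1-4 from the notebook: create, compile, train, evaluate.
happyModel = HappyModel(X_train.shape[1:])          # (64, 64, 3) images
happyModel.compile(optimizer='adam',
                   loss='binary_crossentropy',
                   metrics=['accuracy'])
happyModel.fit(x=X_train, y=Y_train, epochs=40, batch_size=16)
preds = happyModel.evaluate(x=X_test, y=Y_test)     # [loss, accuracy]
```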
bspalding/research_public
research/bayesian_risk_perf_v3.ipynb
apache-2.0
[ "<center><h1>Probabilistic Programming in Quantitative Finance</h1><br>\n<h3>Thomas Wiecki</h3>\n<br>\n<h3>@twiecki</h3>\n<br>\n<img width=40% src=\"http://i2.wp.com/stuffled.com/wp-content/uploads/2014/09/Quantopian-Logo-EPS-vector-image.png?resize=1020%2C680\">\n</center>\nAbout me\n\nLead Data Scientist at Quantopian Inc: Building a crowd sourced hedge fund.\nPrevious: PhD from Brown University -- research on computational neuroscience and machine learning using Bayesian modeling.", "%pyplot inline\n\nfigsize(12, 12)\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport itertools\nimport scipy as sp\nimport pymc3 as pm\nimport theano.tensor as T\n\nfrom scipy import stats\nimport scipy\n\ndata_0 = pd.read_csv('data0.csv', index_col=0, parse_dates=True, header=None)[1]\ndata_1 = pd.read_csv('data1.csv', index_col=0, parse_dates=True, header=None)[1]\n\ndef var_cov_var_t(P, c, nu=1, mu=0, sigma=1, **kwargs):\n \"\"\"\n Variance-Covariance calculation of daily Value-at-Risk\n using confidence level c, with mean of returns mu\n and standard deviation of returns sigma, on a portfolio\n of value P.\n \"\"\"\n alpha = stats.t.ppf(1-c, nu, mu, sigma)\n return P - P*(alpha + 1)\n\ndef var_cov_var_normal(P, c, mu=0, sigma=1, **kwargs):\n \"\"\"\n Variance-Covariance calculation of daily Value-at-Risk\n using confidence level c, with mean of returns mu\n and standard deviation of returns sigma, on a portfolio\n of value P.\n \"\"\"\n alpha = stats.norm.ppf(1-c, mu, sigma)\n return P - P*(alpha + 1)\n\ndef sample_normal(mu=0, sigma=1, **kwargs):\n samples = stats.norm.rvs(mu, sigma, kwargs.get('size', 100))\n return samples\n\ndef sample_t(nu=1, mu=0, sigma=1, **kwargs):\n samples = stats.t.rvs(nu, mu, sigma, kwargs.get('size', 100))\n return samples\n\ndef eval_normal(mu=0, sigma=1, **kwargs):\n pdf = stats.norm(mu, sigma).pdf(kwargs.get('x', np.linspace(-0.05, 0.05, 500)))\n return pdf\n\ndef eval_t(nu=1, mu=0, sigma=1, **kwargs):\n samples = stats.t(nu, mu, sigma).pdf(kwargs.get('x', np.linspace(-0.05, 0.05, 500)))\n return samples\n\ndef logp_normal(mu=0, sigma=1, **kwargs):\n logp = np.sum(stats.norm(mu, sigma).logpdf(kwargs['data']))\n return logp\n\ndef logp_t(nu=1, mu=0, sigma=1, **kwargs):\n logp = np.sum(stats.t(nu, mu, sigma).logpdf(kwargs['data']))\n return logp\n\n# generate posterior predictive\ndef post_pred(func, trace, *args, **kwargs):\n samples = kwargs.pop('samples', 50)\n ppc = []\n for i, idx in enumerate(np.linspace(0, len(trace), samples)):\n t = trace[int(i)]\n try:\n kwargs['nu'] = t['nu_minus_one']+1\n except KeyError:\n pass\n mu = t['mean returns']\n sigma = t['volatility']\n ppc.append(func(*args, mu=mu, sigma=sigma, **kwargs))\n\n return ppc\n\ndef plot_strats(sharpe=False):\n figsize(12, 6)\n f, (ax1, ax2) = plt.subplots(1, 2)\n if sharpe:\n label = 'etrade\\nn=%i\\nSharpe=%.2f' % (len(data_0), (data_0.mean() / data_0.std() * np.sqrt(252)))\n else:\n label = 'etrade\\nn=%i\\n' % (len(data_0))\n sns.distplot(data_0, kde=False, ax=ax1, label=label, color='b')\n ax1.set_xlabel('daily returns'); ax1.legend(loc=0)\n if sharpe:\n label = 'IB\\nn=%i\\nSharpe=%.2f' % (len(data_1), (data_1.mean() / data_1.std() * np.sqrt(252)))\n else:\n label = 'IB\\nn=%i\\n' % (len(data_1))\n sns.distplot(data_1, kde=False, ax=ax2, label=label, color='g')\n ax2.set_xlabel('daily returns'); ax2.legend(loc=0);\n\ndef model_returns_normal(data):\n with pm.Model() as model:\n mu = pm.Normal('mean returns', mu=0, sd=.01, testval=data.mean())\n 
sigma, log_sigma = model.TransformedVar('volatility', \n pm.HalfCauchy.dist(beta=1, testval=data.std()), \n pm.transforms.logtransform)\n #sigma = pm.HalfCauchy('volatility', beta=.1, testval=data.std())\n returns = pm.Normal('returns', mu=mu, sd=sigma, observed=data)\n ann_vol = pm.Deterministic('annual volatility', returns.distribution.variance**.5 * np.sqrt(252))\n sharpe = pm.Deterministic('sharpe', \n returns.distribution.mean / returns.distribution.variance**.5 * np.sqrt(252))\n start = pm.find_MAP(fmin=scipy.optimize.fmin_powell)\n step = pm.NUTS(scaling=start)\n trace_normal = pm.sample(5000, step, start=start)\n return trace_normal\n\ndef model_returns_t(data):\n with pm.Model() as model:\n mu = pm.Normal('mean returns', mu=0, sd=.01, testval=data.mean())\n sigma, log_sigma = model.TransformedVar('volatility', \n pm.HalfCauchy.dist(beta=1, testval=data.std()), \n pm.transforms.logtransform)\n nu, log_nu = model.TransformedVar('nu_minus_one',\n pm.Exponential.dist(1./10., testval=3.),\n pm.transforms.logtransform)\n\n returns = pm.T('returns', nu=nu+2, mu=mu, sd=sigma, observed=data)\n ann_vol = pm.Deterministic('annual volatility', returns.distribution.variance**.5 * np.sqrt(252))\n sharpe = pm.Deterministic('sharpe', \n returns.distribution.mean / returns.distribution.variance**.5 * np.sqrt(252))\n\n start = pm.find_MAP(fmin=scipy.optimize.fmin_powell)\n step = pm.NUTS(scaling=start)\n trace = pm.sample(5000, step, start=start)\n\n return trace\n\ndef model_returns_t_stoch_vol(data):\n from pymc3.distributions.timeseries import GaussianRandomWalk\n\n with pm.Model() as model:\n mu = pm.Normal('mean returns', mu=0, sd=.01, testval=data.mean())\n step_size, log_step_size = model.TransformedVar('step size', \n pm.Exponential.dist(1./.02, testval=.06), \n pm.transforms.logtransform)\n \n vol = GaussianRandomWalk('volatility', step_size**-2, shape=len(data))\n \n nu, log_nu = model.TransformedVar('nu_minus_one',\n pm.Exponential.dist(1./10., testval=3.),\n pm.transforms.logtransform)\n\n returns = pm.T('returns', nu=nu+2, mu=mu, lam=pm.exp(-2*vol), observed=data)\n #ann_vol = pm.Deterministic('annual volatility', returns.distribution.variance**.5 * np.sqrt(252))\n #sharpe = pm.Deterministic('sharpe', \n # returns.distribution.mean / ann_vol)\n\n start = pm.find_MAP(vars=[vol], fmin=sp.optimize.fmin_l_bfgs_b)\n #start = pm.find_MAP(fmin=scipy.optimize.fmin_powell, start=start)\n step = pm.NUTS(scaling=start)\n trace = pm.sample(5000, step, start=start)\n\n return trace\n\nresults_normal = {0: model_returns_normal(data_0),\n 1: model_returns_normal(data_1)}\nresults_t = {0: model_returns_t(data_0),\n 1: model_returns_t(data_1)}\n\nfrom IPython.display import Image\nsns.set_context('talk', font_scale=1.5)\nsns.set_context(rc={'lines.markeredgewidth': 0.1})\nfrom matplotlib import rc\nrc('xtick', labelsize=14) \nrc('ytick', labelsize=14)\nfigsize(10, 6)", "Data Science Motivation\n<center><img src=\"model-inference1.svg\">\n<center><img src=\"model-inference2.svg\">\nWhat's wrong with statistics\n\nModels should not be built for mathematical convenience (e.g. 
normality assumption), but to most accurately model the data.\nPre-specified models, like frequentist statistics, make many assumptions that are all too easily violated.\n\n<center><img src=\"general_vs_insight_wo_pp.svg\">\n<center><img src=\"general_vs_insight.svg\">\n\"The purpose of computation is insight, not numbers.\" -- Richard Hamming\n<center><img src=\"http://upload.wikimedia.org/wikipedia/en/0/08/Richard_Hamming.jpg\">\nQuantitative Finance motivation\n<center>Types of risk</center>\n<center><img src=\"risk_first.svg\"></center>\n<center>Types of risk</center>\n<center><img src=\"risk_full.svg\"></center>\nProbabilistic Programming\n\nModel unknown causes of a phenomenon as random variables.\nWrite a programmatic story in code of how unknown causes result in observable data -> generative framework\nUse Bayes formula to invert generative model to infer unknown causes.\nInference is completely automatic (when things go well)!\n-> No math, great flexibility and freedom when building models.\nThe catch: At the cost of increased computational demands. \n\nMotivating the problem we face at Quantopian\nWeb based backtester for trading algorithms & live trading\n<center><img src='quantopian.png' width=85%>\nCrowd sourced hedge fund\n\n\"15 uncorrelated return streams are the holy grail of investing\"\n-> Search for diverse trading strategies.\nPerfect problem for crowd sourcing!\n\nOur problem today\n\nHow do we identify the best trading algorithms?\n\nMonthly trading competitions -- win, and we'll invest $100k, you keep all the profits\n<center><img src='contest.png'>\nLet's take just two strategies and try to pick the better one", "plot_strats()", "Sharpe Ratio\n$$\\text{Sharpe} = \\frac{\\text{mean returns}}{\\text{volatility}}$$", "print \"Sharpe ratio strategy etrade =\", data_0.mean() / data_0.std() * np.sqrt(252)\nprint \"Sharpe ratio strategy IB =\", data_1.mean() / data_1.std() * np.sqrt(252)\n\nplt.title('Sharpe ratio'); plt.xlabel('Sharpe ratio');\nplt.axvline(data_0.mean() / data_0.std() * np.sqrt(252), color='b');\nplt.axvline(data_1.mean() / data_1.std() * np.sqrt(252), color='g');\nplt.xlim((-2, 5))", "Detour ahead\nShort primer on random variables\n\nRepresents our beliefs about an unknown state.\nProbability distribution assigns a probability to each possible state.\nNot a single number (e.g.
most likely state).\n\nYou already know what a variable is...", "coin = 0 # 0 for tails\ncoin = 1 # 1 for heads", "A random variable assigns all possible values a certain probability", "#coin = {0: 50%,\n# 1: 50%}", "Alternatively:\ncoin ~ Bernoulli(p=0.5)\n\ncoin is a random variable\nBernoulli is a probability distribution\n~ reads as \"is distributed as\"\n\nThis was discrete (binary), what about the continuous case?\nreturns ~ Normal($\\mu$, $\\sigma^2$)", "from scipy import stats\nsns.distplot(data_0, kde=False, fit=stats.norm)\nplt.xlabel('returns')", "How to estimate $\\mu$ and $\\sigma$?\n\nNaive: point estimate\nSet mu = mean(data) and sigma = std(data)\nMaximum Likelihood Estimate\nCorrect answer as $n \\rightarrow \\infty$\n\nBayesian analysis\n\nMost of the time $n \\neq \\infty$...\nUncertainty about $\\mu$ and $\\sigma$\nTurn $\\mu$ and $\\sigma$ into random variables\nHow to estimate?\n\nBayes Formula!\n<img src=\"bayes_formula.svg\">\nUse prior knowledge and data to update our beliefs.", "figsize(7, 7)\nfrom IPython.html.widgets import interact, interactive\nfrom scipy import stats\ndef gen_plot(n=0, bayes=False):\n np.random.seed(3)\n x = np.random.randn(n)\n ax1 = plt.subplot(221)\n ax2 = plt.subplot(222)\n ax3 = plt.subplot(223)\n #fig, (ax1, ax2, ax3, _) = plt.subplots(2, 2)\n if n > 1:\n sns.distplot(x, kde=False, ax=ax3, rug=True, hist=False)\n \n def gen_post_mu(x, mu0=0, sigma0=5):\n mu = np.mean(x)\n sigma = 1\n n = len(x)\n \n if n == 0:\n post_mu = mu0\n post_sigma = sigma0\n else:\n post_mu = (mu0 / sigma0**2 + np.sum(x) / sigma0**2) / (1/sigma0**2 + n/sigma**2)\n post_sigma = (1 / sigma0**2 + n / sigma**2)**-1\n return stats.norm(post_mu, post_sigma**2)\n \n def gen_post_var(x, alpha0, beta0):\n mu = 0\n sigma = np.std(x)\n n = len(x)\n post_alpha = alpha0 + n / 2\n post_beta = beta0 + np.sum((x - mu)**2) / 2\n return stats.invgamma(post_alpha, post_beta)\n \n \n mu_lower = -.3\n mu_upper = .3\n \n sigma_lower = 0\n sigma_upper = 3\n ax2.set_xlim((2, n+1))#(sigma_lower, sigma_upper))\n ax1.axvline(0, lw=.5, color='k')\n #ax2.axvline(1, lw=.5, color='k')\n ax2.axhline(0, lw=0.5, color='k')\n if bayes:\n post_mu = gen_post_mu(x, 0, 5)\n #post_var = gen_post_var(x, 1, 1)\n if post_mu.ppf(.05) < mu_lower:\n mu_lower = post_mu.ppf(.01)\n if post_mu.ppf(.95) > mu_upper:\n mu_upper = post_mu.ppf(.99)\n x_mu = np.linspace(mu_lower, mu_upper, 500)\n ax1.plot(x_mu, post_mu.pdf(x_mu))\n #x_sigma = np.linspace(sigma_lower, sigma_upper, 100)\n l = []\n u = []\n for i in range(1, n):\n norm = gen_post_mu(x[:i])\n l.append(norm.ppf(.05))\n u.append(norm.ppf(.95))\n ax2.fill_between(np.arange(2, n+1), l, u, alpha=.3)\n ax1.set_ylabel('belief')\n #ax2.plot(x_sigma, post_var.pdf(x_sigma))\n else:\n mu = np.mean(x)\n sd = np.std(x)\n ax1.axvline(mu)\n ax2.plot(np.arange(2, n+1), [np.mean(x[:i]) for i in range(1, n)])\n #ax2.axvline(sd)\n \n ax1.set_xlim((mu_lower, mu_upper))\n ax1.set_title('current mu estimate')\n ax2.set_title('history of mu estimates')\n \n ax2.set_xlabel('n')\n ax2.set_ylabel('mu')\n ax3.set_title('data (returns)')\n plt.tight_layout()\n\ninteractive(gen_plot, n=(0, 600), bayes=True)", "Approximating the posterior with MCMC sampling", "def plot_want_get():\n from scipy import stats\n fig = plt.figure(figsize=(14, 6))\n ax1 = fig.add_subplot(121, title='What we want', ylim=(0, .5), xlabel='', ylabel='')\n ax1.plot(np.linspace(-4, 4, 100), stats.norm.pdf(np.linspace(-3, 3, 100)), lw=4.)\n ax2 = fig.add_subplot(122, title='What we get')#, xlim=(-4, 4), ylim=(0, 
1800), xlabel='', ylabel='\\# of samples')\n sns.distplot(np.random.randn(50000), ax=ax2, kde=False, norm_hist=True);\n ax2.set_xlim((-4, 4));\n ax2.set_ylim((0, .5));", "Approximating the posterior with MCMC sampling", "plot_want_get()", "PyMC3\n\nProbabilistic Programming framework written in Python.\nAllows for construction of probabilistic models using intuitive syntax.\nFeatures advanced MCMC samplers.\nFast: Just-in-time compiled by Theano.\nExtensible: easily incorporates custom MCMC algorithms and unusual probability distributions.\nAuthors: John Salvatier, Chris Fonnesbeck, Thomas Wiecki\nUpcoming beta release!\n\n<center><img src=\"http://www.iskonline.org/media/news/end_detour.gif\"></center>\nModel returns distribution: Specifying our priors", "import theano.tensor as T\nx = np.linspace(-.3, .3, 500)\nplt.plot(x, T.exp(pm.Normal.dist(mu=0, sd=.1).logp(x)).eval())\nplt.title(u'Prior: mu ~ Normal(0, $.1^2$)'); plt.xlabel('mu'); plt.ylabel('Probability Density'); plt.xlim((-.3, .3));\n\nx = np.linspace(-.1, .5, 500)\nplt.plot(x, T.exp(pm.HalfNormal.dist(sd=.1).logp(x)).eval())\nplt.title(u'Prior: sigma ~ HalfNormal($.1^2$)'); plt.xlabel('sigma'); plt.ylabel('Probability Density');", "Bayesian Sharpe ratio\n$\\mu \\sim \\text{Normal}(0, .1^2)$ $\\leftarrow \\text{Prior}$\n$\\sigma \\sim \\text{HalfNormal}(.1^2)$ $\\leftarrow \\text{Prior}$\n$\\text{returns} \\sim \\text{Normal}(\\mu, \\sigma^2)$ $\\leftarrow \\text{Observed!}$\n$\\text{Sharpe} = \\frac{\\mu}{\\sigma}$\nGraphical model of returns\n<img width=80% src='bayes_formula_mu2.svg'>\nThis is what the data looks like", "print data_0.head()\n\nfrom pymc3 import *\n\nwith Model() as model:\n # Priors on parameters\n mean_return = Normal('mean return', mu=0, sd=.1)\n volatility = HalfNormal('volatility', sd=.1)\n\n # Model observed returns as Normal\n obs = Normal('returns', \n mu=mean_return, \n sd=volatility,\n observed=data) # data is a list of daily returns, e.g. 
[0.01, -.05, ...]\n \n sharpe = Deterministic('sharpe ratio', \n mean_return / volatility)\n\nwith model:\n # Instantiate MCMC sampler\n step = NUTS()\n # Draw 500 samples from the posterior\n trace = sample(500, step)", "Analyzing the posterior", "sns.distplot(results_normal[0]['mean returns'], hist=False, label='etrade')\nsns.distplot(results_normal[1]['mean returns'], hist=False, label='IB')\nplt.title('Posterior of the mean'); plt.xlabel('mean returns')\n\nsns.distplot(results_normal[0]['volatility'], hist=False, label='etrade')\nsns.distplot(results_normal[1]['volatility'], hist=False, label='IB')\nplt.title('Posterior of the volatility')\nplt.xlabel('volatility')\n\nsns.distplot(results_normal[0]['sharpe'], hist=False, label='etrade')\nsns.distplot(results_normal[1]['sharpe'], hist=False, label='IB')\nplt.title('Bayesian Sharpe ratio'); plt.xlabel('Sharpe ratio');\n\nprint 'P(Sharpe ratio IB > 0) = %.2f%%' % \\\n (np.mean(results_normal[1]['sharpe'] > 0) * 100)\n\nprint 'P(Sharpe ratio IB > Sharpe ratio etrade) = %.2f%%' % \\\n (np.mean(results_normal[1]['sharpe'] > results_normal[0][0]['sharpe']) * 100)", "Value at Risk with uncertainty", "results_normal[0]\n\nimport scipy.stats as stats\nppc_etrade = post_pred(var_cov_var_normal, results_normal[0], 1e6, .05, samples=800)\nppc_ib = post_pred(var_cov_var_normal, results_normal[1], 1e6, .05, samples=800)\nsns.distplot(ppc_etrade, label='etrade', norm_hist=True, hist=False, color='b')\nsns.distplot(ppc_ib, label='IB', norm_hist=True, hist=False, color='g')\nplt.title('VaR'); plt.legend(loc=0); plt.xlabel('5% daily Value at Risk (VaR) with \\$1MM capital (in \\$)'); plt.ylabel('Probability density'); plt.xticks(rotation=15);", "Interim summary\n\nBayesian stats allows us to reformulate common risk metrics, use priors and quantify uncertainty.\nIB strategy seems better in almost every regard. Is it though?\n\nSo far, only added confidence", "sns.distplot(results_normal[0]['sharpe'], hist=False, label='etrade')\nsns.distplot(results_normal[1]['sharpe'], hist=False, label='IB')\nplt.title('Bayesian Sharpe ratio'); plt.xlabel('Sharpe ratio');\nplt.axvline(data_0.mean() / data_0.std() * np.sqrt(252), color='b');\nplt.axvline(data_1.mean() / data_1.std() * np.sqrt(252), color='g');\n\nx = np.linspace(-.03, .03, 500)\nppc_dist_normal = post_pred(eval_normal, results_normal[1], x=x)\nppc_dist_t = post_pred(eval_t, results_t[1], x=x)", "Is this a good model?", "sns.distplot(data_1, label='data IB', kde=False, norm_hist=True, color='.5')\nfor p in ppc_dist_normal:\n plt.plot(x, p, c='r', alpha=.1)\nplt.plot(x, p, c='r', alpha=.5, label='Normal model')\nplt.xlabel('Daily returns')\nplt.legend();", "Can it be improved? 
Yes!\nIdentical model as before, but instead, use a heavy-tailed T distribution:\n$ \\text{returns} \\sim \\text{T}(\\nu, \\mu, \\sigma^2)$", "sns.distplot(data_1, label='data IB', kde=False, norm_hist=True, color='.5')\nfor p in ppc_dist_t:\n plt.plot(x, p, c='y', alpha=.1)\nplt.plot(x, p, c='y', alpha=.5, label='T model') \nplt.xlabel('Daily returns')\nplt.legend();", "Volatility", "sns.distplot(results_normal[1]['annual volatility'], hist=False, label='normal')\nsns.distplot(results_t[1]['annual volatility'], hist=False, label='T')\nplt.xlim((0, 0.2))\nplt.xlabel('Posterior of annual volatility')\nplt.ylabel('Probability Density');", "Lets compare posteriors of the normal and T model\nMean returns", "sns.distplot(results_normal[1]['mean returns'], hist=False, color='r', label='normal model')\nsns.distplot(results_t[1]['mean returns'], hist=False, color='y', label='T model')\nplt.xlabel('Posterior of the mean returns'); plt.ylabel('Probability Density');", "Bayesian T-Sharpe ratio", "sns.distplot(results_normal[1]['sharpe'], hist=False, color='r', label='normal model')\nsns.distplot(results_t[1]['sharpe'], hist=False, color='y', label='T model')\nplt.xlabel('Bayesian Sharpe ratio'); plt.ylabel('Probability Density');", "But why? T distribution is more robust!", "sim_data = list(np.random.randn(75)*.01)\nsim_data.append(-.2)\nsns.distplot(sim_data, label='data', kde=False, norm_hist=True, color='.5'); sns.distplot(sim_data, label='Normal', fit=stats.norm, kde=False, hist=False, fit_kws={'color': 'r', 'label': 'Normal'}); sns.distplot(sim_data, fit=stats.t, kde=False, hist=False, fit_kws={'color': 'y', 'label': 'T'})\nplt.xlabel('Daily returns'); plt.legend();", "Estimating tail risk using VaR", "ppc_normal = post_pred(var_cov_var_normal, results_normal[1], 1e6, .05, samples=800)\nppc_t = post_pred(var_cov_var_t, results_t[1], 1e6, .05, samples=800)\nsns.distplot(ppc_normal, label='Normal', norm_hist=True, hist=False, color='r')\nsns.distplot(ppc_t, label='T', norm_hist=True, hist=False, color='y')\nplt.legend(loc=0); plt.xlabel('5% daily Value at Risk (VaR) with \\$1MM capital (in \\$)'); plt.ylabel('Probability density'); plt.xticks(rotation=15);", "Comparing the Bayesian T-Sharpe ratios", "sns.distplot(results_t[0]['sharpe'], hist=False, label='etrade')\nsns.distplot(results_t[1]['sharpe'], hist=False, label='IB')\nplt.xlabel('Bayesian Sharpe ratio'); plt.ylabel('Probability Density');\n\nprint 'P(Sharpe ratio IB > Sharpe ratio etrade) = %.2f%%' % \\\n (np.mean(results_t[1]['sharpe'] > results_t[0]['sharpe']) * 100)", "Conclusions\n\nProbabilistic Programming allows us to construct complex models in code and automatically estimate them.\nBayesian statistics provides us with uncertainty quantification -- measure orthogonal sources of risk.\nPyMC3 puts advanced samplers at your fingertips.\n\nFurther reading\n\nQuantopian -- Develop trading algorithms like this in your browser.\nMy blog for Bayesian linear regression (financial alpha and beta)\nProbilistic Programming for Hackers -- IPython Notebook book on Bayesian stats using PyMC2.\nDoing Bayesian Data Analysis -- Great book by Kruschke.\nPyMC3 repository\nTwitter: @twiecki" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kevntao/ThinkStats2
code/.ipynb_checkpoints/chap03ex-checkpoint.ipynb
gpl-3.0
[ "%matplotlib inline\n\nimport thinkstats2\nimport thinkplot\nimport chap01soln\nresp = chap01soln.ReadFemResp()", "Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.", "kids = resp['numkdhh']\nkids", "Display the PMF.", "pmf = thinkstats2.Pmf(kids)\nthinkplot.Pmf(pmf, label='PMF')\nthinkplot.Show(xlabel='# of Children', ylabel='PMF')", "Define <tt>BiasPmf</tt>.", "def BiasPmf(pmf, label=''):\n \"\"\"Returns the Pmf with oversampling proportional to value.\n\n If pmf is the distribution of true values, the result is the\n distribution that would be seen if values are oversampled in\n proportion to their values; for example, if you ask students\n how big their classes are, large classes are oversampled in\n proportion to their size.\n\n Args:\n pmf: Pmf object.\n label: string label for the new Pmf.\n\n Returns:\n Pmf object\n \"\"\"\n new_pmf = pmf.Copy(label=label)\n\n for x, p in pmf.Items():\n new_pmf.Mult(x, x)\n \n new_pmf.Normalize()\n return new_pmf", "Make a the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.\nDisplay the actual Pmf and the biased Pmf on the same axes.\nCompute the means of the two Pmfs." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
stable/_downloads/91078106f2c04f1e09c01a2fa07e9d27/10_raw_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "The Raw data structure: continuous data\nThis tutorial covers the basics of working with raw EEG/MEG data in Python. It\nintroduces the :class:~mne.io.Raw data structure in detail, including how to\nload, query, subselect, export, and plot data from a :class:~mne.io.Raw\nobject. For more info on visualization of :class:~mne.io.Raw objects, see\ntut-visualize-raw. For info on creating a :class:~mne.io.Raw object\nfrom simulated data in a :class:NumPy array &lt;numpy.ndarray&gt;, see\ntut-creating-data-structures.\nAs usual we'll start by importing the modules we need:", "import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mne", "Loading continuous data\n.. sidebar:: Datasets in MNE-Python\nThere are ``data_path`` functions for several example datasets in\nMNE-Python (e.g., :func:`mne.datasets.kiloword.data_path`,\n:func:`mne.datasets.spm_face.data_path`, etc). All of them will check the\ndefault download location first to see if the dataset is already on your\ncomputer, and only download it if necessary. The default download\nlocation is also configurable; see the documentation of any of the\n``data_path`` functions for more information.\n\nAs mentioned in the introductory tutorial &lt;tut-overview&gt;,\nMNE-Python data structures are based around\nthe :file:.fif file format from Neuromag. This tutorial uses an\nexample dataset &lt;sample-dataset&gt; in :file:.fif format, so here we'll\nuse the function :func:mne.io.read_raw_fif to load the raw data; there are\nreader functions for a wide variety of other data formats\n&lt;data-formats&gt; as well.\nThere are also several other example datasets\n&lt;datasets&gt; that can be downloaded with just a few lines\nof code. Functions for downloading example datasets are in the\n:mod:mne.datasets submodule; here we'll use\n:func:mne.datasets.sample.data_path to download the \"sample-dataset\"\ndataset, which contains EEG, MEG, and structural MRI data from one subject\nperforming an audiovisual experiment. When it's done downloading,\n:func:~mne.datasets.sample.data_path will return the folder location where\nit put the files; you can navigate there with your file browser if you want\nto examine the files yourself. Once we have the file path, we can load the\ndata with :func:~mne.io.read_raw_fif. This will return a\n:class:~mne.io.Raw object, which we'll store in a variable called raw.", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)", "As you can see above, :func:~mne.io.read_raw_fif automatically displays\nsome information about the file it's loading. For example, here it tells us\nthat there are three \"projection items\" in the file along with the recorded\ndata; those are :term:SSP projectors &lt;projector&gt; calculated to remove\nenvironmental noise from the MEG signals, and are discussed in a the tutorial\ntut-projectors-background.\nIn addition to the information displayed during loading, you can\nget a glimpse of the basic details of a :class:~mne.io.Raw object by\nprinting it:", "print(raw)", "By default, the :samp:mne.io.read_raw_{*} family of functions will not\nload the data into memory (instead the data on disk are memory-mapped_,\nmeaning the data are only read from disk as-needed). 
Some operations (such as\nfiltering) require that the data be copied into RAM; to do that we could have\npassed the preload=True parameter to :func:~mne.io.read_raw_fif, but we\ncan also copy the data into RAM at any time using the\n:meth:~mne.io.Raw.load_data method. However, since this particular tutorial\ndoesn't do any serious analysis of the data, we'll first\n:meth:~mne.io.Raw.crop the :class:~mne.io.Raw object to 60 seconds so it\nuses less memory and runs more smoothly on our documentation server.", "raw.crop(tmax=60)", "Querying the Raw object\n.. sidebar:: Attributes vs. Methods\n**Attributes** are usually static properties of Python objects — things\nthat are pre-computed and stored as part of the object's representation\nin memory. Attributes are accessed with the ``.`` operator and do not\nrequire parentheses after the attribute name (example: ``raw.ch_names``).\n\n**Methods** are like specialized functions attached to an object.\nUsually they require additional user input and/or need some computation\nto yield a result. Methods always have parentheses at the end; additional\narguments (if any) go inside those parentheses (examples:\n``raw.estimate_rank()``, ``raw.drop_channels(['EEG 030', 'MEG 2242'])``).\n\nWe saw above that printing the :class:~mne.io.Raw object displays some\nbasic information like the total number of channels, the number of time\npoints at which the data were sampled, total duration, and the approximate\nsize in memory. Much more information is available through the various\nattributes and methods of the :class:~mne.io.Raw class. Some useful\nattributes of :class:~mne.io.Raw objects include a list of the channel\nnames (:attr:~mne.io.Raw.ch_names), an array of the sample times in seconds\n(:attr:~mne.io.Raw.times), and the total number of samples\n(:attr:~mne.io.Raw.n_times); a list of all attributes and methods is given\nin the documentation of the :class:~mne.io.Raw class.\nThe Raw.info attribute\nThere is also quite a lot of information stored in the raw.info\nattribute, which stores an :class:~mne.Info object that is similar to a\n:class:Python dictionary &lt;dict&gt; (in that it has fields accessed via named\nkeys). Like Python dictionaries, raw.info has a .keys() method that\nshows all the available field names; unlike Python dictionaries, printing\nraw.info will print a nicely-formatted glimpse of each field's data. See\ntut-info-class for more on what is stored in :class:~mne.Info\nobjects, and how to interact with them.", "n_time_samps = raw.n_times\ntime_secs = raw.times\nch_names = raw.ch_names\nn_chan = len(ch_names) # note: there is no raw.n_channels attribute\nprint('the (cropped) sample data object has {} time samples and {} channels.'\n ''.format(n_time_samps, n_chan))\nprint('The last time sample is at {} seconds.'.format(time_secs[-1]))\nprint('The first few channel names are {}.'.format(', '.join(ch_names[:3])))\nprint() # insert a blank line in the output\n\n# some examples of raw.info:\nprint('bad channels:', raw.info['bads']) # chs marked \"bad\" during acquisition\nprint(raw.info['sfreq'], 'Hz') # sampling frequency\nprint(raw.info['description'], '\\n') # miscellaneous acquisition info\n\nprint(raw.info)", "<div class=\"alert alert-info\"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at\n acquisition time, and should not be changed by the user. 
There are a few\n exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but\n in most cases there are dedicated MNE-Python functions or methods to\n update the :class:`~mne.Info` object safely (such as\n :meth:`~mne.io.Raw.add_proj` to update ``raw.info['projs']``).</p></div>\n\nTime, sample number, and sample index\n.. sidebar:: Sample numbering in VectorView data\nFor data from VectorView systems, it is important to distinguish *sample\nnumber* from *sample index*. See :term:`first_samp` for more information.\n\nOne method of :class:~mne.io.Raw objects that is frequently useful is\n:meth:~mne.io.Raw.time_as_index, which converts a time (in seconds) into\nthe integer index of the sample occurring closest to that time. The method\ncan also take a list or array of times, and will return an array of indices.\nIt is important to remember that there may not be a data sample at exactly\nthe time requested, so the number of samples between time = 1 second and\ntime = 2 seconds may be different than the number of samples between\ntime = 2 and time = 3:", "print(raw.time_as_index(20))\nprint(raw.time_as_index([20, 30, 40]), '\\n')\n\nprint(np.diff(raw.time_as_index([1, 2, 3])))", "Modifying Raw objects\n.. sidebar:: len(raw)\nAlthough the :class:`~mne.io.Raw` object underlyingly stores data samples\nin a :class:`NumPy array &lt;numpy.ndarray&gt;` of shape (n_channels,\nn_timepoints), the :class:`~mne.io.Raw` object behaves differently from\n:class:`NumPy arrays &lt;numpy.ndarray&gt;` with respect to the :func:`len`\nfunction. ``len(raw)`` will return the number of timepoints (length along\ndata axis 1), not the number of channels (length along data axis 0).\nHence in this section you'll see ``len(raw.ch_names)`` to get the number\nof channels.\n\n:class:~mne.io.Raw objects have a number of methods that modify the\n:class:~mne.io.Raw instance in-place and return a reference to the modified\ninstance. This can be useful for method chaining_\n(e.g., raw.crop(...).pick_channels(...).filter(...).plot())\nbut it also poses a problem during interactive analysis: if you modify your\n:class:~mne.io.Raw object for an exploratory plot or analysis (say, by\ndropping some channels), you will then need to re-load the data (and repeat\nany earlier processing steps) to undo the channel-dropping and try something\nelse. For that reason, the examples in this section frequently use the\n:meth:~mne.io.Raw.copy method before the other methods being demonstrated,\nso that the original :class:~mne.io.Raw object is still available in the\nvariable raw for use in later examples.\nSelecting, dropping, and reordering channels\nAltering the channels of a :class:~mne.io.Raw object can be done in several\nways. 
As a first example, we'll use the :meth:~mne.io.Raw.pick_types method\nto restrict the :class:~mne.io.Raw object to just the EEG and EOG channels:", "eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)\nprint(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))", "Similar to the :meth:~mne.io.Raw.pick_types method, there is also the\n:meth:~mne.io.Raw.pick_channels method to pick channels by name, and a\ncorresponding :meth:~mne.io.Raw.drop_channels method to remove channels by\nname:", "raw_temp = raw.copy()\nprint('Number of channels in raw_temp:')\nprint(len(raw_temp.ch_names), end=' → drop two → ')\nraw_temp.drop_channels(['EEG 037', 'EEG 059'])\nprint(len(raw_temp.ch_names), end=' → pick three → ')\nraw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])\nprint(len(raw_temp.ch_names))", "If you want the channels in a specific order (e.g., for plotting),\n:meth:~mne.io.Raw.reorder_channels works just like\n:meth:~mne.io.Raw.pick_channels but also reorders the channels; for\nexample, here we pick the EOG and frontal EEG channels, putting the EOG\nfirst and the EEG in reverse order:", "channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']\neog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)\nprint(eog_and_frontal_eeg.ch_names)", "Changing channel name and type\n.. sidebar:: Long channel names\nDue to limitations in the :file:`.fif` file format (which MNE-Python uses\nto save :class:`~mne.io.Raw` objects), channel names are limited to a\nmaximum of 15 characters.\n\nYou may have noticed that the EEG channel names in the sample data are\nnumbered rather than labelled according to a standard nomenclature such as\nthe 10-20 or 10-05 systems, or perhaps it\nbothers you that the channel names contain spaces. It is possible to rename\nchannels using the :meth:~mne.io.Raw.rename_channels method, which takes a\nPython dictionary to map old names to new names. You need not rename all\nchannels at once; provide only the dictionary entries for the channels you\nwant to rename. Here's a frivolous example:", "raw.rename_channels({'EOG 061': 'blink detector'})", "This next example replaces spaces in the channel names with underscores,\nusing a Python dict comprehension_:", "print(raw.ch_names[-3:])\nchannel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names}\nraw.rename_channels(channel_renaming_dict)\nprint(raw.ch_names[-3:])", "If for some reason the channel types in your :class:~mne.io.Raw object are\ninaccurate, you can change the type of any channel with the\n:meth:~mne.io.Raw.set_channel_types method. The method takes a\n:class:dictionary &lt;dict&gt; mapping channel names to types; allowed types are\necg, eeg, emg, eog, exci, ias, misc, resp, seeg, dbs, stim, syst, ecog,\nhbo, hbr. A common use case for changing channel type is when using frontal\nEEG electrodes as makeshift EOG channels:", "raw.set_channel_types({'EEG_001': 'eog'})\nprint(raw.copy().pick_types(meg=False, eog=True).ch_names)", "Selection in the time domain\nIf you want to limit the time domain of a :class:~mne.io.Raw object, you\ncan use the :meth:~mne.io.Raw.crop method, which modifies the\n:class:~mne.io.Raw object in place (we've seen this already at the start of\nthis tutorial, when we cropped the :class:~mne.io.Raw object to 60 seconds\nto reduce memory demands). 
:meth:~mne.io.Raw.crop takes parameters tmin\nand tmax, both in seconds (here we'll again use :meth:~mne.io.Raw.copy\nfirst to avoid changing the original :class:~mne.io.Raw object):", "raw_selection = raw.copy().crop(tmin=10, tmax=12.5)\nprint(raw_selection)", ":meth:~mne.io.Raw.crop also modifies the :attr:~mne.io.Raw.first_samp and\n:attr:~mne.io.Raw.times attributes, so that the first sample of the cropped\nobject now corresponds to time = 0. Accordingly, if you wanted to re-crop\nraw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above)\nthen the subsequent call to :meth:~mne.io.Raw.crop should get tmin=1\n(not tmin=11), and leave tmax unspecified to keep everything from\ntmin up to the end of the object:", "print(raw_selection.times.min(), raw_selection.times.max())\nraw_selection.crop(tmin=1)\nprint(raw_selection.times.min(), raw_selection.times.max())", "Remember that sample times don't always align exactly with requested tmin\nor tmax values (due to sampling), which is why the max values of the\ncropped files don't exactly match the requested tmax (see\ntime-as-index for further details).\nIf you need to select discontinuous spans of a :class:~mne.io.Raw object —\nor combine two or more separate :class:~mne.io.Raw objects — you can use\nthe :meth:~mne.io.Raw.append method:", "raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds\nraw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds\nraw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds\nraw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total\nprint(raw_selection1.times.min(), raw_selection1.times.max())", "<div class=\"alert alert-danger\"><h4>Warning</h4><p>Be careful when concatenating :class:`~mne.io.Raw` objects from different\n recordings, especially when saving: :meth:`~mne.io.Raw.append` only\n preserves the ``info`` attribute of the initial :class:`~mne.io.Raw`\n object (the one outside the :meth:`~mne.io.Raw.append` method call).</p></div>\n\nExtracting data from Raw objects\nSo far we've been looking at ways to modify a :class:~mne.io.Raw object.\nThis section shows how to extract the data from a :class:~mne.io.Raw object\ninto a :class:NumPy array &lt;numpy.ndarray&gt;, for analysis or plotting using\nfunctions outside of MNE-Python. To select portions of the data,\n:class:~mne.io.Raw objects can be indexed using square brackets. However,\nindexing :class:~mne.io.Raw works differently than indexing a :class:NumPy\narray &lt;numpy.ndarray&gt; in two ways:\n\n\nAlong with the requested sample value(s) MNE-Python also returns an array\n of times (in seconds) corresponding to the requested samples. The data\n array and the times array are returned together as elements of a tuple.\n\n\nThe data array will always be 2-dimensional even if you request only a\n single time sample or a single channel.\n\n\nExtracting data by index\nTo illustrate the above two points, let's select a couple seconds of data\nfrom the first channel:", "sampling_freq = raw.info['sfreq']\nstart_stop_seconds = np.array([11, 13])\nstart_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int)\nchannel_index = 0\nraw_selection = raw[channel_index, start_sample:stop_sample]\nprint(raw_selection)", "You can see that it contains 2 arrays. 
This combination of data and times\nmakes it easy to plot selections of raw data (although note that we're\ntransposing the data array so that each channel is a column instead of a row,\nto match what matplotlib expects when plotting 2-dimensional y against\n1-dimensional x):", "x = raw_selection[1]\ny = raw_selection[0].T\nplt.plot(x, y)", "Extracting channels by name\nThe :class:~mne.io.Raw object can also be indexed with the names of\nchannels instead of their index numbers. You can pass a single string to get\njust one channel, or a list of strings to select multiple channels. As with\ninteger indexing, this will return a tuple of (data_array, times_array)\nthat can be easily plotted. Since we're plotting 2 channels this time, we'll\nadd a vertical offset to one channel so it's not plotted right on top\nof the other one:", "channel_names = ['MEG_0712', 'MEG_1022']\ntwo_meg_chans = raw[channel_names, start_sample:stop_sample]\ny_offset = np.array([5e-11, 0]) # just enough to separate the channel traces\nx = two_meg_chans[1]\ny = two_meg_chans[0].T + y_offset\nlines = plt.plot(x, y)\nplt.legend(lines, channel_names)", "Extracting channels by type\nThere are several ways to select all channels of a given type from a\n:class:~mne.io.Raw object. The safest method is to use\n:func:mne.pick_types to obtain the integer indices of the channels you\nwant, then use those indices with the square-bracket indexing method shown\nabove. The :func:~mne.pick_types function uses the :class:~mne.Info\nattribute of the :class:~mne.io.Raw object to determine channel types, and\ntakes boolean or string parameters to indicate which type(s) to retain. The\nmeg parameter defaults to True, and all others default to False,\nso to get just the EEG channels, we pass eeg=True and meg=False:", "eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True)\neeg_data, times = raw[eeg_channel_indices]\nprint(eeg_data.shape)", "Some of the parameters of :func:mne.pick_types accept string arguments as\nwell as booleans. For example, the meg parameter can take values\n'mag', 'grad', 'planar1', or 'planar2' to select only\nmagnetometers, all gradiometers, or a specific type of gradiometer. See the\ndocstring of :meth:mne.pick_types for full details.\nThe Raw.get_data() method\nIf you only want the data (not the corresponding array of times),\n:class:~mne.io.Raw objects have a :meth:~mne.io.Raw.get_data method. Used\nwith no parameters specified, it will extract all data from all channels, in\na (n_channels, n_timepoints) :class:NumPy array &lt;numpy.ndarray&gt;:", "data = raw.get_data()\nprint(data.shape)", "If you want the array of times, :meth:~mne.io.Raw.get_data has an optional\nreturn_times parameter:", "data, times = raw.get_data(return_times=True)\nprint(data.shape)\nprint(times.shape)", "The :meth:~mne.io.Raw.get_data method can also be used to extract specific\nchannel(s) and sample ranges, via its picks, start, and stop\nparameters. 
The picks parameter accepts integer channel indices, channel\nnames, or channel types, and preserves the requested channel order given as\nits picks parameter.", "first_channel_data = raw.get_data(picks=0)\neeg_and_eog_data = raw.get_data(picks=['eeg', 'eog'])\ntwo_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'],\n start=1000, stop=2000)\n\nprint(first_channel_data.shape)\nprint(eeg_and_eog_data.shape)\nprint(two_meg_chans_data.shape)", "Summary of ways to extract data from Raw objects\nThe following table summarizes the various ways of extracting data from a\n:class:~mne.io.Raw object.\n.. cssclass:: table-bordered\n.. rst-class:: midvalign\n+-------------------------------------+-------------------------+\n| Python code | Result |\n| | |\n| | |\n+=====================================+=========================+\n| raw.get_data() | :class:NumPy array |\n| | &lt;numpy.ndarray&gt; |\n| | (n_chans × n_samps) |\n+-------------------------------------+-------------------------+\n| raw[:] | :class:tuple of (data |\n+-------------------------------------+ (n_chans × n_samps), |\n| raw.get_data(return_times=True) | times (1 × n_samps)) |\n+-------------------------------------+-------------------------+\n| raw[0, 1000:2000] | |\n+-------------------------------------+ |\n| raw['MEG 0113', 1000:2000] | |\n+-------------------------------------+ |\n| raw.get_data(picks=0, | :class:`tuple` of |\n| start=1000, stop=2000, | (data (1 × 1000), |\n| return_times=True) | times (1 × 1000)) |\n+-------------------------------------+ |\n| raw.get_data(picks='MEG 0113', | |\n| start=1000, stop=2000, | |\n| return_times=True) | |\n+-------------------------------------+-------------------------+\n| raw[7:9, 1000:2000] | |\n+-------------------------------------+ |\n| raw[[2, 5], 1000:2000] | :class:tuple of |\n+-------------------------------------+ (data (2 × 1000), |\n| raw[['EEG 030', 'EOG 061'], | times (1 × 1000)) |\n| 1000:2000] | |\n+-------------------------------------+-------------------------+\nExporting and saving Raw objects\n:class:~mne.io.Raw objects have a built-in :meth:~mne.io.Raw.save method,\nwhich can be used to write a partially processed :class:~mne.io.Raw object\nto disk as a :file:.fif file, such that it can be re-loaded later with its\nvarious attributes intact (but see precision for an important\nnote about numerical precision when saving).\nThere are a few other ways to export just the sensor data from a\n:class:~mne.io.Raw object. One is to use indexing or the\n:meth:~mne.io.Raw.get_data method to extract the data, and use\n:func:numpy.save to save the data array:", "data = raw.get_data()\nnp.save(file='my_data.npy', arr=data)", "It is also possible to export the data to a :class:Pandas DataFrame\n&lt;pandas.DataFrame&gt; object, and use the saving methods that :mod:Pandas\n&lt;pandas&gt; affords. The :class:~mne.io.Raw object's\n:meth:~mne.io.Raw.to_data_frame method is similar to\n:meth:~mne.io.Raw.get_data in that it has a picks parameter for\nrestricting which channels are exported, and start and stop\nparameters for restricting the time domain. 
Note that, by default, times will\nbe converted to milliseconds, rounded to the nearest millisecond, and used as\nthe DataFrame index; see the scaling_time parameter in the documentation\nof :meth:~mne.io.Raw.to_data_frame for more details.", "sampling_freq = raw.info['sfreq']\nstart_end_secs = np.array([10, 13])\nstart_sample, stop_sample = (start_end_secs * sampling_freq).astype(int)\ndf = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample)\n# then save using df.to_csv(...), df.to_hdf(...), etc\nprint(df.head())", "<div class=\"alert alert-info\"><h4>Note</h4><p>When exporting data as a :class:`NumPy array <numpy.ndarray>` or\n :class:`Pandas DataFrame <pandas.DataFrame>`, be sure to properly account\n for the `unit of representation <units>` in your subsequent\n analyses.</p></div>\n\n.. LINKS\nhttps://docs.python.org/3/tutorial/datastructures.html#dictionaries" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
davidhamann/python-fmrest
examples/uploading_container_data.ipynb
mit
[ "# hide ssl warnings for this example.\nimport requests\nrequests.packages.urllib3.disable_warnings()", "Uploading container data with python-fmrest\nThis is a short example on how to upload container data with python-fmrest.\nImport, create server and login", "import fmrest\n\nfms = fmrest.Server('https://10.211.55.15',\n user='admin',\n password='admin',\n database='Contacts',\n layout='Demo',\n verify_ssl=False\n )\nfms.login()", "Upload a document for record 1", "record_id = 1", "We open a file in binary mode from the current directory and pass it to the upload_container() method.", "with open('dog-meme.jpg', 'rb') as funny_picture:\n result = fms.upload_container(record_id, 'portrait', funny_picture)\nresult", "Retrieve the uploaded document again", "record = fms.get_record(1)\nrecord.portrait\n\nname, type_, length, response = fms.fetch_file(record.portrait)\nname, type_, length\n\nfrom IPython.display import Image\nImage(response.content) " ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
googledatalab/notebooks
samples/contrib/mlworkbench/structured_data_regression_taxi/Taxi Fare Model (full data).ipynb
apache-2.0
[ "About This Notebook\nThis notebook demonstrate how to use ML Workbench to create a regression model that accepts numeric and categorical data. This one shows \"cloud run\" mode, which does each step in Google Cloud Platform with various services. Cloud run can be distributed so it can handle large data without being restricted on memory, computation, or disk limits. The notebook is similar to last one (Taxi Fare Model (small data)), but it uses full data (about 77M instances).\nThere are only a few things that need to change between \"local run\" and \"cloud run\":\n\nall data sources or file paths must be on GCS.\nthe --cloud flag must be set for each step.\n\"cloud_config\" can be set for cloud specific settings, such as project_id, machine_type. In some cases it is required.\n\nOther than this, nothing else changes from local to cloud!\nNote: \"Run all cells\" does not work for this notebook because the steps are asynchonous. In many steps it submits a cloud job, and you should track the status by following the job link.\nExecution of this notebook requires Google Datalab (see setup instructions).\nThe Data\nWe will use Chicago Taxi Trip Data. Using pickup location, drop off location, taxi company, the model we will build predicts the trip fare.\nSplit Data Into Train/Eval Sets\nUse bigquery to select the features we need and also randomly choose 5% for eval, 95% for training.", "%%bq query --name texi_query_eval\nSELECT\n unique_key,\n fare,\n CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS STRING) as weekday,\n CAST(EXTRACT(DAYOFYEAR FROM trip_start_timestamp) AS STRING) as day,\n CAST(EXTRACT(HOUR FROM trip_start_timestamp) AS STRING) as hour,\n pickup_latitude,\n pickup_longitude,\n dropoff_latitude,\n dropoff_longitude,\n company\nFROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`\nWHERE \n fare > 2.0 AND fare < 200.0 AND\n pickup_latitude IS NOT NULL AND\n pickup_longitude IS NOT NULL AND\n dropoff_latitude IS NOT NULL AND\n dropoff_longitude IS NOT NULL AND\n MOD(ABS(FARM_FINGERPRINT(unique_key)), 100) < 5\n \n\n%%bq query --name texi_query_train\nSELECT\n unique_key,\n fare,\n CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS STRING) as weekday,\n CAST(EXTRACT(DAYOFYEAR FROM trip_start_timestamp) AS STRING) as day,\n CAST(EXTRACT(HOUR FROM trip_start_timestamp) AS STRING) as hour,\n pickup_latitude,\n pickup_longitude,\n dropoff_latitude,\n dropoff_longitude,\n company\nFROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`\nWHERE \n fare > 2.0 AND fare < 200.0 AND\n pickup_latitude IS NOT NULL AND\n pickup_longitude IS NOT NULL AND\n dropoff_latitude IS NOT NULL AND\n dropoff_longitude IS NOT NULL AND\n MOD(ABS(FARM_FINGERPRINT(unique_key)), 100) >= 5", "Create \"chicago_taxi.train\" and \"chicago_taxi.eval\" BQ tables to store results.", "%%bq datasets create --name chicago_taxi\n\n%%bq execute\nquery: texi_query_eval\ntable: chicago_taxi.eval\nmode: overwrite\n\n%%bq execute\nquery: texi_query_train\ntable: chicago_taxi.train\nmode: overwrite", "Sanity check on the data.", "%%bq query\nSELECT count(*) FROM chicago_taxi.train\n\n%%bq query\nSELECT count(*) FROM chicago_taxi.eval", "Explore Data\nSee previous notebook (Taxi Fare Model (small data)) for data exploration.\nCreate Model with ML Workbench\nThe MLWorkbench Magics are a set of Datalab commands that allow an easy code-free experience to training, deploying, and predicting ML models. This notebook will take the data in BigQuery tables and build a regression model. 
The MLWorkbench Magics are a collection of magic commands for each step in ML workflows: analyzing input data to build transforms, transforming data, training a model, evaluating a model, and deploying a model.\nFor details of each command, run with --help. For example, \"%%ml train --help\".\nThis notebook will run the analyze, transform, and training steps in cloud with services. Notice the \"--cloud\" flag is set for each step.", "import google.datalab.contrib.mlworkbench.commands # this loads the %%ml commands\n\n%%ml dataset create\nname: taxi_data_full\nformat: bigquery\ntrain: chicago_taxi.train\neval: chicago_taxi.eval\n\n!gsutil mb gs://datalab-chicago-taxi-demo # Create a Storage Bucket to store results.", "Step 1: Analyze\nThe first step in the MLWorkbench workflow is to analyze the data for the requested transformations. Analysis in this case builds vocabulary for categorical features, and compute numeric stats for numeric features.", "!gsutil rm -r -f gs://datalab-chicago-taxi-demo/analysis # Remove previous analysis results if any\n\n%%ml analyze --cloud\noutput: gs://datalab-chicago-taxi-demo/analysis\ndata: taxi_data_full\nfeatures:\n unique_key:\n transform: key\n fare:\n transform: target \n company:\n transform: embedding\n embedding_dim: 10\n weekday:\n transform: one_hot\n day:\n transform: one_hot\n hour:\n transform: one_hot\n pickup_latitude:\n transform: scale \n pickup_longitude:\n transform: scale\n dropoff_latitude:\n transform: scale\n dropoff_longitude:\n transform: scale", "Step 2: Transform\nThe transform step performs some transformations on the input data and saves the results to a special TensorFlow file called a TFRecord file containing TF.Example protocol buffers. This allows training to start from preprocessed data. If this step is not used, training would have to perform the same preprocessing on every row of csv data every time it is used. As TensorFlow reads the same data row multiple times during training, this means the same row would be preprocessed multiple times. By writing the preprocessed data to disk, we can speed up training.\nThe transform is required if your source data is in BigQuery table.\nWe run the transform step for the training and eval data.", "!gsutil -m rm -r -f gs://datalab-chicago-taxi-demo/transform # Remove previous transform results if any.", "Transform takes about 6 hours in cloud. Data is fairely big (33GB) and processing locally on a single VM would be much longer.", "%%ml transform --cloud\noutput: gs://datalab-chicago-taxi-demo/transform\nanalysis: gs://datalab-chicago-taxi-demo/analysis\ndata: taxi_data_full\n\n!gsutil list gs://datalab-chicago-taxi-demo/transform/eval-*\n\n%%ml dataset create\nname: taxi_data_transformed\nformat: transformed\ntrain: gs://datalab-chicago-taxi-demo/transform/train-*\neval: gs://datalab-chicago-taxi-demo/transform/eval-*", "Step 3: Training\nMLWorkbench help build standard TensorFlow models without you having to write any TensorFlow code. We already know from last notebook that DNN regression model works better.", "!gsutil -m rm -r -f gs://datalab-chicago-taxi-demo/train # Remove previous training results.", "Training takes about 30 min with \"STANRDARD_1\" scale_tier. Note that we will perform 1M steps. This will take much longer if we run it locally on Datalab's VM. 
With CloudML Engine, it runs training in a distributed way with multiple VMs, so it runs much faster.", "%%ml train --cloud\noutput: gs://datalab-chicago-taxi-demo/train\nanalysis: gs://datalab-chicago-taxi-demo/analysis\ndata: taxi_data_transformed\nmodel_args:\n model: dnn_regression\n hidden-layer-size1: 400\n hidden-layer-size2: 200\n train-batch-size: 1000\n max-steps: 1000000\ncloud_config:\n region: us-east1\n scale_tier: STANDARD_1", "Step 4: Evaluation using batch prediction\nBelow, we use the evaluation model and run batch prediction in cloud. For demo purpose, we will use the evaluation data again.", "# Delete previous results\n!gsutil -m rm -r gs://datalab-chicago-taxi-demo/batch_prediction", "Currently, batch_prediction service does not work with BigQuery data. So we export eval data to csv file.", "%%bq extract\ntable: chicago_taxi.eval\nformat: csv\npath: gs://datalab-chicago-taxi-demo/eval.csv", "Run batch prediction. Note that we use evaluation_model because it takes input data with target (truth) column.", "%%ml batch_predict --cloud\nmodel: gs://datalab-chicago-taxi-demo/train/evaluation_model\noutput: gs://datalab-chicago-taxi-demo/batch_prediction\nformat: csv\ndata:\n csv: gs://datalab-chicago-taxi-demo/eval.csv\ncloud_config:\n region: us-east1", "Once batch prediction is done, check results files. Batch prediction service outputs to JSON files.", "!gsutil list -l -h gs://datalab-chicago-taxi-demo/batch_prediction", "We can load the results back to BigQuery.", "%%bq load\nformat: json\nmode: overwrite \ntable: chicago_taxi.eval_results\npath: gs://datalab-chicago-taxi-demo/batch_prediction/prediction.results*\nschema:\n - name: unique_key\n type: STRING\n - name: predicted\n type: FLOAT\n - name: target\n type: FLOAT", "With data in BigQuery can do some query analysis. For example, RMSE.", "%%ml evaluate regression\nbigquery: chicago_taxi.eval_results", "From above, the results are better than local run with sampled data. RMSE reduced by 2.5%, MAE reduced by around 20%. Average absolute error reduced by around 30%.\nSelect top results sorted by error.", "%%bq query\nSELECT\n predicted, \n target,\n ABS(predicted-target) as error,\n s.* \nFROM `chicago_taxi.eval_results` as r \nJOIN `chicago_taxi.eval` as s \nON r.unique_key = s.unique_key \nORDER BY error DESC\nLIMIT 10", "There is also a feature slice visualization component designed for viewing evaluation results. 
It shows correlation between features and prediction results.", "%%bq query --name error_by_hour\nSELECT\n COUNT(*) as count,\n hour as feature,\n AVG(ABS(predicted - target)) as avg_error,\n STDDEV(ABS(predicted - target)) as stddev_error\nFROM `chicago_taxi.eval_results` as r\nJOIN `chicago_taxi.eval` as s \nON r.unique_key = s.unique_key \nGROUP BY hour\n\n# Note: the interactive output is replaced with a static image so it displays well in github.\n# Please execute this cell to see the interactive component.\n\nfrom google.datalab.ml import FeatureSliceView\n\nFeatureSliceView().plot(error_by_hour)\n\n%%bq query --name error_by_weekday\nSELECT\n COUNT(*) as count,\n weekday as feature,\n AVG(ABS(predicted - target)) as avg_error,\n STDDEV(ABS(predicted - target)) as stddev_error\nFROM `chicago_taxi.eval_results` as r\nJOIN `chicago_taxi.eval` as s \nON r.unique_key = s.unique_key \nGROUP BY weekday\n\n# Note: the interactive output is replaced with a static image so it displays well in github.\n# Please execute this cell to see the interactive component.\n\nfrom google.datalab.ml import FeatureSliceView\n\nFeatureSliceView().plot(error_by_weekday)", "What we can see from above charts is that model performs worst in hour 5 and 6 (why?), and best on Sundays (less traffic?).\nModel Deployment and Online Prediction\nModel deployment works the same between locally trained models and cloud trained models. Please see previous notebook (Taxi Fare Model (small data)).\nCleanup", "!gsutil -m rm -rf gs://datalab-chicago-taxi-demo" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jdsanch1/SimRC
02. Parte 2/15. Clase 15/.ipynb_checkpoints/10Class NB-checkpoint.ipynb
mit
[ "Clase 10: Inclusión de un activo libre de riesgo en el portafolio\nJuan Diego Sánchez Torres, \nProfesor, MAF ITESO\n\nDepartamento de Matemáticas y Física\ndsanchez@iteso.mx\nTel. 3669-34-34 Ext. 3069\nOficina: Cubículo 4, Edificio J, 2do piso\n\n1. Motivación\nEn primer lugar, para poder bajar precios y información sobre opciones de Yahoo, es necesario cargar algunos paquetes de Python. En este caso, el paquete principal será Pandas. También, se usarán el Scipy y el Numpy para las matemáticas necesarias y, el Matplotlib y el Seaborn para hacer gráficos de las series de datos.", "#importar los paquetes que se van a usar\nimport pandas as pd\nimport pandas_datareader.data as web\nimport numpy as np\nimport datetime\nfrom datetime import datetime\nimport scipy.stats as stats\nimport scipy as sp\nimport scipy.optimize as scopt\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport sklearn.covariance as skcov\n%matplotlib inline\n#algunas opciones para Python\npd.set_option('display.notebook_repr_html', True)\npd.set_option('display.max_columns', 6)\npd.set_option('display.max_rows', 10)\npd.set_option('display.width', 78)\npd.set_option('precision', 3)", "2. Uso de Pandas para descargar datos de precios de cierre\nBajar datos en forma de función", "def get_historical_closes(ticker, start_date, end_date):\n p = web.DataReader(ticker, \"yahoo\", start_date, end_date).sort_index('major_axis')\n d = p.to_frame()['Adj Close'].reset_index()\n d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)\n pivoted = d.pivot(index='Date', columns='Ticker')\n pivoted.columns = pivoted.columns.droplevel(0)\n return pivoted", "Una vez cargados los paquetes, es necesario definir los tickers de las acciones que se usarán, la fuente de descarga (Yahoo en este caso, pero también se puede desde Google) y las fechas de interés. Con esto, la función DataReader del paquete pandas_datareader bajará los precios solicitados.\nNota: Usualmente, las distribuciones de Python no cuentan, por defecto, con el paquete pandas_datareader. Por lo que será necesario instalarlo aparte. El siguiente comando instala el paquete en Anaconda:\n*conda install -c conda-forge pandas-datareader *", "assets = ['AAPL','MSFT','AA','AMZN','KO','QAI']\ncloses=get_historical_closes(assets, '2016-01-01', '2017-09-22')\ncloses\n\ncloses.plot(figsize=(8,6));", "Nota: Para descargar datos de la bolsa mexicana de valores (BMV), el ticker debe tener la extensión MX. \nPor ejemplo: MEXCHEM.MX, LABB.MX, GFINBURO.MX y GFNORTEO.MX.\n3. 
Formulación del riesgo de un portafolio", "def calc_daily_returns(closes):\n return np.log(closes/closes.shift(1))[1:]\n\ndaily_returns=calc_daily_returns(closes)\ndaily_returns\n\ndaily_returns_b=calc_daily_returns(closes)\nyb=0.000001\ndaily_returns_b['BOND']=yb*np.ones(daily_returns.index.size)\ndaily_returns_b\n\nmean_daily_returns = pd.DataFrame(daily_returns.mean(),columns=['Mean'],index=daily_returns.columns)\nmean_daily_returns\n\nmean_daily_returns_b = pd.DataFrame(daily_returns_b.mean(),columns=['Mean'],index=daily_returns_b.columns)\nmean_daily_returns_b\n\ncov_matrix = daily_returns.cov()\ncov_matrix\n\ndaily_returns.corr().stack().sort_values(axis=0, ascending=True, kind='quicksort')\n\ncov_matrix_b = daily_returns_b.cov()\ncov_matrix_b\n\n#robust_cov_matrix= pd.DataFrame(skcov.EmpiricalCovariance().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)\n#robust_cov_matrix= pd.DataFrame(skcov.EllipticEnvelope().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)\n#robust_cov_matrix= pd.DataFrame(skcov.MinCovDet().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)\nrobust_cov_matrix= pd.DataFrame(skcov.ShrunkCovariance().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)\nrobust_cov_matrix\n\nrobust_cov_matrix_b= pd.DataFrame(np.insert((np.insert(skcov.ShrunkCovariance().fit(daily_returns).covariance_,len(assets),0,axis=0)),len(assets),0,axis=1)\n,columns=daily_returns_b.columns,index=daily_returns_b.columns)\nrobust_cov_matrix_b", "4. Optimización de portafolios", "num_portfolios = 200000\nnum_assets=len(assets)\n\nr=0.0001\n\nweights = np.array(np.random.random(num_assets*num_portfolios)).reshape(num_portfolios,num_assets)\nweights = weights*np.matlib.repmat(1/weights.sum(axis=1),num_assets,1).T\nrend=252*weights.dot(mean_daily_returns.values[:,0]).T\nsd = np.zeros(num_portfolios)\nfor i in range(num_portfolios):\n sd[i]=np.sqrt(252*(((weights[i,:]).dot(robust_cov_matrix)).dot(weights[i,:].T))) \nsharpe=np.divide((rend-r),sd)\nresults_frame = pd.DataFrame(data=np.column_stack((rend,sd,sharpe,weights)),columns=(['Rendimiento','SD','Sharpe']+list(daily_returns.columns)))\n\n#Sharpe Ratio\nmax_sharpe_port = results_frame.iloc[results_frame['Sharpe'].idxmax()]\n#Menor SD\nmin_vol_port = results_frame.iloc[results_frame['SD'].idxmin()]\n\nplt.scatter(results_frame.SD,results_frame.Rendimiento,c=results_frame.Sharpe,cmap='RdYlBu')\nplt.xlabel('Volatility')\nplt.ylabel('Returns')\nplt.colorbar()\n#Sharpe Ratio\nplt.scatter(max_sharpe_port[1],max_sharpe_port[0],marker=(5,1,0),color='r',s=1000);\n#Menor SD\nplt.scatter(min_vol_port[1],min_vol_port[0],marker=(5,1,0),color='g',s=1000);\n\npd.DataFrame(max_sharpe_port)\n\npd.DataFrame(min_vol_port)\n\nnum_assets_b=len(assets)+1\nweights_b = np.array(np.random.random(num_assets_b*num_portfolios)).reshape(num_portfolios,num_assets_b)\nweights_b[0:int(num_portfolios/5),-1]=weights_b[0:int(num_portfolios/5),-1]+5\nweights_b = weights_b*np.matlib.repmat(1/weights_b.sum(axis=1),num_assets_b,1).T\nweights_b[0,:]=np.zeros(num_assets_b) \nweights_b[0,:][-1]=1\nrend_b=252*weights_b.dot(mean_daily_returns_b.values[:,0]).T\nsd_b = np.zeros(num_portfolios)\nfor i in range(num_portfolios):\n sd_b[i]=np.sqrt(252*(((weights_b[i,:]).dot(robust_cov_matrix_b)).dot(weights_b[i,:].T))) \nsharpe_b = np.zeros(num_portfolios) \nsharpe_b[1:]=np.divide((rend_b[1:]-r),sd_b[1:])\nresults_frame_b = 
pd.DataFrame(data=np.column_stack((rend_b,sd_b,sharpe_b,weights_b)),columns=(['Rendimiento','SD','Sharpe']+list(daily_returns_b.columns)))\n\n#Sharpe Ratio\nmax_sharpe_port_b = results_frame_b.iloc[results_frame_b['Sharpe'].idxmax()]\n#Menor SD\nmin_vol_port_b = results_frame_b.iloc[results_frame_b['SD'].idxmin()]\n\nplt.scatter(results_frame_b.SD,results_frame_b.Rendimiento,c=results_frame_b.Sharpe,cmap='RdYlBu')\nplt.xlabel('Volatility')\nplt.ylabel('Returns')\nplt.colorbar()\n#Sharpe Ratio\nplt.scatter(max_sharpe_port_b[1],max_sharpe_port_b[0],marker=(5,1,0),color='r',s=1000);\n#Menor SD\nplt.scatter(min_vol_port_b[1],min_vol_port_b[0],marker=(5,1,0),color='g',s=1000);\n\npd.DataFrame(max_sharpe_port_b)\n\npd.DataFrame(min_vol_port_b)\n\ndef sim_mont_portfolio(daily_returns,num_portfolios,risk_free):\n num_assets=len(daily_returns.T)\n #Packages\n import pandas as pd\n import sklearn.covariance as skcov\n import statsmodels.api as sm\n huber = sm.robust.scale.Huber()\n #Mean and standar deviation returns\n returns_av, scale = huber(daily_returns)\n #returns_av = daily_returns.mean()\n covariance= skcov.ShrunkCovariance().fit(daily_returns).covariance_\n #Simulated weights\n weights = np.array(np.random.random(num_assets*num_portfolios)).reshape(num_portfolios,num_assets)\n weights = weights*np.matlib.repmat(1/weights.sum(axis=1),num_assets,1).T\n ret=252*weights.dot(returns_av).T\n sd = np.zeros(num_portfolios)\n for i in range(num_portfolios):\n sd[i]=np.sqrt(252*(((weights[i,:]).dot(covariance)).dot(weights[i,:].T))) \n sharpe=np.divide((ret-risk_free),sd) \n return pd.DataFrame(data=np.column_stack((ret,sd,sharpe,weights)),columns=(['Returns','SD','Sharpe']+list(daily_returns.columns)))\n\nresults_frame = sim_mont_portfolio(daily_returns,200000,r)\n\n#Sharpe Ratio\nmax_sharpe_port = results_frame.iloc[results_frame['Sharpe'].idxmax()]\n#Menor SD\nmin_vol_port = results_frame.iloc[results_frame['SD'].idxmin()]\n\nplt.scatter(results_frame.SD,results_frame.Returns,c=results_frame.Sharpe,cmap='RdYlBu')\nplt.xlabel('Volatility')\nplt.ylabel('Returns')\nplt.colorbar()\n#Sharpe Ratio\nplt.scatter(max_sharpe_port[1],max_sharpe_port[0],marker=(5,1,0),color='r',s=1000);\n#Menor SD\nplt.scatter(min_vol_port[1],min_vol_port[0],marker=(5,1,0),color='g',s=1000);\n\npd.DataFrame(max_sharpe_port)\n\npd.DataFrame(min_vol_port)\n\nimport sim_mont_portfolio_py\n\nsim_mont_portfolio_py.sim_mont_portfolio(daily_returns,2000,r)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JKeun/project-01-findmyjob
findmyjob_multiNB_model.ipynb
mit
[ "Find my job\n\nsmall project in data science school (남지열, 박재근, 신은지)\n\n\n\nWhy?\n\n내가 하고 싶은 일은 무엇일까?\n내가 잘 할 수 있는 일은 무엇일까?\n나는 과연 데이터 사이언티스트로서 적합한 사람일까? 잘할 수 있을까?\n\nGoal\n\n내가 가진 역량 및 성격을 입력 (input) = > 나에게 적합한 직업을 예측 (output)\n\nHow?\n\n\n데이터 수집\n\nlinkedin 웹사이트에서 키워드 'Data Scientist'로 채용공고 검색 (y)\n해당 공고에서 requirements & qualifications 크롤링 (X)\n\n\n\n분석 방법\n\nSupervised learning\nNaive Bayesian\nMultinomial Naive Bayes\n\n\n$\\hat{y} = \\arg\\max_y P(y) \\prod_{i=1}^{n} P(x_i \\mid y)$\n\n\n\n\n\n\n< Workflow >\n데이터수집 => 전처리 => 모델선택 => 계수추정 => 평가 => 개선작업 => 최종 성능평가\n\n1. Data & Samples\n\ninput data is string of job description(reponsibility&qualification)\ntarget is {class0, class1, class2} -> {'Data Science', 'Digital Marketing', 'UX/UI Deginger'}", "categories = ['Data Science', 'Digital Marketing', 'UX/UI Designer']\n\ndf = pd.read_excel('./resource/job.xlsx')\nX_train = df['X'].values\ny_train = df['Y'].values\ndf.head()", "2. Extracting features from text (Preprocessing)\n\nX_train_tfidf == X_train_tfidf_vect", "from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer\ncount_vect = CountVectorizer()\nX_train_counts = count_vect.fit_transform(X_train)\n\ntfidf_transformer = TfidfTransformer()\nX_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)\n\n#or\n\ntfidf_vect = TfidfVectorizer()\nX_train_tfidf_vect = tfidf_vect.fit_transform(X_train)\n\nprint(X_train_tfidf != X_train_tfidf_vect)", "3. Model Selection & Parameter search\n\nMultinomialNB\nSGD", "from sklearn.cross_validation import StratifiedKFold\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.linear_model.stochastic_gradient import SGDClassifier\n\ncv = StratifiedKFold(y_train, n_folds=5, random_state=0)\n\ni_range = []\nscore_range = []\nsigma = []\n\nfor a in np.arange(-5e-05, 5e-05, 1e-05):\n mnb = MultinomialNB(alpha = a)\n scores = np.zeros(5)\n \n for i, (train_idx, test_idx) in enumerate(cv):\n X_val = X_train_tfidf[train_idx]\n y_val = y_train[train_idx]\n X_test = X_train_tfidf[test_idx]\n y_test = y_train[test_idx]\n \n mnb.fit(X_val, y_val)\n y_pred = mnb.predict(X_test)\n \n scores[i] = np.mean(y_pred == y_test)\n \n i_range.append(a)\n score_range.append(np.mean(scores))\n sigma.append(np.std(scores))\n \nbest_idx = np.argmax(score_range)\nbest_alpha = i_range[best_idx]\nbest_score = score_range[best_idx]\nsigma\n\nplt.figure(figsize = (15, 5))\nplt.plot(i_range, score_range)\nplt.plot(i_range, np.array(score_range) + sigma, 'g--')\nplt.plot(i_range, np.array(score_range) - sigma, 'g--')\nplt.axhline(best_score - sigma[best_idx], linestyle=':', color='r')\nplt.axvline(best_alpha, linestyle=':', color='r')\n\ndef find_nearest(array, value):\n idx = (np.abs(array-value)).argmin()\n return idx\n\nsub_alpha = i_range[find_nearest(score_range, best_score - sigma[best_idx])]\nsub_score = best_score - sigma[best_idx]\n\nplt.scatter(sub_alpha, sub_score, color='r', s=50)\nplt.xlim(0, 0.00003)\nplt.ylabel('CV score')\nplt.xlabel('alpha')\n\nprint(\"best alpha : \", best_alpha)\nprint(\"best score : \", best_score)\nprint(\" 1-sigma : \", sigma[best_idx])\nprint(\"=\"*25)\nprint(\"sub_opt alpha : \", sub_alpha)\nprint(\"sub_opt score : \", sub_score)\n\ncv = StratifiedKFold(y_train, n_folds=5, random_state=0)\n\ni_range = []\nscore_range = []\nsigma = []\n\nfor a in np.arange(1e-5, 10, 0.1):\n sgd = SGDClassifier(alpha = a, loss='log')\n scores = np.zeros(5)\n \n for i, (train_idx, test_idx) in enumerate(cv):\n X_val = X_train_tfidf[train_idx]\n y_val = 
y_train[train_idx]\n X_test = X_train_tfidf[test_idx]\n y_test = y_train[test_idx]\n \n sgd.fit(X_val, y_val)\n y_pred = sgd.predict(X_test)\n \n scores[i] = np.mean(y_pred == y_test)\n \n i_range.append(a)\n score_range.append(np.mean(scores))\n sigma.append(np.std(scores))\n \nbest_idx = np.argmax(score_range)\nbest_alpha = i_range[best_idx]\nbest_score = score_range[best_idx]\nsigma\n\nplt.figure(figsize = (15, 5))\nplt.plot(i_range, score_range)\nplt.plot(i_range, np.array(score_range) + sigma, 'g--')\nplt.plot(i_range, np.array(score_range) - sigma, 'g--')\nplt.axhline(best_score - sigma[best_idx], linestyle=':', color='r')\nplt.axvline(best_alpha, linestyle=':', color='r')\n\ndef find_nearest(array, value):\n idx = (np.abs(array-value)).argmin()\n return idx\n\nsub_alpha = i_range[find_nearest(score_range, best_score - sigma[best_idx])]\nsub_score = best_score - sigma[best_idx]\n\nplt.scatter(sub_alpha, sub_score, color='r', s=50)\n#plt.xlim(0, 0.00003)\nplt.ylabel('CV score')\nplt.xlabel('alpha')\n\nprint(\"best alpha : \", best_alpha)\nprint(\"best score : \", best_score)\nprint(\" 1-sigma : \", sigma[best_idx])\nprint(\"=\"*25)\nprint(\"sub_opt alpha : \", sub_alpha)\nprint(\"sub_opt score : \", sub_score)", "4. Tuning & Improvement", "from sklearn.pipeline import Pipeline\ntext_clf = Pipeline([\n ('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('clf', MultinomialNB()),\n ])\n#text_clf = text_clf.fit(X_train, y_train)\n\nfrom sklearn.grid_search import GridSearchCV\nparameters = {\n 'vect__ngram_range': [(1, 1), (1, 2)],\n 'vect__stop_words': ['english', None],\n 'tfidf__use_idf' : [True, False],\n 'clf__alpha' : np.arange(-5e-05, 5e-05, 1e-05),\n}\ngs_clf = GridSearchCV(text_clf, parameters, n_jobs=-1)\ngs_clf = gs_clf.fit(X_train, y_train)\n\nbest_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])\nfor param_name in sorted(parameters.keys()):\n print(\"{name}: {best}\".format(\n name=param_name, best=best_parameters[param_name]\n ))\nprint(\"=\"*25)\nprint('score :', score)", "5. Final test", "test_df = pd.read_excel('./resource/test.xlsx')\nX_test = test_df['X'].values\ny_test = test_df['Y'].values\ntest_df.head()\n\nfinal_clf = Pipeline([\n ('vect', CountVectorizer(ngram_range=(1, 2), stop_words='english')),\n ('clf', MultinomialNB(alpha=1e-05))\n ])\nfinal_clf = final_clf.fit(X_train, y_train)\n\npredicted = final_clf.predict(X_test)\nprint('='*20)\nprint(\"test score is :\" ,np.mean(predicted == y_test))\nprint('='*20)", "6. 
Prediction", "docs = [raw_input()]\n\npredicted = final_clf.predict(docs)[0]\nprob = final_clf.predict_proba(list(docs))[0]\nprob_gap = np.max(prob) - np.median(prob)\n\nif prob_gap > 0.4:\n print(\"\\n==== Your job ====\")\n print(categories[predicted])\n print(\"\\n=== Probability ===\")\n print(prob[predicted])\n \nelse:\n print(\"+++More detailed words please+++\")", "docs = [raw_input()]\npredicted = final_clf.predict(docs)[0]\nprob = final_clf.predict_proba(list(docs))[0]\nprob_gap = np.max(prob) - np.median(prob)\nif prob_gap > 0.4:\n print(\"\\n==== Your job ====\")\n print(categories[predicted])\n print(\"\\n=== Probability ===\")\n print(prob[predicted])\nelse:\n print(\"+++More detailed words please+++\")\nSample Resume: Data Scientist\nCore Competencies\nStrategic Thinking: Able to influence the strategic direction of the company by identifying opportunities in large, rich data sets and creating and implementing data driven strategies that fuel growth including revenue and profits.\nModeling: Design and implement statistical / predictive models and cutting edge algorithms utilizing diverse sources of data to predict demand, risk and price elasticity. Experience with creating ETL processes to source and link data.\nAnalytics: Utilize analytical applications like SAS to identify trends and relationships between different pieces of data, draw appropriate conclusions and translate analytical findings into risk management and marketing strategies that drive value.\nDrive Enhancements: Develop tools and reports that help users access and analyze data resulting in higher revenues and margins and a better customer experience.\nCommunications and Project Management: Capable of turning dry analysis into an exciting story that influences the direction of the business and communicating with diverse teams to take a project from start to finish. Collaborate with product teams to develop and support our internal data platform and to support ongoing analyses.\nSkills and Tools\nNoSQL data stores (Cassandra, MongoDB)\nHadoop, MySQL, Big Table, MapReduce, SAS\nLarge-scale, distributed systems design and development\nScaling, performance and scheduling and ETL techniques\nC, C++, Java, Ruby on Rails\nSample Resume: Digital Marketing Manager\nPromoted to manage and revitalize negative-performing display channel (mobile, video, banners, and social). Oversight included campaign conceptualization, vendor prospecting, media buying/negotiating, campaign trafficking, data analysis, account optimization, secondary monetization partnerships, and new hire training. Also pioneered and developed Education Ad Network and internal ad network across 8 company websites.\nRevived unprofitable display channel, generating $9.1 million in annual revenue growth and $2.1 million in annual margin dollars in less than 3 years.\nEstablished and developed secondary monetization strategies and new revenue streams that drove $1.2 million in profit in 2013.\nBoosted lead quality 80.2% by analyzing enrollment data, shifting media mix and enhancing marketing.\nIncreased channel margin by 400% in 2012. Also beat 2012 lead goal by 18%, bringing in $1.1 million in additional revenue.\nPlanned, executed, and optimized over 30 accounts at any given time, including high-profile vendors such Ad Roll, AOL, CareerBuilder, FutureAds, M&C Saatchi, Pandora, and Twitter.\nDetermined ad placement and pricing strategy, as well as developed direct advertiser and third-party relationships (AOL, Google, etc.) 
to build out highly profitable internal ad network.\nManaged Marketing Coordinators, Ad Ops Manager, and interns, and established new department standards for training new hires.\nSample Resume: UX/UI Designer\nSkills\nI synthesize the needs and goals of users, product managers, marketing and salespeople, user researchers, writers, localizers, developers and testers to ensure the best possible experience.\ninteraction design and documentation\nPersonas, user scenarios, UX specs, task flows, wireframes, site maps, storyboards, taxonomies, task flows, wireframes, mockups, prototypes, localization, accessibility, visual design and design patterns.\nKnow Windows client, server, phone and Windows Azure UX systems and guidelines.\nprototyping and production\nLow and high fidelity mockups on paper, with Photoshop, Fireworks, PowerPoint, Visio, HTML, WPF, Silverlight and WinRT.\nSpecialty in working directly with developers and testers to make what's designed happen in code.\nWriting & illustrating brand and usage guidelines. Optimizing images, formatting text, sourcing stock photography, creating small Flash pieces and compiling XAML resource dictionaries.\ntechnologies and tools\n(X)HTML, HTML4/5, CSS1/2/3, JavaScript, jQuery, XML/XSLT, PHP, various CMS.\nWindows Presentation Foundation (WPF), Silverlight 3 & 4 and ASP.NET (all in C#).\nApache, MySQL, IIS, SQL Server 2008 configuration and related basic system administration on Mac OS X, Linux, Windows Server 2008 R2 and Windows Server 2012.\nSome familiarity with developing and deploying services built on Windows Azure and System Center 2012 Virtual Machine Manager.\nother creative skills\nTraining in design, fine art drawing, painting, photography and scenography.\nWork as stagehand, electrician and costume technician.\nStudio photography and black-and-white darkroom experience.\nWorking knowledge of non-linear video editing and authoring with Final Cut Pro, Premiere, After Effects, QuickTime and Flash Video.\nPrint design of identities, logos, marketing collateral, apparel, signage, brand guidelines. Proficient with Illustrator, Freehand, PageMaker and InDesign.\nWord Cloud\nData Science", "from wordcloud import WordCloud\nds_text = \" \".join(df[df.Y == 0].X)\nds_text_adjusted = ds_text.lower().replace(\"skill\", \"\").replace(\"experience\", \"\")\ndm_text = \" \".join(df[df.Y == 1].X)\ndm_text_adjusted = dm_text.lower().replace(\"skill\", \"\").replace(\"experience\", \"\")\nux_text = \" \".join(df[df.Y == 2].X)\nux_text_adjusted = ux_text.lower().replace(\"skill\", \"\").replace(\"experience\", \"\")\n\nwordcloud_ds = WordCloud(background_color='white', width=800, height=400).generate(ds_text_adjusted)\nwordcloud_dm = WordCloud(background_color='white', width=800, height=400).generate(dm_text_adjusted)\nwordcloud_ux = WordCloud(background_color='white', width=800, height=400).generate(ux_text_adjusted)\n\nplt.figure(figsize=(18, 10))\nplt.imshow(wordcloud_ds.recolor(random_state=4))\nplt.xticks([])\nplt.yticks([])\nplt.grid(False)", "Digital Marketing", "plt.figure(figsize=(18, 10))\nplt.imshow(wordcloud_dm.recolor(random_state=31))\nplt.xticks([])\nplt.yticks([])\nplt.grid(False)", "UX/UI Design", "plt.figure(figsize=(18, 10))\nplt.imshow(wordcloud_ux.recolor(random_state=33))\nplt.xticks([])\nplt.yticks([])\nplt.grid(False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
napsternxg/DataMiningPython
Lecture Notebooks/Redoing Weka stuff.ipynb
gpl-3.0
[ "Redoing Weka Stuff\nIn this section we will try to redo some of the things we have already done in Weka.\nObjective: To try out some familiar algorithms for classification and regression in python using its libraries.\nImports\nI always try to import all the useful libraries upfront. It is also considered a good practice in programming community.", "%matplotlib inline\nimport numpy as np\nfrom scipy.io import arff\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport patsy\nimport statsmodels.api as sm\n\nfrom sklearn import tree, linear_model, metrics, dummy, naive_bayes, neighbors\n\nfrom IPython.display import Image\nimport pydotplus\n\nsns.set_context(\"paper\")\nsns.set_style(\"ticks\")\n\ndef load_arff(filename):\n data, meta = arff.loadarff(filename)\n df = pd.DataFrame(data, columns=meta.names())\n for c, k in zip(df.columns, meta.types()):\n if k == \"nominal\":\n df[c] = df[c].astype(\"category\")\n if k == \"numeric\":\n df[c] = df[c].astype(\"float\") \n return df\n\ndef get_confusion_matrix(clf, X, y, verbose=True):\n y_pred = clf.predict(X)\n cm = metrics.confusion_matrix(y_true=y, y_pred=y_pred)\n clf_report = metrics.classification_report(y, y_pred)\n df_cm = pd.DataFrame(cm, columns=clf.classes_, index=clf.classes_)\n if verbose:\n print clf_report\n print df_cm\n return clf_report, df_cm\n\ndef show_decision_tree(clf, X, y):\n dot_data = tree.export_graphviz(clf, out_file=None, \n feature_names=X.columns, \n class_names=y.unique(), \n filled=True, rounded=True, \n special_characters=True, impurity=False) \n graph = pydotplus.graph_from_dot_data(dot_data) \n return Image(graph.create_png())\n\n\ndef plot_decision_regions(clf, X, y, col_x=0, col_y=1,\n ax=None, plot_step=0.01, colors=\"bry\"):\n if ax is None:\n fig, ax = plt.subplots()\n x_min, x_max = X[col_x].min(), X[col_x].max()\n y_min, y_max = X[col_y].min(), X[col_y].max()\n xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),\n np.arange(y_min, y_max, plot_step))\n\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n b, Z = np.unique(Z, return_inverse=True)\n Z = Z.reshape(xx.shape)\n cs = ax.contourf(xx, yy, Z, cmap=plt.cm.Paired)\n for i, l in enumerate(clf.classes_):\n idx = np.where(y==l)[0]\n ax.scatter(X.ix[idx, col_x], X.ix[idx, col_y], label=l, c=colors[i], cmap=plt.cm.Paired)\n ax.set_xlabel(col_x)\n ax.set_ylabel(col_y)\n ax.legend(bbox_to_anchor=(1.2, 0.5))\n fig.tight_layout()\n return ax\n\n\ndf = load_arff(\"../data/iris.arff\")\nprint df.shape\ndf.head()\n\ndf.dtypes", "Feature creations - Math Expressions", "df_t = df.copy() ## Since we are going to edit the data we should always make a copy\n\ndf_t.head()\n\ndf_t[\"sepallength_sqr\"] = df_t[\"sepallength\"]**2 ## ** in python is used for exponent.\ndf_t.head()\n\ndf_t[\"sepallength_log\"] = np.log10(df_t[\"sepallength\"])\ndf_t.head()", "Creating many features at once using patsy", "df_t = df_t.rename(columns={\"class\": \"label\"})\ndf_t.head()\n\ny, X = patsy.dmatrices(\"label ~ petalwidth + petallength:petalwidth + I(sepallength**2)-1\", data=df_t, return_type=\"dataframe\")\nprint y.shape, X.shape\n\ny.head()\n\nX.head()\n\nmodel = sm.MNLogit(y, X)\nres = model.fit()\nres.summary()\n\nmodel_sk = linear_model.LogisticRegression(multi_class=\"multinomial\", solver=\"lbfgs\")\nmodel_sk.fit(X, df_t[\"label\"])\n\n\ny_pred = model_sk.predict(X)\n\ny_pred[:10]\n\nprint metrics.classification_report(df_t[\"label\"], y_pred)\n\nmodel_sk_t = tree.DecisionTreeClassifier()\n\nmodel_sk_t.fit(X, 
df_t[\"label\"])\n\nshow_decision_tree(model_sk_t, X, df_t[\"label\"])\n\nmodel_0r = dummy.DummyClassifier(strategy=\"most_frequent\")\nmodel_0r.fit(X, df_t[\"label\"])\ny_pred = model_0r.predict(X)\nprint metrics.classification_report(df_t[\"label\"], y_pred)\n\ncm = metrics.confusion_matrix(y_true=df_t[\"label\"], y_pred=y_pred)\n\ndf_cm = pd.DataFrame(cm, columns=model_0r.classes_, index=model_0r.classes_)\n\ndf_cm\n\n_ = get_confusion_matrix(model_0r, X, df_t[\"label\"])\n\n_ = get_confusion_matrix(model_sk_t, X, df_t[\"label\"])\n\n_ = get_confusion_matrix(model_sk, X, df_t[\"label\"])", "Plot decision regions\nWe can only do this if our data has 2 features", "y, X = patsy.dmatrices(\"label ~ petalwidth + petallength - 1\", data=df_t, return_type=\"dataframe\") \n# -1 forces the data to not generate an intercept\n\nX.columns\n\ny = df_t[\"label\"]\n\nclf = tree.DecisionTreeClassifier()\nclf.fit(X, y)\n_ = get_confusion_matrix(clf, X, y)\n\nclf.feature_importances_\n\nshow_decision_tree(clf, X, y)\n\nX.head()\n\ny.value_counts()\n\nplot_decision_regions(clf, X, y, col_x=\"petalwidth\", col_y=\"petallength\")", "Naive Bayes classifier", "clf = naive_bayes.GaussianNB()\nclf.fit(X, y)\n_ = get_confusion_matrix(clf, X, y)", "Decision surface of Naive Bayes classifier will not have overlapping colors because of the basic code I am using to show decision boundaries. A better code can show the mixing of colors properly", "plot_decision_regions(clf, X, y, col_x=\"petalwidth\", col_y=\"petallength\")", "Logistic regression", "clf = linear_model.LogisticRegression(multi_class=\"multinomial\", solver=\"lbfgs\")\nclf.fit(X, y)\n_ = get_confusion_matrix(clf, X, y)\n\nplot_decision_regions(clf, X, y, col_x=\"petalwidth\", col_y=\"petallength\")", "IBk of K-nearest neighbors classifier", "clf = neighbors.KNeighborsClassifier(n_neighbors=1)\nclf.fit(X, y)\n_ = get_confusion_matrix(clf, X, y)\n\nplot_decision_regions(clf, X, y, col_x=\"petalwidth\", col_y=\"petallength\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
M-R-Houghton/euroscipy_2015
scikit_image/lectures/adv5_blob_segmentation.ipynb
mit
[ "Image segmentation: extracting objects from images\nDuring this part of the tutorial, we will illustrate a task of image processing frequently encountered in natural or material science, that is the extraction and labeling of pixels belonging to objects of interest. Such an operation is called image segmentation.\n<img src=\"../images/phase_separation.png\" width=\"300px\"/>\nImage segmentation typically requires to perform a succession of different operations on the image of interest, therefore this second part of the tutorial will bring the opportunity to use concepts introduced during the first part of the tutorial, such as the manipulation of numpy arrays, or the filtering of images.\nAs an example, we will use a scanning electron microscopy image of a multiphase glass. Let us start by opening the image.", "from __future__ import division, print_function\n%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt, cm\n\nfrom skimage import io\nfrom skimage import img_as_float\n\nim = io.imread('../images/phase_separation.png')\n\nplt.imshow(im, cmap='gray')\n\nim.dtype, im.shape", "For the sake of convenience, one first removes the information bar at the bottom, in order to retain only the region of the image with the blobs of interest. This operation is just an array slicing removing the last rows, for which we can leverage the nice syntax of NumPy's slicing. \nIn order to determine how many rows to remove, it is possible to use either visual inspection, or a more advanced and robust way relying on NumPy machinery in order to determine the first completely dark row.", "phase_separation = im[:947]\nplt.imshow(phase_separation, cmap='gray')\n\nnp.nonzero(np.all(im < 0.1 * im.max(), axis=1))[0][0]", "Image contrast, histogram and thresholding\nIn order to separate blobs from the background, a simple idea is to use the gray values of pixels: blobs are typically darker than the background. \nIn order to check this impression, let us look at the histogram of pixel values of the image.", "from skimage import exposure\n\nhistogram = exposure.histogram(phase_separation)\nplt.plot(histogram[1], histogram[0])\nplt.xlabel('gray value')\nplt.ylabel('number of pixels')\nplt.title('Histogram of gray values')", "Two peaks are clearly visible in the histogram, but they have a strong overlap. What happens if we try to threshold the image at a value that separates the two peaks?\nFor an automatic computation of the thresholding values, we use Otsu's thresholding, an operation that chooses the threshold in order to have a good separation between gray values of background and foreground.", "from skimage import filters\n\nthreshold = filters.threshold_otsu(phase_separation)\nprint(threshold)\n\nfig, ax = plt.subplots(ncols=2, figsize=(12, 8))\nax[0].imshow(phase_separation, cmap='gray')\nax[0].contour(phase_separation, [threshold])\nax[1].imshow(phase_separation < threshold, cmap='gray')", "Image denoising\nIn order to improve the thresholding, we will try first to filter the image so that gray values are more uniform inside the two phases, and more separated. Filters used to this aim are called denoising filters, since their action amounts to reducing the intensity of the noise on the image.\nZooming on a part of the image that should be uniform illustrates well the concept of noise: the image has random variations of gray levels that originate from the imaging process. 
Noise can be due to low photon-counting, or to electronic noise on the sensor, although other sources of noise are possible as well.", "plt.imshow(phase_separation[390:410, 820:840], cmap='gray', \n interpolation='nearest')\nplt.colorbar()\nprint(phase_separation[390:410, 820:840].std())", "Several denoising filters average together pixels that are close to each other. If the noise is not spatially correlated, random noise fluctuations will be strongly attenuated by this averaging. \nOne of the most common denoising filters is called the median filter: it replaces the value of a pixel by the median gray value inside a neighbourhood of the pixel. Taking the median gray value preserves edges much better than taking the mean gray value.\nHere we use a square neighbourhood of size 7x7: the larger the window size, the larger the attenuation of the noise, but this may come at the expense of precision for the location of boundaries. Choosing a window size therefore represents a trade-off between denoising and accuracy.", "from skimage import restoration\nfrom skimage import filters\n\nmedian_filtered = filters.median(phase_separation, np.ones((7, 7)))\n\nplt.imshow(median_filtered, cmap='gray')\n\nplt.imshow(median_filtered[390:410, 820:840], cmap='gray', \n interpolation='nearest')\nplt.colorbar()\nprint(median_filtered[390:410, 820:840].std())", "Variations of gray levels inside zones that should be uniform are now smaller in range, and also spatially smoother.\nPlotting the histogram of the denoised image shows that the gray levels of the two phases are now better separated.", "histo_median = exposure.histogram(median_filtered)\nplt.plot(histo_median[1], histo_median[0])", "As a consequence, Otsu thresholding now results in a much better segmentation.", "plt.imshow(phase_separation[:300, :300], cmap='gray')\nplt.contour(median_filtered[:300, :300], \n [filters.threshold_otsu(median_filtered)])", "Going further: Otsu thresholding with adaptative threshold. For images with non-uniform illumination, it is possible to extend Otsu's method to the case for which different thresholds are used in different regions of space.", "binary_image = median_filtered < filters.threshold_otsu(median_filtered)\n\nplt.imshow(binary_image, cmap='gray')", "Exercise: try other denoising filters\nSeveral other denoising filters are available in scikit-image.\n\n\nThe bilateral filter uses similar ideas as for the median filter or the average filter: it averages a pixel with other pixels in a neighbourhood, but gives more weight to pixels for which the gray value is close to the one of the central pixel. The bilateral filter is very efficient at preserving edges.\n\n\nThe total variation filter results in images that are piecewise-constant. This filter optimizes a trade-off between the closeness to the original image, and the (L1) norm of the gradient, the latter part resulting in picewise-constant regions. \n\n\nGoing further: in addition to trying different denoising filters on the phase separation image, do the same on a synthetic image of a square, corrupted by artificial noise.\nFurther reading on denoising with scikit-image: see the Gallery example on denoising\nAn another approach: more advanced segmentation algorithms\nOur approach above consisted in filtering the image so that it was as binary as possible, and then to threshold it. 
Other methods are possible, that do not threshold the image according only to gray values, but also use spatial information: they tend to attribute the same label to neighbouring pixels. A famous algorithm in order to segment binary images is called the graph cuts algorithm. Although graph cut is not available yet in scikit-image, other algorithms using spatial information are available as well, such as the watershed algorithm, or the random walker algorithm.", "blob_markers = median_filtered < 110\nbg_markers = median_filtered > 160\nmarkers = np.zeros_like(phase_separation)\nmarkers[blob_markers] = 2\nmarkers[bg_markers] = 1\nfrom skimage import morphology\nwatershed = morphology.watershed(filters.sobel(median_filtered), markers)\nplt.imshow(watershed, cmap='gray')", "Image cleaning\nIf we use the denoising + thresholding approach, the result of the thresholding is not completely what we want: small objects are detected, and small holes exist in the objects. Such defects of the segmentation can be amended, using the knowledge that no small holes should exist, and that blobs have a minimal size.\nUtility functions to modify binary images are found in the morphology submodule. Although mathematical morphology encompasses a large set of possible operations, we will only see here how to remove small objects. In order to learn more about mathematical morphology within scikit-image, please take a look at the dedicated tutorial", "from skimage import morphology\n\nonly_large_blobs = morphology.remove_small_objects(binary_image, \n min_size=300)\nplt.imshow(only_large_blobs, cmap='gray')\n\nonly_large = np.logical_not(morphology.remove_small_objects(\n np.logical_not(only_large_blobs), \n min_size=300))\nplt.imshow(only_large, cmap='gray')", "Measuring region properties\nThe segmentation of foreground (objects) and background results in a binary image. In order to measure the properties of the different blobs, one must first attribute a different label to each blob (identified as a connected component of the foreground phase). Then, the utility function measure.regionprops can be used to compute several properties of the labeled regions.\nProperties of the regions can be used for classifying the objects, for example with scikit-learn.", "from skimage import measure\n\nlabels = measure.label(only_large)\nplt.imshow(labels, cmap='spectral')\n\nprops = measure.regionprops(labels, phase_separation)\n\nareas = np.array([prop.area for prop in props])\nperimeters = np.array([prop.perimeter for prop in props])\n\nplt.plot(np.sort(perimeters**2./areas), 'o')", "Other examples\nPlotting labels on an image\nMeasuring region properties\nExercise: visualize an image where the color of a blob encodes its size (blobs of similar size have a similar color). \nExercise: visualize an image where only the most circular blobs are represented. Hint: this involves some manipulations of NumPy arrays.\nProcessing batches of images\nIf one wishes to process a single image, a lot of trial and error is possible, using interactive sessions and intermediate visualizations. Such workflow typically allows to optimize over parameter values, such as the size of the filtering window for denoising the image, or the area of small spurious objects to be removed.\nIn a time of cheap CCD sensors, it is also frequent to deal with collections of images, for which one cannot afford to process each image individually. 
In such a case, the workflow has to be adapted.\n\n\nfunction parameters need to be set in a more robust manner, using statistical information like the typical noise of the image, or the typical size of objects in the image.\n\n\nit is a good practice to divide the different array manipulations into several functions. Outside an Ipython notebook, such functions would typically be found in a dedicated module, that could be imported from a script.\n\n\napplying the same operations to a collection of (independent) images is a typical example of embarassingly parallel workflow, that calls for multiprocessing computation. The joblib module provides a simple helper function for using multiprocessing on embarassingly parallel for loops. \n\n\nLet us first define two functions with a more robust handling of parameters.", "def remove_information_bar(image, value=0.1):\n value *= image.max()\n row_index = np.nonzero(np.all(image < value, axis=1))[0][0]\n return image[:row_index]\n\nfrom scipy import stats\ndef clean_image(binary_image):\n labels = measure.label(binary_image)\n props = measure.regionprops(labels)\n areas = np.array([prop.area for prop in props])\n large_area = stats.scoreatpercentile(areas, 90)\n remove_small = morphology.remove_small_objects(binary_image, \n large_area / 20)\n remove_holes = np.logical_not(morphology.remove_small_objects(\n np.logical_not(remove_small), \n large_area / 20))\n return remove_holes\n\ndef process_blob_image(image):\n image = remove_information_bar(image)\n image = filters.median(image, np.ones((7, 7)))\n binary_im = image < filters.threshold_otsu(image)\n binary_im = clean_image(binary_im)\n return binary_im", "The glob module is very handy to retrieve lists of image file names using wildcard patterns.", "from glob import glob\nfilelist = glob('../images/phase_separation*.png')\nfilelist.sort()\nprint(filelist)\n\nfig, ax = plt.subplots(nrows=2, ncols=2, figsize=(12, 8))\nfor index, filename in enumerate(filelist[1:]):\n print(filename)\n im = io.imread(filename)\n binary_im = process_blob_image(im)\n i, j = np.unravel_index(index, (2, 2))\n ax[i, j].imshow(binary_im, cmap='gray')\n ax[i, j].axis('off')", "Pipeline approach and order of operations\nIt is quite uncommon to perform a successful segmentation in only one or two operations: typical image require some pre- and post-processing. However, a large number of image processing steps, each using some hand-tuning of parameters, can result in disasters, since the processing pipeline will not work as well for a different image.\nAlso, the order in which the operations are performed is important.", "crude_segmentation = phase_separation < filters.threshold_otsu(phase_separation)\n\nclean_crude = morphology.remove_small_objects(crude_segmentation, 300)\nclean_crude = np.logical_not(morphology.remove_small_objects(\n np.logical_not(clean_crude), 300))\nplt.imshow(clean_crude[:200, :200], cmap='gray')", "It would be possible to filter the image to smoothen the boundary, and then threshold again. 
However, it is more satisfying to first filter the image so that it is as binary as possible (which corresponds better to our prior information on the materials), and then to threshold the image.\nGoing further: want to know more about image segmentation with scikit-image, or see different examples?\n\nthe tutorial on image segmentation in the user documentation \nthe chapter on scikit-image of the SciPy lecture notes.\na tutorial on chromosome segmentation with uneven illumination", "%reload_ext load_style\n%load_style ../themes/tutorial.css" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
MarsUniversity/ece387
website/block_4_mobile_robotics/misc/lsn28.ipynb
mit
[ "Inertial Measurement Unit (IMU)\nKevin J. Walchko, 12 July 2017\n\nIMUs are key sensors in Inertial Navigation Systems (INS). INS is key for aircraft, ships, cruise missiles, ICBMs, etc to travel long distances and arrive at a location where we want them. Although the mathematical equations behind an INS is a little complex, the sensors feeding an INS need to be calibrated in order to get good results. Today, a lot of devices have built into them IMU's. The IMU we will use is actually one for a cell phone and costs around $15. \nWhen I was a grad student at UF, I was developing an INS for a robotic system. I was given a MEMS IMU which had a worse performance than the one we are using today. My IMU cost around $5000 back in the mid-to-late 1990's.\nReferences\n\nWikipedia MEMS\nSimple Calibration Routine\nCalibration Routine\nHard/Soft Iron Effects\nWikipedia: Earth's magnetic field\nNASA: Earth's Pole Reversal\nWikipedia: Tesla (unit)\nC version\n\nSetup", "%matplotlib inline\n\nfrom __future__ import print_function\nfrom __future__ import division\nimport numpy as np\nfrom the_collector import BagReader\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import pyplot as plt\nfrom math import sin, cos, atan2, pi, sqrt, asin\nfrom math import radians as deg2rad\nfrom math import degrees as rad2deg", "NXP IMU\n\nOur inertial measurement unit (IMU) contains 2 main chips:\nFXOS8700 3-Axis Accelerometer/Magnetometer\n\n±2 g/±4 g/±8 g adjustable acceleration range\n±1200 µT magnetic sensor range\nOutput data rates (ODR) from 1.563 Hz to 800 Hz\n14-bit ADC resolution for acceleration measurements\n16-bit ADC resolution for magnetic measurements\n\nFXAS21002 3-Axis Gyroscope\n\n±250/500/1000/2000°/s configurable range\nOutput Data Rates (ODR) from 12.5 to 800 Hz\n16-bit digital output resolution\n\nMicoelectromechanical Systems (MEMS)\nMEMS is the technology of microscopic devices, particularly those with moving parts. It merges at the nano-scale into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are also referred to as micromachines in Japan, or micro systems technology (MST) in Europe. Basically, using technology for microprocessor production, companies are able to produce microscopic mechanical devices.\nOur Inertial Measurement Unit is capable of measuring acceleration and the magnetic field of the Earth. Shown below, NXP produced a small mechanical device. Basically, as gravity pulls on the proof mass, the capaticence of C1 and C2 changes. When properly calibrated, the sensor is able to determine the amount of gravity from the capacitence change.\n\nBelow shows what it looks like under an electron microscope.\n\nHere we will take sensor measurements and determine sensor biases to properly calibrate the sensor.\nAccelerometers\nFor the next series of images, imagine the a gross simpification of the above discussion. A proof mass in magically suspended inside a box. The walls of the box are able to measure the amount of force applied to them. The proof mass will only displace from the center under the influence of an external acceleration.\n\nGiven a righthand coordinate system (e.g., standard cartesian coordinate system), the sensors measure acceleration as follows:\n\nNo gravity or acceleration acting on the device. Probably this is in space far from any celestial bodies) or the device is in free fall (terminal velocity).\n\nNow the device is accelerating in the posative x-direction with no gravity present. 
Notice how the proof mass lags behind and presses against the negative x-axis? Thus the IMU reads the reaction (equal and opposite) of what is happening.\n\nJust like before, there is only one acceleration acting on the device, gravity, and we measure an acceleration in the -Z direction. Typically people talk in terms of g's; this is mainly to stay away from the annoying issue of whether we are talking SI units ($9.81 m/sec^2$) or imperial units ($32 ft/sec^2$).\n\nFinally, we have oriented our device $45^\\circ$ up, and gravity is forcing the mass equally on the -x and -z axes.\nAccel Calibration\nManufacturers try to produce sensors that perform well, but when you are making millions of them and low cost is a critical factor in determining if someone will buy it, your sensor won't be perfect. Typically, you want to run the accels through a variety of static tests, where you orient them in various known positions with respect to gravity and measure the offsets.\nGyroscopes\nMEMS gyroscopes contain a pair of masses that are driven to oscillate with equal amplitude but in opposite directions. When rotated, the Coriolis force creates an orthogonal vibration that can be sensed by a variety of mechanisms.\n$$\nF = 2 M v \\times \\Omega\n$$\nwhere $F$ is the force, $M$ is the mass, $v$ is the velocity of the mass, and $\\Omega$ is the angular velocity.\nMagnetometers\nEarth's Magnetic Field\nThe intensity of the field is often measured in gauss (G), but is generally reported in nanoteslas (nT), with 1 G = 100,000 nT. A nanotesla is also referred to as a gamma ($\\gamma$). The tesla is the SI unit of the magnetic field, B. The Earth's field ranges between approximately 25,000 and 65,000 nT (0.25–0.65 G or 25-65 $\\mu$T). By comparison, a strong refrigerator magnet has a field of about 10,000,000 nanoteslas (100 G).\n| Prefix | Symbol | Decimal |\n|--------|--------|-----------|\n| milli | m | $10^{-3}$ |\n| micro | $\\mu$ | $10^{-6}$ |\n| nano | n | $10^{-9}$ |\nGeographical Variation of the Field\n\nTemporal Variation of the Field\nThe Earth's magnetic field is always changing, and the poles reverse roughly every 200k - 300k years.\n\nNoise\nHard Iron Interference\nSo-called hard iron interference is caused by static magnetic fields associated with the environment. For example, this could include any minor (or major) magnetism in the metal chassis or frame of a vehicle, any actual magnets such as speakers, etc... This interference pattern is unique to the environment but is constant. If you have your compass in an enclosure that is held together with metal screws, even these relatively small amounts of ferromagnetic material can cause issues. If we consider the magnetic data circle, hard iron interference has the effect of shifting the entire circle away from the origin by some amount. The amount is dependent on any number of different factors and can be very large. The important part is that this shift is the same for all points in time, so it can be calibrated out very easily with a numeric offset, which is taken care of by the calibration process.\nTo compensate and recenter, for each axis (x,y,z), we will calculate the mean offset ($\\alpha$):\n$$\n\\alpha_x = \\frac{x_{max} + x_{min}}{2} \\\\\nmag_{corrected} = mag_{raw} - \\alpha_x\n$$\nSoft Iron Interference\nSoft iron interference is caused by distortion of the Earth's magnetic field due to materials in the environment. Think of it like electricity - the magnetic field is looking for the easiest path to get to where it is going. 
Since magnetic fields can flow more easily through ferromagnetic materials than air, more of the field will flow through the ferromagnetic material than you would expect if it were just air. This distortion effect causes the magnetic field lines to be bent sometimes quite a bit. Note that unlike hard iron interference which is the result of materials which actually have a magnetic field of their own, soft iron interference is caused by non-magnetic materials distorting the Earth's magnetic field. This type of interference has a squishing effect on the magnetic data circle turning it into more of an ellipsoid shape. The distortion in this case depends on the direction that the compass is facing. Because of this, the distortion cannot be calibrated out with a simple offset, more complicated math will still let the compass account for this type of interference though.\nCell Phones\nThe coordinate systems for cell phones are strange. They don't follow normal aerospace coordinate system definitions for doing INS. This is probably because CompSci types don't understand the math and only think about GUI programming where the axes are typically oriented different. \nSome IMUs, if you spend a little more money, come with Kalman Filters built into them. Understanding how they are setup can be confusing for someone with an aerospace/INS background and not an Andriod/Windows background. Any ways, typical definitions for cell phones IMUs are:\n\n\nOur IMU does not have this issue, but in the future, as cell phone/laptop/tablet IMU become cheaper and more common, it is good information to understand becuase you may be using one on your robot.", "def normalize(x, y, z):\n \"\"\"Return a unit vector\"\"\"\n norm = sqrt(x * x + y * y + z * z)\n if norm > 0.0:\n inorm = 1/norm\n x *= inorm\n y *= inorm\n z *= inorm\n else:\n raise Exception('division by zero: {} {} {}'.format(x, y, z))\n return (x, y, z)\n\ndef plotArray(g, dt=None, title=None):\n \"\"\"\n Plots the x, y, and z components of a sensor.\n \n In:\n title - what you want to name something\n [[x,y,z],[x,y,z],[x,y,z], ...]\n Out:\n None\n \"\"\"\n x = []\n y = []\n z = []\n for d in g:\n x.append(d[0])\n y.append(d[1])\n z.append(d[2])\n \n plt.subplot(3,1,1)\n plt.plot(x)\n plt.ylabel('x')\n plt.grid(True)\n if title:\n plt.title(title)\n \n plt.subplot(3,1,2)\n plt.plot(y)\n plt.ylabel('y')\n plt.grid(True)\n \n plt.subplot(3,1,3)\n plt.plot(z)\n plt.ylabel('z')\n plt.grid(True)\n\ndef getOrientation(accel, mag, deg=True):\n ax, ay, az = normalize(*accel)\n mx, my, mz = normalize(*mag)\n \n roll = atan2(ay, az)\n pitch = atan2(-ax, ay*sin(roll)+az*cos(roll))\n\n heading = atan2(\n mz*sin(roll) - my*cos(roll),\n mx*cos(pitch) + my*sin(pitch)*sin(roll) + mz*sin(pitch)*cos(roll)\n )\n\n if deg:\n roll *= 180/pi\n pitch *= 180/pi\n heading *= 180/pi\n\n heading = heading if heading >= 0.0 else 360 + heading\n heading = heading if heading <= 360 else heading - 360\n else:\n heading = heading if heading >= 0.0 else 2*pi + heading\n heading = heading if heading <= 2*pi else heading - 2*pi\n\n return (roll, pitch, heading)\n\ndef find_calibration(mag):\n \"\"\"\n Go through the raw data and find the max/min for x, y, z\n \"\"\"\n max_m = [-1000]*3\n min_m = [1000]*3\n for m in mag:\n for i in range(3):\n max_m[i] = m[i] if m[i] > max_m[i] else max_m[i]\n min_m[i] = m[i] if m[i] < min_m[i] else min_m[i]\n bias = [0]*3\n for i in range(3):\n bias[i] = (max_m[i] + min_m[i])/2\n return bias\n\ndef apply_calibration(data, bias):\n \"\"\"\n Given the data 
and the bias, correct the data \n \"\"\"\n c_data = []\n for d in data:\n t = []\n for i in [0,1,2]:\n t.append(d[i]-bias[i])\n c_data.append(t)\n \n return c_data\n\ndef split_xyz(data):\n \"\"\"\n Break out the x, y, and z into it's own array for plotting\n \"\"\"\n xx = []\n yy = []\n zz = []\n for v in data:\n xx.append(v[0])\n yy.append(v[1])\n zz.append(v[2])\n return xx, yy, zz\n\ndef plotMagnetometer3D(data, title=None):\n x,y,z = split_xyz(data)\n fig = plt.figure()\n ax = fig.gca(projection='3d')\n ax.plot(x, y, z, '.b');\n ax.set_xlabel('$\\mu$T')\n ax.set_ylabel('$\\mu$T')\n ax.set_zlabel('$\\mu$T')\n if title:\n plt.title(title);\n\ndef plotMagnetometer(data, title=None):\n x,y,z = split_xyz(data)\n plt.plot(x,y,'.b', x,z,'.r', z,y, '.g')\n plt.xlabel('$\\mu$T')\n plt.ylabel('$\\mu$T')\n plt.grid(True);\n plt.legend(['x', 'y', 'z'])\n if title:\n plt.title(title);", "Run Raw Compass Performance\nFirst lets tumble around the imu and grab lots of data in ALL orientations.", "bag = BagReader()\nbag.use_compression = True\ncal = bag.load('imu-1-2.json')\n\ndef split(data):\n ret = []\n rdt = []\n start = data[0][1]\n for d, ts in data:\n ret.append(d)\n rdt.append(ts - start)\n return ret, rdt\n\naccel, adt = split(cal['accel'])\nmag, mdt = split(cal['mag'])\ngyro, gdt = split(cal['gyro'])\n\nplotArray(accel, 'Accel [g]')\n\nplotArray(mag, 'Mag [uT]')\n\nplotArray(gyro, 'Gyros [dps]')\n\n# now, ideally these should be an ellipsoid centered around 0.0\n# but they aren't ... need to fix the bias (offset)\nplotMagnetometer(mag, 'raw mag')\nplotMagnetometer3D(mag, 'raw mag')\n\n# so let's find the bias needed to correct the imu\nbias = find_calibration(mag)\nprint('bias', bias)\n\n# now the data should be nicely centered around (0,0,0)\ncm = apply_calibration(mag, bias)\nplotMagnetometer(cm, 'corrected mag')\nplotMagnetometer3D(cm, 'corrected mag')", "Now using this bias, we should get better performance.\nCheck Calibration", "# apply correction in previous step\ncm = apply_calibration(mag, bias)\nplotMagnetometer(cm)\n\n# Now let's run through the data and correct it\nroll = []\npitch = []\nheading = []\n\nfor accel, mag in zip(a, cm):\n r,p,h = getOrientation(accel, mag)\n roll.append(r)\n pitch.append(p)\n heading.append(h)\n \nx_scale = [x-ts[0] for x in ts]\nprint('timestep', ts[1] - ts[0])\n\nplt.subplot(2,2,1)\nplt.plot(x_scale, roll)\nplt.grid(True)\nplt.title('Roll')\n\nplt.subplot(2,2,2)\nplt.plot(x_scale, pitch)\nplt.grid(True)\nplt.title('Pitch')\n\nplt.subplot(2,2,3)\nplt.plot(x_scale, heading)\nplt.grid(True)\nplt.title('Heading');", "Now, this data was acquired with the imu starting off flat and slow rotated 360 degrees with stops around 90, 180, 270 from the starting position. At the end, I started wobbling/nutating (don't know how else to describe it) so you see the roll and pitch jump up ... there was some inadvertant change to heading during the end.\nNow looking at the results, you can see I didn't start at 0 deg and there is approximately 4 movements about 90 degrees appart ... which is what I did.\n\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-sa/4.0/88x31.png\" /></a><br />This work is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-sa/4.0/\">Creative Commons Attribution-ShareAlike 4.0 International License</a>." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gVallverdu/cookbook
stacked_bar_graph.ipynb
gpl-2.0
[ "Stacked Bar Graph with pandas ans matplotlib\nGermain Salvato Vallverdu germain.vallverdu@univ-pau.fr\nQuelques notions sur Stacked Bar Graph sur le datavizcatalogue.\nCe notebook présente la construction d'un stacked bar graph avec pandas et matplotlib. \nConcernant pandas, ce notebook montre comment :\n\nappliquer une fonction colonne par colonne\nTransformer un score en catégorie\nExtraire le pourcentatge d'une catégorie par colonne", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style=\"white\", font_scale=1.5)\n%matplotlib inline", "Création d'une data frame aléatoire", "N = 10\ndf = pd.DataFrame({k: np.random.randint(low=1, high=7, size=N) for k in \"ABCD\"})\ndf", "Transformer le score en catégorie\nL'idée est de transformer le score du questionnaire en une catégorie : \"défavorise\", \"neutre\", \"favrorise\".\nLe point le plus difficile est l'écriture de la fonction. Avec df.apply() on va appliquer la fonction à l'ensemble du tableau. On peut le faire de deux manières différentes.\nVersion 1: fonction par colonne\nIci je vais appliquer la fonction colonne par colonne (axis=0). Il faut donc que la fonction prenne commme argument la colonne et retourne la liste des nouvelles valeurs de la colonne.\nDonc \n\nOn commence par créer une liste vide\nOn fait une boucle sur les éléments de la colonne\nOn remplit la liste", "def cat_column(column):\n values = list()\n for val in column:\n if val <= 2:\n cat = \"defavorise\"\n elif val >= 5:\n cat = \"favorise\"\n else:\n cat = \"neutre\"\n values.append(cat)\n return values", "Sur le principe, voici comme ça marche avec :", "liste = [1, 2, 3, 4, 5, 6]\nvalues = cat_column(liste)\nprint(values)", "Maintenant on applique à notre tableau :", "df_cat = df.apply(cat_column, axis=0)\ndf_cat", "Version 2: sur tout le tableau\nLes opérations ne dépendent que d'une case du tableau. On peut donc utiliser une fonction qui ne connait que le contenu d'une case et retourne la bonne catégorie. La fonction est plus simple (pas de boucle), elle prend comme arguement le contenu d'une case et retourne la catégorie :", "def cat_cell(val):\n if val <= 2:\n cat = \"defavorise\"\n elif val >= 5:\n cat = \"favorise\"\n else:\n cat = \"neutre\"\n\n return cat", "Par exemple :", "print(cat_cell(2), cat_cell(3))", "On l'applique au tableau. Maintenant on doit utiliser la méthode df.applymap() au lieu de apply().", "df_cat = df.applymap(cat_cell)\ndf_cat", "Pourcentage de chaque catégorie par colonne\nMaintenant on va calculer le nombre de fois que chaque catégorie apparaît dans une colonne. Il faut pour cela utiliser pd.value_counts() :", "df_cat.apply(pd.value_counts)", "Si on veut le pourcentage, il faut savoir combien il y a de lignes. Dans cet exemple, c'est N que l'on a définit tout au début. Sinon, il faut récupérer le nombre de lignes de la DataFrame. 
Cette information est contenu dans df.shape :", "nrows, ncols = df_cat.shape\nprint(nrows, ncols)\n\ndf_percent = df_cat.apply(pd.value_counts) / nrows * 100\ndf_percent", "Stacked histogram\nAvant de faire le graphique avec pandas, il faut transposer le tableau pour qu'il trace en fonction de A, B, C et D.", "df_percent_t = df_percent.transpose()\ndf_percent_t", "Pour que le graphique soit plus cohérent, on va réorganiser les colonnes de sorte que \"neutre\" soit au milieu :", "df_percent_t = df_percent_t[[\"defavorise\", \"neutre\", \"favorise\"]]\ndf_percent_t", "Voici une première version, le principe étant de choisir un graphique de type barh avec stacked vrai. On choisit ensuite une colormap divergente pour donner un sens aux couleurs.", "fig = plt.figure()\nax = fig.add_subplot(111)\ndf_percent_t.plot(kind=\"barh\", stacked=True, ax=ax, colormap=\"RdYlGn\", alpha=.8, xlim=(0, 101))\nax.legend(ncol=3, loc='upper center', bbox_to_anchor=(0.5, 1.15), frameon=False)", "Un peu plus de détails, la partie compliquée est l'ajout des annotations.", "fig = plt.figure()\nax = fig.add_subplot(111)\ndf_percent_t.plot(kind=\"barh\", stacked=True, ax=ax, colormap=\"RdYlGn\", alpha=.8)\nax.legend(ncol=3, loc='upper center', bbox_to_anchor=(0.5, 1.15), frameon=False)\nax.set_frame_on(False)\nax.set_xticks([])\n\n# add texts\ny = 0\nfor index, row in df_percent_t.iterrows(): # boucle sur les lignes\n # on calcule les intervalles \n xbounds = [0]\n for i in range(len(row)): # len(row) est le nombre d'éléments sur la ligne, 3 ici\n xbounds.append(xbounds[i] + row[i])\n print(xbounds)\n # ajout du texte au centre de chaque intervalle\n for i in range(3):\n x = (xbounds[i] + xbounds[i+1]) / 2\n ax.text(x, y, \"%3.0f%%\" % row[i], \n verticalalignment=\"center\",\n horizontalalignment=\"center\")\n y += 1\n", "Pour y voir plus clair sur le calcul des intervalles, voici un exemple :", "l = [40, 50, 10]\nx = [0]\nfor i in range(3):\n x.append(x[i] + l[i])\nprint(x)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
samstav/scipy_2015_sklearn_tutorial
notebooks/02.4 Unsupervised Learning - Clustering.ipynb
cc0-1.0
[ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "Clustering\nClustering is the task of gathering samples into groups of similar\nsamples according to some predefined similarity or dissimilarity\nmeasure (such as the Euclidean distance).\nIn this section we will explore a basic clustering task on some synthetic and real datasets.\nHere are some common applications of clustering algorithms:\n\nCompression, in a data reduction sens\nCan be used as a preprocessing step for recommender systems\nSimilarly:\ngrouping related web news (e.g. Google News) and web search results\ngrouping related stock quotes for investment portfolio management\nbuilding customer profiles for market analysis\nBuilding a code book of prototype samples for unsupervised feature extraction\n\nLet's start of with a very simple and obvious example:", "from sklearn.datasets import make_blobs\nX, y = make_blobs(random_state=42)\nX.shape\n\nplt.scatter(X[:, 0], X[:, 1])", "There are clearly three separate groups of points in the data, and we would like to recover them using clustering.\nEven if the groups are obvious in the data, it is hard to find them when the data lives in a high-dimensional space.\nNow we will use one of the simplest clustering algorithms, K-means.\nThis is an iterative algorithm which searches for three cluster\ncenters such that the distance from each point to its cluster is\nminimized.\nQuestion: what would you expect the output to look like?", "from sklearn.cluster import KMeans\n\nkmeans = KMeans(n_clusters=3, random_state=42)", "We can get the cluster labels either by calling fit and then accessing the \nlabels_ attribute of the K means estimator, or by calling fit_predict.\nEither way, the result contains the ID of the cluster that each point is assigned to.", "labels = kmeans.fit_predict(X)\n\nall(labels == kmeans.labels_)", "Let's visualize the assignments that have been found", "plt.scatter(X[:, 0], X[:, 1], c=labels)", "Here, we are probably satisfied with the clustering. But in general we might want to have a more quantitative evaluation. How about we compare our cluster labels with the ground truth we got when generating the blobs?", "from sklearn.metrics import confusion_matrix, accuracy_score\nprint(accuracy_score(y, labels))\nprint(confusion_matrix(y, labels))\n", "Even though we recovered the partitioning of the data into clusters perfectly, the cluster IDs we assigned were arbitrary,\nand we can not hope to recover them. Therefore, we must use a different scoring metric, such as adjusted_rand_score, which is invariant to permutations of the labels:", "from sklearn.metrics import adjusted_rand_score\nadjusted_rand_score(y, labels)", "Clustering comes with assumptions: A clustering algorithm finds clusters by making assumptions with samples should be grouped together. Each algorithm makes different assumptions and the quality and interpretability of your results will depend on whether the assumptions are satisfied for your goal. 
For K-means clustering, the model is that all clusters have equal, spherical variance.\nIn general, there is no guarantee that structure found by a clustering algorithm has anything to do with what you were interested in.\nWe can easily create a dataset that has non-isotropic clusters, on which kmeans will fail:", "from sklearn.datasets import make_blobs\n\nX, y = make_blobs(random_state=170, n_samples=600)\nrng = np.random.RandomState(74)\n\ntransformation = rng.normal(size=(2, 2))\nX = np.dot(X, transformation)\n\ny_pred = KMeans(n_clusters=3).fit_predict(X)\n\nplt.scatter(X[:, 0], X[:, 1], c=y_pred)", "Some Notable Clustering Routines\nThe following are two well-known clustering algorithms. \n\nsklearn.cluster.KMeans: <br/>\n The simplest, yet effective clustering algorithm. Needs to be provided with the\n number of clusters in advance, and assumes that the data is normalized as input\n (but use a PCA model as preprocessor).\nsklearn.cluster.MeanShift: <br/>\n Can find better looking clusters than KMeans but is not scalable to high number of samples.\nsklearn.cluster.DBSCAN: <br/>\n Can detect irregularly shaped clusters based on density, i.e. sparse regions in\n the input space are likely to become inter-cluster boundaries. Can also detect\n outliers (samples that are not part of a cluster).\nsklearn.cluster.AffinityPropagation: <br/>\n Clustering algorithm based on message passing between data points.\nsklearn.cluster.SpectralClustering: <br/>\n KMeans applied to a projection of the normalized graph Laplacian: finds\n normalized graph cuts if the affinity matrix is interpreted as an adjacency matrix of a graph.\nsklearn.cluster.Ward: <br/>\n Ward implements hierarchical clustering based on the Ward algorithm,\n a variance-minimizing approach. At each step, it minimizes the sum of\n squared differences within all clusters (inertia criterion).\n\nOf these, Ward, SpectralClustering, DBSCAN and Affinity propagation can also work with precomputed similarity matrices.\n<img src=\"figures/cluster_comparison.png\" width=\"900\">\nExercise: digits clustering\nPerform K-means clustering on the digits data, searching for ten clusters.\nVisualize the cluster centers as images (i.e. reshape each to 8x8 and use\nplt.imshow) Do the clusters seem to be correlated with particular digits? What is the adjusted_rand_score?\nVisualize the projected digits as in the last notebook, but this time use the\ncluster labels as the color. What do you notice?", "from sklearn.datasets import load_digits\ndigits = load_digits()\n# ...\n\n# %load solutions/08B_digits_clustering.py" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/eng-edu
ml/cc/exercises/pandas_dataframe_ultraquick_tutorial.ipynb
apache-2.0
[ "#@title Copyright 2020 Google LLC. Double-click here for license information.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Pandas DataFrame UltraQuick Tutorial\nThis Colab introduces DataFrames, which are the central data structure in the pandas API. This Colab is not a comprehensive DataFrames tutorial. Rather, this Colab provides a very quick introduction to the parts of DataFrames required to do the other Colab exercises in Machine Learning Crash Course.\nA DataFrame is similar to an in-memory spreadsheet. Like a spreadsheet:\n\nA DataFrame stores data in cells. \nA DataFrame has named columns (usually) and numbered rows.\n\nImport NumPy and pandas modules\nRun the following code cell to import the NumPy and pandas modules.", "import numpy as np\nimport pandas as pd", "Creating a DataFrame\nThe following code cell creates a simple DataFrame containing 10 cells organized as follows:\n\n5 rows\n2 columns, one named temperature and the other named activity\n\nThe following code cell instantiates a pd.DataFrame class to generate a DataFrame. The class takes two arguments:\n\nThe first argument provides the data to populate the 10 cells. The code cell calls np.array to generate the 5x2 NumPy array.\nThe second argument identifies the names of the two columns.", "# Create and populate a 5x2 NumPy array.\nmy_data = np.array([[0, 3], [10, 7], [20, 9], [30, 14], [40, 15]])\n\n# Create a Python list that holds the names of the two columns.\nmy_column_names = ['temperature', 'activity']\n\n# Create a DataFrame.\nmy_dataframe = pd.DataFrame(data=my_data, columns=my_column_names)\n\n# Print the entire DataFrame\nprint(my_dataframe)", "Adding a new column to a DataFrame\nYou may add a new column to an existing pandas DataFrame just by assigning values to a new column name. For example, the following code creates a third column named adjusted in my_dataframe:", "# Create a new column named adjusted.\nmy_dataframe[\"adjusted\"] = my_dataframe[\"activity\"] + 2\n\n# Print the entire DataFrame\nprint(my_dataframe)", "Specifying a subset of a DataFrame\nPandas provide multiples ways to isolate specific rows, columns, slices or cells in a DataFrame.", "print(\"Rows #0, #1, and #2:\")\nprint(my_dataframe.head(3), '\\n')\n\nprint(\"Row #2:\")\nprint(my_dataframe.iloc[[2]], '\\n')\n\nprint(\"Rows #1, #2, and #3:\")\nprint(my_dataframe[1:4], '\\n')\n\nprint(\"Column 'temperature':\")\nprint(my_dataframe['temperature'])", "Task 1: Create a DataFrame\nDo the following:\n\n\nCreate an 3x4 (3 rows x 4 columns) pandas DataFrame in which the columns are named Eleanor, Chidi, Tahani, and Jason. 
Populate each of the 12 cells in the DataFrame with a random integer between 0 and 100, inclusive.\n\n\nOutput the following:\n\nthe entire DataFrame\nthe value in the cell of row #1 of the Eleanor column\n\n\n\nCreate a fifth column named Janet, which is populated with the row-by-row sums of Tahani and Jason.\n\n\nTo complete this task, it helps to know the NumPy basics covered in the NumPy UltraQuick Tutorial.", "# Write your code here.\n\n#@title Double-click for a solution to Task 1.\n\n# Create a Python list that holds the names of the four columns.\nmy_column_names = ['Eleanor', 'Chidi', 'Tahani', 'Jason']\n\n# Create a 3x4 numpy array, each cell populated with a random integer.\nmy_data = np.random.randint(low=0, high=101, size=(3, 4))\n\n# Create a DataFrame.\ndf = pd.DataFrame(data=my_data, columns=my_column_names)\n\n# Print the entire DataFrame\nprint(df)\n\n# Print the value in row #1 of the Eleanor column.\nprint(\"\\nSecond row of the Eleanor column: %d\\n\" % df['Eleanor'][1])\n\n# Create a column named Janet whose contents are the sum\n# of two other columns.\ndf['Janet'] = df['Tahani'] + df['Jason']\n\n# Print the enhanced DataFrame\nprint(df)", "Copying a DataFrame (optional)\nPandas provides two different ways to duplicate a DataFrame:\n\nReferencing. If you assign a DataFrame to a new variable, any change to the DataFrame or to the new variable will be reflected in the other. \nCopying. If you call the pd.DataFrame.copy method, you create a true independent copy. Changes to the original DataFrame or to the copy will not be reflected in the other. \n\nThe difference is subtle, but important.", "# Create a reference by assigning my_dataframe to a new variable.\nprint(\"Experiment with a reference:\")\nreference_to_df = df\n\n# Print the starting value of a particular cell.\nprint(\" Starting value of df: %d\" % df['Jason'][1])\nprint(\" Starting value of reference_to_df: %d\\n\" % reference_to_df['Jason'][1])\n\n# Modify a cell in df.\ndf.at[1, 'Jason'] = df['Jason'][1] + 5\nprint(\" Updated df: %d\" % df['Jason'][1])\nprint(\" Updated reference_to_df: %d\\n\\n\" % reference_to_df['Jason'][1])\n\n# Create a true copy of my_dataframe\nprint(\"Experiment with a true copy:\")\ncopy_of_my_dataframe = my_dataframe.copy()\n\n# Print the starting value of a particular cell.\nprint(\" Starting value of my_dataframe: %d\" % my_dataframe['activity'][1])\nprint(\" Starting value of copy_of_my_dataframe: %d\\n\" % copy_of_my_dataframe['activity'][1])\n\n# Modify a cell in df.\nmy_dataframe.at[1, 'activity'] = my_dataframe['activity'][1] + 3\nprint(\" Updated my_dataframe: %d\" % my_dataframe['activity'][1])\nprint(\" copy_of_my_dataframe does not get updated: %d\" % copy_of_my_dataframe['activity'][1])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
maubarsom/ORFan-proteins
phage_assembly/5_annotation/asm_v1.2/orf_160621/.ipynb_checkpoints/4_select_reliable_orfs-checkpoint.ipynb
mit
[ "import pandas as pd\nimport re\nfrom glob import glob", "1. Load blast hits", "#Load blast hits\nblastp_hits = pd.read_csv(\"2_blastp_hits.csv\")\nblastp_hits.head()\n#Filter out Metahit 2010 hits, keep only Metahit 2014\nblastp_hits = blastp_hits[blastp_hits.db != \"metahit_pep\"]", "2. Process blastp results\n2.1 Extract ORF stats from fasta file", "#Assumes the Fasta file comes with the header format of EMBOSS getorf\nfh = open(\"1_orf/d9539_asm_v1.2_orf.fa\")\nheader_regex = re.compile(r\">([^ ]+?) \\[([0-9]+) - ([0-9]+)\\]\")\norf_stats = []\nfor line in fh:\n header_match = header_regex.match(line)\n if header_match:\n is_reverse = line.rstrip(\" \\n\").endswith(\"(REVERSE SENSE)\")\n q_id = header_match.group(1)\n #Position in contig\n q_cds_start = int(header_match.group(2) if not is_reverse else header_match.group(3))\n q_cds_end = int(header_match.group(3) if not is_reverse else header_match.group(2))\n #Length of orf in aminoacids\n q_len = (q_cds_end - q_cds_start + 1) / 3\n orf_stats.append( pd.Series(data=[q_id,q_len,q_cds_start,q_cds_end,(\"-\" if is_reverse else \"+\")],\n index=[\"q_id\",\"orf_len\",\"q_cds_start\",\"q_cds_end\",\"strand\"]))\n \norf_stats_df = pd.DataFrame(orf_stats)\nprint(orf_stats_df.shape)\norf_stats_df.head()\n\n#Write orf stats to fasta\norf_stats_df.to_csv(\"1_orf/orf_stats.csv\",index=False)", "2.2 Annotate blast hits with orf stats", "blastp_hits_annot = blastp_hits.merge(orf_stats_df,left_on=\"query_id\",right_on=\"q_id\")\n#Add query coverage calculation\nblastp_hits_annot[\"q_cov_calc\"] = (blastp_hits_annot[\"q_end\"] - blastp_hits_annot[\"q_start\"] + 1 ) * 100 / blastp_hits_annot[\"q_len\"]\nblastp_hits_annot.sort_values(by=\"bitscore\",ascending=False).head()\n\nassert blastp_hits_annot.shape[0] == blastp_hits.shape[0]", "2.3 Extract best hit for each ORF ( q_cov > 0.8 and pct_id > 40% and e-value < 1)\nDefine these resulting 7 ORFs as the core ORFs for the d9539 assembly. \nThe homology between the Metahit gene catalogue is very good, and considering the catalogue was curated \non a big set of gut metagenomes, it is reasonable to assume that these putative proteins would come \nfrom our detected circular putative virus/phage genome\nTwo extra notes:\n * Additionally, considering only these 7 ORFs , almost the entire genomic region is covered, with very few non-coding regions, still consistent with the hypothesis of a small viral genome which should be mainly coding\n\nAlso, even though the naive ORF finder detected putative ORFs in both positive and negative strands, the supported ORFs only occur in the positive strand. This could be an indication of a ssDNA or ssRNA virus.", "! 
mkdir -p 4_msa_prots\n\n#Get best hit (highest bitscore) for each ORF\ngb = blastp_hits_annot[ (blastp_hits_annot.q_cov > 80) & (blastp_hits_annot.pct_id > 40) & (blastp_hits_annot.e_value < 1) ].groupby(\"query_id\")\nreliable_orfs = pd.DataFrame( hits.ix[hits.bitscore.idxmax()] for q_id,hits in gb )[[\"query_id\",\"db\",\"subject_id\",\"pct_id\",\"q_cov\",\"q_len\",\n \"bitscore\",\"e_value\",\"strand\",\"q_cds_start\",\"q_cds_end\"]]\nreliable_orfs = reliable_orfs.sort_values(by=\"q_cds_start\",ascending=True)\nreliable_orfs", "2.4 Extract selected orfs for further analysis", "reliable_orfs[\"orf_id\"] = [\"orf{}\".format(x) for x in range(1,reliable_orfs.shape[0]+1) ]\nreliable_orfs[\"cds_len\"] = reliable_orfs[\"q_cds_end\"] - reliable_orfs[\"q_cds_start\"] +1\nreliable_orfs.sort_values(by=\"q_cds_start\",ascending=True).to_csv(\"3_filtered_orfs/filt_orf_stats.csv\",index=False,header=True)\nreliable_orfs.sort_values(by=\"q_cds_start\",ascending=True).to_csv(\"3_filtered_orfs/filt_orf_list.txt\",index=False,header=False,columns=[\"query_id\"])", "2.4.2 Extract fasta", "! ~/utils/bin/seqtk subseq 1_orf/d9539_asm_v1.2_orf.fa 3_filtered_orfs/filt_orf_list.txt > 3_filtered_orfs/d9539_asm_v1.2_orf_filt.fa", "2.4.3 Write out filtered blast hits", "filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())]\nfilt_blastp_hits.to_csv(\"3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.csv\")\nfilt_blastp_hits.head()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
dev/_downloads/f31e73ee907864d95a2b617fdc76b71e/source_label_time_frequency.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute power and phase lock in label of the source space\nCompute time-frequency maps of power and phase lock in the source space.\nThe inverse method is linear based on dSPM inverse operator.\nThe example also shows the difference in the time-frequency maps\nwhen they are computed with and without subtracting the evoked response\nfrom each epoch. The former results in induced activity only while the\nlatter also includes evoked (stimulus-locked) activity.", "# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, source_induced_power\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nraw_fname = meg_path / 'sample_audvis_raw.fif'\nfname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'\nlabel_name = 'Aud-rh'\nfname_label = meg_path / 'labels' / f'{label_name}.label'\n\ntmin, tmax, event_id = -0.2, 0.5, 2\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\ninverse_operator = read_inverse_operator(fname_inv)\n\ninclude = []\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# Picks MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n stim=False, include=include, exclude='bads')\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\n# Load epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject,\n preload=True)\n\n# Compute a source estimate per frequency band including and excluding the\n# evoked response\nfreqs = np.arange(7, 30, 2) # define frequencies of interest\nlabel = mne.read_label(fname_label)\nn_cycles = freqs / 3. # different number of cycle per frequency\n\n# subtract the evoked response in order to exclude evoked activity\nepochs_induced = epochs.copy().subtract_evoked()\n\nplt.close('all')\n\nfor ii, (this_epochs, title) in enumerate(zip([epochs, epochs_induced],\n ['evoked + induced',\n 'induced only'])):\n # compute the source space power and the inter-trial coherence\n power, itc = source_induced_power(\n this_epochs, inverse_operator, freqs, label, baseline=(-0.1, 0),\n baseline_mode='percent', n_cycles=n_cycles, n_jobs=None)\n\n power = np.mean(power, axis=0) # average over sources\n itc = np.mean(itc, axis=0) # average over sources\n times = epochs.times\n\n ##########################################################################\n # View time-frequency plots\n plt.subplots_adjust(0.1, 0.08, 0.96, 0.94, 0.2, 0.43)\n plt.subplot(2, 2, 2 * ii + 1)\n plt.imshow(20 * power,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=0., vmax=30., cmap='RdBu_r')\n plt.xlabel('Time (s)')\n plt.ylabel('Frequency (Hz)')\n plt.title('Power (%s)' % title)\n plt.colorbar()\n\n plt.subplot(2, 2, 2 * ii + 2)\n plt.imshow(itc,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=0, vmax=0.7,\n cmap='RdBu_r')\n plt.xlabel('Time (s)')\n plt.ylabel('Frequency (Hz)')\n plt.title('ITC (%s)' % title)\n plt.colorbar()\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code" ]
Parisson/TimeSide
docs/ipynb/02_Aubio.ipynb
agpl-3.0
[ "Audio analysis with TimeSide and Aubio\nIn the following example, we illustrate how to perform common audio signal analysis tasks like pitch estimation and onsets detection with TimeSide by using some analyzers based on Aubio.", "%pylab inline\n\nfrom __future__ import division\nimport matplotlib.pylab as pylab\npylab.rcParams['figure.figsize'] = 16, 8 # that's default image size for this interactive session\n\nimport timeside\nfrom timeside.core import get_processor\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# Audio Source = a 15s excerpt of Peter and the Wolf by Prokofiev\n# from the Internet Archive : https://archive.org/details/PeterAndTheWolf_753\n#audiofile = 'https://ia801408.us.archive.org/3/items/PeterAndTheWolf_753/Peter_and_the_Wolf.mp3'\naudiofile = 'https://ia801408.us.archive.org/3/items/PeterAndTheWolf_753/PeterAndTheWolf_01.mp3'\n## Setup the processing pipe\nfile_decoder = get_processor('file_decoder')(uri=audiofile, start=5, duration=15)\naubio_pitch = get_processor('aubio_pitch')()\naubio_temporal = get_processor('aubio_temporal')()\nspecgram_ = get_processor('spectrogram_analyzer')()\nwaveform = get_processor('waveform_analyzer')()\n\n\npipe = (file_decoder | aubio_pitch | aubio_temporal | specgram_ | waveform)\npipe.run()", "Display Spectrogram + Aubio Pitch + Aubio Beat", "plt.figure(1)\n\nspec_res = specgram_.results['spectrogram_analyzer']\nN = spec_res.parameters['fft_size']\nplt.imshow(20 * np.log10(spec_res.data.T),\n origin='lower',\n extent=[spec_res.time[0], spec_res.time[-1], 0,\n (N // 2 + 1) / N * spec_res.data_object.frame_metadata.samplerate],\n aspect='auto')\n\nres_pitch = aubio_pitch.results['aubio_pitch.pitch']\nplt.plot(res_pitch.time, res_pitch.data)\n\nres_beats = aubio_temporal.results['aubio_temporal.beat']\n\nfor time in res_beats.time:\n plt.axvline(time, color='r')\n\nplt.title('Spectrogram + Aubio pitch + Aubio beat')\nplt.grid()", "Display waveform + Onsets", "plt.figure(2)\nres_wave = waveform.results['waveform_analyzer']\nplt.plot(res_wave.time, res_wave.data)\nres_onsets = aubio_temporal.results['aubio_temporal.onset']\nfor time in res_onsets.time:\n plt.axvline(time, color='r')\nplt.grid()\nplt.title('Waveform + Aubio onset')\nplt.show()\n\nres_pitch.render();\n\nres_beats.render();\n\nres_onsets.render();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
CompPhysics/MachineLearning
doc/src/Optimization/autodiff/examples_allowed_functions.ipynb
cc0-1.0
[ "Examples of the supported features in Autograd\nBefore using Autograd for more complicated calculations, it might be useful to experiment with what kind of functions Autograd is capable of finding the gradient of. The following Python functions are just meant to illustrate what Autograd can do, but please feel free to experiment with other, possibly more complicated, functions as well!", "import autograd.numpy as np\nfrom autograd import grad", "Supported functions\nHere are some examples of supported function implementations that Autograd can differentiate. Keep in mind that this list over examples is not comprehensive, but rather explores which basic constructions one might often use. \nFunctions using simple arithmetics", "def f1(x):\n return x**3 + 1\n\nf1_grad = grad(f1)\n\n# Remember to send in float as argument to the computed gradient from Autograd!\na = 1.0\n\n# See the evaluated gradient at a using autograd:\nprint(\"The gradient of f1 evaluated at a = %g using autograd is: %g\"%(a,f1_grad(a)))\n\n# Compare with the analytical derivative, that is f1'(x) = 3*x**2 \ngrad_analytical = 3*a**2\nprint(\"The gradient of f1 evaluated at a = %g by finding the analytic expression is: %g\"%(a,grad_analytical))", "Functions with two (or more) arguments\nTo differentiate with respect to two (or more) arguments of a Python function, Autograd need to know at which variable the function if being differentiated with respect to.", "def f2(x1,x2):\n return 3*x1**3 + x2*(x1 - 5) + 1\n\n# By sending the argument 0, Autograd will compute the derivative w.r.t the first variable, in this case x1\nf2_grad_x1 = grad(f2,0)\n\n# ... and differentiate w.r.t x2 by sending 1 as an additional arugment to grad\nf2_grad_x2 = grad(f2,1)\n\nx1 = 1.0\nx2 = 3.0 \n\nprint(\"Evaluating at x1 = %g, x2 = %g\"%(x1,x2))\nprint(\"-\"*30)\n\n# Compare with the analytical derivatives:\n\n# Derivative of f2 w.r.t x1 is: 9*x1**2 + x2:\nf2_grad_x1_analytical = 9*x1**2 + x2\n\n# Derivative of f2 w.r.t x2 is: x1 - 5:\nf2_grad_x2_analytical = x1 - 5\n\n# See the evaluated derivations:\nprint(\"The derivative of f2 w.r.t x1: %g\"%( f2_grad_x1(x1,x2) ))\nprint(\"The analytical derivative of f2 w.r.t x1: %g\"%( f2_grad_x1(x1,x2) ))\n\nprint()\n\nprint(\"The derivative of f2 w.r.t x2: %g\"%( f2_grad_x2(x1,x2) ))\nprint(\"The analytical derivative of f2 w.r.t x2: %g\"%( f2_grad_x2(x1,x2) ))", "Note that the grad function will not produce the true gradient of the function. The true gradient of a function with two or more variables will produce a vector, where each element is the function differentiated w.r.t a variable. \nFunctions using the elements of its argument directly", "def f3(x): # Assumes x is an array of length 5 or higher\n return 2*x[0] + 3*x[1] + 5*x[2] + 7*x[3] + 11*x[4]**2\n\nf3_grad = grad(f3)\n\nx = np.linspace(0,4,5)\n\n# Print the computed gradient:\nprint(\"The computed gradient of f3 is: \", f3_grad(x))\n\n# The analytical gradient is: (2, 3, 5, 7, 22*x[4])\nf3_grad_analytical = np.array([2, 3, 5, 7, 22*x[4]])\n\n# Print the analytical gradient:\nprint(\"The analytical gradient of f3 is: \", f3_grad_analytical)", "Note that in this case, when sending an array as input argument, the output from Autograd is another array. This is the true gradient of the function, as opposed to the function in the previous example. By using arrays to represent the variables, the output from Autograd might be easier to work with, as the output is closer to what one could expect form a gradient-evaluting function. 
\nFunctions using mathematical functions from Numpy", "def f4(x):\n return np.sqrt(1+x**2) + np.exp(x) + np.sin(2*np.pi*x)\n\nf4_grad = grad(f4)\n\nx = 2.7\n\n# Print the computed derivative:\nprint(\"The computed derivative of f4 at x = %g is: %g\"%(x,f4_grad(x)))\n\n# The analytical derivative is: x/sqrt(1 + x**2) + exp(x) + cos(2*pi*x)*2*pi\nf4_grad_analytical = x/np.sqrt(1 + x**2) + np.exp(x) + np.cos(2*np.pi*x)*2*np.pi\n\n# Print the analytical gradient:\nprint(\"The analytical gradient of f4 at x = %g is: %g\"%(x,f4_grad_analytical))", "Functions using if-else tests", "def f5(x):\n if x >= 0:\n return x**2\n else:\n return -3*x + 1\n\nf5_grad = grad(f5)\n\nx = 2.7\n\n# Print the computed derivative:\nprint(\"The computed derivative of f5 at x = %g is: %g\"%(x,f5_grad(x)))\n\n# The analytical derivative is: \n# if x >= 0, then 2*x\n# else -3\n\nif x >= 0:\n f5_grad_analytical = 2*x\nelse:\n f5_grad_analytical = -3\n\n\n# Print the analytical derivative:\nprint(\"The analytical derivative of f5 at x = %g is: %g\"%(x,f5_grad_analytical))", "Functions using for- and while loops", "def f6_for(x):\n val = 0\n for i in range(10):\n val = val + x**i\n return val\n\ndef f6_while(x):\n val = 0\n i = 0\n while i < 10:\n val = val + x**i\n i = i + 1\n return val\n\nf6_for_grad = grad(f6_for)\nf6_while_grad = grad(f6_while)\n\nx = 0.5\n\n# Print the computed derivaties of f6_for and f6_while\nprint(\"The computed derivative of f6_for at x = %g is: %g\"%(x,f6_for_grad(x)))\nprint(\"The computed derivative of f6_while at x = %g is: %g\"%(x,f6_while_grad(x)))\n\n# Both of the functions are implementation of the sum: sum(x**i) for i = 0, ..., 9\n# The analytical derivative is: sum(i*x**(i-1)) \nf6_grad_analytical = 0\nfor i in range(10):\n f6_grad_analytical += i*x**(i-1)\n\nprint(\"The analytical derivative of f6 at x = %g is: %g\"%(x,f6_grad_analytical))", "Functions using recursion", "def f7(n): # Assume that n is an integer\n if n == 1 or n == 0:\n return 1\n else:\n return n*f7(n-1)\n\nf7_grad = grad(f7)\n\nn = 2.0\n\nprint(\"The computed derivative of f7 at n = %d is: %g\"%(n,f7_grad(n)))\n\n# The function f7 is an implementation of the factorial of n.\n# By using the product rule, one can find that the derivative is:\n\nf7_grad_analytical = 0\nfor i in range(int(n)-1):\n tmp = 1\n for k in range(int(n)-1):\n if k != i:\n tmp *= (n - k)\n f7_grad_analytical += tmp\n\nprint(\"The analytical derivative of f7 at n = %d is: %g\"%(n,f7_grad_analytical))", "Note that if n is equal to zero or one, Autograd will give an error message. This message appears when the output is independent on input. \nUnsupported functions\nAutograd supports many features. However, there are some functions that is not supported (yet) by Autograd.\nAssigning a value to the variable being differentiated with respect to", "def f8(x): # Assume x is an array\n x[2] = 3\n return x*2\n\nf8_grad = grad(f8)\n\nx = 8.4\n\nprint(\"The derivative of f8 is:\",f8_grad(x))", "Here, Autograd tells us that an 'ArrayBox' does not support item assignment. The item assignment is done when the program tries to assign x[2] to the value 3. However, Autograd has implemented the computation of the derivative such that this assignment is not possible. 
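One way to work around this (shown only as a sketch; the name f8_alternative is just for illustration) is to build a new array with np.concatenate instead of assigning into x, and to return a scalar, for instance via np.sum, since grad expects a scalar-output function:\n\ndef f8_alternative(x): # Assume x is an array of length 4 or higher\n # Build a new array with the value 3 at index 2 instead of mutating x\n x_new = np.concatenate((x[:2], np.array([3.0]), x[3:]))\n return np.sum(x_new*2)\n\nf8_alternative_grad = grad(f8_alternative)\n\nx = np.array([1.0, 2.0, 4.0, 5.0])\n\nprint(\"The gradient of f8_alternative is: \", f8_alternative_grad(x))\n\nThe element that was overwritten no longer influences the output, so its entry in the computed gradient is simply zero.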
\nThe syntax a.dot(b) when finding the dot product", "def f9(a): # Assume a is an array with 2 elements\n b = np.array([1.0,2.0])\n return a.dot(b)\n\nf9_grad = grad(f9)\n\nx = np.array([1.0,0.0])\n\nprint(\"The derivative of f9 is:\",f9_grad(x))", "Here we are told that the 'dot' function does not belong to Autograd's version of a Numpy array.\nTo overcome this, an alternative syntax which also computed the dot product can be used:", "def f9_alternative(x): # Assume a is an array with 2 elements\n b = np.array([1.0,2.0])\n return np.dot(x,b) # The same as x_1*b_1 + x_2*b_2\n\nf9_alternative_grad = grad(f9_alternative)\n\nx = np.array([3.0,0.0])\n\nprint(\"The gradient of f9 is:\",f9_alternative_grad(x))\n\n# The analytical gradient of the dot product of vectors x and b with two elements (x_1,x_2) and (b_1, b_2) respectively\n# w.r.t x is (b_1, b_2).", "Recommended to avoid\nThe documentation recommends to avoid inplace operations such as" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
davofis/computational_seismology
lambs_problem/lambs_problem_solution.ipynb
gpl-3.0
[ "<div style='background-image: url(\"../../share/images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Computational Seismology</div>\n <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\"> Lamb's problem </div>\n </div>\n </div>\n</div>\n\nSeismo-Live: http://seismo-live.org\nAuthors:\n\nDavid Vargas (@dvargas)\nHeiner Igel (@heinerigel)\n\nBasic Equations\nThe fundamental analytical solution to the three-dimensional Lamb’s problem, the problem of determining the elastic disturbance resulting from a point force in a homogeneous half space, is implemented in this Ipython notebook. This solution provides fundamental information used as benchmark for comparison with entirely numerical solutions. A setup of the fundamental problem is illustrated below. The figure on the right hand side is published in [1] (Figure 1. System of coordinates)\n<p style=\"width:65%;float:right;padding-left:50px\">\n<img src=lambs_setup.png>\n<span style=\"font-size:smaller\">\n</span>\n</p>\n\nSimulations of 3D elastic wave propagation need to be validated by the use of analytical solutions. In order to evaluate how healthy a numerical solution is, one may recreate conditions for which analytical solutions exist with the aim of reproducing and compare the different results.\nWe which to find the displacement wavefield $\\mathbf{u}(\\mathbf{x},t)$ at some distance $\\mathbf{x}$ from a seismic source with $ \\mathbf{F} = f_1\\mathbf{\\hat{x}_1} + f_2\\mathbf{\\hat{x}_2} + f_3\\mathbf{\\hat{x}_3}$.\nFor a uniform elastic material and a Cartesian co-ordinate system the equation for the conservation of linear momentum can be written\n\\begin{align}\n\\rho(x) \\frac{\\partial^2}{\\partial t^2} \\mathbf{u(\\mathbf{x},t)} = (\\lambda + \\mu)\\nabla(\\nabla\\mathbf{u(\\mathbf{x},t)}) + \\mu\\nabla^2 \\mathbf{u(\\mathbf{x},t)} + \\mathbf{f(\\mathbf{x},t)}\n\\end{align}\nWe will consider the case where the source function is localized in both time and space\n\\begin{align}\n\\mathbf{f(\\mathbf{x},t)} = (f_1\\mathbf{\\hat{x}1} + f_2\\mathbf{\\hat{x}_2} + f_3\\mathbf{\\hat{x}_3})\\delta(x_1 - x^{'}{1})\\delta(x_2 - x^{'}{2})\\delta(x_3 - x^{'}{3})\\delta(t - t^{'})\n\\end{align}\nFor such a source we will refer to the displacement solution as a Green’s function, and use the standard notation\n\\begin{align}\n\\mathbf{u(\\mathbf{x},t)} = g_1(\\mathbf{x},t;\\mathbf{x^{'}},t^{'})\\mathbf{\\hat{x}_1} + g_2(\\mathbf{x},t;\\mathbf{x^{'}},t^{'})\\mathbf{\\hat{x}_2} + g_3(\\mathbf{x},t;\\mathbf{x^{'}},t^{'})\\mathbf{\\hat{x}_3}\n\\end{align}\nThe complete solution is found after applying the Laplace transform to the elastic wave equation, implementing the stress-free boundary condition, defining some transformations, and performing some algebraic manoeuvres. 
Then, the Green's function at the free surface is given:\n\\begin{align}\n\\begin{split}\n\\mathbf{G}(x_1,x_2,0,t;0,0,x^{'}{3},0) & = \\dfrac{1}{\\pi^2\\mu r} \\dfrac{\\partial}{\\partial t}\\int{0}^{((t/r)^2 - \\alpha^{-2})^{1/2}}\\mathbf{H}(t-r/\\alpha)\\mathbb{R}[\\eta_\\alpha\\sigma^{-1}((t/r)^2 - \\alpha^{-2} - p^2)^{-1/2}\\mathbf{M}(q,p,0,t,x^{'}{3})\\mathbf{F}] dp \\\n & + \\dfrac{1}{\\pi^2\\mu r} \\dfrac{\\partial}{\\partial t}\\int{0}^{p_2}\\mathbf{H}(t-t_2)\\mathbb{R}[\\eta_\\beta\\sigma^{-1}((t/r)^2 - \\beta^{-2} - p^2)^{-1/2}\\mathbf{N}(q,p,0,t,x^{'}_{3})\\mathbf{F}] dp\n\\end{split}\n\\end{align}\nDetails on the involved terms are found in the original paper [2]. The Green's $\\mathbf{G}$ function consist of three components of displacement evolving from the application of three components of force $\\mathbf{F}$. If we assume that each component of $\\mathbf{F}$ provokes three components of displacement, then $\\mathbf{G}$ is composed by nine independent components that correspond one to one to the matrices $\\mathbf{M}$ and $\\mathbf{N}$. Without losing generality it is shown that among them four are equal zero, and we end up only with five possible components. \n<p style=\"text-align: justify;\">\n [1] Eduardo Kausel - Lamb's problem at its simplest, 2012</p>\n\n<p style=\"text-align: justify;\">\n [2] Lane R. Johnson - Green’s Function for Lamb’s Problem, 1974</p>", "# Import all necessary libraries, this is a configuration step for the exercise.\n# Please run it before the simulation code!\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os\nfrom ricker import ricker\n\n# Show the plots in the Notebook.\nplt.switch_backend(\"nbagg\")\n\n# Compile the source code (needs gfortran!)\n!rm -rf lamb.exe output.txt input.txt\n!gfortran canhfs.for -o lamb.exe", "Calling the original FORTRAN code", "# Initialization of setup:\n# Figure 4 in Lane R. 
Johnson - Green’s Function for Lamb’s Problem, 1974\n# is reproduced when the following parameters are given\n# -----------------------------------------------------------------------------\nr = 10.0 # km\nvp = 8.0 # P-wave velocity km/s\nvs = 4.62 # s-wave velocity km/s\nrho = 3.3 # Density kg/m^3 \nnt = 512 # Number of time steps\ndt = 0.01 # Time step s\nh = 0.2 # Source position km (0.01 to reproduce Fig 2.16 of the book)\nti = 0.0 # Initial time s\n\nvar = [vp, vs, rho, nt, dt, h, r, ti]\n\n# -----------------------------------------------------------------------------\n# Execute fortran code\n# -----------------------------------------------------------------------------\nwith open('input.txt', 'w') as f:\n for i in var:\n print(i, file=f, end=' ') # Write input for fortran code\n\nf.close()\n\nos.system(\"./lamb.exe\") # Code execution\n\n# -----------------------------------------------------------------------------\n# Load the solution\n# -----------------------------------------------------------------------------\nG = np.genfromtxt('output.txt')\n\nu_rx = G[:,0] # Radial displacement owing to horizontal load\nu_tx = G[:,1] # Tangential displacement due to horizontal load\nu_zx = G[:,2] # Vertical displacement owing to horizontal load\n\nu_rz = G[:,3] # Radial displacement owing to a vertical load\nu_zz = G[:,4] # Vertical displacement owing to vertical load\n\nt = np.linspace(dt, nt*dt, nt) # Time axis\n", "Visualization of the Green's function", "# Plotting\n# -----------------------------------------------------------------------------\nseis = [u_rx, u_tx, u_zx, u_rz, u_zz] # Collection of seismograms\nlabels = ['$u_{rx}(t) [cm]$','$u_{tx}(t)[cm]$','$u_{zx}(t)[cm]$','$u_{rz}(t)[cm]$','$u_{zz}(t)[cm]$']\ncols = ['b','r','k','g','c']\n\n# Initialize animated plot\nfig = plt.figure(figsize=(12,8), dpi=80)\n\nfig.suptitle(\"Green's Function for Lamb's problem\", fontsize=16)\n\nplt.ion() # set interective mode\nplt.show()\n\nfor i in range(5): \n st = seis[i]\n ax = fig.add_subplot(2, 3, i+1)\n ax.plot(t, st, lw = 1.5, color=cols[i]) \n ax.set_xlabel('Time(s)')\n ax.text(0.8*nt*dt, 0.8*max(st), labels[i], fontsize=16)\n plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))\n \n ax.spines['left'].set_position('zero')\n ax.spines['right'].set_color('none')\n ax.spines['bottom'].set_position('zero')\n ax.spines['top'].set_color('none')\n ax.spines['left'].set_smart_bounds(True)\n ax.spines['bottom'].set_smart_bounds(True)\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')\n\nplt.show()", "Convolution\nLet $S(t)$ be a general source time function, then the displacent seismogram is given in terms of the Green's function $G$ via\n\\begin{equation}\nu(\\mathbf{x},t) = G(\\mathbf{x},t; \\mathbf{x}',t') \\ast S(t)\n\\end{equation}\nExercise\nCompute the convolution of the source time function 'ricker' with the Green's function of a Vertical displacement due to vertical loads. 
Plot the resulting displacement.", "# call the source time function\nT = 1/5 # Period\nsrc = ricker(dt,T)\n\n# Normalize source time function\nsrc = src/max(src)\n\n# Initialize source time function\nf = np.zeros(nt)\nf[0:int(2 * T/dt)] = src\n\n# Compute convolution\nu = np.convolve(u_zz, f)\nu = u[0:nt]\n\n# ---------------------------------------------------------------\n# Plot Seismogram\n# ---------------------------------------------------------------\nfig = plt.figure(figsize=(12,4), dpi=80)\n\nplt.subplot(1,3,1)\nplt.plot(t, u_zz, color='r', lw=2)\nplt.title('Green\\'s function')\nplt.xlabel('time [s]', size=16)\nplt.ylabel('Displacement [cm]', size=14)\nplt.xlim([0,nt*dt])\nplt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))\n\nplt.subplot(1,3,2)\nplt.plot(t, f, color='k', lw=2)\nplt.title('Source time function')\nplt.xlabel('time [s]', size=16)\nplt.xlim([0,nt*dt])\nplt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))\n\nplt.subplot(1,3,3)\nplt.plot(t, u, color='b', lw=2)\nplt.title('Displacement')\nplt.xlabel('time [s]', size=16)\nplt.xlim([0,nt*dt]) \nplt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))\n\nplt.grid(True)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JrtPec/opengrid
notebooks/Analysis/Multivariable_regression_cached_data.ipynb
apache-2.0
[ "Multivariable regression\nImports and setup", "import os\nimport pandas as pd\n\nfrom opengrid.library import houseprint, caching, regression\nfrom opengrid import config\n\nc = config.Config()\n\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n%matplotlib inline\nplt.rcParams['figure.figsize'] = 16,8\n\n# Create houseprint from saved file, if not available, parse the google spreadsheet\ntry:\n hp_filename = os.path.join(c.get('data', 'folder'), 'hp_anonymous.pkl')\n hp = houseprint.load_houseprint_from_file(hp_filename)\n print(\"Houseprint loaded from {}\".format(hp_filename))\nexcept Exception as e:\n print(e)\n print(\"Because of this error we try to build the houseprint from source\")\n hp = houseprint.Houseprint()\nhp.init_tmpo()", "Load Data\nWe are going to use daily gas, electricity and water consumption data and weather data. Because we don't want to overload the weather API, we will only use 1 location (Ukkel).\nFirst, let's define the start and end date of the identification data. That is the data to be used to find the model. Later, we will use the model to predict.", "start = pd.Timestamp('2015-01-01', tz='Europe/Brussels')\nend = pd.Timestamp('now', tz='Europe/Brussels')\nend_model = pd.Timestamp('2016-12-31', tz='Europe/Brussels') #last day of the data period for the model", "Energy data\nWe for each consumption type (electricity, gas and water), we create a daily dataframe and save it in the dictionary dfs. The data is obtained from the daily caches.", "caches = {}\ndfs = {}\nfor cons in ['electricity', 'gas', 'water']:\n caches[cons] = caching.Cache(variable='{}_daily_total'.format(cons))\n dfs[cons] = caches[cons].get(sensors = hp.get_sensors(sensortype=cons))", "Weather and other exogenous data\nRun this block to download the weather data and save it to a pickle. 
This is a large request, and you can only do 2 or 3 of these per day before your credit with Forecast.io runs out!\nWe also add a column for each day-of-week which may be used by the regression algorithm on a daily basis.", "from opengrid.library import forecastwrapper\nweather = forecastwrapper.Weather(location=(50.8024, 4.3407), start=start, end=end - pd.Timedelta(days=1))\nirradiances=[\n (0, 90), # north vertical\n (90, 90), # east vertical\n (180, 90), # south vertical\n (270, 90), # west vertical\n]\norientations = [0, 90, 180, 270]\nweather_data = weather.days(irradiances=irradiances, \n wind_orients=orientations, \n heating_base_temperatures=[0, 6, 8 ,10, 12, 14, 16, 18]).dropna(axis=1)\nweather_data.drop(['icon', 'summary', 'moonPhase', 'windBearing', 'temperatureMaxTime', 'temperatureMinTime',\n 'apparentTemperatureMaxTime', 'apparentTemperatureMinTime', 'uvIndexTime', \n 'sunsetTime', 'sunriseTime'], \n axis=1, inplace=True)\n# Add columns for the day-of-week\nfor i, d in zip(range(7), ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']):\n weather_data[d] = 0\n weather_data.loc[weather_data.index.weekday == i, d] = 1\nweather_data = weather_data.applymap(float)\n\nweather_data.head()\n\nweather_data.columns", "Put data together\nThe generator below will return a dataframe with sensor id as first column and all exogenous data as other columns.", "def data_generator(consumable):\n dfcons = dfs[consumable]\n for sensorid in dfcons.columns:\n df = pd.concat([dfcons[sensorid], weather_data], axis=1).dropna()\n df = df.tz_convert('Europe/Brussels')\n yield sensorid, df", "Let's have a peek", "cons = 'gas'\nanalysis_data = data_generator(cons)\n\nsensorid, peek = next(analysis_data)\npeek = peek.resample(rule='MS').sum()\n\nfig, ax1 = plt.subplots()\nax2 = ax1.twinx()\nax1.plot_date(peek.index, peek[sensorid], '-', color='grey', lw=8, label=cons)\nfor column in peek.columns[1:]:\n if 'heatingDegreeDays' in column:\n ax2.plot_date(peek.index, peek[column], '-', label=column)\nplt.legend()", "Run Regression Analysis\nWe run the analysis on monthly and weekly basis.", "cons = 'water' \nsave_figures = True\n\nanalysis_data = data_generator(cons)\n\nmrs_month = []\nmrs_month_cv = []\nmrs_week = []\nfor sensorid, data in analysis_data: \n data.rename(columns={sensorid:cons}, inplace=True)\n \n df = data.resample(rule='MS').sum()\n if len(df) < 2:\n continue\n \n # monthly model, statistical validation\n mrs_month.append(regression.MVLinReg(df.ix[:end_model], cons, p_max=0.03)) \n figures = mrs_month[-1].plot(df=df)\n \n if save_figures:\n figures[0].savefig(os.path.join(c.get('data', 'folder'), 'figures', 'multivar_model_'+sensorid+'.png'), dpi=100)\n figures[1].savefig(os.path.join(c.get('data', 'folder'), 'figures', 'multivar_results_'+sensorid+'.png'), dpi=100)\n\n\n # weekly model, statistical validation\n df = data.resample(rule='W').sum()\n if len(df.ix[:end_model]) < 4:\n continue\n mrs_week.append(regression.MVLinReg(df.ix[:end_model], cons, p_max=0.02))\n if len(df.ix[end_model:]) > 0:\n figures = mrs_week[-1].plot(model=False, bar_chart=True, df=df.ix[end_model:])\n if save_figures:\n figures[0].savefig(os.path.join(c.get('data', 'folder'), 'figures', 'multivar_prediction_weekly_'+sensorid+'.png'), dpi=100)\n \n \n \n " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/tensorflow-without-a-phd
tensorflow-rnn-tutorial/old-school-tensorflow/tutorial/02_RNN_generator_temperatures_solution.ipynb
apache-2.0
[ "An RNN model for temperature data\nThis time we will be working with real data: daily (Tmin, Tmax) temperature series from 1666 weather stations spanning 50 years. It is to be noted that a pretty good predictor model already exists for temperatures: the average of temperatures on the same day of the year in N previous years. It is not clear if RNNs can do better but we will se how far they can go.\n<div class=\"alert alert-block alert-warning\">\nThis is the solution file. The corresponding tutorial file is [01_RNN_generator_temperatures_playground.ipynb](01_RNN_generator_temperatures_playground.ipynb)\n</div>", "import math\nimport sys\nimport time\nimport numpy as np\nimport utils_batching\nimport utils_args\nimport tensorflow as tf\nfrom tensorflow.python.lib.io import file_io as gfile\nprint(\"Tensorflow version: \" + tf.__version__)\n\nfrom matplotlib import pyplot as plt\nimport utils_prettystyle\nimport utils_display", "Hyperparameters\nN_FORWARD = 1: works but model struggles to predict from some positions<br/>\nN_FORWARD = 4: better but still bad occasionnally<br/>\nN_FORWARD = 8: works perfectly", "NB_EPOCHS = 5 # number of times the model sees all the data during training\n\nN_FORWARD = 8 # train the network to predict N in advance (traditionnally 1)\nRESAMPLE_BY = 5 # averaging period in days (training on daily data is too much)\nRNN_CELLSIZE = 128 # size of the RNN cells\nN_LAYERS = 2 # number of stacked RNN cells (needed for tensor shapes but code must be changed manually)\nSEQLEN = 128 # unrolled sequence length\nBATCHSIZE = 64 # mini-batch size\nDROPOUT_PKEEP = 0.7 # probability of neurons not being dropped (should be between 0.5 and 1)\nACTIVATION = tf.nn.tanh # Activation function for GRU cells (tf.nn.relu or tf.nn.tanh)\n\nJOB_DIR = \"checkpoints\"\nDATA_DIR = \"temperatures\"\n\n# potentially override some settings from command-line arguments\nif __name__ == '__main__':\n JOB_DIR, DATA_DIR = utils_args.read_args1(JOB_DIR, DATA_DIR)\n\nALL_FILEPATTERN = DATA_DIR + \"/*.csv\" # pattern matches all 1666 files \nEVAL_FILEPATTERN = DATA_DIR + \"/USC000*2.csv\" # pattern matches 8 files\n# pattern USW*.csv -> 298 files, pattern USW*0.csv -> 28 files\nprint('Reading data from \"{}\".\\nWrinting checkpoints to \"{}\".'.format(DATA_DIR, JOB_DIR))", "Temperature data\nThis is what our temperature datasets looks like: sequences of daily (Tmin, Tmax) from 1960 to 2010. They have been cleaned up and eventual missing values have been filled by interpolation. Interpolated regions of the dataset are marked in red on the graph.", "all_filenames = gfile.get_matching_files(ALL_FILEPATTERN)\neval_filenames = gfile.get_matching_files(EVAL_FILEPATTERN)\ntrain_filenames = list(set(all_filenames) - set(eval_filenames))\n\n# By default, this utility function loads all the files and places data\n# from them as-is in an array, one file per line. 
Later, we will use it\n# to shape the dataset as needed for training.\nite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames)\nevtemps, _, evdates, _, _ = next(ite) # gets everything\n\nprint('Pattern \"{}\" matches {} files'.format(ALL_FILEPATTERN, len(all_filenames)))\nprint('Pattern \"{}\" matches {} files'.format(EVAL_FILEPATTERN, len(eval_filenames)))\nprint(\"Evaluation files: {}\".format(len(eval_filenames)))\nprint(\"Training files: {}\".format(len(train_filenames)))\nprint(\"Initial shape of the evaluation dataset: \" + str(evtemps.shape))\nprint(\"{} files, {} data points per file, {} values per data point\"\n \" (Tmin, Tmax, is_interpolated) \".format(evtemps.shape[0], evtemps.shape[1],evtemps.shape[2]))\n\n# You can adjust the visualisation range and dataset here.\n# Interpolated regions of the dataset are marked in red.\nWEATHER_STATION = 0 # 0 to 7 in default eval dataset\nSTART_DATE = 0 # 0 = Jan 2nd 1950\nEND_DATE = 18262 # 18262 = Dec 31st 2009\nvisu_temperatures = evtemps[WEATHER_STATION,START_DATE:END_DATE]\nvisu_dates = evdates[START_DATE:END_DATE]\n\nutils_display.picture_this_4(visu_temperatures, visu_dates)", "Resampling\nOur RNN would need ot be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temparatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.", "# This time we ask the utility function to average temperatures over 5-day periods (RESAMPLE_BY=5)\nite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames, RESAMPLE_BY, tminmax=True)\nevaltemps, _, evaldates, _, _ = next(ite)\n\n# display five years worth of data\nWEATHER_STATION = 0 # 0 to 7 in default eval dataset\nSTART_DATE = 0 # 0 = Jan 2nd 1950\nEND_DATE = 365*5//RESAMPLE_BY # 5 years\nvisu_temperatures = evaltemps[WEATHER_STATION, START_DATE:END_DATE]\nvisu_dates = evaldates[START_DATE:END_DATE]\nplt.fill_between(visu_dates, visu_temperatures[:,0], visu_temperatures[:,1])\nplt.show()", "Visualize training sequences\nThis is what the neural network will see during training.", "# The function rnn_multistation_sampling_temperature_sequencer puts one weather station per line in\n# a batch and continues with data from the same station in corresponding lines in the next batch.\n# Features and labels are returned with shapes [BATCHSIZE, SEQLEN, 2]. 
The last dimension of size 2\n# contains (Tmin, Tmax).\nite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames,\n RESAMPLE_BY,\n BATCHSIZE,\n SEQLEN,\n N_FORWARD,\n nb_epochs=1,\n tminmax=True)\n\n# load 6 training sequences (each one contains data for all weather stations)\nvisu_data = [next(ite) for _ in range(6)]\n\n# Check that consecutive training sequences from the same weather station are indeed consecutive\nWEATHER_STATION = 4\nutils_display.picture_this_5(visu_data, WEATHER_STATION)", "The model definition\n\n<div style=\"text-align: right; font-family: monospace\">\n X shape [BATCHSIZE, SEQLEN, 2]<br/>\n Y shape [BATCHSIZE, SEQLEN, 2]<br/>\n H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]\n</div>\nWhen executed, this function instantiates the Tensorflow graph for our model.", "def model_rnn_fn(features, Hin, labels, step, dropout_pkeep):\n X = features # shape [BATCHSIZE, SEQLEN, 2], 2 for (Tmin, Tmax)\n batchsize = tf.shape(X)[0]\n seqlen = tf.shape(X)[1]\n pairlen = tf.shape(X)[2] # should be 2 (tmin, tmax)\n \n cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE, activation=ACTIVATION) for _ in range(N_LAYERS)]\n # dropout useful between cell layers only: no output dropout on last cell\n cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob = dropout_pkeep) for cell in cells]\n # a stacked RNN cell still works like an RNN cell\n cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)\n # X[BATCHSIZE, SEQLEN, 2], Hin[BATCHSIZE, RNN_CELLSIZE*N_LAYERS]\n # the sequence unrolling happens here\n Yn, H = tf.nn.dynamic_rnn(cell, X, initial_state=Hin, dtype=tf.float32)\n # Yn[BATCHSIZE, SEQLEN, RNN_CELLSIZE]\n Yn = tf.reshape(Yn, [batchsize*seqlen, RNN_CELLSIZE])\n Yr = tf.layers.dense(Yn, 2) # Yr [BATCHSIZE*SEQLEN, 2]\n Yr = tf.reshape(Yr, [batchsize, seqlen, 2]) # Yr [BATCHSIZE, SEQLEN, 2]\n Yout = Yr[:,-N_FORWARD:,:] # Last N_FORWARD outputs Yout [BATCHSIZE, N_FORWARD, 2]\n \n loss = tf.losses.mean_squared_error(Yr, labels) # labels[BATCHSIZE, SEQLEN, 2]\n \n lr = 0.001 + tf.train.exponential_decay(0.01, step, 1000, 0.5)\n optimizer = tf.train.AdamOptimizer(learning_rate=lr)\n train_op = optimizer.minimize(loss)\n \n return Yout, H, loss, train_op, Yr", "Instantiate the model", "tf.reset_default_graph() # restart model graph from scratch\n\n# placeholder for inputs\nHin = tf.placeholder(tf.float32, [None, RNN_CELLSIZE * N_LAYERS])\nfeatures = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]\nlabels = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]\nstep = tf.placeholder(tf.int32)\ndropout_pkeep = tf.placeholder(tf.float32)\n\n# instantiate the model\nYout, H, loss, train_op, Yr = model_rnn_fn(features, Hin, labels, step, dropout_pkeep)", "Initialize Tensorflow session\nThis resets all neuron weights and biases to initial random values", "# variable initialization\nsess = tf.Session()\ninit = tf.global_variables_initializer()\nsess.run([init])\nsaver = tf.train.Saver(max_to_keep=1)", "The training loop\nYou can re-execute this cell to continue training. <br/>\n<br/>\nTraining data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. 
The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.", "losses = []\nindices = []\nlast_epoch = 99999\nlast_fileid = 99999\n\nfor i, (next_features, next_labels, dates, epoch, fileid) in enumerate(\n utils_batching.rnn_multistation_sampling_temperature_sequencer(train_filenames,\n RESAMPLE_BY,\n BATCHSIZE,\n SEQLEN,\n N_FORWARD,\n NB_EPOCHS, tminmax=True)):\n \n # reinintialize state between epochs or when starting on data from a new weather station\n if epoch != last_epoch or fileid != last_fileid:\n batchsize = next_features.shape[0]\n H_ = np.zeros([batchsize, RNN_CELLSIZE * N_LAYERS])\n print(\"State reset\")\n\n #train\n feed = {Hin: H_, features: next_features, labels: next_labels, step: i, dropout_pkeep: DROPOUT_PKEEP}\n Yout_, H_, loss_, _, Yr_ = sess.run([Yout, H, loss, train_op, Yr], feed_dict=feed)\n \n # print progress\n if i%20 == 0:\n print(\"{}: epoch {} loss = {} ({} weather stations this epoch)\".format(i, epoch, np.mean(loss_), fileid+1))\n sys.stdout.flush()\n if i%10 == 0:\n losses.append(np.mean(loss_))\n indices.append(i)\n # This visualisation can be helpful to see how the model \"locks\" on the shape of the curve\n# if i%100 == 0:\n# plt.figure(figsize=(10,2))\n# plt.fill_between(dates, next_features[0,:,0], next_features[0,:,1]).set_alpha(0.2)\n# plt.fill_between(dates, next_labels[0,:,0], next_labels[0,:,1])\n# plt.fill_between(dates, Yr_[0,:,0], Yr_[0,:,1]).set_alpha(0.8)\n# plt.show()\n \n last_epoch = epoch\n last_fileid = fileid\n \n# save the trained model\nSAVEDMODEL = JOB_DIR + \"/ckpt\" + str(int(time.time()))\ntf.saved_model.simple_save(sess, SAVEDMODEL,\n inputs={\"features\":features, \"Hin\":Hin, \"dropout_pkeep\":dropout_pkeep},\n outputs={\"Yout\":Yout, \"H\":H})\n\nplt.ylim(ymax=np.amax(losses[1:])) # ignore first value for scaling\nplt.plot(indices, losses)\nplt.show()", "Inference\nThis is a generative model: run an trained RNN cell in a loop", "def prediction_run(predict_fn, prime_data, run_length):\n H = np.zeros([1, RNN_CELLSIZE * N_LAYERS]) # zero state initially\n Yout = np.zeros([1, N_FORWARD, 2])\n data_len = prime_data.shape[0]-N_FORWARD\n\n # prime the state from data\n if data_len > 0:\n Yin = np.array(prime_data[:-N_FORWARD])\n Yin = np.reshape(Yin, [1, data_len, 2]) # reshape as one sequence of pairs (Tmin, Tmax)\n r = predict_fn({'features': Yin, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference\n Yout = r[\"Yout\"]\n H = r[\"H\"]\n \n # initaily, put real data on the inputs, not predictions\n Yout = np.expand_dims(prime_data[-N_FORWARD:], axis=0)\n # Yout shape [1, N_FORWARD, 2]: batch of a single sequence of length N_FORWARD of (Tmin, Tmax) data pointa\n \n # run prediction\n # To generate a sequence, run a trained cell in a loop passing as input and input state\n # respectively the output and output state from the previous iteration.\n results = []\n for i in range(run_length//N_FORWARD+1):\n r = predict_fn({'features': Yout, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference\n Yout = r[\"Yout\"]\n H = r[\"H\"]\n results.append(Yout[0]) # shape [N_FORWARD, 2]\n \n return np.concatenate(results, axis=0)[:run_length]", "Validation", "QYEAR = 365//(RESAMPLE_BY*4)\nYEAR = 365//(RESAMPLE_BY)\n\n# Try starting predictions from January / March / July (resp. 
OFFSET = YEAR or YEAR+QYEAR or YEAR+2*QYEAR)\n# Some start dates are more challenging for the model than others.\nOFFSET = 30*YEAR+1*QYEAR\n\nPRIMELEN=5*YEAR\nRUNLEN=3*YEAR\nPRIMELEN=512\nRUNLEN=256\nRMSELEN=3*365//(RESAMPLE_BY*2) # accuracy of predictions 1.5 years in advance\n\n# Restore the model from the last checkpoint saved previously.\n\n# Alternative checkpoints:\n# Once you have trained on all 1666 weather stations on Google Cloud ML Engine, you can load the checkpoint from there.\n# SAVEDMODEL = \"gs://{BUCKET}/sinejobs/sines_XXXXXX_XXXXXX/ckptXXXXXXXX\"\n# A sample checkpoint is provided with the lab. You can try loading it for comparison.\n# SAVEDMODEL = \"temperatures_best_checkpoint\"\n\npredict_fn = tf.contrib.predictor.from_saved_model(SAVEDMODEL)\n\nfor evaldata in evaltemps:\n prime_data = evaldata[OFFSET:OFFSET+PRIMELEN]\n results = prediction_run(predict_fn, prime_data, RUNLEN)\n utils_display.picture_this_6(evaldata, evaldates, prime_data, results, PRIMELEN, RUNLEN, OFFSET, RMSELEN)\n\nrmses = []\nbad_ones = 0\nfor offset in [YEAR, YEAR+QYEAR, YEAR+2*QYEAR]:\n for evaldata in evaltemps:\n prime_data = evaldata[offset:offset+PRIMELEN]\n results = prediction_run(predict_fn, prime_data, RUNLEN)\n rmse = math.sqrt(np.mean((evaldata[offset+PRIMELEN:offset+PRIMELEN+RMSELEN] - results[:RMSELEN])**2))\n rmses.append(rmse)\n if rmse>7: bad_ones += 1\n print(\"RMSE on {} predictions (shaded area): {}\".format(RMSELEN, rmse))\nprint(\"Average RMSE on {} weather stations: {} ({} really bad ones, i.e. >7.0)\".format(len(evaltemps), np.mean(rmses), bad_ones))\nsys.stdout.flush()", "Copyright 2018 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aapeebles/tibertraining
October 13th 2017 - Intro to Python/Lesson 1/Chapter 1 - Reading from a CSV.ipynb
mit
[ "import pandas as pd\npd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier\nfigsize(15, 5)", "1.1 Reading data from a csv file\nYou can read data from a CSV file using the read_csv function. By default, it assumes that the fields are comma-separated.\nWe're going to be looking some cyclist data from Montréal. Here's the original page (in French), but it's already included in this repository. We're using the data from 2012.\nThis dataset is a list of how many people were on 7 different bike paths in Montreal, each day.", "broken_df = pd.read_csv('../data/bikes.csv')\n\n# Look at the first 3 rows\nbroken_df[:3]", "You'll notice that this is totally broken! read_csv has a bunch of options that will let us fix that, though. Here we'll\n\nchange the column separator to a ;\nSet the encoding to 'latin1' (the default is 'utf8')\nParse the dates in the 'Date' column\nTell it that our dates have the date first instead of the month first\nSet the index to be the 'Date' column", "fixed_df = pd.read_csv('../data/bikes.csv', sep=';', encoding='latin1', parse_dates=['Date'], dayfirst=True, index_col='Date')\nfixed_df[:3]", "1.2 Selecting a column\nWhen you read a CSV, you get a kind of object called a DataFrame, which is made up of rows and columns. You get columns out of a DataFrame the same way you get elements out of a dictionary.\nHere's an example:", "fixed_df['Berri 1']", "1.3 Plotting a column\nJust add .plot() to the end! How could it be easier? =)\nWe can see that, unsurprisingly, not many people are biking in January, February, and March,", "fixed_df['Berri 1'].plot()", "We can also plot all the columns just as easily. We'll make it a little bigger, too.\nYou can see that it's more squished together, but all the bike paths behave basically the same -- if it's a bad day for cyclists, it's a bad day everywhere.", "fixed_df.plot(figsize=(15, 10))", "1.4 Putting all that together\nHere's the code we needed to write do draw that graph, all together:", "df = pd.read_csv('../data/bikes.csv', sep=';', encoding='latin1', parse_dates=['Date'], dayfirst=True, index_col='Date')\ndf['Berri 1'].plot()", "<style>\n @font-face {\n font-family: \"Computer Modern\";\n src: url('http://mirrors.ctan.org/fonts/cm-unicode/fonts/otf/cmunss.otf');\n }\n div.cell{\n width:800px;\n margin-left:16% !important;\n margin-right:auto;\n }\n h1 {\n font-family: Helvetica, serif;\n }\n h4{\n margin-top:12px;\n margin-bottom: 3px;\n }\n div.text_cell_render{\n font-family: Computer Modern, \"Helvetica Neue\", Arial, Helvetica, Geneva, sans-serif;\n line-height: 145%;\n font-size: 130%;\n width:800px;\n margin-left:auto;\n margin-right:auto;\n }\n .CodeMirror{\n font-family: \"Source Code Pro\", source-code-pro,Consolas, monospace;\n }\n .text_cell_render h5 {\n font-weight: 300;\n font-size: 22pt;\n color: #4057A1;\n font-style: italic;\n margin-bottom: .5em;\n margin-top: 0.5em;\n display: block;\n }\n\n .warning{\n color: rgb( 240, 20, 20 )\n }" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AlphaGit/deep-learning
batch-norm/Batch_Normalization_Exercises.ipynb
mit
[ "Batch Normalization – Practice\nBatch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.\nThis is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:\n1. Complicated enough that training would benefit from batch normalization.\n2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.\n3. Simple enough that the architecture would be easy to understand without additional resources.\nThis notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.\n\nBatch Normalization with tf.layers.batch_normalization\nBatch Normalization with tf.nn.batch_normalization\n\nThe following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.", "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True, reshape=False)", "Batch Normalization using tf.layers.batch_normalization<a id=\"example_1\"></a>\nThis version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization \nWe'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. 
We aren't bothering with pooling layers at all in this network.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer", "Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). \nThis cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. 
This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)\nUsing batch normalization, you'll be able to train this same network to over 90% in that same number of batches.\nAdd batch normalization\nWe've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. \nIf you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.\nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.", "def fully_connected(prev_layer, num_units, is_training):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=None)\n layer = tf.layers.batch_normalization(layer, training=is_training)\n layer = tf.nn.relu(layer)\n return layer", "TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.", "def conv_layer(prev_layer, layer_depth, is_training):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)\n conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)\n conv_layer = tf.nn.relu(conv_layer)\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n is_training = tf.placeholder(tf.bool)\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i, is_training)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100, is_training)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\n with tf.control_dependencies(extra_update_ops):\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training: False})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n is_training: False})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels,\n is_training: False})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n is_training: False})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. 
Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.\nBatch Normalization using tf.nn.batch_normalization<a id=\"example_2\"></a>\nMost of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.\nThis version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.\nOptional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. \nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.", "def fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n weights = tf.Variable(tf.random_normal([prev_layer.shape[0], num_units]))\n layer = tf.matmul(prev_layer, weights)\n \n # missing: ReLU\n return layer", "TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.", "def conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n\n in_channels = prev_layer.get_shape().as_list()[3]\n out_channels = layer_depth*4\n \n weights = tf.Variable(\n tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))\n \n bias = tf.Variable(tf.zeros(out_channels))\n\n conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n conv_layer = tf.nn.relu(conv_layer)\n\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the Batch_Normalization_Solutions notebook to see what went wrong." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
saashimi/code_guild
interactive-coding-challenges/graphs_trees/graph_path_exists/path_exists_challenge.ipynb
mit
[ "<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>\nChallenge Notebook\nProblem: Determine whether there is a path between two nodes in a graph.\n\nConstraints\nTest Cases\nAlgorithm\nCode\nUnit Test\nSolution Notebook\n\nConstraints\n\nIs the graph directed?\nYes\n\n\nCan we assume we already have Graph and Node classes?\nYes\n\n\n\nTest Cases\nInput:\n* add_edge(source, destination, weight)\ngraph.add_edge(0, 1, 5)\ngraph.add_edge(0, 4, 3)\ngraph.add_edge(0, 5, 2)\ngraph.add_edge(1, 3, 5)\ngraph.add_edge(1, 4, 4)\ngraph.add_edge(2, 1, 6)\ngraph.add_edge(3, 2, 7)\ngraph.add_edge(3, 4, 8)\nResult:\n* search_path(start=0, end=2) -> True\n* search_path(start=0, end=0) -> True\n* search_path(start=4, end=5) -> False\nAlgorithm\nRefer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.\nCode", "%run ../graph/graph.py\n%load ../graph/graph.py\n\ndef path_exists(start, end):\n # TODO: Implement me\n pass", "Unit Test\nThe following unit test is expected to fail until you solve the challenge.", "# %load test_path_exists.py\nfrom nose.tools import assert_equal\n\n\nclass TestPathExists(object):\n\n def test_path_exists(self):\n nodes = []\n graph = Graph()\n for id in range(0, 6):\n nodes.append(graph.add_node(id))\n graph.add_edge(0, 1, 5)\n graph.add_edge(0, 4, 3)\n graph.add_edge(0, 5, 2)\n graph.add_edge(1, 3, 5)\n graph.add_edge(1, 4, 4)\n graph.add_edge(2, 1, 6)\n graph.add_edge(3, 2, 7)\n graph.add_edge(3, 4, 8)\n\n assert_equal(path_exists(nodes[0], nodes[2]), True)\n assert_equal(path_exists(nodes[0], nodes[0]), True)\n assert_equal(path_exists(nodes[4], nodes[5]), False)\n\n print('Success: test_path_exists')\n\n\ndef main():\n test = TestPathExists()\n test.test_path_exists()\n\n\nif __name__ == '__main__':\n main()", "Solution Notebook\nReview the Solution Notebook for a discussion on algorithms and code solutions." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
colour-science/colour-ipython
notebooks/temperature/cct.ipynb
bsd-3-clause
[ "Colour Temperature & Correlated Colour Temperature\nThe colour temperature noted $T_c$ is the temperature of a Planckian radiator whose radiation has the same chromaticity as that of a given stimulus. <a name=\"back_reference_1\"></a><a href=\"#reference_1\">[1]</a>\nThe correlated colour temperature noted $T_{cp}$ and shortened to $CCT$ is the temperature of the Planckian radiator having the chromaticity nearest the chromaticity associated with the given spectral distribution on a diagram where the (CIE 1931 2° Standard Observer based) $u^\\prime, \\cfrac{2}{3}v^\\prime$ coordinates of the Planckian locus and the test stimulus are depicted. <a name=\"back_reference_2\"></a><a href=\"#reference_2\">[2]</a>\nThe CIE Standard Illuminant A, CIE Standard Illuminant D65 and CIE Illuminant E illuminants plotted in the CIE 1960 UCS Chromaticity Diagram:", "%matplotlib inline\n\nimport colour\nfrom colour.plotting import *\n\ncolour.utilities.filter_warnings(True, False)\n\ncolour_plotting_defaults()\n\nplanckian_locus_chromaticity_diagram_plot_CIE1960UCS(['A', 'D65', 'E'])\n\n# Zooming into the *Planckian Locus*.\nplanckian_locus_chromaticity_diagram_plot_CIE1960UCS(\n ['A', 'D65', 'E'], bounding_box=[0.15, 0.35, 0.25, 0.45])", "The concept of correlated colour temperature should not be used if the chromaticity of the test source differs more than $\\Delta C=5\\cdot10^{-2}$ from the Planckian radiator with: <a name=\"back_reference_2\"></a><a href=\"#reference_2\">[2]</a>\n$$\n\\Delta C= \\Biggl[ \\Bigl(u_t^\\prime-u_p^\\prime\\Bigr)^2+\\cfrac{4}{9}\\Bigl(v_t^\\prime-v_p^\\prime\\Bigr)^2\\Biggr]^{1/2}\n$$\nwhere $u_t^\\prime$, $u_p^\\prime$ refer to the test source, $v_t^\\prime$, $v_p^\\prime$ to the Planckian radiator.\nColour implements various methods for correlated colour temperature computation $T_{cp}$ from chromaticity coordinates $xy$ or $uv$ and chomaticity coordinates $xy$ or $uv$ computation from correlated colour temperature:\n\nRobertson (1968) correlated colour temperature $T_{cp}$ and $D_{uv}$ computation method by interpolation between isotemperature lines.\nOhno (2013) correlated colour temperature $T_{cp}$ and $D_{uv}$ computation method by direct approach and combined triangular and parabolic solutions.\nMcCamy (1992) correlated colour temperature $T_{cp}$ cubic approximation computation method.\nHernandez-Andres, Lee and Romero (1999) correlated colour temperature $T_{cp}$ cubic approximation computation method.\nKrystek (1985) chomaticity coordinates $uv$ polynomial approximation computation method.\nKang et al. (2002) chomaticity coordinates $xy$ cubic approximation computation method.\nCIE Illuminant D Series chomaticity coordinates $xy$ computation method.\n\nRobertson (1968) Method\nRobertson (1968) method is based on $T_{cp}$ computation by linear interpolation between two adjacent members of a defined set of 31 isotemperature lines. 
<a name=\"back_reference_3\"></a><a href=\"#reference_3\">[3]</a>\nIn the CIE 1960 UCS chromaticity diagram the distance $d_i$ of the chromaticity point of given source ($u_s$, $v_s$) from each of the chromaticity point ($u_i$, $v_i$) through which the $i$th isotemperature line of slope $t_i$ passes is calculated as follows: <a name=\"back_reference_3\"></a><a href=\"#reference_3\">[3]</a>\n$$\n\\begin{equation}\nd_i=\\cfrac{(v_s-v_i)-t_i(u_s-u_i)}{(1+t_i^2)^{1/2}}\n\\end{equation}\n$$\nThe chromaticity point ($u_s$, $v_s$) is located between the adjacent isotemperature lines $j$ and $j + 1$ if $d_j/d_{j+1} < 0$\n$$\n\\begin{equation}\nT_c=\\Biggl[\\cfrac{1}{T_j}+\\cfrac{\\theta_1}{\\theta_1+\\theta_2}\\biggl(\\cfrac{1}{T_{j+1}}-\\cfrac{1}{T_j}\\biggr)\\Biggr]^{-1}\n\\end{equation}\n$$\nwhere $\\theta_1$ and $\\theta_2$ are respectively the angles between the isotemperature lines $T_j$ and $T_{j+1}$ and the line joining ($u_s$, $v_s$) to their intersection. Since the isotemperature lines are narrow spaced $\\theta_1$ and $\\theta_2$ are small enough that one can set $\\theta_1/\\theta_2 = \\sin\\theta_1/\\sin\\theta_2$. The above equation can then be written:\n$$\n\\begin{equation}\nT_c=\\Biggl[\\cfrac{1}{T_j}+\\cfrac{d_j}{d_j-d_{j+1}}\\biggl(\\cfrac{1}{T_{j+1}}-\\cfrac{1}{T_j}\\biggr)\\Biggr]^{-1}\n\\end{equation}\n$$\nThe colour.uv_to_CCT_Robertson1968 definition is used to calculate the correlated colour temperature $T_{cp}$ and distance $D_{uv}$ ($d_i$):", "colour.temperature.uv_to_CCT_Robertson1968((0.19783451566098664, 0.31221744678060825))", "colour.uv_to_CCT definition is implemented as a wrapper for various correlated colour temperature computation methods:", "colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'Robertson 1968')", "Note: 'robertson1968' is defined as a convenient alias for 'Robertson 1968':", "colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'robertson1968')", "Converting from correlated colour temperature $T_{cp}$ and distance $D_{uv}$ to chomaticity coordinates $uv$:", "colour.CCT_to_uv(6503.03994225557, 'Robertson 1968', D_uv=0.0032556165414977167)\n\ncolour.CCT_to_uv(6503.03994225557, 'robertson1968', D_uv=0.0032556165414977167)", "Ohno (2013) Method\nOhno (2013) presented new practical accurate methods to calculate the correlated colour temperature $T_{cp}$ and distance $D_{uv}$ with an error of 1 $K$ in $T_{cp}$ range from 1000 to 20,000 and $\\pm$0.03 in $D_{uv}$. <a name=\"back_reference_4\"></a><a href=\"#reference_4\">[4]</a>\nTriangular Solution\nThe correlated colour temperature is calculated by searching the closest point on the Planckian locus on the CIE 1960 UCS chromaticity diagram but without the complexity of Roberston (1968) method.\nA table of coordinates ($U_i$, $V_i$) of Planckian locus (Planckian ($u$, $v$) table) in the estimated range of correlated colour temperature needed is generated and then the distance $d_i$ from the chromaticity coordinates ($U_x$, $V_x$) of a test light source is calculated.\nThe point $i = m$ is the point where $d_i$ is the smallest in the table ensuring that the correlated colour temperature to be obtained lies between $T_{m-1}$ and $T_{m+1}$.\nThe previous computation is repeated $n$ times through cascade expansion in order to reduce errors.\nA triangle is then formed by the chromaticity point ($U_x$, $V_x$) of the test light soure and the chromaticity points on Planckian locus at $T_{m-1}$ and $T_{m+1}$. 
The blackbody temperature $T_x$ for the closest point to the line between $T_{m-1}$ and $T_{m+1}$ is calculated as follows: <a name=\"back_reference_4\"></a><a href=\"#reference_4\">[4]</a>\n$$\n\\begin{equation}\nT_x=T_{m-1}+(T_{m+1}-T_{m-1})\\cdot\\cfrac{x}{l}\n\\end{equation}\n$$\nwith\n$$\n\\begin{equation}\n\\begin{aligned}\nx&=\\cfrac{d_{m-1}^2-d_{m+1}^2+l^2}{2l}\\\nl&=\\sqrt{(u_{m+1}-u_{m-1})^2+(v_{m+1}-v_{m-1})^2}\n\\end{aligned}\n\\end{equation}\n$$\n$D_{uv}$ is then calculated as follows:\n$$\n\\begin{equation}\nD_{uv}=(d_{m-1}^2-x^2)^{1/2}\\cdot sgn(v_x-v_{T_x})\n\\end{equation}\n$$\nwith\n$$\n\\begin{equation}\n\\begin{aligned}\nv_{T_x}&=v_{m-1}+{v_{m+1}-v_{m-1}}\\cdot x/l\\\nSIGN(z)&=1\\ for\\ z \\geq0\\ and\\ SIGN(z)=-1\\ for\\ z <0\n\\end{aligned}\n\\end{equation}\n$$\nErrors due to the non linearity of the correlated colour temperature scale on ($u$, $v$) coordinates are reduced by applying the following correction:\n$$\n\\begin{equation}\nT_{x,cor}=T_x\\times 0.99991\n\\end{equation}\n$$\nThis correction is not needed for Planckian ($u$, $v$) table with steps of 0.25% or smaller.\nParabolic Solution\nAfter finding $T_{m-1}$ and $T_{m+1}$ as described in the triangular solution method above, $d_{m−1}$, $d_m$, $d_{m+1}$ are fitted to a parabolic function. The polynomial is derived from $d_{m−1}$, $d_m$, $d_{m+1}$ and $T_{m−1}$, $T_m$, $T_{m+1}$ as: <a name=\"back_reference_4\"></a><a href=\"#reference_4\">[4]</a>\n$$\n\\begin{equation}\nd(T)=aT^2+bT+c\n\\end{equation}\n$$\nwhere\n$$\n\\begin{equation}\n\\begin{aligned}\na&\\ =[T_{m-1}(d_{m+1}-d_m)+T_m(d_{m-1}-d_{m+1})+T_{m+1}(d_m-d_{m-1})]\\cdot X^{-1}\\\nb&\\ =-[T_{m-1}^2(d_{m+1}-d_m)+T_m^2(d_{m-1}-d_{m+1})+T_{m+1}^2(d_m-d_{m-1})]\\cdot X^{-1}\\\nc&\\ =-[d_{m-1}(T_{m+1}-T_m)\\cdot T_m\\cdot T_{m+1}+d_m(T_{m-1}-T_{m+1})\\cdot T_{m-1}\\cdot T_{m+1}+d_{m+1}(T_m-T_{m-1})\\cdot T_{m-1}\\cdot T_m]\\cdot X^{-1}\n\\end{aligned}\n\\end{equation}\n$$\nwith\n$$\n\\begin{equation}\nX=(T_{m+1}-T_m)(T_{m-1}-T_{m+1})(T_m-T_{m-1})\n\\end{equation}\n$$\nThe correlated colour temperature $T=T_x$ is then obtained as follows:\n$$\n\\begin{equation}\nT_X=-\\cfrac{b}{2a}\\qquad\\because d^\\prime(T)=2aT_x+b=0\n\\end{equation}\n$$\nThe correction factor $T_{x,cor}$ for nonlinearity is applied as described in the triangular solution method.\n$D_{uv}$ is then calculated as follows:\n$$\n\\begin{equation}\nD_{uv}=SIGN(v_x-v_{T_x})\\cdot(aT_{x,cor}^2+bT_{x,cor}+c)\n\\end{equation}\n$$\nwith\n$$\n\\begin{equation}\nSIGN(z)=1\\ for\\ z \\geq0\\ and\\ SIGN(z)=-1\\ for\\ z <0\n\\end{equation}\n$$\nCombined Solution\nThe parabolic solution works accurately except in on or near the Planckian locus. Taking triangular solution results for $|D_{uv}| < 0.002$ and the parabolic solution results for other regions solves that problem. 
<a name=\"back_reference_4\"></a><a href=\"#reference_4\">[4]</a>\nThe colour.uv_to_CCT_Ohno2013 definition is used to calculate the correlated colour temperature $T_{cp}$ and distance $D_{uv}$ ($d_i$):", "colour.temperature.uv_to_CCT_Ohno2013((0.19783451566098664, 0.31221744678060825))", "Precision can be changed by passing a value to the iterations argument:", "for i in range(10):\n print(colour.temperature.uv_to_CCT_Ohno2013(\n (0.19783451566098664, 0.31221744678060825), iterations=i + 1))", "Using the colour.uv_to_CCT wrapper definition:", "colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'Ohno 2013')", "Note: 'ohno2013' is defined as a convenient alias for 'Ohno 2013':", "colour.uv_to_CCT((0.19783451566098664, 0.31221744678060825), 'ohno2013')", "Converting from correlated colour temperature $T_{cp}$ and distance $D_{uv}$ to chomaticity coordinates $uv$:", "colour.CCT_to_uv(6503.03994225557, 'Ohno 2013', D_uv=0.0032556165414977167, )\n\ncolour.CCT_to_uv(6503.03994225557, 'Ohno 2013', D_uv=0.0032556165414977167)", "McCamy (1992) Method\nMcCamy (1992) proposed an equation to compute correlated colour temperature $T_{cp}$ from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ by using a chromaticity epicenter ($x_e$, $y_e$) where the isotemperature lines in some of the correlated colour temperature range converge and the inverse slope of the line $n$ that connects it to $x$, $y$. <a name=\"back_reference_5\"></a><a href=\"#reference_5\">[5]</a>\nThe cubic approximation equation is defined as follows: <a name=\"back_reference_5\"></a><a href=\"#reference_5\">[5]</a>\n$$\n\\begin{equation}\nT_{cp}=-449n^3+3525n^2-6823.3n+5520.33\n\\end{equation}\n$$\nwhere\n$$\n\\begin{equation}\nn=\\cfrac{x-x_e}{y-ye}\\\nx_e=0.3320\\qquad y_e=0.1858\n\\end{equation}\n$$\nThe colour.xy_to_CCT_mccamy definition is used to calculate the correlated colour temperature $T_{cp}$ from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$:", "colour.temperature.xy_to_CCT_McCamy1992((0.31271, 0.32902))", "The colour.xy_to_CCT definition is implemented as a wrapper for various correlated colour temperature computation methods from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$:", "colour.xy_to_CCT((0.31271, 0.32902), 'McCamy 1992')", "Note: 'mccamy1992' is defined as a convenient alias for 'McCamy 1992':", "colour.xy_to_CCT((0.31271, 0.32902), 'mccamy1992')", "Hernandez-Andres, Lee and Romero (1999) Method\nHernandez-Andres, Lee and Romero (1999) extended McCamy (1992) work by using a second epicenter to extend the accuracy over a wider correlated colour temperature and chromaticity coordinates range ($3000$–$10^6K$). 
<a name=\"back_reference_6\"></a><a href=\"#reference_6\">[6]</a>\nThe new extended equation to calculate the correlated colour temperature $T_{cp}$ is defined as follows: <a name=\"back_reference_6\"></a><a href=\"#reference_6\">[6]</a>\n$$\n\\begin{equation}\nT_{cp}=A_0+A_1exp(-n/t_1)+A_2exp(-n/t_2)+A_3exp(-n/t_3)\n\\end{equation}\n$$\nwhere\n$$\n\\begin{equation}\nn=\\cfrac{x-x_e}{y-ye}\\\n\\end{equation}\n$$\nwith\n| Constants | $T_{cp}$ Range ($K$) $3000$-$50,000$ | $T_{cp}$ Range ($K$) $50,000$-$8\\times10^5$ |\n|:---------:|:------------------------------------:|:-------------------------------------------:|\n| $A_0$ | $-949.86315$ | $36284.48953$ |\n| $A_1$ | $6253.80338$ | $0.00228$ |\n| $t_1$ | $0.92159$ | $0.07861$ |\n| $A_2$ | $28.70599$ | $5.4535\\times10^{-36}$ |\n| $t_2$ | $0.20039$ | $0.01543$ |\n| $A_3$ | $0.00004$ | |\n| $t_3$ | $0.07125$ | |\n| $x_e$ | $0.3366$ | $0.3356$ |\n| $y_e$ | $0.1735$ | $0.1691$ |\nThe colour.xy_to_CCT_hernandez definition is used to calculate the correlated colour temperature $T_{cp}$ from CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$:", "colour.temperature.xy_to_CCT_Hernandez1999((0.31271, 0.32902))", "Using the colour.xy_to_CCT wrapper definition:", "colour.xy_to_CCT((0.31271, 0.32902), 'Hernandez 1999')", "Note: 'hernandez1999' is defined as a convenient alias for 'Hernandez 1999':", "colour.xy_to_CCT((0.31271, 0.32902), 'hernandez1999')", "Krystek (1985) Method\nKrystek (1985) proposed a polynomial approximation valid from $1000K$ to $15000K$. <a name=\"back_reference_7\"></a><a href=\"#reference_7\">[7]</a>\nThe CIE UCS colourspace chromaticity coordinates $u$, $v$ are given by the following equations: <a name=\"back_reference_7\"></a><a href=\"#reference_7\">[7]</a>\n$$\n\\begin{equation}\n\\begin{aligned}\nu&\\ =\\cfrac{0.860117757 + 1.54118254 \\times 10^{-4} T + 1.28641212 \\times 10^{-7} T^2}{1 + 8.42420235 \\times 10^{-4} T + 7.08145163 \\times 10^{-7} T^2}\\\nv&\\ =\\cfrac{0.317398726 + 4.22806245 \\times 10^{-5} T + 4.20481691 \\times 10^{-8} T^2}{1 - 2.89741816 \\times 10^{-5} T + 1.61456053 \\times 10^{-7} T^2}\n\\end{aligned}\n\\end{equation}\n$$\nThe colour.CCT_to_uv_Krystek1985 definition is used to calculate the CIE UCS colourspace chromaticity coordinates $u$, $v$ from correlated colour temperature $T_{cp}$:", "colour.temperature.CCT_to_uv_Krystek1985(6504.389383048972)", "Using the colour.CCT_to_uv wrapper definition:", "colour.CCT_to_uv(6504.389383048972, 'Krystek 1985')", "Kang et al. (2002) Method\nKang et al. (2002) proposed an advanced colour-temperature control system for HDTV applications in the range from $1667K$ to $25000K$. 
<a name=\"back_reference_8\"></a><a href=\"#reference_8\">[8]</a>\nThe CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ are given by the following equations: <a name=\"back_reference_8\"></a><a href=\"#reference_8\">[8]</a>\n$$\n\\begin{equation}\n\\begin{aligned}\nx&\\ =\\begin{cases}-0.2661239\\cfrac{10^9}{T_{cp}^3}-0.2343589\\cfrac{10^6}{T_{cp}^2}+0.8776956\\cfrac{10^3}{T_{cp}}+0.179910 & for\\ 1667K\\leq T_{cp}\\leq4000k\\\\\n-3.0258469\\cfrac{10^9}{T_{cp}^3}+2.1070379\\cfrac{10^6}{T_{cp}^2}+0.2226347\\cfrac{10^3}{T_{cp}}+0.24039 & for\\ 4000K\\leq T_{cp}\\leq25000k\\end{cases}\\\\\ny&\\ =\\begin{cases}-1.1063814x^3-1.34811020x^2+2.18555832x-0.20219683 & for\\ 1667K\\leq T_{cp}\\leq2222k\\\\\n-0.9549476x^3-1.37418593x^2+2.09137015x-0.16748867 & for\\ 2222K\\leq T_{cp}\\leq4000k\\\\\n3.0817580x^3-5.8733867x^2+3.75112997x-0.37001483 & for\\ 4000K\\leq T_{cp}\\leq25000k\\end{cases}\n\\end{aligned}\n\\end{equation}\n$$\nThe colour.CCT_to_xy_kang definition is used to calculate the CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ from correlated colour temperature $T_{cp}$:", "colour.temperature.CCT_to_xy_Kang2002(6504.389383048972)", "The colour.CCT_to_xy definition is implemented as a wrapper for various CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ computation from correlated colour temperature:", "colour.CCT_to_xy(6504.389383048972, 'Kang 2002')", "Note: 'kang2002' is defined as a convenient alias for 'Kang 2002':", "colour.CCT_to_xy(6504.389383048972, 'kang2002')", "CIE Illuminant D Series Method\nJudd et al. (1964) defined the following equations to calculate the CIE 1931 2° Standard Observer chromaticity coordinates $x_D$, $y_D$ of a CIE Illuminant D Series: <a name=\"back_reference_9\"></a><a href=\"#reference_9\">[9]</a>\n$$\n\\begin{equation}\n\\begin{aligned}\nx_D&\\ =\\begin{cases}-4.6070\\cfrac{10^9}{T_{cp}^3}+2.9678\\cfrac{10^6}{T_{cp}^2}+0.09911\\cfrac{10^3}{T_{cp}}+0.244063 & for\\ 4000K\\leq T_{cp}\\leq7000k\\\\\n-2.0064\\cfrac{10^9}{T_{cp}^3}+1.9018\\cfrac{10^6}{T_{cp}^2}+0.24748\\cfrac{10^3}{T_{cp}}+0.237040 & for\\ 7000K\\leq T_{cp}\\leq25000k\\end{cases}\\\\\ny_D&\\ =-3.000x_D^2+2.870x_D-0.275\n\\end{aligned}\n\\end{equation}\n$$\nThe colour.CCT_to_xy_CIE_D definition is used to calculate the CIE 1931 2° Standard Observer chromaticity coordinates $x$, $y$ of a CIE Illuminant D Series from correlated colour temperature $T_{cp}$:", "colour.temperature.CCT_to_xy_CIE_D(6504.389383048972)", "Using the colour.CCT_to_xy wrapper definition:", "colour.CCT_to_xy(6504.389383048972, 'CIE Illuminant D Series')", "Note: 'cie_d' is defined as a convenient alias for 'CIE Illuminant D Series':", "colour.CCT_to_xy(6504.389383048972, 'cie_d')", "Bibliography\n\n<a href=\"#back_reference_1\">^<a> <a name=\"reference_1\"></a>CIE. (n.d.). 17-231 colour temperature [Tc]. Retrieved from http://eilv.cie.co.at/term/231\n<a href=\"#back_reference_2\">^<a> <a name=\"reference_2\"></a>CIE. (n.d.). 17-258 correlated colour temperature [Tcp]. Retrieved from http://eilv.cie.co.at/term/258\n<a href=\"#back_reference_3\">^<a> <a name=\"reference_3\"></a>Wyszecki, G., & Stiles, W. S. (2000). DISTRIBUTION TEMPERATURE, COLOR TEMPERATURE, AND CORRELATED COLOR TEMPERATURE. In Color Science: Concepts and Methods, Quantitative Data and Formulae (pp. 224–229). Wiley. ISBN:978-0471399186\n<a href=\"#back_reference_4\">^<a> <a name=\"reference_4\"></a>Ohno, Y. (2014). Practical Use and Calculation of CCT and Duv. LEUKOS, 10(1), 47–55. 
doi:10.1080/15502724.2014.839020\n<a href=\"#back_reference_5\">^<a> <a name=\"reference_5\"></a>Wikipedia. (n.d.). Approximation. Retrieved June 28, 2014, from http://en.wikipedia.org/wiki/Color_temperature#Approximation\n<a href=\"#back_reference_6\">^<a> <a name=\"reference_6\"></a>Hernández-Andrés, J., Lee, R. L., & Romero, J. (1999). Calculating correlated color temperatures across the entire gamut of daylight and skylight chromaticities. Applied Optics, 38(27), 5703–5709. doi:10.1364/AO.38.005703\n<a href=\"#back_reference_7\">^<a> <a name=\"reference_7\"></a>Krystek, M. (1985). An algorithm to calculate correlated colour temperature. Color Research & Application, 10(1), 38–40. doi:10.1002/col.5080100109\n<a href=\"#back_reference_8\">^<a> <a name=\"reference_8\"></a>Kang, B., Moon, O., Hong, C., Lee, H., Cho, B., & Kim, Y. (2002). Design of advanced color: Temperature control system for HDTV applications. Journal of the Korean …, 41(6), 865–871. Retrieved from http://cat.inist.fr/?aModele=afficheN&cpsidt=14448733\n<a href=\"#back_reference_9\">^<a> <a name=\"reference_9\"></a>Wyszecki, G., & Stiles, W. S. (2000). CIE Method of Calculating D-Illuminants. In Color Science: Concepts and Methods, Quantitative Data and Formulae (pp. 145–146). Wiley. ISBN:978-0471399186" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/introduction_to_tensorflow/labs/basic_intro_logistic_regression.ipynb
apache-2.0
[ "Basic Introduction to Logistic Regression\nLearning Objectives\n\nBuild a model\nTrain this model on example data\nUse the model to make predictions about unknown data\n\nIntroduction\nIn this notebook, you use machine learning to categorize Iris flowers by species. It uses TensorFlow to:\n\nUse TensorFlow's default eager execution development environment\nImport data with the Datasets API\nBuild models and layers with TensorFlow's Keras API\n\nHere firstly we will Import and parse the dataset, then select the type of model. After that Train the model.\nAt last we will Evaluate the model's effectiveness and then use the trained model to make predictions.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\n# Ensure the right version of Tensorflow is installed.\n!pip freeze | grep tensorflow==2.1 || pip install tensorflow==2.1", "Configure imports\nImport TensorFlow and the other required Python modules. By default, TensorFlow uses eager execution to evaluate operations immediately, returning concrete values instead of creating a computational graph that is executed later. If you are used to a REPL or the python interactive console, this feels familiar.", "import os\n\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\n\nprint(f\"TensorFlow version: {tf.__version__}\")\nprint(f\"Eager execution: {tf.executing_eagerly()}\")", "The Iris classification problem\nImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their sepals and petals.\nThe Iris genus entails about 300 species, but our program will only classify the following three:\n\nIris setosa\nIris virginica\nIris versicolor\n\n<table>\n <tr><td>\n <img src=\"https://www.tensorflow.org/images/iris_three_species.jpg\"\n alt=\"Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 1.</b> <a href=\"https://commons.wikimedia.org/w/index.php?curid=170298\">Iris setosa</a> (by <a href=\"https://commons.wikimedia.org/wiki/User:Radomil\">Radomil</a>, CC BY-SA 3.0), <a href=\"https://commons.wikimedia.org/w/index.php?curid=248095\">Iris versicolor</a>, (by <a href=\"https://commons.wikimedia.org/wiki/User:Dlanglois\">Dlanglois</a>, CC BY-SA 3.0), and <a href=\"https://www.flickr.com/photos/33397993@N05/3352169862\">Iris virginica</a> (by <a href=\"https://www.flickr.com/photos/33397993@N05\">Frank Mayfield</a>, CC BY-SA 2.0).<br/>&nbsp;\n </td></tr>\n</table>\n\nFortunately, someone has already created a dataset of 120 Iris flowers with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems.\nImport and parse the training dataset\nDownload the dataset file and convert it into a structure that can be used by this Python program.\nDownload the dataset\nDownload the training dataset file using the tf.keras.utils.get_file function. 
This returns the file path of the downloaded file:", "train_dataset_url = \"https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv\"\n\ntrain_dataset_fp = tf.keras.utils.get_file(\n fname=os.path.basename(train_dataset_url), origin=train_dataset_url\n)\n\nprint(f\"Local copy of the dataset file: {train_dataset_fp}\")", "Inspect the data\nThis dataset, iris_training.csv, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the head -n5 command to take a peek at the first five entries:", "!head -n5 {train_dataset_fp}", "From this view of the dataset, notice the following:\n\nThe first line is a header containing information about the dataset:\nThere are 120 total examples. Each example has four features and one of three possible label names.\nSubsequent rows are data records, one example per line, where:\nThe first four fields are features: these are the characteristics of an example. Here, the fields hold float numbers representing flower measurements.\nThe last column is the label: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.\n\nLet's write that out in code:", "# column order in CSV file\ncolumn_names = [\n \"sepal_length\",\n \"sepal_width\",\n \"petal_length\",\n \"petal_width\",\n \"species\",\n]\n\nfeature_names = column_names[:-1]\nlabel_name = column_names[-1]\n\nprint(f\"Features: {feature_names}\")\nprint(f\"Label: {label_name}\")", "Each label is associated with string name (for example, \"setosa\"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:\n\n0: Iris setosa\n1: Iris versicolor\n2: Iris virginica\n\nFor more information about features and labels, see the ML Terminology section of the Machine Learning Crash Course.", "class_names = [\"Iris setosa\", \"Iris versicolor\", \"Iris virginica\"]", "Create a tf.data.Dataset\nTensorFlow's Dataset API handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training.\nSince the dataset is a CSV-formatted text file, use the tf.data.experimental.make_csv_dataset function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (shuffle=True, shuffle_buffer_size=10000), and repeat the dataset forever (num_epochs=None). We also set the batch_size parameter:", "batch_size = 32\n\ntrain_dataset = tf.data.experimental.make_csv_dataset(\n train_dataset_fp,\n batch_size,\n column_names=column_names,\n label_name=label_name,\n num_epochs=1,\n)", "The make_csv_dataset function returns a tf.data.Dataset of (features, label) pairs, where features is a dictionary: {'feature_name': value}\nThese Dataset objects are iterable. Let's look at a batch of features:", "features, labels = next(iter(train_dataset))\n\nprint(features)", "Notice that like-features are grouped together, or batched. Each example row's fields are appended to the corresponding feature array. 
Change the batch_size to set the number of examples stored in these feature arrays.\nYou can start to see some clusters by plotting a few features from the batch:", "plt.scatter(\n features[\"petal_length\"], features[\"sepal_length\"], c=labels, cmap=\"viridis\"\n)\n\nplt.xlabel(\"Petal length\")\nplt.ylabel(\"Sepal length\")\nplt.show()", "To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: (batch_size, num_features).\nThis function uses the tf.stack method which takes values from a list of tensors and creates a combined tensor at the specified dimension:", "def pack_features_vector(features, labels):\n \"\"\"Pack the features into a single array.\"\"\"\n features = tf.stack(list(features.values()), axis=1)\n return features, labels", "Then use the tf.data.Dataset#map method to pack the features of each (features,label) pair into the training dataset:", "train_dataset = train_dataset.map(pack_features_vector)", "The features element of the Dataset are now arrays with shape (batch_size, num_features). Let's look at the first few examples:", "features, labels = next(iter(train_dataset))\n\nprint(features[:5])", "Select the type of model\nWhy model?\nA model is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.\nCould you determine the relationship between the four features and the Iris species without using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach determines the model for you. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.\nSelect the model\nWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. Neural networks can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more hidden layers. Each hidden layer consists of one or more neurons. There are several categories of neural networks and this program uses a dense, or fully-connected neural network: the neurons in one layer receive input connections from every neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer:\n<table>\n <tr><td>\n <img src=\"https://www.tensorflow.org/images/custom_estimators/full_network.png\"\n alt=\"A diagram of the network architecture: Inputs, 2 hidden layers, and outputs\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/>&nbsp;\n </td></tr>\n</table>\n\nWhen the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. 
This prediction is called inference. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: 0.02 for Iris setosa, 0.95 for Iris versicolor, and 0.03 for Iris virginica. This means that the model predicts—with 95% probability—that an unlabeled example flower is an Iris versicolor.\nCreate a model using Keras\nThe TensorFlow tf.keras API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.\nThe tf.keras.Sequential model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two tf.keras.layers.Dense layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's input_shape parameter corresponds to the number of features from the dataset, and is required:\nLab Task #1: Building the model", "# TODO 1\n# TODO -- Your code here.", "The activation function determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many tf.keras.activations, but ReLU is common for hidden layers.\nThe ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.\nUsing the model\nLet's have a quick look at what this model does to a batch of features:", "predictions = model(features)\npredictions[:5]", "Here, each example returns a logit for each class.\nTo convert these logits to a probability for each class, use the softmax function:", "tf.nn.softmax(predictions[:5])", "Taking the tf.argmax across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions:", "print(f\"Prediction: {tf.argmax(predictions, axis=1)}\")\nprint(f\"Labels: {labels}\")", "Train the model\nTraining is the stage of machine learning when the model is gradually optimized, or the model learns the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn too much about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called overfitting—it's like memorizing the answers instead of understanding how to solve a problem.\nThe Iris classification problem is an example of supervised machine learning: the model is trained from examples that contain labels. In unsupervised machine learning, the examples don't contain labels. Instead, the model typically finds patterns among the features.\nDefine the loss and gradient function\nBoth training and evaluation stages need to calculate the model's loss. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. 
We want to minimize, or optimize, this value.\nOur model will calculate its loss using the tf.keras.losses.SparseCategoricalCrossentropy function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples.\nLab Task #2: Training Model on example data.", "loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\ndef loss(model, x, y, training):\n# TODO 2\n# TODO -- Your code here.", "Use the tf.GradientTape context to calculate the gradients used to optimize your model:", "def grad(model, inputs, targets):\n with tf.GradientTape() as tape:\n loss_value = loss(model, inputs, targets, training=True)\n return loss_value, tape.gradient(loss_value, model.trainable_variables)", "Create an optimizer\nAn optimizer applies the computed gradients to the model's variables to minimize the loss function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.\n<table>\n <tr><td>\n <img src=\"https://cs231n.github.io/assets/nn3/opt1.gif\" width=\"70%\"\n alt=\"Optimization algorithms visualized over time in 3D space.\">\n </td></tr>\n <tr><td align=\"center\">\n <b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br/>(Source: <a href=\"http://cs231n.github.io/neural-networks-3/\">Stanford class CS231n</a>, MIT License, Image credit: <a href=\"https://twitter.com/alecrad\">Alec Radford</a>)\n </td></tr>\n</table>\n\nTensorFlow has many optimization algorithms available for training. This model uses the tf.keras.optimizers.SGD that implements the stochastic gradient descent (SGD) algorithm. The learning_rate sets the step size to take for each iteration down the hill. This is a hyperparameter that you'll commonly adjust to achieve better results.\nLet's setup the optimizer:", "optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)", "We'll use this to calculate a single optimization step:", "loss_value, grads = grad(model, features, labels)\n\nprint(\n \"Step: {}, Initial Loss: {}\".format(\n optimizer.iterations.numpy(), loss_value.numpy()\n )\n)\n\noptimizer.apply_gradients(zip(grads, model.trainable_variables))\n\nprint(\n \"Step: {},Loss: {}\".format(\n optimizer.iterations.numpy(),\n loss(model, features, labels, training=True).numpy(),\n )\n)", "Training loop\nWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:\n\nIterate each epoch. An epoch is one pass through the dataset.\nWithin an epoch, iterate over each example in the training Dataset grabbing its features (x) and label (y).\nUsing the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.\nUse an optimizer to update the model's variables.\nKeep track of some stats for visualization.\nRepeat for each epoch.\n\nThe num_epochs variable is the number of times to loop over the dataset collection. 
Counter-intuitively, training a model longer does not guarantee a better model. num_epochs is a hyperparameter that you can tune. Choosing the right number usually requires both experience and experimentation:", "## Note: Rerunning this cell uses the same model variables\n\n# Keep results for plotting\ntrain_loss_results = []\ntrain_accuracy_results = []\n\nnum_epochs = 201\n\nfor epoch in range(num_epochs):\n epoch_loss_avg = tf.keras.metrics.Mean()\n epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()\n\n # Training loop - using batches of 32\n for x, y in train_dataset:\n # Optimize the model\n loss_value, grads = grad(model, x, y)\n optimizer.apply_gradients(zip(grads, model.trainable_variables))\n\n # Track progress\n epoch_loss_avg.update_state(loss_value) # Add current batch loss\n # Compare predicted label to actual label\n # training=True is needed only if there are layers with different\n # behavior during training versus inference (e.g. Dropout).\n epoch_accuracy.update_state(y, model(x, training=True))\n\n # End epoch\n train_loss_results.append(epoch_loss_avg.result())\n train_accuracy_results.append(epoch_accuracy.result())\n\n if epoch % 50 == 0:\n print(\n \"Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}\".format(\n epoch, epoch_loss_avg.result(), epoch_accuracy.result()\n )\n )", "Visualize the loss function over time\nWhile it's helpful to print out the model's training progress, it's often more helpful to see this progress. TensorBoard is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the matplotlib module.\nInterpreting these charts takes some experience, but you really want to see the loss go down and the accuracy go up:", "fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))\nfig.suptitle(\"Training Metrics\")\n\naxes[0].set_ylabel(\"Loss\", fontsize=14)\naxes[0].plot(train_loss_results)\n\naxes[1].set_ylabel(\"Accuracy\", fontsize=14)\naxes[1].set_xlabel(\"Epoch\", fontsize=14)\naxes[1].plot(train_accuracy_results)\nplt.show()", "Evaluate the model's effectiveness\nNow that the model is trained, we can get some statistics on its performance.\nEvaluating means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's predictions against the actual label. For example, a model that picked the correct species on half the input examples has an accuracy of 0.5. 
Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:\n<table cellpadding=\"8\" border=\"0\">\n <colgroup>\n <col span=\"4\" >\n <col span=\"1\" bgcolor=\"lightblue\">\n <col span=\"1\" bgcolor=\"lightgreen\">\n </colgroup>\n <tr bgcolor=\"lightgray\">\n <th colspan=\"4\">Example features</th>\n <th colspan=\"1\">Label</th>\n <th colspan=\"1\" >Model prediction</th>\n </tr>\n <tr>\n <td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align=\"center\">1</td><td align=\"center\">1</td>\n </tr>\n <tr>\n <td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align=\"center\">2</td><td align=\"center\">2</td>\n </tr>\n <tr>\n <td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align=\"center\">0</td><td align=\"center\">0</td>\n </tr>\n <tr>\n <td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align=\"center\">1</td><td align=\"center\" bgcolor=\"red\">2</td>\n </tr>\n <tr>\n <td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align=\"center\">1</td><td align=\"center\">1</td>\n </tr>\n <tr><td align=\"center\" colspan=\"6\">\n <b>Figure 4.</b> An Iris classifier that is 80% accurate.<br/>&nbsp;\n </td></tr>\n</table>\n\nSetup the test dataset\nEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate test set rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.\nThe setup for the test Dataset is similar to the setup for the training Dataset. Download the CSV text file and parse the values, then give it a little shuffle:", "test_url = (\n \"https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv\"\n)\n\ntest_fp = tf.keras.utils.get_file(\n fname=os.path.basename(test_url), origin=test_url\n)\n\ntest_dataset = tf.data.experimental.make_csv_dataset(\n test_fp,\n batch_size,\n column_names=column_names,\n label_name=\"species\",\n num_epochs=1,\n shuffle=False,\n)\n\ntest_dataset = test_dataset.map(pack_features_vector)", "Evaluate the model on the test dataset\nUnlike the training stage, the model only evaluates a single epoch of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set:", "test_accuracy = tf.keras.metrics.Accuracy()\n\nfor (x, y) in test_dataset:\n # training=False is needed only if there are layers with different\n # behavior during training versus inference (e.g. Dropout).\n logits = model(x, training=False)\n prediction = tf.argmax(logits, axis=1, output_type=tf.int32)\n test_accuracy(prediction, y)\n\nprint(f\"Test set accuracy: {test_accuracy.result():.3%}\")", "We can see on the last batch, for example, the model is usually correct:", "tf.stack([y, prediction], axis=1)", "Use the trained model to make predictions\nWe've trained a model and \"proven\" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on unlabeled examples; that is, on examples that contain features but not a label.\nIn real life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels.
Recall, the label numbers are mapped to a named representation as:\n\n0: Iris setosa\n1: Iris versicolor\n2: Iris virginica\n\nLab Task #3: Use model to make predictions", "# TODO 3\n# TODO -- Your code here." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
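The Iris notebook in the record above leaves its three Lab Task cells as TODO placeholders. For reference, a minimal sketch of the kind of code those tasks describe is given below; it only restates what the surrounding prose specifies (two 10-node ReLU Dense layers plus a 3-node output with input_shape=(4,), a SparseCategoricalCrossentropy-based loss, and softmax over the logits for prediction). The three sample measurement rows are made-up illustrative values, and this sketch is not the official lab solution.

```python
# Illustrative sketch only, not the official lab solution. It assumes the
# train_dataset, loss_object, and class_names objects defined in the notebook
# above are already in scope.
import tensorflow as tf

# Lab Task #1: two 10-node ReLU Dense hidden layers and a 3-node output layer,
# with input_shape=(4,) for the four flower measurements.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(3),
])

# Lab Task #2: average sparse categorical cross-entropy over a batch;
# `training` toggles layers that behave differently at inference time.
def loss(model, x, y, training):
    y_ = model(x, training=training)  # logits for each class
    return loss_object(y_true=y, y_pred=y_)

# Lab Task #3: predict species names for a few unlabeled measurements.
# These measurement rows are made-up example values, not notebook data.
predict_dataset = tf.convert_to_tensor([
    [5.1, 3.3, 1.7, 0.5],
    [5.9, 3.0, 4.2, 1.5],
    [6.9, 3.1, 5.4, 2.1],
])
predictions = model(predict_dataset, training=False)
for i, logits in enumerate(predictions):
    class_idx = int(tf.argmax(logits))
    p = float(tf.nn.softmax(logits)[class_idx])
    print(f"Example {i} prediction: {class_names[class_idx]} ({100 * p:.1f}%)")
```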
fastai/fastai
nbs/44_tutorial.tabular.ipynb
apache-2.0
[ "#|hide\n#|skip\n! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab", "Tabular training\n\nHow to use the tabular application in fastai\n\nTo illustrate the tabular application, we will use the example of the Adult dataset where we have to predict if a person is earning more or less than $50k per year using some general data.", "from fastai.tabular.all import *", "We can download a sample of this dataset with the usual untar_data command:", "path = untar_data(URLs.ADULT_SAMPLE)\npath.ls()", "Then we can have a look at how the data is structured:", "df = pd.read_csv(path/'adult.csv')\ndf.head()", "Some of the columns are continuous (like age) and we will treat them as float numbers we can feed our model directly. Others are categorical (like workclass or education) and we will convert them to a unique index that we will feed to embedding layers. We can specify our categorical and continuous column names, as well as the name of the dependent variable in TabularDataLoaders factory methods:", "dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names=\"salary\",\n cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],\n cont_names = ['age', 'fnlwgt', 'education-num'],\n procs = [Categorify, FillMissing, Normalize])", "The last part is the list of pre-processors we apply to our data:\n\nCategorify is going to take every categorical variable and make a map from integer to unique categories, then replace the values by the corresponding index.\nFillMissing will fill the missing values in the continuous variables by the median of existing values (you can choose a specific value if you prefer)\nNormalize will normalize the continuous variables (subtract the mean and divide by the std)\n\nTo further expose what's going on below the surface, let's rewrite this utilizing fastai's TabularPandas class. We will need to make one adjustment, which is defining how we want to split our data. By default the factory method above used a random 80/20 split, so we will do the same:", "splits = RandomSplitter(valid_pct=0.2)(range_of(df))\n\nto = TabularPandas(df, procs=[Categorify, FillMissing,Normalize],\n cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'],\n cont_names = ['age', 'fnlwgt', 'education-num'],\n y_names='salary',\n splits=splits)", "Once we build our TabularPandas object, our data is completely preprocessed as seen below:", "to.xs.iloc[:2]", "Now we can build our DataLoaders again:", "dls = to.dataloaders(bs=64)", "Later we will explore why using TabularPandas to preprocess will be valuable.\n\nThe show_batch method works like for every other application:", "dls.show_batch()", "We can define a model using the tabular_learner method. When we define our model, fastai will try to infer the loss function based on our y_names earlier. \nNote: Sometimes with tabular data, your y's may be encoded (such as 0 and 1). 
In such a case you should explicitly pass y_block = CategoryBlock in your constructor so fastai won't presume you are doing regression.", "learn = tabular_learner(dls, metrics=accuracy)", "And we can train that model with the fit_one_cycle method (the fine_tune method won't be useful here since we don't have a pretrained model).", "learn.fit_one_cycle(1)", "We can then have a look at some predictions:", "learn.show_results()", "Or use the predict method on a row:", "row, clas, probs = learn.predict(df.iloc[0])\n\nrow.show()\n\nclas, probs", "To get prediction on a new dataframe, you can use the test_dl method of the DataLoaders. That dataframe does not need to have the dependent variable in its column.", "test_df = df.copy()\ntest_df.drop(['salary'], axis=1, inplace=True)\ndl = learn.dls.test_dl(test_df)", "Then Learner.get_preds will give you the predictions:", "learn.get_preds(dl=dl)", "Note: Since machine learning models can't magically understand categories it was never trained on, the data should reflect this. If there are different missing values in your test data you should address this before training\n\nfastai with Other Libraries\nAs mentioned earlier, TabularPandas is a powerful and easy preprocessing tool for tabular data. Integration with libraries such as Random Forests and XGBoost requires only one extra step, that the .dataloaders call did for us. Let's look at our to again. Its values are stored in a DataFrame like object, where we can extract the cats, conts, xs and ys if we want to:", "to.xs[:3]", "Now that everything is encoded, you can then send this off to XGBoost or Random Forests by extracting the train and validation sets and their values:", "X_train, y_train = to.train.xs, to.train.ys.values.ravel()\nX_test, y_test = to.valid.xs, to.valid.ys.values.ravel()", "And now we can directly send this in!" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
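The fastai tabular notebook in the record above ends at "And now we can directly send this in!" without showing the hand-off itself. As a hedged illustration of that final step, the sketch below fits a random forest on the X_train/y_train arrays extracted from the TabularPandas object; scikit-learn, the hyperparameter values, and the accuracy check are assumptions added here, not part of the source notebook.

```python
# Hedged illustration of the final hand-off; assumes X_train, y_train, X_test,
# y_test from the cells above. scikit-learn and the settings are assumptions.
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
rf.fit(X_train, y_train)            # train on the TabularPandas-encoded features
print(rf.score(X_test, y_test))     # accuracy on the held-out validation split
```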
phoebe-project/phoebe2-docs
development/tutorials/distributions.ipynb
gpl-3.0
[ "Distributions\nDistributions are mostly useful when using samplers (which we'll see in the next tutorial on solving the inverse problem) - but can also be useful to propagate any set of distributions (whether those be uncertainties in the literature, etc) through the forward model.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.4,<2.5\"\n\nimport phoebe\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()\nb.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101))", "Adding Distributions\nDistributions can be attached to most any FloatParameter in the Bundle. To see a list of these available parameters, we can call b.get_adjustable_parameters. Note the exclude_constrained option which defaults to True: we can set distributions on constrained parameters (for priors, for example), but those will not be able to be sampled from in the forward model or while fitting. We'll come back to this in the next tutorial when looking at priors.", "b.get_adjustable_parameters()", "add_distribution is quite flexible and accepts several different syntaxes to add multiple distributions in one line. Here we'll just attach a distribution to a single parameter at a time. Just like when calling add_dataset or add_compute, add_distribution optionally takes a distribution tag -- and in the cases of distributions, we can attach distributions to multiple parameters with the same distribution tag.\nThe values of the DistributionParameters are distl distribution objects -- the most common of which are conveniently available at the top-level of PHOEBE:\n\nphoebe.gaussian\nphoebe.gaussian_around\nphoebe.uniform\nphoebe.uniform_around\n\nFor an overview of the different available types as they apply in PHOEBE, see Advanced: Distribution Types.\nNow let's attach a gaussian distribution on the temperature of the primary star.", "b.add_distribution(qualifier='teff', component='primary', \n value=phoebe.gaussian(6000,100), \n distribution='mydist')", "As you probably can expect by now, we also have methods to:\n\nget_distribution\nrename_distribution\nremove_distribution", "print(b.get_distribution(distribution='mydist'))", "Now let's add another distribution, with the same distribution tag, to the inclination of the binary.", "b.add_distribution(qualifier='incl', component='binary',\n value=phoebe.uniform(80,90),\n distribution='mydist')\n\nprint(b.get_distribution(distribution='mydist'))", "Accessing & Plotting Distributions\nThe parameters we've created and attached are DistributionParameters and live in context='distribution', with all other tags matching the parameter they're referencing. For example, let's filter and look at the distributions we've added.", "print(b.filter(context='distribution'))\n\nprint(b.get_parameter(context='distribution', qualifier='incl'))\n\nprint(b.get_parameter(context='distribution', qualifier='incl').tags)", "The \"value\" of the parameter, is the distl distributon object itself.", "b.get_value(context='distribution', qualifier='incl')", "And because of that, we can call any method on the distl object, including plotting the distribution.", "_ = b.get_value(context='distribution', qualifier='incl').plot(show=True)", "If we want to see how multiple individual distributions interact and are correlated with each other via a corner plot, we can access the combined \"distribution collection\" from any number of distribution tags. 
This may not be terribly useful now, but is very useful when trying to create multivariate priors.\n\nb.get_distribution_collection\nb.plot_distribution_collection", "_ = b.plot_distribution_collection(distribution='mydist', show=True)", "Sampling Distributions\nWe can also sample from these distributions - either manually by calling sample on the distl or in bulk by respecting any covariances in the \"distributon collection\" via:\n\nb.sample_distribution_collection", "b.sample_distribution_collection(distribution='mydist')", "By default this just returns a dictionary with the twigs and sampled values. But if we wanted, we could have these applied immediately to the face-values by passing set_value=True, in which case a ParameterSet of changed parameters (including those via constraints) is returned instead.", "changed_params = b.sample_distribution_collection(distribution='mydist', set_value=True)\n\nprint(changed_params)", "Propagating Distributions through Forward Model\nLastly, we can have PHOEBE automatically draw from a \"distribution collection\" multiple times and expose the distribution of the model itself.", "print(b.get_parameter(qualifier='sample_from', context='compute'))", "Once sample_from is set, sample_num and sample_mode are exposed as visible parameters", "b.set_value('sample_from', value='mydist')\n\nprint(b.filter(qualifier='sample*'))", "Now when we call run_compute, 10 different instances of the forward model will be computed from 10 random draws from the \"distribution collection\" but only the median and 1-sigma uncertainties will be exposed in the model.", "b.run_compute(irrad_method='none')\n\n_ = b.plot(show=True)", "Next\nNext up: let's learn about solving the inverse problem\nOr more about these advanced distributions topics:\n* Advanced: Distribution Types\n* Advanced: Distribution Propagation\n* Advanced: Latex Representation" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
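The PHOEBE notebook in the record above explains that setting sample_from makes run_compute draw repeatedly from the attached distributions and expose the median and 1-sigma spread of the resulting model. The sketch below is a library-agnostic illustration of that same Monte Carlo propagation idea using only NumPy; the toy forward model, the sample count, and all variable names are assumptions for illustration and are not PHOEBE code.

```python
# Conceptual sketch (NumPy only) of what sample_from does: draw many parameter
# sets from the attached distributions, run the forward model for each draw,
# and report the median and 1-sigma spread of the output. The "forward model"
# here is a toy stand-in, not a real light-curve computation.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10                               # mirrors the notebook's 10 draws

teff = rng.normal(6000, 100, n_samples)      # gaussian(6000, 100) on teff@primary
incl = rng.uniform(80, 90, n_samples)        # uniform(80, 90) on incl@binary

def toy_forward_model(teff, incl):
    # placeholder: any deterministic function of the sampled parameters
    return (teff / 6000.0) * np.sin(np.radians(incl))

fluxes = toy_forward_model(teff, incl)
lo, med, hi = np.percentile(fluxes, [15.87, 50.0, 84.13])
print(f"median = {med:.3f}, +1 sigma = {hi - med:.3f}, -1 sigma = {med - lo:.3f}")
```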
jamesfolberth/NGC_STEM_camp_AWS
notebooks/data8_notebooks/lab04/lab04.ipynb
bsd-3-clause
[ "Functions and Visualizations\nIn the past week, you've learned a lot about using tables to work with datasets. With your tools so far, you can:\n\nLoad a dataset from the web;\nWork with (extract, add, drop, relabel) columns from the dataset;\nFilter and sort it according to certain criteria;\nPerform arithmetic on columns of numbers;\nGroup rows by columns of categories, counting the number of rows in each category;\nMake a bar chart of the categories.\n\nThese tools are fairly powerful, but they're not quite enough for all the analysis and data we'll eventually be doing in this course. Today we'll learn a tool that dramatically expands this toolbox: the table method apply. We'll also see how to make histograms, which are like bar charts for numerical data.", "# Run this cell to set up the notebook, but please don't change it.\n\n# These lines import the Numpy and Datascience modules.\nimport numpy as np\nfrom datascience import *\n\n# These lines do some fancy plotting magic.\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\nimport warnings\nwarnings.simplefilter('ignore', FutureWarning)\n\n# These lines load the tests.\nfrom client.api.assignment import load_assignment \ntests = load_assignment('lab04.ok')", "1. Functions and CEO Incomes\nIn Which We Write Down a Recipe for Cake\nLet's start with a real data analysis task. We'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data were compiled for a Los Angeles Times analysis here, and ultimately came from filings mandated by the SEC from all publicly-traded companies. Two companies have two CEOs, so there are 102 CEOs in the dataset.\nWe've copied the data in raw form from the LA Times page into a file called raw_compensation.csv. (The page notes that all dollar amounts are in millions of dollars.)", "raw_compensation = Table.read_table('raw_compensation.csv')\nraw_compensation", "Question 1. When we first loaded this dataset, we tried to compute the average of the CEOs' pay like this:\nnp.average(raw_compensation.column(\"Total Pay\"))\n\nExplain why that didn't work. Hint: Try looking at some of the values in the \"Total Pay\" column.\nWrite your answer here, replacing this text.", "...", "Question 2. Extract the first value in the \"Total Pay\" column. It's Mark Hurd's pay in 2015, in millions of dollars. Call it mark_hurd_pay_string.", "mark_hurd_pay_string = ...\nmark_hurd_pay_string\n\n_ = tests.grade('q1_2')", "Question 3. Convert mark_hurd_pay_string to a number of dollars. The string method strip will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of \"100%\".strip(\"%\") is the string \"100\". You'll also need the function float, which converts a string that looks like a number to an actual number. Last, remember that the answer should be in dollars, not millions of dollars.", "mark_hurd_pay = ...\nmark_hurd_pay\n\n_ = tests.grade('q1_3')", "To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times.\nThis is where functions come in. First, we'll define our own function that packages together the code we wrote to convert a pay string to a pay number. This has its own benefits. Later in this lab we'll see a bigger payoff: we can call that function on every pay string in the dataset at once.\nQuestion 4. 
Below we've written code that defines a function that converts pay strings to pay numbers, just like your code above. But it has a small error, which you can correct without knowing what all the other stuff in the cell means. Correct the problem.", "def convert_pay_string_to_number(pay_string):\n \"\"\"Converts a pay string like '$100 ' (in millions) to a number of dollars.\"\"\"\n return float(pay_string.strip(\"$\"))\n\n_ = tests.grade('q1_4')", "Running that cell doesn't convert any particular pay string.\nRather, think of it as defining a recipe for converting a pay string to a number. Writing down a recipe for cake doesn't give you a cake. You have to gather the ingredients and get a chef to execute the instructions in the recipe to get a cake. Similarly, no pay string is converted to a number until we call our function on a particular pay string (which tells Python, our lightning-fast chef, to execute it).\nWe can call our function just like we call the built-in functions we've seen. (Almost all of those functions are defined in this way, in fact!) It takes one argument, a string, and it returns a number.", "convert_pay_string_to_number(mark_hurd_pay_string)\n\n# We can also compute Safra Catz's pay in the same way:\nconvert_pay_string_to_number(raw_compensation.where(\"Name\", are.equal_to(\"Safra A. Catz*\")).column(\"Total Pay\").item(0))", "What have we gained? Well, without the function, we'd have to copy that 10**6 * float(pay_string.strip(\"$\")) stuff each time we wanted to convert a pay string. Now we just call a function whose name says exactly what it's doing.\nWe'd still have to call the function 102 times to convert all the salaries, which we'll fix next.\nBut for now, let's write some more functions.\n2. Defining functions\nIn Which We Write a Lot of Recipes\nLet's write a very simple function that converts a proportion to a percentage by multiplying it by 100. For example, the value of to_percentage(.5) should be the number 50. (No percent sign.)\nA function definition has a few parts.\ndef\nIt always starts with def (short for define):\ndef\n\nName\nNext comes the name of the function. Let's call our function to_percentage.\ndef to_percentage\n\nSignature\nNext comes something called the signature of the function. This tells Python how many arguments your function should have, and what names you'll use to refer to those arguments in the function's code. to_percentage should take one argument, and we'll call that argument proportion since it should be a proportion.\ndef to_percentage(proportion)\n\nWe put a colon after the signature to tell Python it's over.\ndef to_percentage(proportion):\n\nDocumentation\nFunctions can do complicated things, so you should write an explanation of what your function does. For small functions, this is less important, but it's a good habit to learn from the start. Conventionally, Python functions are documented by writing a triple-quoted string:\ndef to_percentage(proportion):\n \"\"\"Converts a proportion to a percentage.\"\"\"\n\nBody\nNow we start writing code that runs when the function is called. This is called the body of the function. We can write anything we could write anywhere else. First let's give a name to the number we multiply a proportion by to get a percentage.\ndef to_percentage(proportion):\n \"\"\"Converts a proportion to a percentage.\"\"\"\n factor = 100\n\nreturn\nThe special instruction return in a function's body tells Python to make the value of the function call equal to whatever comes right after return. 
We want the value of to_percentage(.5) to be the proportion .5 times the factor 100, so we write:\ndef to_percentage(proportion):\n \"\"\"Converts a proportion to a percentage.\"\"\"\n factor = 100\n return proportion * factor\n\nQuestion 1. Define to_percentage in the cell below. Call your function to convert the proportion .2 to a percentage. Name that percentage twenty_percent.", "...\n ...\n ...\n ...\n\ntwenty_percent = ...\ntwenty_percent\n\n_ = tests.grade('q2_1')", "Like the built-in functions, you can use named values as arguments to your function.\nQuestion 2. Use to_percentage again to convert the proportion named a_proportion (defined below) to a percentage called a_percentage.\nNote: You don't need to define to_percentage again! Just like other named things, functions stick around after you define them.", "a_proportion = 2**(.5) / 2\na_percentage = ...\na_percentage\n\n_ = tests.grade('q2_2')", "Here's something important about functions: Each time a function is called, it creates its own \"space\" for names that's separate from the main space where you normally define names. (Exception: all the names from the main space get copied into it.) So even though you defined factor = 100 inside to_percentage above and then called to_percentage, you can't refer to factor anywhere except inside the body of to_percentage:", "# You should see an error when you run this. (If you don't, you might\n# have defined factor somewhere above.)\nfactor", "As we've seen with the built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too.\nQuestion 3. Define a function called disemvowel. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a copy of that string, but with all the characters that are vowels removed. (In English, the vowels are the characters \"a\", \"e\", \"i\", \"o\", and \"u\".)\nHint: To remove all the \"a\"s from a string, you can use that_string.replace(\"a\", \"\"). And you can call replace multiple times.", "def disemvowel(a_string):\n ...\n ...\n\n# An example call to your function. (It's often helpful to run\n# an example call from time to time while you're writing a function,\n# to see how it currently works.)\ndisemvowel(\"Can you read this without vowels?\")\n\n_ = tests.grade('q2_3')", "Calls on calls on calls\nJust as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written.\nThis is like a recipe for cake telling you to follow another recipe to make the frosting, and another to make the sprinkles. This makes the cake recipe shorter and clearer, and it avoids having a bunch of duplicated frosting recipes. It's a foundation of productive programming.\nFor example, suppose you want to count the number of characters that aren't vowels in a piece of text. One way to do that is this to remove all the vowels and count the size of the remaining string.\nQuestion 4. Write a function called num_non_vowels. It should take a string as its argument and return a number. 
The number should be the number of characters in the argument string that aren't vowels.\nHint: Recall that the function len takes a string as its argument and returns the number of characters in it.", "def num_non_vowels(a_string):\n \"\"\"The number of characters in a string, minus the vowels.\"\"\"\n ...\n\n_ = tests.grade('q2_4')", "Functions can also encapsulate code that does things rather than just computing values. For example, if you call print inside a function, and then call that function, something will get printed.\nThe movies_by_year dataset in the textbook has information about movie sales in recent years. Suppose you'd like to display the year with the 5th-highest total gross movie sales, printed in a human-readable way. You might do this:", "movies_by_year = Table.read_table(\"movies_by_year.csv\")\nrank = 5\nfifth_from_top_movie_year = movies_by_year.sort(\"Total Gross\", descending=True).column(\"Year\").item(rank-1)\nprint(\"Year number\", rank, \"for total gross movie sales was:\", fifth_from_top_movie_year)", "After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function.\nQuestion 5. Write a function called print_kth_top_movie_year. It should take a single argument, the rank of the year (like 2, 3, or 5 in the above examples). It should print out a message like the one above. It shouldn't have a return statement.", "def print_kth_top_movie_year(k):\n # Our solution used 2 lines.\n ...\n ...\n\n# Example calls to your function:\nprint_kth_top_movie_year(2)\nprint_kth_top_movie_year(3)\n\n_ = tests.grade('q2_5')", "3. applying functions\nIn Which Python Bakes 102 Cakes\nYou'll get more practice writing functions, but let's move on. \nDefining a function is a lot like giving a name to a value with =. In fact, a function is a value just like the number 1 or the text \"the\"!\nFor example, we can make a new name for the built-in function max if we want:", "our_name_for_max = max\nour_name_for_max(2, 6)", "The old name for max is still around:", "max(2, 6)", "Try just writing max or our_name_for_max (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.", "max", "Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. Here's a simple but not-so-practical example: we can make an array of functions.", "make_array(max, np.average, are.equal_to)", "Question 1. Make an array containing any 3 other functions you've seen. Call it some_functions.", "some_functions = ...\nsome_functions\n\n_ = tests.grade('q3_1')", "Working with functions as values can lead to some funny-looking code. For example, see if you can figure out why this works:", "make_array(max, np.average, are.equal_to).item(0)(4, -2, 7)", "Here's a simpler example that's actually useful: the table method apply.\napply calls a function many times, once on each element in a column of a table. It produces an array of the results. Here we use apply to convert every CEO's pay to a number, using the function you defined:", "raw_compensation.apply(convert_pay_string_to_number, \"Total Pay\")", "Here's an illustration of what that did:\n<img src=\"apply.png\"/>\nNote that we didn't write something like convert_pay_string_to_number() or convert_pay_string_to_number(\"Total Pay\"). 
The job of apply is to call the function we give it, so instead of calling convert_pay_string_to_number ourselves, we just write its name as an argument to apply.\nQuestion 2. Using apply, make a table that's a copy of raw_compensation with one more column called \"Total Pay (\\$)\". It should be the result of applying convert_pay_string_to_number to the \"Total Pay\" column, as we did above. Call the new table compensation.", "compensation = raw_compensation.with_column(\n \"Total Pay ($)\",\n ...\ncompensation\n\n_ = tests.grade('q3_2')", "Now that we have the pay in numbers, we can compute things about them.\nQuestion 3. Compute the average total pay of the CEOs in the dataset.", "average_total_pay = ...\naverage_total_pay\n\n_ = tests.grade('q3_3')", "Question 4. Companies pay executives in a variety of ways: directly in cash; by granting stock or other \"equity\" in the company; or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)", "cash_proportion = ...\ncash_proportion\n\n_ = tests.grade('q3_4')", "Check out the \"% Change\" column in compensation. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says \"(No previous year)\". The values in this column are strings, not numbers, so like the \"Total Pay\" column, it's not usable without a bit of extra work.\nGiven your current pay and the percentage increase from the previous year, you can compute your previous year's pay. For example, if your pay is \\$100 this year, and that's an increase of 50% from the previous year, then your previous year's pay was $\\frac{\\$100}{1 + \\frac{50}{100}}$, or around \\$66.66.\nQuestion 5. Create a new table called with_previous_compensation. It should be a copy of compensation, but with the \"(No previous year)\" CEOs filtered out, and with an extra column called \"2014 Total Pay ($)\". That column should have each CEO's pay in 2014.\nHint: This question takes several steps, but each one is still something you've seen before. Take it one step at a time, using as many lines as you need. You can print out your results after each step to make sure you're on the right track.\nHint 2: You'll need to define a function. You can do that just above your other code.", "# For reference, our solution involved more than just this one line of code\n...\n\nwith_previous_compensation = ...\nwith_previous_compensation\n\n_ = tests.grade('q3_5')", "Question 6. What was the average pay of these CEOs in 2014? Does it make sense to compare this number to the number you computed in question 3?", "average_pay_2014 = ...\naverage_pay_2014\n\n_ = tests.grade('q3_6')", "Question 7. A skeptical student asks:\n\n\"I already knew lots of ways to operate on each element of an array at once. For example, I can multiply each element of some_array by 100 by writing 100*some_array. What good is apply?\n\nHow would you answer? Discuss with a neighbor.\n4. Histograms\nEarlier, we computed the average pay among the CEOs in our 102-CEO dataset. The average doesn't tell us everything about the amounts CEOs are paid, though. Maybe just a few CEOs make the bulk of the money, even among these 102.\nWe can use a histogram to display more information about a set of numbers. The table method hist takes a single argument, the name of a column of numbers. It produces a histogram of the numbers in that column.\nQuestion 1. 
Make a histogram of the pay of the CEOs in compensation.", "...", "Question 2. Looking at the histogram, how many CEOs made more than \\$30 million? (Answer the question by filling in your answer manually. You'll have to do a bit of arithmetic; feel free to use Python as a calculator.)", "num_ceos_more_than_30_million = ...", "Question 3. Answer the same question with code. Hint: Use the table method where and the property num_rows.", "num_ceos_more_than_30_million_2 = ...\nnum_ceos_more_than_30_million_2\n\n_ = tests.grade('q4_3')", "Question 4. Do most CEOs make around the same amount, or are there some who make a lot more than the rest? Discuss with someone near you.\n5. Randomness\nData scientists also have to be able to understand randomness. For example, they have to be able to assign individuals to treatment and control groups at random, and then try to say whether any observed differences in the outcomes of the two groups are simply due to the random assignment or genuinely due to the treatment.\nTo start off, we will use Python to make choices at random. In numpy there is a sub-module called random that contains many functions that involve random selection. One of these functions is called choice. It picks one item at random from an array, and it is equally likely to pick any of the items. The function call is np.random.choice(array_name), where array_name is the name of the array from which to make the choice.\nThus the following code evaluates to treatment with chance 50%, and control with chance 50%. Run the next code block several times and see what happens.", "two_groups = make_array('treatment', 'control')\nnp.random.choice(two_groups)", "The big difference between the code above and all the other code we have run thus far is that the code above doesn't always return the same value. It can return either treatment or control, and we don't know ahead of time which one it will pick. We can repeat the process by providing a second argument, the number of times to repeat the process. In the choice function we just used, we can add an optional second argument that tells the function how many times to make a random selection. Try it below:", "np.random.choice(two_groups, 10)", "If we wanted to determine whether the random choice made by the function random is really fair, we could make a random selection a bunch of times and then count how often each selection shows up. In the next few code blocks, write some code that calls the choice function on the two_groups array one thousand times. Then, print out the percentage of occurrences for each of treatment and control. A useful function called Counter will be helpful; look at the code comments to see how it works!", "# replace ... with code that will run the 'choice' function 1000 times;\n# the resulting array of choices will then have the name 'exp_results'\nexp_results = ...\n\nfrom collections import Counter\n\nCounter(exp_results) \n# the output from Counter tells you how many times 'treatment' and 'control' appear in the array\n# produced by 'choice'; run this cell to see the output\n\n# use the info provided by 'Counter' to print the percentage of times 'treatment' and 'control'\n# were selected\nprint(...) # print percentage for 'treatment' here\nprint(...) # print percentage for 'control' here", "A fundamental question about random events is whether or not they occur. 
For example:\n\nDid an individual get assigned to the treatment group, or not?\nIs a gambler going to win money, or not?\nHas a poll made an accurate prediction, or not?\n\nOnce the event has occurred, you can answer \"yes\" or \"no\" to all these questions. In programming, it is conventional to do this by labeling statements as True or False. For example, if an individual did get assigned to the treatment group, then the statement, \"The individual was assigned to the treatment group\" would be True. If not, it would be False.\n6. Booleans and Comparison\nIn Python, Boolean values, named for the logician George Boole, represent truth and take only two possible values: True and False. Whether problems involve randomness or not, Boolean values most often arise from comparison operators. Python includes a variety of operators that compare values. For example, 3 is larger than 1 + 1. Run the following cell.", "3 > 1 + 1", "The value True indicates that the comparison is valid; Python has confirmed this simple fact about the relationship between 3 and 1+1. The full set of common comparison operators are listed below.\n<img src=\"comparison_operators.png\">\nNotice the two equal signs == in the comparison to determine equality. This is necessary because Python already uses = to mean assignment to a name, as we have seen. It can't use the same symbol for a different purpose. Thus if you want to check whether 5 is equal to the 10/2, then you have to be careful: 5 = 10/2 returns an error message because Python assumes you are trying to assign the value of the expression 10/2 to a name that is the numeral 5. Instead, you must use 5 == 10/2, which evaluates to True. Run these blocks of code to see for yourself.", "5 = 10/2\n\n5 == 10/2", "An expression can contain multiple comparisons, and they all must hold in order for the whole expression to be True. For example, we can express that 1+1 is between 1 and 3 using the following expression.", "1 < 1 + 1 < 3", "The average of two numbers is always between the smaller number and the larger number. We express this relationship for the numbers x and y below. Try different values of x and y to confirm this relationship.", "x = 12\ny = 5\nmin(x, y) <= (x+y)/2 <= max(x, y)", "7 Comparing Strings\nStrings can also be compared, and their order is alphabetical. A shorter string is less than a longer string that begins with the shorter string.", "'Dog' > 'Catastrophe' > 'Cat'", "Let's return to random selection. Recall the array two_groups which consists of just two elements, treatment and control. To see whether a randomly assigned individual went to the treatment group, you can use a comparison:", "np.random.choice(two_groups) == 'treatment'", "As before, the random choice will not always be the same, so the result of the comparison won't always be the same either. It will depend on whether treatment or control was chosen. With any cell that involves random selection, it is a good idea to run the cell several times to get a sense of the variability in the result.\n8. Conditional Statements\nIn many situations, actions and results depends on a specific set of conditions being satisfied. For example, individuals in randomized controlled trials receive the treatment if they have been assigned to the treatment group. A gambler makes money if she wins her bet.\nIn this section we will learn how to describe such situations using code. 
A conditional statement is a multi-line statement that allows Python to choose among different alternatives based on the truth value of an expression. While conditional statements can appear anywhere, they appear most often within the body of a function in order to express alternative behavior depending on argument values.\nA conditional statement always begins with an if header, which is a single line followed by an indented body. The body is only executed if the expression directly following if (called the if expression) evaluates to a True value. If the if expression evaluates to a False value, then the body of the if is skipped.\nLet us start defining a function that returns the sign of a number.", "def sign(x):\n\n if x > 0:\n return 'Positive'\nsign(3)", "This function returns the correct sign if the input is a positive number. But if the input is not a positive number, then the if expression evaluates to a False value, and so the return statement is skipped and the function call has no value. See what happens when you run the next block.", "sign(-3)", "So let us refine our function to return Negative if the input is a negative number. We can do this by adding an elif clause, where elif is Python's shorthand for the phrase \"else, if\".", "def sign(x):\n if x > 0:\n return 'Positive'\n\n elif x < 0:\n return 'Negative'", "Now sign returns the correct answer when the input is -3:", "sign(-3)", "What if the input is 0? To deal with this case, we can add another elif clause:", "def sign(x):\n\n if x > 0:\n return 'Positive'\n\n elif x < 0:\n return 'Negative'\n\n elif x == 0:\n return 'Neither positive nor negative'\nsign(0)", "Run the previous code block for different inputs to our sign() function to make sure it does what we want it to.\nEquivalently, we can replace the final elif clause by an else clause, whose body will be executed only if all the previous comparisons are False; that is, if the input value is equal to 0.", "def sign(x):\n\n if x > 0:\n return 'Positive'\n\n elif x < 0:\n return 'Negative'\n\n else:\n return 'Neither positive nor negative'\nsign(0)", "9. The General Form\nA conditional statement can also have multiple clauses with multiple bodies, and only one of those bodies can ever be executed. The general format of a multi-clause conditional statement appears below.\nif &lt;if expression&gt;:\n &lt;if body&gt;\nelif &lt;elif expression 0&gt;:\n &lt;elif body 0&gt;\nelif &lt;elif expression 1&gt;:\n &lt;elif body 1&gt;\n...\nelse:\n &lt;else body&gt;\nThere is always exactly one if clause, but there can be any number of elif clauses. Python will evaluate the if and elif expressions in the headers in order until one is found that is a True value, then execute the corresponding body. The else clause is optional. When an else header is provided, its else body is executed only if none of the header expressions of the previous clauses are true. The else clause must always come at the end (or not at all).\n10 Example: Pick a Card\nWe will now use conditional statements to define a function that we could use as part of a card game analysis application. Every time we run the function, we want it to print out a random card from a standard 52-card deck. Specifically, we should randomly choose a suit and a numeric value (1-13 for Ace-King) and print these values to the screen.
Finish writing the function in code block below:", "def draw_card():\n\n \"\"\"\n Print out a random suit and numeric value representing a card from a standard 52-card deck.\n \"\"\"\n \n # pick a random number to determine the suit\n suit_num = np.random.uniform(0,1) # this function returns a random decimal number\n # between 0 and 1\n \n ### TODO: write an 'if' statement that prints out 'heart' if 0 < suit_num < 0.25,\n ### 'spade' if 0.25 < suit_num < 0.5,\n ### 'club' if 0.5 < suit_num < 0.75,\n ### 'diamond' if 0.75 < suit_num < 1\n \n # pick a random number to determine the suit\n val_num = np.random.uniform(0,13)\n \n ### TODO: write an if statement so that if 2 < val_num <= 12, \n ### you print out the floor of val_num\n ### (you can use the floor() function)\n \n ### TODO: write an 'if' statement that prints out the value of the card for the\n ### non-numeric possibilities'A' for ace, 'J' for jack, 'Q' for 'queen', 'K'\n ### for king; \n \n return\n \n\n# test your function by running this block; do it multiple times and see what happens!\ndraw_card()", "11. Iteration\nIt is often the case in programming – especially when dealing with randomness – that we want to repeat a process multiple times. For example, to check whether np.random.choice does in fact pick at random, we might want to run the following cell many times to see if Heads occurs about 50% of the time.", "np.random.choice(make_array('Heads', 'Tails'))", "We might want to re-run code with slightly different input or other slightly different behavior. We could copy-paste the code multiple times, but that's tedious and prone to typos, and if we wanted to do it a thousand times or a million times, forget it.\nA more automated solution is to use a for statement to loop over the contents of a sequence. This is called iteration. A for statement begins with the word for, followed by a name we want to give each item in the sequence, followed by the word in, and ending with an expression that evaluates to a sequence. The indented body of the for statement is executed once for each item in that sequence.", "for i in np.arange(3):\n print(i)", "It is instructive to imagine code that exactly replicates a for statement without the for statement. (This is called unrolling the loop.) A for statement simple replicates the code inside it, but before each iteration, it assigns a new value from the given sequence to the name we chose. For example, here is an unrolled version of the loop above:", "i = np.arange(3).item(0)\nprint(i)\ni = np.arange(3).item(1)\nprint(i)\ni = np.arange(3).item(2)\nprint(i)", "Notice that the name i is arbitrary, just like any name we assign with =.\nHere we use a for statement in a more realistic way: we print 5 random choices from an array.", "coin = make_array('Heads', 'Tails')\n\nfor i in np.arange(5):\n print(np.random.choice(make_array('Heads', 'Tails')))", "In this case, we simply perform exactly the same (random) action several times, so the code inside our for statement does not actually refer to i.\n12. Augmenting Arrays\nWhile the for statement above does simulate the results of five tosses of a coin, the results are simply printed and aren't in a form that we can use for computation. Thus a typical use of a for statement is to create an array of results, by augmenting it each time.\nThe append method in numpy helps us do this. The call np.append(array_name, value) evaluates to a new array that is array_name augmented by value. 
When you use append, keep in mind that all the entries of an array must have the same type.", "pets = make_array('Cat', 'Dog')\nnp.append(pets, 'Another Pet')", "This keeps the array pets unchanged:", "pets", "But often while using for loops it will be convenient to mutate an array – that is, change it – when augmenting it. This is done by assigning the augmented array to the same name as the original.", "pets = np.append(pets, 'Another Pet')\npets", "Example: Counting the Number of Heads\nWe can now simulate five tosses of a coin and place the results into an array. We will start by creating an empty array and then appending the result of each toss.", "coin = make_array('Heads', 'Tails')\n\ntosses = make_array()\n\nfor i in np.arange(5):\n tosses = np.append(tosses, np.random.choice(coin))\n\ntosses", "Let us rewrite the cell with the for statement unrolled:", "coin = make_array('Heads', 'Tails')\n\ntosses = make_array()\n\ni = np.arange(5).item(0)\ntosses = np.append(tosses, np.random.choice(coin))\ni = np.arange(5).item(1)\ntosses = np.append(tosses, np.random.choice(coin))\ni = np.arange(5).item(2)\ntosses = np.append(tosses, np.random.choice(coin))\ni = np.arange(5).item(3)\ntosses = np.append(tosses, np.random.choice(coin))\ni = np.arange(5).item(4)\ntosses = np.append(tosses, np.random.choice(coin))\n\ntosses", "By capturing the results in an array we have given ourselves the ability to use array methods to do computations. For example, we can use np.count_nonzero to count the number of heads in the five tosses.", "np.count_nonzero(tosses == 'Heads')", "Iteration is a powerful technique. For example, by running exactly the same code for 1000 tosses instead of 5, we can count the number of heads in 1000 tosses.", "tosses = make_array()\n\nfor i in np.arange(1000):\n tosses = np.append(tosses, np.random.choice(coin))\n\nnp.count_nonzero(tosses == 'Heads')", "Example: Number of Heads in 100 Tosses\nIt is natural to expect that in 100 tosses of a coin, there will be 50 heads, give or take a few.\nBut how many is \"a few\"? What's the chance of getting exactly 50 heads? Questions like these matter in data science not only because they are about interesting aspects of randomness, but also because they can be used in analyzing experiments where assignments to treatment and control groups are decided by the toss of a coin.\nIn this example we will simulate 10,000 repetitions of the following experiment:\nToss a coin 100 times and record the number of heads.\nThe histogram of our results will give us some insight into how many heads are likely.\nAs a preliminary, note that np.random.choice takes an optional second argument that specifies the number of choices to make. By default, the choices are made with replacement. Here is a simulation of 10 tosses of a coin:", "np.random.choice(coin, 10)", "Now let's study 100 tosses. We will start by creating an empty array called heads. 
Then, in each of the 10,000 repetitions, we will toss a coin 100 times, count the number of heads, and append it to heads.", "N = 10000\n\nheads = make_array()\n\nfor i in np.arange(N):\n tosses = np.random.choice(coin, 100)\n heads = np.append(heads, np.count_nonzero(tosses == 'Heads'))\n\nheads", "Let us collect the results in a table and draw a histogram.", "results = Table().with_columns(\n 'Repetition', np.arange(1, N+1),\n 'Number of Heads', heads\n)\n\nresults", "Here is a histogram of the data, with bins of width 1 centered at each value of the number of heads.", "results.select('Number of Heads').hist(bins=np.arange(30.5, 69.6, 1))", "Not surprisingly, the histogram looks roughly symmetric around 50 heads. The height of the bar at 50 is about 8% per unit. Since each bin is 1 unit wide, this is the same as saying that about 8% of the repetitions produced exactly 50 heads. That's not a huge percent, but it's the largest compared to the percent at every other number of heads.\nThe histogram also shows that in almost all of the repetitions, the number of heads in 100 tosses was somewhere between 35 and 65. Indeed, the bulk of the repetitions produced numbers of heads in the range 45 to 55.\nWhile in theory it is possible that the number of heads can be anywhere between 0 and 100, the simulation shows that the range of probable values is much smaller.\nThis is an instance of a more general phenomenon about the variability in coin tossing, as we will see later in the course.\nExercise: Challenge!\nYour task is to write Python code which will find those numbers between 1500 and 2700 inclusive, which are divisible by both 5 and 7. Have your code store each such number in an array (call it whatever you want) and then print out the array at the end.\nThis will require you to use both for loops, if statements, and array manipulation discussed in this notebook. Good luck!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/thu/cmip6/models/ciesm/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: THU\nSource ID: CIESM\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'thu', 'ciesm', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. 
Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. 
Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. 
Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. 
New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
google/eng-edu
ml/cc/prework/ko/hello_world.ipynb
apache-2.0
[ "Copyright 2017 Google LLC.", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "# Prework: Hello World\nLearning objective: run a TensorFlow program in the browser.\nBelow is a 'Hello World' TensorFlow program.", "from __future__ import print_function\n\nimport tensorflow as tf\n\nc = tf.constant('Hello, world!')\n\nwith tf.Session() as sess:\n\n print(sess.run(c))", "## To run this program, do the following.\n\n\nClick anywhere in the code block, for example on the word import.\n\n\nClick the right-pointing triangle icon at the top left of the code block, or press ⌘/Ctrl-Enter.\nAfter a few seconds the program runs. If all goes well, the text Hello, world! appears directly below the code block.\nThis program consists of a single code block. Most exercises, however, consist of several code blocks, so you must run the code blocks individually, from top to bottom. \n\n\nRunning code blocks out of order usually causes errors.\n## Useful keyboard shortcuts\n\n⌘/Ctrl+m,b: creates an empty code cell below the currently selected cell.\n⌘/Ctrl+m,i: interrupts the running cell.\n⌘/Ctrl+m,h: displays a list of all keyboard shortcuts.\nTo view the documentation for a TensorFlow API method, place the cursor just to the right of its opening parenthesis and press Tab." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
hvillanua/deep-learning
batch-norm/Batch_Normalization_Exercises.ipynb
mit
[ "Batch Normalization – Practice\nBatch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.\nThis is not a good network for classfying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:\n1. Complicated enough that training would benefit from batch normalization.\n2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.\n3. Simple enough that the architecture would be easy to understand without additional resources.\nThis notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.\n\nBatch Normalization with tf.layers.batch_normalization\nBatch Normalization with tf.nn.batch_normalization\n\nThe following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.", "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True, reshape=False)", "Batch Normalization using tf.layers.batch_normalization<a id=\"example_1\"></a>\nThis version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization \nWe'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. 
We aren't bothering with pooling layers at all in this network.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer", "Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). \nThis cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. 
This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)\nUsing batch normalization, you'll be able to train this same network to over 90% in that same number of batches.\nAdd batch normalization\nWe've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. \nIf you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.\nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.", "def fully_connected(prev_layer, num_units, training):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=None)\n layer = tf.layers.batch_normalization(layer, training=training)\n layer = tf.nn.relu(layer)\n return layer", "TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.", "def conv_layer(prev_layer, layer_depth, training):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None)\n conv_layer = tf.layers.batch_normalization(conv_layer, training=training)\n conv_layer = tf.nn.relu(conv_layer)\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n training = tf.placeholder(tf.bool)\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i, training)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100, training)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, training: True})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n training: False})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, training: False})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels,\n training: False})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels,\n training: False})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],\n training: False})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. 
Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.\nBatch Normalization using tf.nn.batch_normalization<a id=\"example_2\"></a>\nMost of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.\nThis version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.\nOptional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. \nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.", "def fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.", "def conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n\n in_channels = prev_layer.get_shape().as_list()[3]\n out_channels = layer_depth*4\n \n weights = tf.Variable(\n tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))\n \n bias = tf.Variable(tf.zeros(out_channels))\n\n conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n conv_layer = tf.nn.relu(conv_layer)\n\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the Batch_Normalization_Solutions notebook to see what went wrong." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nwilbert/async-examples
notebook/aio34.ipynb
mit
[ "Simple I/O Loop Examples for Python 3.4\nCreate an event loop (which automatically becomes the default event loop in the context).", "import asyncio\nloop = asyncio.get_event_loop()", "Run a simple callback as soon as possible:", "def hello_world():\n print('Hello World!')\n loop.stop()\n\nloop.call_soon(hello_world)\nloop.run_forever()", "Coroutines can be scheduled in the eventloop (internally they are wrapped in a Task).\nThe decorator is not necessary, but has several advantages:\n* documents that this is a coroutine (instead of scanning the code for yield)\n* provides some debugging magic, to detect unscheduled coroutines", "@asyncio.coroutine\ndef hello_world():\n yield from asyncio.sleep(1.0)\n print('Hello World!')\n\nloop.run_until_complete(hello_world())", "Interconnect a Future and a coroutine, and wrap a courotine in a Task (a subclass of Future).", "@asyncio.coroutine\ndef slow_operation(future):\n yield from asyncio.sleep(1)\n future.set_result('Future is done!')\n\ndef got_result(future):\n print(future.result())\n loop.stop()\n \nfuture = asyncio.Future()\nfuture.add_done_callback(got_result)\n\n# wrap the coro in a special Future (a Task) and schedule it\nloop.create_task(slow_operation(future))\n# could use asyncio.async, but this is deprecated\n\nloop.run_forever()", "Futures implement the coroutine interface, so they can be yielded from (yield from actually calls __iter__ before the iteration).", "future = asyncio.Future()\nprint(hasattr(future, '__iter__'))", "Internally the asyncio event loop works with Handle instances, which wrap callbacks. The Task class is used to schedule / step through courotines (via its _step method and using call_soon)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.23/_downloads/b96d98f7c704193a3ede176aaf9433d2/85_brainstorm_phantom_ctf.ipynb
bsd-3-clause
[ "%matplotlib inline", "Brainstorm CTF phantom dataset tutorial\nHere we compute the evoked from raw for the Brainstorm CTF phantom\ntutorial dataset. For comparison, see :footcite:TadelEtAl2011 and:\nhttps://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf\n\nReferences\n.. footbibliography::", "# Authors: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_ctf\nfrom mne.io import read_raw_ctf\n\nprint(__doc__)", "The data were collected with a CTF system at 2400 Hz.", "data_path = bst_phantom_ctf.data_path(verbose=True)\n\n# Switch to these to use the higher-SNR data:\n# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')\n# dip_freq = 7.\nraw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')\ndip_freq = 23.\nerm_path = op.join(data_path, 'emptyroom_20150709_01.ds')\nraw = read_raw_ctf(raw_path, preload=True)", "The sinusoidal signal is generated on channel HDAC006, so we can use\nthat to obtain precise timing.", "sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]\nplt.figure()\nplt.plot(times[times < 1.], sinusoid.T[times < 1.])", "Let's create some events using this signal by thresholding the sinusoid.", "events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp\nevents = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T", "The CTF software compensation works reasonably well:", "raw.plot()", "But here we can get slightly better noise suppression, lower localization\nbias, and a better dipole goodness of fit with spatio-temporal (tSSS)\nMaxwell filtering:", "raw.apply_gradient_compensation(0) # must un-do software compensation first\nmf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)\nraw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)\nraw.plot()", "Our choice of tmin and tmax should capture exactly one cycle, so\nwe can make the unusual choice of baselining using the entire epoch\nwhen creating our evoked data. We also then crop to a single time point\n(@t=0) because this is a peak in our signal.", "tmin = -0.5 / dip_freq\ntmax = -tmin\nepochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,\n baseline=(None, None))\nevoked = epochs.average()\nevoked.plot(time_unit='s')\nevoked.crop(0., 0.)", "Let's use a sphere head geometry model &lt;eeg_sphere_model&gt;\nand let's see the coordinate alignment and the sphere location.", "sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)\n\nmne.viz.plot_alignment(raw.info, subject='sample',\n meg='helmet', bem=sphere, dig=True,\n surfaces=['brain'])\ndel raw, epochs", "To do a dipole fit, let's use the covariance provided by the empty room\nrecording.", "raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)\nraw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',\n **mf_kwargs)\ncov = mne.compute_raw_covariance(raw_erm)\ndel raw_erm\n\ndip, residual = fit_dipole(evoked, cov, sphere, verbose=True)", "Compare the actual position with the estimated one.", "expected_pos = np.array([18., 0., 49.])\ndiff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))\nprint('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))\nprint('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))\nprint('Difference: %0.1f mm' % diff)\nprint('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))\nprint('GOF: %0.1f %%' % dip.gof[0])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rvuduc/cse6040-ipynbs
20--sparse-plus-least-squares.ipynb
bsd-3-clause
[ "CSE 6040, Fall 2015 [20]: Sparsity in Numpy/SciPy (wrap-up) + Least Squares (new topic)\nToday's lab continues Lab 19, which introduced different ways of storing a sparse matrix. We used these as a vehicle for thinking a little bit more about the costs of code.\nBy the way, a partial solution set for Lab 19 is also available here.\nTo repeat, the recommended importing convention for Numpy is (execute this now):\nGetting started\nThe following code cells repeat some of the things we need from Lab 19 to finish the topic.", "import numpy as np\nimport pandas as pd\nfrom IPython.display import display\nimport cse6040utils as cse6040\n\nedges = pd.read_csv ('UserEdges-1M.csv')\n\nV_names = set (edges.Source)\nV_names.update (set (edges.Target))\n\nname2id = {v: k for (k, v) in enumerate (V_names)}\nA_numbered_keys = cse6040.sparse_matrix ()\nfor (k, row) in edges.iterrows ():\n i = name2id[row['Source']]\n j = name2id[row['Target']]\n A_numbered_keys[i][j] = 1.\n A_numbered_keys[j][i] = 1.\n\nnnz = len (edges) # Number of non-zeros (edges)\nn = len (V_names) # Matrix dimension\n\n# Build a dense vector\nx = cse6040.dense_vector (n)\n\n%timeit cse6040.spmv (n, A_numbered_keys, x)", "Review: COO format\nTake a look at the slides that we just started in the last class, which cover the basics of sparse matrix storage formats: link\nThese are available as native formats in SciPy. However, last time we went ahead and implemented COO using pure native Python objects. The goals of doing so were two-fold:\n\nLearn about an alternative to the \"nested dictionary\" approach to storing a sparse matrix.\nEstablish a baseline for comparison against a native Numpy/SciPy implementation.\n\nThe following code reminds you how to build a matrix in COO format and measures the performance of a native Python implementation of sparse matrix-vector multiply that operates on COO matrices.", "coo_rows = [name2id[e] for e in edges['Source']]\ncoo_cols = [name2id[e] for e in edges['Target']]\ncoo_vals = [1.] * len (coo_rows)\n\nassert len (coo_vals) == nnz # Sanity check against the raw data\n\ndef coo_spmv (n, R, C, V, x):\n \"\"\"\n Returns y = A*x, where A has 'n' rows and is stored in\n COO format by the array triples, (R, C, V).\n \"\"\"\n assert n > 0\n assert type (x) is list\n assert type (R) is list\n assert type (C) is list\n assert type (V) is list\n assert len (R) == len (C) == len (V)\n \n y = cse6040.dense_vector (n)\n \n for k in range (len (V)):\n i = R[k]\n j = C[k]\n aij = V[k]\n y[i] += aij * x[j]\n \n return y\n\n%timeit coo_spmv (n, coo_rows, coo_cols, coo_vals, x)", "What follows picks up from last time.\n\nCSR format\nThe compressed sparse row (CSR) format is an alternative to COO. The basic idea is to compress COO a little, by recognizing that there is redundancy in the row indices. To see that redundancy, the example in the slides sorts COO format by row.\nExercise. Now create a CSR data structure, again using native Python lists. Name your output CSR lists csr_ptrs, csr_inds, and csr_vals, starting from the COO representation.", "# Aside: What does this do? 
Try running it to see.\n\nz1 = [ 'q', 'v', 'c' ]\nz2 = [ 3 , 1 , 2 ]\nz3 = ['dog', 7 , 'man']\n\nZ = list (zip (z1, z2, z3))\nprint \"==> Before:\"\nprint Z\n\nZ.sort (key=lambda z: z[1])\nprint \"\\n==> After:\"\nprint Z\n\n# Note: Alternative to using a lambda (anonymous) function:\ndef get_second_coord (z):\n return z[1]\n\nZ.sort (key=get_second_coord)\n\nC = list (zip (coo_rows, coo_cols, coo_vals))\nC.sort (key=lambda t: t[0])\n\nassert len (C) == nnz\nassert n == (C[-1][0] + 1) # Why?\n\ncsr_inds = [j for (i, j, a_ij) in C]\ncsr_vals = [a_ij for (i, j, a_ij) in C]\n\ncsr_ptrs = [0] * (n+1)\ni = 0 # next row to update\nfor j in range (nnz):\n while C[j][0] >= i:\n csr_ptrs[i] = j\n i += 1\ncsr_ptrs[n] = nnz\n\n# Alternative solution: See https://piazza.com/class/idap9v1ktp94u9?cid=89\n\n# Some checks on your implementation: Look at the first 10 rows\nassert len (csr_ptrs) == (n+1)\n\nprint (\"==> csr_ptrs[:10]:\\n\")\nprint (csr_ptrs[:10])\n\nfirst_ten_tuples = [\"[%d] %s\" % (i, str (t))\n for (i, t) in enumerate (C[:csr_ptrs[10]])]\nprint (\"==> First ten tuples, C[:%d]:\" % csr_ptrs[10])\nprint (\"\\n\".join (first_ten_tuples))\n\nFIRST_TEN = [0, 1, 3, 60, 66, 72, 73, 74, 78, 82]\nassert all ([a==b for (a, b) in zip (csr_ptrs[0:10], FIRST_TEN)])\nprint (\"\\n==> Passed quick test\")", "Exercise. Now implement a CSR-based sparse matrix-vector multiply.", "def csr_spmv (n, ptr, ind, val, x):\n assert n > 0\n assert type (ptr) == list\n assert type (ind) == list\n assert type (val) == list\n assert type (x) == list\n assert len (ptr) >= (n+1) # Why?\n assert len (ind) >= ptr[n] # Why?\n assert len (val) >= ptr[n] # Why?\n \n y = cse6040.dense_vector (n)\n \n # @YOUSE: Insert your implementation here\n for i in range (n):\n for k in range (ptr[i], ptr[i+1]):\n y[i] += val[k] * x[ind[k]]\n \n return y\n\n%timeit csr_spmv (n, csr_ptrs, csr_inds, csr_vals, x)", "Sparse matrix storage using SciPy (Numpy)\nLet's implement and time some of these routines below.", "import scipy.sparse as sp", "Per the notes, here is how we can convert our COO representation from before into a SciPy implementation.", "A_coo = sp.coo_matrix ((coo_vals, (coo_rows, coo_cols)))", "Now measure the time to do a sparse matrix-vector multiply in the COO representation. How does it compare to the nested default dictionary approach?", "x_np = np.array (x)\n\n%timeit A_coo.dot (x_np)", "Exercise. Repeat the same experiment for SciPy-based CSR.", "# @YOUSE: Solution\nA_csr = A_coo.tocsr ()\n\n%timeit A_csr.dot (x_np)", "Linear regression and least squares\nYay! Time for a new topic: linear regression by the method of least squares.\nFor this topic, let's use the following dataset, which is a crimes dataset from 1960: http://cse6040.gatech.edu/fa15/uscrime.csv\nThis dataset comes from: http://www.statsci.org/data/general/uscrime.html", "df = pd.read_csv ('uscrime.csv', skiprows=1)\ndisplay (df.head ())", "Each row of this dataset is a US State. The columns are described here: http://www.statsci.org/data/general/uscrime.html", "import seaborn as sns\n%matplotlib inline\n\n# Look at a few relationships\nsns.pairplot (df[['Crime', 'Wealth', 'Ed', 'U1']])", "Suppose we wish to build a model of some quantity, called the response variable, given some set of predictors. 
In the US crimes dataset, the response might be the crime rate (Crime), which we wish to predict from the predictors of income (Wealth), education (Ed), and the unemployment rate of young males (U1).\nIn a linear regression model, we posit that the response is a linear function of the predictors. That is, suppose there are $m$ observations in total and consider the $i$-th observation. Let $b_i$ be the response of that observation. Then denote the $n$ predictors for observation $i$ as ${a_{i,1}, a_{i,2}, \\ldots, a_{i,n}}$. From this starting point, we might then posit a linear model of $b$ having the form,\n$b_i = x_0 + a_{i,1} x_1 + a_{i,2} x_2 + \\cdots + a_{i,n} x_n$,\nwhere we wish to compute the \"best\" set of coefficients, ${x_0, x_1, \\ldots, x_n}$. Note that this model includes a constant offset term, $x_0$. Since we want this model to hold for observations, then we effectively want to solve the system of equations,\n$\\left(\n \\begin{array}{c}\n b_1 \\\n b_2 \\\n \\vdots \\\n b_m\n \\end{array}\n \\right)\n$\n=\n$\\left(\n \\begin{array}{ccccc}\n 1. & a_{1,1} & a_{1,2} & \\ldots & a_{1,n} \\\n 1. & a_{2,1} & a_{2,2} & \\ldots & a_{2,n} \\\n & & \\cdots & & \\\n 1. & a_{m,1} & a_{m,2} & \\ldots & a_{m,n}\n \\end{array}\n \\right).\n$\nTypically, $m \\gg n$." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
balarsen/pymc_learning
Regression/Poisson Regression.ipynb
bsd-3-clause
[ "Looking into Poisson regression\nstarting from https://docs.pymc.io/notebooks/GLM-linear.html", "%matplotlib inline\n\nfrom pymc3 import *\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\n\nsns.set(font_scale=1.5)", "Start with regular to understand tools", "\nsize = 200\ntrue_intercept = 1\ntrue_slope = 2\n\nx = np.linspace(0, 1, size)\n# y = a + b*x\ntrue_regression_line = true_intercept + true_slope * x\n# add noise\ny = true_regression_line + np.random.normal(scale=.5, size=size)\n\ndata = dict(x=x, y=y)\n\n\ndf = pd.DataFrame(data)\ndf.head()\n\nfig = plt.figure(figsize=(7, 7))\nax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')\nax.plot(x, y, 'x', label='sampled data')\nax.plot(x, true_regression_line, label='true regression line', lw=2.)\nplt.legend(loc=0);\n\nsns.lmplot('x','y', data=df)\n\nwith Model() as model:\n # specify glm and pass in data. The resulting linear model, its likelihood and\n # and all its parameters are automatically added to our model.\n glm.GLM.from_formula('y ~ x', data)\n trace = sample(3000, cores=2) # draw 3000 posterior samples using NUTS sampling\n\nplt.figure(figsize=(7, 7))\ntraceplot(trace[100:])\nplt.tight_layout();\n\nplt.figure(figsize=(7, 7))\nplt.plot(x, y, 'x', label='data')\nplot_posterior_predictive_glm(trace, samples=100,\n label='posterior predictive regression lines')\nplt.plot(x, true_regression_line, label='true regression line', lw=3., c='y')\n\nplt.title('Posterior predictive regression lines')\nplt.legend(loc=0)\nplt.xlabel('x')\nplt.ylabel('y');", "and now look into this\nsomething is not quite right with my undrstanding", "df = pd.read_csv('http://stats.idre.ucla.edu/stat/data/poisson_sim.csv', index_col=0)\ndf['x'] = df['math']\ndf['y'] = df['num_awards']\ndf.head()\n\ndf.plot(kind='scatter', x='math', y='num_awards')\n\nwith Model() as model:\n # specify glm and pass in data. The resulting linear model, its likelihood and\n # and all its parameters are automatically added to our model.\n glm.GLM.from_formula('y ~ x', df)\n trace = sample(3000, cores=2) # draw 3000 posterior samples using NUTS sampling\n \n\nplt.figure(figsize=(7, 7))\ntraceplot(trace[100:])\nplt.tight_layout();\n\nfig, ax = plt.subplots(figsize=(7, 7))\ndf.plot(kind='scatter', x='x', y='y', ax=ax)\nplot_posterior_predictive_glm(trace, eval=np.linspace(0, 80, 100), samples=100)\n\n\n\nwith Model() as model:\n # specify glm and pass in data. The resulting linear model, its likelihood and\n # and all its parameters are automatically added to our model.\n glm.GLM.from_formula('y ~ x', df, family=glm.families.NegativeBinomial())\n step = NUTS()\n trace = sample(3000, cores=2, step=step) # draw 3000 posterior samples using NUTS sampling\n \n\nplt.figure(figsize=(7, 7))\ntraceplot(trace[100:])\nplt.tight_layout();\n\nautocorrplot(trace);\n\nfig, ax = plt.subplots(figsize=(7, 7))\ndf.plot(kind='scatter', x='x', y='y', ax=ax)\nplot_posterior_predictive_glm(trace, eval=np.linspace(0, 80, 100), samples=100)\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
science-of-imagination/nengo-buffer
Project/trained_mental_manipulations_ens_inhibition.ipynb
gpl-3.0
[ "Using the trained weights in an ensemble of neurons\n\nOn the function points branch of nengo\nOn the vision branch of nengo_extras", "import nengo\nimport numpy as np\nimport cPickle\nfrom nengo_extras.data import load_mnist\nfrom nengo_extras.vision import Gabor, Mask\nfrom matplotlib import pylab\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nfrom scipy import linalg", "Load the MNIST database", "# --- load the data\nimg_rows, img_cols = 28, 28\n\n(X_train, y_train), (X_test, y_test) = load_mnist()\n\nX_train = 2 * X_train - 1 # normalize to -1 to 1\nX_test = 2 * X_test - 1 # normalize to -1 to 1\n", "Each digit is represented by a one hot vector where the index of the 1 represents the number", "temp = np.diag([1]*10)\n\nZERO = temp[0]\nONE = temp[1]\nTWO = temp[2]\nTHREE= temp[3]\nFOUR = temp[4]\nFIVE = temp[5]\nSIX = temp[6]\nSEVEN =temp[7]\nEIGHT= temp[8]\nNINE = temp[9]\n\nlabels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]\n\ndim =28", "Load the saved weight matrices that were created by training the model", "label_weights = cPickle.load(open(\"label_weights5000.p\", \"rb\"))\nactivity_to_img_weights = cPickle.load(open(\"activity_to_img_weights5000.p\", \"rb\"))\nrotated_clockwise_after_encoder_weights = cPickle.load(open(\"rotated_after_encoder_weights_clockwise5000.p\", \"r\"))\nrotated_counter_after_encoder_weights = cPickle.load(open(\"rotated_after_encoder_weights5000.p\", \"r\"))\n\n#scale_up_after_encoder_weights = cPickle.load(open(\"scale_up_after_encoder_weights1000.p\",\"r\"))\n#scale_down_after_encoder_weights = cPickle.load(open(\"scale_down_after_encoder_weights1000.p\",\"r\"))\n#translate_up_after_encoder_weights = cPickle.load(open(\"translate_up_after_encoder_weights1000.p\",\"r\"))\n#translate_down_after_encoder_weights = cPickle.load(open(\"translate_down_after_encoder_weights1000.p\",\"r\"))\n#translate_left_after_encoder_weights = cPickle.load(open(\"translate_left_after_encoder_weights1000.p\",\"r\"))\n#translate_right_after_encoder_weights = cPickle.load(open(\"translate_right_after_encoder_weights1000.p\",\"r\"))\n\n\n\n\n#identity_after_encoder_weights = cPickle.load(open(\"identity_after_encoder_weights1000.p\",\"r\"))\n", "Functions to perform the inhibition of each ensemble", " #A value of zero gives no inhibition\n\ndef inhibit_rotate_clockwise(t):\n if t < 1:\n return dim**2\n else:\n return 0\n \ndef inhibit_rotate_counter(t):\n if t < 1:\n return 0\n else:\n return dim**2\n \ndef inhibit_identity(t):\n if t < 1:\n return dim**2\n else:\n return dim**2\n \ndef inhibit_scale_up(t):\n return dim**2\ndef inhibit_scale_down(t):\n return dim**2\n\ndef inhibit_translate_up(t):\n return dim**2\ndef inhibit_translate_down(t):\n return dim**2\ndef inhibit_translate_left(t):\n return dim**2\ndef inhibit_translate_right(t):\n return dim**2\n", "The network where the mental imagery and rotation occurs\n\nThe state, seed and ensemble parameters (including encoders) must all be the same for the saved weight matrices to work\nThe number of neurons (n_hid) must be the same as was used for training\nThe input must be shown for a short period of time to be able to view the rotation\nThe recurrent connection must be from the neurons because the weight matices were trained on the neuron activities", "def add_manipulation(main_ens,weights,inhibition_func):\n #create ensemble for manipulation\n ens_manipulation = nengo.Ensemble(n_hid,dim**2,seed=3,encoders=encoders, **ens_params)\n #create node for inhibition\n inhib_manipulation = 
nengo.Node(inhibition_func)\n #Connect the main ensemble to each manipulation ensemble and back with appropriate transformation\n nengo.Connection(main_ens.neurons, ens_manipulation.neurons, transform = weights.T, synapse=0.1)\n nengo.Connection(ens_manipulation.neurons, main_ens.neurons, transform = weights.T,synapse = 0.1)\n #connect inhibition\n nengo.Connection(inhib_manipulation, ens_manipulation.neurons, transform=[[-1]] * n_hid)\n \n #return ens_manipulation,inhib_manipulation\n\nrng = np.random.RandomState(9)\nn_hid = 1000\nmodel = nengo.Network(seed=3)\nwith model:\n #Stimulus only shows for brief period of time\n stim = nengo.Node(lambda t: ONE if t < 0.1 else 0) #nengo.processes.PresentInput(labels,1))#\n \n ens_params = dict(\n eval_points=X_train,\n neuron_type=nengo.LIF(), #Why not use LIF?\n intercepts=nengo.dists.Choice([-0.5]),\n max_rates=nengo.dists.Choice([100]),\n )\n \n \n # linear filter used for edge detection as encoders, more plausible for human visual system\n encoders = Gabor().generate(n_hid, (11, 11), rng=rng)\n encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)\n\n\n #Ensemble that represents the image with different transformations applied to it\n ens = nengo.Ensemble(n_hid, dim**2, seed=3, encoders=encoders, **ens_params)\n \n\n #Connect stimulus to ensemble, transform using learned weight matrices\n nengo.Connection(stim, ens, transform = np.dot(label_weights,activity_to_img_weights).T)\n \n #Recurrent connection on the neurons of the ensemble to perform the rotation\n #nengo.Connection(ens.neurons, ens.neurons, transform = rotated_counter_after_encoder_weights.T, synapse=0.1) \n\n \n #add_manipulation(ens,rotated_clockwise_after_encoder_weights, inhibit_rotate_clockwise)\n add_manipulation(ens,rotated_counter_after_encoder_weights, inhibit_rotate_counter)\n add_manipulation(ens,scale_up_after_encoder_weights, inhibit_scale_up)\n #add_manipulation(ens,scale_down_after_encoder_weights, inhibit_scale_down)\n #add_manipulation(ens,translate_up_after_encoder_weights, inhibit_translate_up)\n #add_manipulation(ens,translate_down_after_encoder_weights, inhibit_translate_down)\n #add_manipulation(ens,translate_left_after_encoder_weights, inhibit_translate_left)\n #add_manipulation(ens,translate_right_after_encoder_weights, inhibit_translate_right)\n \n \n\n \n #Collect output, use synapse for smoothing\n probe = nengo.Probe(ens.neurons,synapse=0.1)\n \n\nsim = nengo.Simulator(model)\n\nsim.run(5)", "The following is not part of the brain model, it is used to view the output for the ensemble\nSince it's probing the neurons themselves, the output must be transformed from neuron activity to visual image", "'''Animation for Probe output'''\nfig = plt.figure()\n\noutput_acts = []\nfor act in sim.data[probe]:\n output_acts.append(np.dot(act,activity_to_img_weights))\n\ndef updatefig(i):\n im = pylab.imshow(np.reshape(output_acts[i],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)\n \n return im,\n\nani = animation.FuncAnimation(fig, updatefig, interval=100, blit=True)\nplt.show()\n\nprint(len(sim.data[probe]))\n\nplt.subplot(161)\nplt.title(\"100\")\npylab.imshow(np.reshape(output_acts[100],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(162)\nplt.title(\"500\")\npylab.imshow(np.reshape(output_acts[500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(163)\nplt.title(\"1000\")\npylab.imshow(np.reshape(output_acts[1000],(dim, dim), 'F').T, 
cmap=plt.get_cmap('Greys_r'))\nplt.subplot(164)\nplt.title(\"1500\")\npylab.imshow(np.reshape(output_acts[1500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(165)\nplt.title(\"2000\")\npylab.imshow(np.reshape(output_acts[2000],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(166)\nplt.title(\"2500\")\npylab.imshow(np.reshape(output_acts[2500],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()", "Pickle the probe's output if it takes a long time to run", "#The filename includes the number of neurons and which digit is being rotated\nfilename = \"mental_rotation_output_ONE_\" + str(n_hid) + \".p\"\ncPickle.dump(sim.data[probe], open( filename , \"wb\" ) )", "Testing", "testing = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))\nplt.subplot(121)\npylab.imshow(np.reshape(testing,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n#Get image\ntesting = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))\n\n\n#Get activity of image\n_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=testing)\n\n#Get rotated encoder outputs\ntesting_rotate = np.dot(testing_act,rotated_after_encoder_weights)\n\n#Get activities\ntesting_rotate = ens.neuron_type.rates(testing_rotate, sim.data[ens].gain, sim.data[ens].bias)\n\nfor i in range(5):\n testing_rotate = np.dot(testing_rotate,rotated_after_encoder_weights)\n testing_rotate = ens.neuron_type.rates(testing_rotate, sim.data[ens].gain, sim.data[ens].bias)\n\n\n#testing_rotate = np.dot(testing_rotate,rotation_weights)\n\ntesting_rotate = np.dot(testing_rotate,activity_to_img_weights)\n\nplt.subplot(122)\npylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()\n\n\nplt.subplot(121)\npylab.imshow(np.reshape(X_train[0],(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\n#Get activity of image\n_, testing_act = nengo.utils.ensemble.tuning_curves(ens, sim, inputs=X_train[0])\n\ntesting_rotate = np.dot(testing_act,activity_to_img_weights)\n\nplt.subplot(122)\npylab.imshow(np.reshape(testing_rotate,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()", "Just for fun", "letterO = np.dot(ZERO,np.dot(label_weights,activity_to_img_weights))\nplt.subplot(161)\npylab.imshow(np.reshape(letterO,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterL = np.dot(SEVEN,label_weights)\nfor _ in range(30):\n letterL = np.dot(letterL,rotation_weights)\nletterL = np.dot(letterL,activity_to_img_weights)\nplt.subplot(162)\npylab.imshow(np.reshape(letterL,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterI = np.dot(ONE,np.dot(label_weights,activity_to_img_weights))\nplt.subplot(163)\npylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\nplt.subplot(165)\npylab.imshow(np.reshape(letterI,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterV = np.dot(SEVEN,label_weights)\nfor _ in range(40):\n letterV = np.dot(letterV,rotation_weights)\nletterV = np.dot(letterV,activity_to_img_weights)\nplt.subplot(164)\npylab.imshow(np.reshape(letterV,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nletterA = np.dot(SEVEN,label_weights)\nfor _ in range(10):\n letterA = np.dot(letterA,rotation_weights)\nletterA = np.dot(letterA,activity_to_img_weights)\nplt.subplot(166)\npylab.imshow(np.reshape(letterA,(dim, dim), 'F').T, cmap=plt.get_cmap('Greys_r'))\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nicoa/showcase
pydatabln_2018_schedule2cal/pydatabln2018_filter_and_overview.ipynb
mit
[ "Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Query-Data\" data-toc-modified-id=\"Query-Data-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Query Data</a></div><div class=\"lev1 toc-item\"><a href=\"#visualize-some-stuff\" data-toc-modified-id=\"visualize-some-stuff-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>visualize some stuff</a></div>", "import requests as rq\nimport pandas as pd\nimport matplotlib.pyplot as mpl\nimport bs4\nimport os\n\nfrom tqdm import tqdm_notebook\n\nfrom datetime import time\n\n%matplotlib inline", "Query Data\nGrab schedule page:", "base_url = \"https://pydata.org\"\nr = rq.get(base_url + \"/berlin2018/schedule/\")\nbs = bs4.BeautifulSoup(r.text, \"html.parser\")", "Let's query every talk description:", "data = {}\nfor ahref in tqdm_notebook(bs.find_all(\"a\")):\n if 'schedule/presentation' in ahref.get(\"href\"):\n url = ahref.get(\"href\")\n else:\n continue\n data[url] = {}\n resp = bs4.BeautifulSoup(rq.get(base_url + url).text, \"html.parser\")\n title = resp.find(\"h2\").text\n resp = resp.find_all(attrs={'class':\"container\"})[1]\n\n when, who = resp.find_all(\"h4\")\n date_info = when.string.split(\"\\n\")[1:]\n day_info = date_info[0].strip()\n time_inf = date_info[1].strip()\n room_inf = date_info[3].strip()[3:]\n speaker = who.find(\"a\").text\n level = resp.find(\"dd\").text\n abstract = resp.find(attrs={'class':'abstract'}).text\n description = resp.find(attrs={'class':'description'}).text\n data[url] = {\n 'day_info': day_info, \n 'title': title,\n 'time_inf': time_inf, \n 'room_inf': room_inf, \n 'speaker': speaker, \n 'level': level, \n 'abstract': abstract, \n 'description': description\n }", "Okay, make a dataframe and add some helpful columns:", "df = pd.DataFrame.from_dict(data, orient='index')\ndf.reset_index(drop=True, inplace=True)\n\n# Tutorials on Friday\ndf.loc[df.day_info=='Friday', 'tutorial'] = True\ndf['tutorial'].fillna(False, inplace=True)\n\n# time handling\ndf['time_from'], df['time_to'] = zip(*df.time_inf.str.split(u'\\u2013'))\ndf.time_from = pd.to_datetime(df.time_from).dt.time\ndf.time_to = pd.to_datetime(df.time_to).dt.time\ndel df['time_inf']\n\ndf.to_json('./data.json')\n\ndf.head(3)\n\n# Example: Let's query all non-novice talks on sunday, starting at 4 pm\ntmp = df.query(\"(level!='Novice') & (day_info=='Sunday')\")\ntmp[tmp.time_from >= time(16)]", "visualize some stuff", "plt.style.use('seaborn-darkgrid')#'seaborn-darkgrid')\n\nplt.rcParams['savefig.dpi'] = 200\nplt.rcParams['figure.dpi'] = 120\n\nplt.rcParams['figure.autolayout'] = False\nplt.rcParams['figure.figsize'] = 10, 5\nplt.rcParams['axes.labelsize'] = 17\nplt.rcParams['axes.titlesize'] = 20\nplt.rcParams['font.size'] = 16\nplt.rcParams['lines.linewidth'] = 2.0\nplt.rcParams['lines.markersize'] = 8\nplt.rcParams['legend.fontsize'] = 11\n\nplt.rcParams['font.family'] = \"serif\"\nplt.rcParams['font.serif'] = \"cm\"\nplt.rcParams['text.latex.preamble'] = \"\\\\usepackage{subdepth}, \\\\usepackage{type1cm}\"\nplt.rcParams['text.usetex'] = True\n\nax = df.level.value_counts().plot.bar(rot=0)\nax.set_ylabel(\"number of talks\")\nax.set_title(\"levels of the talks where:\")\nplt.show()\n\nax = df.rename(columns={'day_info': 'dayinfo'}).groupby(\"dayinfo\")['level'].value_counts(normalize=True).round(2).unstack(level=0).plot.bar(rot=0)\nax.set_xlabel('')\nax.set_title('So the last day is more kind of \"fade-out\"?')\nplt.show()\n\nax = 
df.groupby(\"tutorial\")['level'].value_counts(normalize=True).round(2).unstack(level=0).T.plot.bar(rot=0)\nax.set_title('the percentage of experienced slots is higher for tutorials!\\n\\\\small{So come on fridays for experienced level ;-)}')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CopernicusMarineInsitu/INSTACTraining
PythonNotebooks/PlatformPlots/plot_CMEMS_vessel.ipynb
mit
[ "The objective of this notebook is to show how to read and plot the data obtained with a vessel.", "%matplotlib inline\nimport netCDF4\nfrom netCDF4 import num2date\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\nfrom mpl_toolkits.basemap import Basemap", "Data reading\nThe data file is located in the datafiles directory.", "datadir = './datafiles/'\ndatafile = 'GL_PR_ML_EXRE0065_2010.nc'", "We extract only the spatial coordinates:", "with netCDF4.Dataset(datadir + datafile) as nc:\n lon = nc.variables['LONGITUDE'][:]\n lat = nc.variables['LATITUDE'][:]\nprint lon.shape", "Location of the profiles\nIn this first plot we want to see the location of the profiles obtained with the profiler.<br/>\nWe create a Mercator projection using the coordinates we just read.", "m = Basemap(projection='merc', llcrnrlat=lat.min()-0.5, urcrnrlat=lat.max()+0.5,\n llcrnrlon=lon.min()-0.5, urcrnrlon=lon.max()+0.5, lat_ts=0.5*(lon.min()+lon.max()), resolution='h')", "Once we have the projection, the coordinates have to be changed into this projection:", "lon2, lat2 = m(lon, lat)", "The locations of the vessel stations are added on a map with the coastline and the land mask.", "mpl.rcParams.update({'font.size': 16})\nfig = plt.figure(figsize=(8,8))\nm.plot(lon2, lat2, 'ko', ms=2)\n\nm.drawcoastlines(linewidth=0.5, zorder=3)\nm.fillcontinents(zorder=2)\n\nm.drawparallels(np.arange(-90.,91.,0.5), labels=[1,0,0,0], zorder=1)\nm.drawmeridians(np.arange(-180.,181.,0.5), labels=[0,0,1,0], zorder=1)\nplt.show()", "Profile plot\nWe read the temperature, salinity and depth variables.", "with netCDF4.Dataset(datadir + datafile) as nc:\n depth = nc.variables['DEPH'][:]\n temperature = nc.variables['TEMP'][:]\n temperature_name = nc.variables['TEMP'].long_name\n temperature_units = nc.variables['TEMP'].units\n salinity = nc.variables['PSAL'][:]\n salinity_name = nc.variables['PSAL'].long_name\n salinity_units = nc.variables['PSAL'].units\n time = nc.variables['TIME'][:]\n time_units = nc.variables['TIME'].units\n\nprint depth.shape\nprint temperature.shape\n\nnprofiles, ndepths = depth.shape\n\nfig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nfor nn in range(0, nprofiles):\n plt.plot(temperature[nn,:], depth[nn,:], 'k-', linewidth=0.5)\nplt.gca().invert_yaxis()\nplt.show()", "We observe different types of profiles. As the covered region is rather small, this may be because the measurements were done at different time of the year. 
The time variable will tell us.<br/>\nWe create a plot of time versus temperature (first measurement of each profile).", "dates = num2date(time, units=time_units)\nfig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nplt.plot(dates, temperature[:,0], 'ko')\nfig.autofmt_xdate()\nplt.ylabel(\"%s (%s)\" % (temperature_name, temperature_units))\nplt.show()", "The graph confirms that we have data obtained during different periods:\n* July-August 2010,\n* December 2010.\nT-S diagram\nThe x and y labels for the plot are directly taken from the netCDF variable attributes.", "fig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nplt.plot(temperature, salinity, 'ko', markersize=2)\nplt.xlabel(\"%s (%s)\" % (temperature_name, temperature_units))\nplt.ylabel(\"%s (%s)\" % (salinity_name, salinity_units))\nplt.ylim(32, 36)\nplt.grid()\nplt.show()", "3-D plot\nWe illustrate with a simple example how to have a 3-dimensional representation of the profiles.<br/>\nFirst we import the required modules.", "from mpl_toolkits.mplot3d import Axes3D", "Then the plot is easily obtained by specifying the coordinates (x, y, z) and the variables (salinity) to be plotted.", "cmap = plt.cm.Spectral_r\nnorm = colors.Normalize(vmin=32, vmax=36)\n\nfig = plt.figure(figsize=(8,8))\nax = fig.add_subplot(111, projection='3d')\nfor ntime in range(0, nprofiles):\n plt.scatter(lon[ntime]*np.ones(ndepths), lat[ntime]*np.ones(ndepths), zs=-depth[ntime,:], zdir='z', \n s=20, c=salinity[ntime,:], edgecolor='None', cmap=cmap, norm=norm)\nplt.colorbar(cmap=cmap, norm=norm)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
konstantinstadler/pymrio
doc/source/notebooks/advanced_group_stressors.ipynb
gpl-3.0
[ "Advanced functionality - pandas groupby with pymrio satellite accounts\nThis notebook examplifies how to directly apply Pandas core functions (in this case groupby and aggregation) to the pymrio system.\nWIOD material extension aggregation - stressor w/o compartment info\nHere we use the WIOD MRIO system (see the notebook \"Automatic downloading of MRIO databases\" for how to automatically retrieve this database) and will aggregate the WIOD material stressor for used and unused materials. We assume, that the WIOD system is available at", "wiod_folder = '/tmp/mrios/WIOD2013'", "To get started we import pymrio", "import pymrio", "For the example here, we use the data from 2009:", "wiod09 = pymrio.parse_wiod(path=wiod_folder, year=2009)", "WIOD includes multiple material accounts, specified for the \"Used\" and \"Unused\" category, as well as information on the total. We will use the latter to confirm our calculations:", "wiod09.mat.F", "To aggregate these with the Pandas groupby function, we need to specify the groups which should be grouped by Pandas.\nPymrio contains a helper function which builds such a matching dictionary.\nThe matching can also include regular expressions to simplify the build:", "groups = wiod09.mat.get_index(as_dict=True, grouping_pattern = {'.*_Used': 'Material Used', \n '.*_Unused': 'Material Unused'})\ngroups", "Note, that the grouping contains the rows which do not match any of the specified groups. \nThis allows to easily aggregates only parts of a specific stressor set. To actually omit these groups\ninclude them in the matching pattern and provide None as value.\nTo have the aggregated data alongside the original data, we first copy the detailed satellite account:", "wiod09.mat_agg = wiod09.mat.copy(new_name='Aggregated matrial accounts')", "Then, we use the pymrio get_DataFrame iterator together with the pandas groupby and sum functions to aggregate the stressors.\nFor the dataframe containing the unit information, we pass a custom function which concatenate non-unique unit strings.", "for df_name, df in zip(wiod09.mat_agg.get_DataFrame(data=False, with_unit=True, with_population=False),\n wiod09.mat_agg.get_DataFrame(data=True, with_unit=True, with_population=False)):\n if df_name == 'unit':\n wiod09.mat_agg.__dict__[df_name] = df.groupby(groups).apply(lambda x: ' & '.join(x.unit.unique()))\n else:\n wiod09.mat_agg.__dict__[df_name] = df.groupby(groups).sum()\n\nwiod09.mat_agg.F\n\nwiod09.mat_agg.unit", "Use with stressors including compartment information:\nThe same regular expression grouping can be used to aggregate stressor data which is given per compartment.\nTo do so, the matching dict needs to consist of tuples corresponding to a valid index value in the DataFrames. \nEach position in the tuple is interprested as a regular expression. 
\nUsing the get_index method gives a good indication how a valid grouping dict should look like:", "tt = pymrio.load_test()\ntt.emissions.get_index(as_dict=True)", "With that information, we can now build our own grouping dict, e.g.:", "agg_groups = {('emis.*', '.*'): 'all emissions'}\n\ngroup_dict = tt.emissions.get_index(as_dict=True,\n grouping_pattern=agg_groups)\ngroup_dict", "Which can then be used to aggregate the satellite account:", "for df_name, df in zip(tt.emissions.get_DataFrame(data=False, with_unit=True, with_population=False),\n tt.emissions.get_DataFrame(data=True, with_unit=True, with_population=False)):\n if df_name == 'unit':\n tt.emissions.__dict__[df_name] = df.groupby(group_dict).apply(lambda x: ' & '.join(x.unit.unique()))\n else:\n tt.emissions.__dict__[df_name] = df.groupby(group_dict).sum()", "In this case we loose the information on the compartment. To reset the index do:", "import pandas as pd\ntt.emissions.set_index(pd.Index(tt.emissions.get_index(), name='stressor'))\n\ntt.emissions.F" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mri/cmip6/models/sandbox-2/atmos.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: SANDBOX-2\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'sandbox-2', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
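\n# NOTE: optional free-text property (cardinality 0.1); it may be left unanswered. \n# EXAMPLE (hypothetical wording only): \n#     DOC.set_value(\"name of the dynamical core as cited in the model documentation\") 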
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
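\n# NOTE: optional free-text property (cardinality 0.1). \n# EXAMPLE (hypothetical wording only): \n#     DOC.set_value(\"commonly used name of the horizontal diffusion scheme\") 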
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
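\n# EXAMPLE (hypothetical illustration only): cardinality 1.N allows several of the valid choices below; \n# the assumption here is that each selected choice gets its own DOC.set_value call, e.g. \n#     DOC.set_value(\"Mie theory\") 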
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
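\n# EXAMPLE (hypothetical illustration only): one or more of the valid choices below may apply (cardinality 1.N), e.g. \n#     DOC.set_value(\"effective crystal radius\") 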
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
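\n# EXAMPLE (hypothetical illustration only, cardinality 1.N - one or more of the valid choices below): \n#     DOC.set_value(\"TKE prognostic\") 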
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
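\n# NOTE: required free-text property (cardinality 1.1) - a short prose overview is expected. \n# EXAMPLE (hypothetical wording only): \n#     DOC.set_value(\"Brief description of the large scale cloud microphysics and precipitation treatment\") 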
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
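\n# EXAMPLE (hypothetical illustration only): optional multi-valued property (cardinality 0.N); \n# values are drawn from the list of choices below, e.g. \n#     DOC.set_value(\"atmosphere_radiation\") 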
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
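\n# NOTE: optional free-text property (cardinality 0.1). \n# EXAMPLE (hypothetical wording only): \n#     DOC.set_value(\"description of the cloud inhomogeneity treatment\") 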
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
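\n# EXAMPLE (hypothetical illustration only, cardinality 1.N - one or more of the choices below): \n#     DOC.set_value(\"coupled with deep\") 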
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
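\n# EXAMPLE (hypothetical value, for illustration only): INTEGER properties take a plain number rather than a quoted string, e.g. \n#     DOC.set_value(40) 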
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propogation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
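The cells above are templates to be completed by the model documenters. As a purely hypothetical illustration of how one of the remaining TODO cells might look once filled in, the sketch below reuses the DOC.set_id / DOC.set_value calls and one of the property ids and its valid choices from the cells above; the selected values are placeholders rather than a real model description, and it assumes (as the PROPERTY VALUE(S) template suggests) that a 1.N property is recorded by calling DOC.set_value once per chosen choice.

# Hypothetical filled-in example (placeholder values only, not a real model)
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
DOC.set_value("linear mountain waves")
DOC.set_value("low level flow blocking")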
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
YaleDHLab/lab-workshops
machine-learning/machine-learning.ipynb
mit
[ "Getting Started\nThis file is a \"Jupyter Notebook\". Jupyter Notebooks are files that allow one to write and evaluate Python (and R, and Julia...) alongside documentation, which makes them great for exploratory code investigations.\nTo run this notebook locally on your machine, we recommend that you follow these steps.\nInstalling Anaconda (Optional)\nTo follow along, the first step will be to install Anaconda, a distribution of the Python programming language that helps make managing Python easier.\nOnce Anaconda is installed, open a new terminal window. (If you are on Windows, you should open an Anaconda terminal by going to Programs -> Anaconda3 (64-bit) -> Anaconda Prompt). Then you can create and activate a virtual environment:\n```\ncreate a virtual environment with Python 3.6 named \"3.6\"\nconda create python=3.6 --name=3.6\nactivate the virtual environment\nsource activate 3.6\n```\nRunning the Workshop Notebook\nYou should now see (3.6) prepended on your path. Once you see that prefix, you can start the notebook with the following commands:\ngit clone https://github.com/YaleDHLab/lab-workshops\ncd lab-workshops/machine-learning\npip install -r requirements.txt\njupyter notebook machine-learning.ipynb\nOnce the notebook is open, you can evaluate a code cell by clicking on that cell, then clicking Cell -&gt; Run Cells. Alternatively, after clicking on a cell you can hold Control and press Enter to execute the code in the cell. To run all the cells in a notebook (which I recommend you do for this notebook), you can click Cell -&gt; Run All.\nIf you want to add a new cell, click the \"<b>+</b>\" button near the top of the page (below and between File and Edit). In that new cell, you can type Python code, like import this, then run the cell and immediately see the output. I encourage you to add and modify cells as we move through the discussion of machine learning below, as interacting with the code is one of the best ways to grow comfortable with the techniques discussed below.\nIntroduction to Machine Learning\nAndrew Ng, a prominent machine learning expert, has defined machine learning as \"the science of getting computers to act without being explicitly programmed.\" This workshop is meant to give a quick introduction to some of the techniques one can use to build algorithms that meet this criterion. Specifically, we will discuss the following sub-fields within machine learning:\n * Classification (for using labelled data to infer labels for unlabelled data)\n * Anomoly Detection (for finding outliers in a dataset)\n * Dimensionality Reduction (for analyzing and visualizing high-dimensional datasets)\n * Clustering (for grouping similar objects in a high dimensional space)\nLet's dive in!\nClassification\nThe goal of a classification task is to predict whether a given observation in a dataset possesses some particular property or attribute. To make these predictions, we measure the attributes of several labelled data observations, then compare new unlabelled observations to those measurements.\nLet's suppose we have a collection of <b>100 labelled books</b>&mdash;50 are science fiction books, and the other 50 are romance novels. Suppose as well we get a new delivery of <b>1000 unlabelled books</b>. 
A classification algorithm can help us use the labelled books to predict which of the new books are works of science fiction or romance.\nTo prepare to classify the new books, let's suppose we count the number of times the words \"laser\" and \"love\" occur in each of our 100 labelled books. We tally up the count of each word for each book, producing a spreadsheet with 100 rows and 2 columns.\nLet's replicate this scenario below with some fake data:\n<code><b>X</b></code> will represent our spreadsheet. Each row represents the counts of the words \"laser\" and \"love\" in a single book.\n<code><b>labels</b></code> contains one value for each row in <code>X</code>: 0 for sci-fi, 1 for romance.", "# import the make_blobs function from the sklearn module/package\nfrom sklearn.datasets.samples_generator import make_blobs\n\n# use the function we imported to generate a matrix with 100 rows and 2 columns\n# n_samples=100 specifies the number of rows in the returned matrix\n# n_features=2 specifies the number of columns in the returned matrix\n# centers=2 specifies the number of centroids, or attraction points, in the returned matrix\n# random_state=0 makes the random data generator reproducible\n# center_box=(0,20) specifies we want the centers in X to be between 0,20\nX, labels = make_blobs(n_samples=100, n_features=2, centers=2, random_state=0, center_box=(0,20))\n\n# display the first three rows in X and their genre labels\nprint(X[:3], '\\n\\n', labels[:3])", "To get some intuitions about the data, let's plot the 100 labelled books, using the counts of the words \"laser\" and \"love\" as the x and y axes:", "# commands prefaced by a % in Jupyter are called \"magic\"\n# these \"magic\" commands allow us to do special things only related to jupyter\n\n# %matplotlib inline - allows one to display charts from the matplotlib library in a notebook\n# %load_ext autoreload - automatically reloads imported modules if they change\n# %autoreload 2 - automatically reloads imported modules if they change\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\n# import code from matplotlib, a popular data visualization library\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# get the 0th column of the matrix (i.e. counts of the word \"laser\")\nx_vals = X[:,0]\n\n# get the 1st column of the matrix (i.e. counts of the word \"love\")\ny_vals = X[:,1]\n\n# create a \"scatterplot\" of the data in X\n# the first argument to plt.scatter is a list of x-axis values\n# the second argument to plt.scatter is a list of y-axis values\n# c=labels specifies we want to use the list of labels to color each point\n# cmap=plt.cm.RdYlBu specifies we want to use the Red Yellow Blue colors in the chart\nplt.scatter(x_vals, y_vals, c=labels, cmap=plt.cm.RdYlBu)\n\n# add axis labels and a plot title\nplt.xlabel('occurrences of word laser')\nplt.ylabel('occurrences of word love')\nplt.title('Science Fiction and Romance Books')", "This plot shows each of our 100 labelled books, positioned according to the counts of the words \"laser\" and \"love\" in the book, and colored by the book's genre label. Romance books are red; scifi books are blue. As we can see, the two genres appear pretty distinct here, which means we can expect pretty good classification accuracy!\nThe important thing about the data above is that we know the genre label of each book. In classification tasks, we leverage labelled data in order to make informed predictions about unlabelled data. 
One of the simplest ways to make this kind of prediction is to use a K-Nearest Neighbor classifier.\nK-Nearest Neighbors Classifiers\nWith a K-Nearest Neighbors Classifier, we start with a labelled dataset (e.g. 100 books with genre labels). We then add new, unlabelled observations to the dataset. For each, we consult the K labelled observations to which the unlabelled observation is closest, where K is an odd integer we use for all classifications. We then find the most common label among those K observations (the \"K nearest neighbors\") and give the new observation that label.\nThe following diagram shows this scenario. Our new observation (represented by the question mark) has some points near it that are labelled with a triangle or star. Suppose we have chosen to use 3 for our value of K. In that case, we consult the 3 nearest labelled points near the question mark. Those 3 nearest neighbors have labels: star, triangle, triangle. Using a majority vote, we give the question mark a triangle label.\n<img src='images/knn.gif'>\nExamining the plot above, we can see that if K were set to 1, we would classify the question mark as a star, but if K is 3 or 5, we would classify the question mark as a triangle. That is to say, K is an important parameter in a K Nearest Neighbors classifier.\nTo show how to execute this classification in Python, let's show how we can use our labelled book data to classify an unlabelled book:", "from sklearn.neighbors import KNeighborsClassifier\nimport numpy as np\n\n# create a KNN classifier using 3 as the value of K\nclf = KNeighborsClassifier(3)\n\n# \"train\" the classifier by showing it our labelled data\nclf.fit(X, labels)\n\n# predict the genre label of a new, unlabelled book\nclf.predict(np.array([[14.2, 10.3]]))", "For each observation we pass as input to <code>clf.predict()</code>, the function returns one label (either 0 or 1). In the snippet above, we pass in only a single observation, so we get only a single label back. The example observation above gets a label 1, which means the model thought this particular book was a work of science-fiction. Just like that, we've trained a machine learning classifier and classified some new data!\nThe classification example above shows how we can classify just a single point in space, but suppose we want to analyze the way a classifier would classify each possible point in some space. To do so, we can transform our space into a grid of units, then classify each point in that grid. Analyzing a space in this way is known as identifying a classifier's decision boundary, because this analysis shows one the boundaries between different classification outcomes in the space. This kind of analysis is very helpful in training machine learning models, because studying a classifier's decision boundary can help one see how to improve the classifier.\nLet's plot our classifier's decision boundary below:", "from sklearn.neighbors import KNeighborsClassifier\n\n# import some custom helper code\nimport helpers\n\n# create and train a KNN model\nclf = KNeighborsClassifier(3)\nclf.fit(X, labels)\n\n# use a helper function to plot the trained classifier's decision boundary\nhelpers.plot_decision_boundary(clf, X, labels)\n\n# add a title and axis labels to the chart\nplt.title('K-Nearest Neighbors: Classifying Science Fiction and Romance')\nplt.xlabel('occurrences of word laser')\nplt.ylabel('occurrences of word love')", "For each pixel in the plot above, we retrieve the 3 closest points with known labels. 
We then use a majority vote of those labels to assign the label of the pixel. This is exactly analogous to predicting a label for an unlabelled point&mdash;in both cases, we take a majority vote of the 3 closest points with known labels. Working in this way, we can use labelled data to classify unlabelled data. That's all there is to K-Nearest Neighbors classification!\nIt's worth noting that K-Nearest Neighbors is only one of many popular classification algorithms. From a high-level point of view, each classification algorithm works in a similar way&mdash;each requires a certain number of observations with known labels, and each uses those labelled observations to classify unlabelled observations. However, different classification algorithms use different logic to assign unlabelled observations to groups, which means different classification algorithms have very different decision boundaries. In the chart below [source], each row plots the decision boundaries several classifiers give the same dataset. Notice how some classifiers work better with certain data shapes:\n<img src='images/scikit_decision_boundaries.png'>\nFor an intuitive introduction to many of these classifiers, including Support Vector Machines, Decision Trees, Neural Networks, and Naive Bayes classifiers, see Luis Serrano's introduction to machine learning video discussed in the Going Further section below.\nAnomaly Detection\nAnomaly detection refers to the identification of anomalies, or outliers, in datasets. While detecting anomalies in a single dimension can be quite simple, finding anomalies in high-dimensional datasets is a difficult problem.\nOne technique for classifying anomalies in high-dimensional datasets is an Isolation Forest. An Isolation Forest identifies outliers in a dataset by randomly dividing a space until each point is isolated from every other point. After repeating this procedure several times, the Isolation Forest identifies points that are quickly isolated from other points as outliers.\nThe illustration below shows how these outliers are quickly identified. Isolated points are colored green and labelled with the iteration on which they were isolated. If you repeat the procedure several times, you'll see the outlier is consistently isolated quickly, which allows the Isolation Forest to identify that point as an outlier.", "from IPython.display import IFrame\n\nIFrame(src='https://s3.amazonaws.com/duhaime/blog/visualizations/isolation-forests.html', width=700, height=640)", "If we run the simulation above a number of times, we should see the \"outlier\" point is consistently isolated quickly, while it usually takes more iterations to isolate the other points. 
This is the chief intuition behind the Isolation Forests outlier classification strategy&mdash;outliers are isolated quickly because they are farther from other points in the dataset.\nLet's build a sample dataset and use Isolation Forests to classify the outliers in that dataset.", "from sklearn.ensemble import IsolationForest\nfrom sklearn.datasets.samples_generator import make_blobs\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n\n# seed a random number generator for consistent random values\nrng = np.random.RandomState(1)\n\n# generate 500 \"training\" data observations\nn_training = 500\nX, _ = make_blobs(random_state=6, n_samples=n_training)\n\n# create the IsolationForest classifier\nclf = IsolationForest(max_samples=500, random_state=1, n_jobs=-1)\n\n# train the classifier on the training data\nclf.fit(X)\n\n# generate 100 new observations\nnew_vals = rng.uniform(low=(-10, -12), high=(10, 4), size=(100, 2))\n\n# get classification results for the new observations; `result` contains\n# one observation for each value in `new_vals`: a 1 means the point was\n# in the training distribution, -1 means the point is an outlier\nresult = clf.predict(new_vals)\n\n# plot the classification results\nhelpers.plot_iforest_decision_boundary(clf, X, new_vals, result)", "In just a few lines of code, we can create, train, and deploy a machine learning model for detecting outliers in high-dimensional data!\nDimension Reduction\nSo far we've seen data with observations in two dimensions (the scifi vs. romance books example); later in this notebook we will also work with observations in 50 dimensions (the word vector example in the Clustering section). While each observation in the dataset above has only two components, some datasets are comprised of observations with hundreds or even thousands of components. These \"high-dimensional\" datasets can be quite hard to work with and reason about. High dimensional datasets also pose specific challenges to many machine learning models (see The Curse of Dimensionality). To work around these challenges, it's often helpful to reduce the number of dimensions required to express a given dataset.\nOne popular way to reduce the dimensionality of a dataset is to use a technique called Principal Component Analysis. PCA tries to find a lower dimensional representation of a dataset by projecting that dataset down into a smaller dimensional space in a way that minimizes loss of information.\nTo get an intuition about PCA, suppose you have points in two dimensions, and you wish to reduce the dimensionality of your dataset to a single dimension. To do so, you could find the center of the points and then create a line $L$ with a random orientation that passes through that center. One can then project each point onto $L$ such that an imaginary line between the point and $L$ forms a right angle. Within this \"projection\", each 2D point can be represented with just its position along the 1D $L$, effectively giving us a 1D representation of the point's position in its original space. 
Furthermore, we can use the difference between the largest and smallest values of points projected onto $L$ as a measure of the amount of \"variance\" or \"spread\" within the data captured in $L$&mdash;the greater this spread, the greater the amount of \"signal\" from the original dataset that is represented in the projection.\nIf one were to slowly rotate $L$ and continue measuring the delta between the greatest and smallest values on $L$ at each orientation, one could find the orientation of the projection line that minimizes information loss. (This line of minimal information loss is shown in pink below.) Once that line is discovered, we can actually project all of our points onto that lower-dimensional embedding (see the red points below when the black line is collinear with the pink line):\n<img src='images/pca.gif'>\nFor a beginner-friendly deep dive into the mechanics behind this form of dimension reduction, check out Josh Starmer's step-by-step guide to PCA. (A minimal scikit-learn PCA sketch is also included at the end of this notebook for reference.)\nWhat makes this kind of dimension reduction useful for research? There are two primary uses for dimension reduction: data exploration and data analysis.\nExploratory Data Visualization with UMAP\nIn many applications, dimension reduction is very helpful for visualization tasks. For example, let's suppose we are working on a digital forensics mystery. Given texts A, B, and C, where we know the authors of A and B, but don't know the author of C, our task is to determine which author (A or B) has a writing style that's more similar to that of C.\nIn the code below, we'll create a Term Document Matrix, or a matrix in which each row is a passage of 1000 words from a novel, each column is a distinct word that exists within the dataset, and each cell value indicates the number of times the given word occurs in the given sequence of 1000 words. This matrix will represent each passage of 1000 words as a high dimensional vector with far too many dimensions to visualize directly. We'll then use a dimension reduction technique related to PCA to project each passage down into a 2D space so we can visualize each passage. 
Finally, we'll plot a third text in this 2D projection so we can determine which author's style to which the third text is most similar.", "import pandas as pd\nfrom bs4 import BeautifulSoup\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom nltk import ngrams\nfrom requests import get\n\ndef get_passages(url, chunk_size=1000):\n text = BeautifulSoup( get(url).text, 'html.parser' ).get_text().lower()\n words = ''.join([c for c in text if c.isalpha() or c == ' ']).split()\n return [' '.join(i) for idx, i in enumerate(ngrams(words, chunk_size)) if\n idx % chunk_size == 0 and idx > 0]\n\nausten = get_passages('https://www.gutenberg.org/files/1342/1342-h/1342-h.htm')\ndickens = get_passages('https://www.gutenberg.org/files/98/98-h/98-h.htm')\nmystery = get_passages('https://s3.amazonaws.com/duhaime/blog/machine-learning-intro/ppz.txt')\n\nvec = CountVectorizer()\nX = vec.fit_transform(austen + dickens).toarray()\n\nprint(X.shape) # prints the number of rows then the number of columns in X", "Now that we have represented each passage of 1000 words with a high-dimensional vector, let's project those vectors down into two dimensions to visualize the similarity between our three author's styles:", "from matplotlib.lines import Line2D\nfrom umap import UMAP\n\nX = vec.fit_transform(austen + dickens + mystery).toarray()\n\nprojected = UMAP(random_state=2).fit_transform(X)\n\nlabels = ['green' for i in range(len(austen))] + \\\n ['orange' for i in range(len(dickens))] + \\\n ['purple' for i in range(len(mystery))]\n\nplt.scatter(projected[:,0], projected[:,1], c=labels)\nplt.title('Dimension Reduction from a Term Document Matrix')\n\n# add a legend\nplt.legend(handles=[\n Line2D([], [], label='Jane Austen Window', marker='o', markerfacecolor='green', color='white'),\n Line2D([], [], label='Charles Dickens Window', marker='o', markerfacecolor='orange', color='white'),\n Line2D([], [], label='Mystery Author Window', marker='o', markerfacecolor='purple', color='white'),\n])", "As we can see, the new points in purple have strong overlap with the green points, suggesting that the mystery author has a style quite similar to that of Austen. There's a good reason for that&mdash;the purple text is <i>Pride and Prejudice and Zombies</i>, which adapts the language and plot of Jane Austen's classic novel. When working with high-dimensional datasets, it's often helpful to create some quick visualizations of the data using a dimension reduction technique like UMAP as we have just done.\nClustering\nClustering is a powerful machine learning technique, and one that often requires some kind of distance metric. The goal of a clustering algorithm is to create some groups of observations, where each group contains similar observations.\nThere are a variety of methods for clustering vectors, including density-based clustering, hierarchical clustering, and centroid clustering. One of the most intuitive and most commonly used centroid-based methods is K-Means Clustering. Given a collection of points in a space, K-Means selects K \"centroid\" points randomly (colored green below), then assigns each non-centroid point to the centroid to which it's closest. Using these preliminary groupings, the next step is to find the geometric center of each group, using the same technique one would use to find the center of a square. These group centers become the new centroids, and again each point is assigned to the centroid to which it's closest. 
This process continues until centroid movement falls below some minimal movement threshold, after which the clustering is complete. Here’s a nice visual description of K-Means:\n<img src='images/kmeans.gif'>\nLet's get a taste of K-means clustering by using the technique to cluster some high-dimensional vectors. For this demo, we can use Stanford University's GloVe vectors, which provide a vector representation of each word in a corpus. In what follows below, we'll read in the GloVe file, split out the first n words and their corresponding 50 dimensional vectors, then examine the first word and its corresponding vector.", "from zipfile import ZipFile\nfrom collections import defaultdict\nfrom urllib.request import urlretrieve\nimport numpy as np\nimport json, os, codecs\n\n# download the vector files we'll use\nif not os.path.exists('glove.6B.50d.txt'):\n urlretrieve('http://nlp.stanford.edu/data/glove.6B.zip', 'glove.6B.zip')\n # unzip the downloaded zip archive\n ZipFile('glove.6B.zip').extractall(os.getcwd())\n\n# get the first n words and their vectors\nvectors = []\nwords = []\nn = 50000\nfor row_idx, row in enumerate(codecs.open('glove.6B.50d.txt', 'r', 'utf8')):\n if row_idx > n: break\n split_row = row.split()\n word, vector = ' '.join(split_row[:-50]), [float(i) for i in split_row[-50:]]\n words += [word]\n vectors += [vector]\n \n# check out a sample word and its vector\nprint(words[1700], vectors[1700], '\\n')", "As we can see above, <code>words</code> is just a list of words. For each of those words, <code>vectors</code> contains a corresponding 50-dimensional vector (or list of 50 numbers). Those vectors indicate the semantic meaning of a word. In other words, if the English language were a 50 dimensional vector space, each word in <code>words</code> would be positioned in that space by virtue of its corresponding vector.\nWords that have similar meaning should appear near one another within this vector space. To test this hypothesis, let's use K-Means clustering to identify 20 clusters of words within the 50 dimensional vector space discussed above. After building a K-Means model, we'll create a map named <code>groups</code> whose keys will be cluster ids (0-19) and whose values will be lists of words that belong to a given cluster number. After creating that variable, we'll print the first 10 words from each cluster:", "from sklearn.cluster import KMeans\n\n# cluster the word vectors\nkmeans = KMeans(n_clusters=20, random_state=0).fit(np.array(vectors))\n\n# `kmeans.labels_` is an array whos `i-th` member identifies the group to which\n# the `i-th` word in `words` is assigned\ngroups = defaultdict(list)\nfor idx, i in enumerate(kmeans.labels_):\n groups[i] += [words[idx]]\n\n# print the top 10 words contained in each group\nfor i in groups:\n print(groups[i][:10])", "The output above shows the top 10 words in each of the 20 clusters identified by K-Means. Examining each of these word lists, we can see each has a certain topical coherence. For example, some of the word clusters contain financial words, while others contain medical words. These clusters work out nicely because K-Means is able to cluster nearby word vectors in our vector space!\nGoing Further\nThe snippets above are meant only to give a brief introduction to some of the most popular techniques in machine learning so you can decide whether this kind of analysis might be useful in your research. 
If it seems like machine learning will be important in your work, you may want to check out some of the resources listed below (arranged roughly from least to most technical):\n\nA Friendly Introduction to Machine Learning\n\nIn this 30-minute video, Luis Serrano (head of machine learning at Udacity) offers intuitive, user-friendly introductions to the mechanics that drive a number of machine learning models, including Naive Bayes, Decision Tree, Logistic Regression, Neural Network, and Support Vector Machine classifiers. This video is a great place to start for those looking for quick intuitions about the ways these algorithms work.\n\nHands-On Machine Learning with Scikit-Learn and TensorFlow (O'Reilly)\n\nThis O'Reilly book offers a great high-level introduction to machine learning with Python. Aurélien Géron guides readers through ways one can use scikit-learn and other popular libraries to build machine learning models in Python. This is a great choice for those who just want to get work done, without necessarily unlocking the insights that would allow one to build models from scratch.\n\nMachine Learning Cheatsheets\n\nThis collection of \"cheat sheets\" gives concise overviews of the APIs and models behind many of the most prominent packages and concepts in machine learning and its allied fields, including different neural network architectures, numerical optimization techniques, algorithms appropriate for different tasks, scikit-learn, pandas, scipy, ggplot2, dplyr and tidyr, big O notation, and a number of other topics. Recently identified as the \"most popular\" article on machine learning on Medium.\n\nMining of Massive Datasets\n\nThis Stanford University course and digital publication offer introductions to a wide array of subtopics in machine learning. The authors focus on helping readers gain an intuitive understanding of how machine learning models work. One of the most lucid and concise treatments of machine learning available on the web.\n\nConvolutional Neural Networks for Visual Recognition\n\nThis Stanford University course offers a spectacular introduction to Convolutional Neural Networks, the cornerstone of modern machine learning in the domain of computer vision. If your work involves images or video materials, and you'd like to apply machine learning techniques to your data, this course will help you get up and running with state-of-the-art techniques in convnets.\n\nMachine Learning (Andrew Ng, Coursera)\n\nAndrew Ng's Coursera course on machine learning will help you master many of the fundamentals involved in modern machine learning. Professor Ng will guide you through a great deal of the math involved in contemporary machine learning, starting with simple linear classifiers and building up into complex neural network architectures. This class is ideal for those who like to understand the math behind the models they use." ]
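As referenced in the dimension reduction section above, here is a minimal sketch of plain PCA with scikit-learn. It is illustrative only: it assumes the variables from the earlier cells (the term document matrix X, the color list labels, and plt) are still in scope, and it simply projects the same matrix down to two principal components instead of using UMAP.

from sklearn.decomposition import PCA

# project the term document matrix down to its first two principal components
pca = PCA(n_components=2)
projected_pca = pca.fit_transform(X)

# show how much of the total variance each component captures
print(pca.explained_variance_ratio_)

# plot the PCA projection, reusing the same colors as the UMAP plot above
plt.scatter(projected_pca[:, 0], projected_pca[:, 1], c=labels)
plt.title('PCA projection of the Term Document Matrix')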
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rajul/tvb-library
tvb/simulator/demos/Monitoring with transformations.ipynb
gpl-2.0
[ "%pylab inline\n\nimport numpy\nimport matplotlib as mpl\nmpl.rc('savefig', dpi=300)\n\nfrom tvb.tests.library import setup_test_console_env\nsetup_test_console_env()\n\nfrom tvb.simulator.monitors import MonitorTransforms\nfrom tvb.simulator import models, coupling, integrators, noise, simulator\nfrom tvb.datatypes import connectivity\nfrom tvb.simulator.monitors import Raw\n", "Monitoring with transformations\nVery often it's useful to apply specific transformations to the state variables before applying the observation model of a monitor. Additionally, it can be useful to apply other transformations on the monitor's output.\nThe pre_expr and post_expr attributes of the Monitor classes allow for this.", "sim = simulator.Simulator(\n model=models.Generic2dOscillator(),\n connectivity=connectivity.Connectivity(load_default=True),\n coupling=coupling.Linear(),\n integrator=integrators.EulerDeterministic(),\n monitors=Raw(pre_expr='V;W;V**2;W-V', post_expr=';;sin(mon);exp(mon)'))\n\nsim.configure()\n\nts, ys = [], []\nfor (t, y), in sim(simulation_length=250):\n ts.append(t)\n ys.append(y)\nt = numpy.array(ts)\nv, w, sv2, ewmv = numpy.array(ys).transpose((1, 0, 2, 3))", "Plotting the results demonstrates the effect of the transformations of the state variables through the monitor. Here, a Raw monitor was used to make the effects clear, but the pre- and post-expressions can be provided to any of the Monitors.", "figure(figsize=(7, 5), dpi=600)\n\nsubplot(311)\nplot(t, v[:, 0, 0], 'k')\nplot(t, w[:, 0, 0], 'k')\nylabel('$V(t), W(t)$')\ngrid(True, axis='x')\nxticks(xticks()[0], [])\n\nsubplot(312)\nplot(t, sv2[:, 0, 0], 'k')\nylabel('$\\\\sin(G(V^2(t)))$')\ngrid(True, axis='x')\nxticks(xticks()[0], [])\n\nsubplot(313)\nplot(t, ewmv[:, 0, 0], 'k')\nylabel('$\\\\exp(G(W(t)-V(t)))$')\ngrid(True, axis='x')\nxlabel('Time (ms)')\n\ntight_layout()", "In this case, the chosen transformations are primarily visible during the transient." ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
newworldnewlife/TensorFlow-Tutorials
18_TFRecords_Dataset_API.ipynb
mit
[ "TensorFlow Tutorial #18\nTFRecords & Dataset API\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nIn the previous tutorials we used a so-called feed-dict for inputting data to the TensorFlow graph. It is a fairly simple input method but it is also a performance bottleneck because the data is read sequentially between training steps. This makes it hard to use the GPU at 100% efficiency because the GPU has to wait for new data to work on.\nInstead we want to read data in a parallel thread so new training data is always available whenever the GPU is ready. This used to be done with so-called QueueRunners in TensorFlow which was a very complicated system. Now it can be done with the Dataset API and a binary file-format called TFRecords, as described in this tutorial.\nThis builds on Tutorial #17 for the Estimator API.\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom matplotlib.image import imread\nimport tensorflow as tf\nimport numpy as np\nimport sys\nimport os", "This was developed using Python 3.6 (Anaconda) and TensorFlow version:", "tf.__version__", "Load Data", "import knifey", "The data dimensions have already been defined in the knifey module, so we just need to import the ones we need.", "from knifey import img_size, img_size_flat, img_shape, num_classes, num_channels", "Set the directory for storing the data-set on your computer.", "# knifey.data_dir = \"data/knifey-spoony/\"", "The Knifey-Spoony data-set is about 22 MB and will be downloaded automatically if it is not located in the given path.", "knifey.maybe_download_and_extract()", "Now load the data-set. This scans the sub-directories for all *.jpg images and puts the filenames into two lists for the training-set and test-set. This does not actually load the images.", "dataset = knifey.load()", "Get the class-names.", "class_names = dataset.class_names\nclass_names", "Training and Test-Sets\nThis function returns the file-paths for the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.\nIn this tutorial we will actually use the integer class-numbers and call them labels. This may be a little confusing but you can always add print-statements to see what the data actually is.", "image_paths_train, cls_train, labels_train = dataset.get_training_set()", "Print the first image-path to see if it looks OK.", "image_paths_train[0]", "Get the test-set.", "image_paths_test, cls_test, labels_test = dataset.get_test_set()", "Print the first image-path to see if it looks OK.", "image_paths_test[0]", "The Knifey-Spoony data-set has now been loaded and consists of 4700 images and associated labels (i.e. classifications of the images). 
The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(image_paths_train)))\nprint(\"- Test-set:\\t\\t{}\".format(len(image_paths_test)))", "Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None, smooth=True):\n\n assert len(images) == len(cls_true)\n\n # Create figure with sub-plots.\n fig, axes = plt.subplots(3, 3)\n\n # Adjust vertical spacing.\n if cls_pred is None:\n hspace = 0.3\n else:\n hspace = 0.6\n fig.subplots_adjust(hspace=hspace, wspace=0.3)\n\n # Interpolation type.\n if smooth:\n interpolation = 'spline16'\n else:\n interpolation = 'nearest'\n\n for i, ax in enumerate(axes.flat):\n # There may be less than 9 images, ensure it doesn't crash.\n if i < len(images):\n # Plot image.\n ax.imshow(images[i],\n interpolation=interpolation)\n\n # Name of the true class.\n cls_true_name = class_names[cls_true[i]]\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true_name)\n else:\n # Name of the predicted class.\n cls_pred_name = class_names[cls_pred[i]]\n\n xlabel = \"True: {0}\\nPred: {1}\".format(cls_true_name,\n cls_pred_name)\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()", "Helper-function for loading images\nThis dataset does not load the actual images, instead it has a list of the images in the training-set and another list for the images in the test-set. This helper-function loads some image-files.", "def load_images(image_paths):\n # Load the images from disk.\n images = [imread(path) for path in image_paths]\n\n # Convert to a numpy array and return it.\n return np.asarray(images)", "Plot a few images to see if data is correct", "# Load the first images from the test-set.\nimages = load_images(image_paths=image_paths_test[0:9])\n\n# Get the true classes for those images.\ncls_true = cls_test[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true, smooth=True)", "Create TFRecords\nTFRecords is the binary file-format used internally in TensorFlow which allows for high-performance reading and processing of datasets.\nFor this small dataset we will just create one TFRecords file for the training-set and another for the test-set. But if your dataset is very large then you can split it into several TFRecords files called shards. This will also improve the random shuffling, because the Dataset API only shuffles from a smaller buffer of e.g. 1024 elements loaded into RAM. So if you have e.g. 
100 TFRecords files, then the randomization will be much better than for a single TFRecords file.\nFile-path for the TFRecords file holding the training-set.", "path_tfrecords_train = os.path.join(knifey.data_dir, \"train.tfrecords\")\npath_tfrecords_train", "File-path for the TFRecords file holding the test-set.", "path_tfrecords_test = os.path.join(knifey.data_dir, \"test.tfrecords\")\npath_tfrecords_test", "Helper-function for printing the conversion progress.", "def print_progress(count, total):\n # Percentage completion.\n pct_complete = float(count) / total\n\n # Status-message.\n # Note the \\r which means the line should overwrite itself.\n msg = \"\\r- Progress: {0:.1%}\".format(pct_complete)\n\n # Print it.\n sys.stdout.write(msg)\n sys.stdout.flush()", "Helper-function for wrapping an integer so it can be saved to the TFRecords file.", "def wrap_int64(value):\n return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))", "Helper-function for wrapping raw bytes so they can be saved to the TFRecords file.", "def wrap_bytes(value):\n return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))", "This is the function for reading images from disk and writing them along with the class-labels to a TFRecords file. This loads and decodes the images to numpy-arrays and then stores the raw bytes in the TFRecords file. If the original image-files are compressed e.g. as jpeg-files, then the TFRecords file may be many times larger than the original image-files.\nIt is also possible to save the compressed image files directly in the TFRecords file because it can hold any raw bytes. We would then have to decode the compressed images when the TFRecords file is being read later in the parse() function below.", "def convert(image_paths, labels, out_path):\n # Args:\n # image_paths List of file-paths for the images.\n # labels Class-labels for the images.\n # out_path File-path for the TFRecords output file.\n \n print(\"Converting: \" + out_path)\n \n # Number of images. Used when printing the progress.\n num_images = len(image_paths)\n \n # Open a TFRecordWriter for the output-file.\n with tf.python_io.TFRecordWriter(out_path) as writer:\n \n # Iterate over all the image-paths and class-labels.\n for i, (path, label) in enumerate(zip(image_paths, labels)):\n # Print the percentage-progress.\n print_progress(count=i, total=num_images-1)\n\n # Load the image-file using matplotlib's imread function.\n img = imread(path)\n \n # Convert the image to raw bytes.\n img_bytes = img.tostring()\n\n # Create a dict with the data we want to save in the\n # TFRecords file. You can add more relevant data here.\n data = \\\n {\n 'image': wrap_bytes(img_bytes),\n 'label': wrap_int64(label)\n }\n\n # Wrap the data as TensorFlow Features.\n feature = tf.train.Features(feature=data)\n\n # Wrap again as a TensorFlow Example.\n example = tf.train.Example(features=feature)\n\n # Serialize the data.\n serialized = example.SerializeToString()\n \n # Write the serialized data to the TFRecords file.\n writer.write(serialized)", "Note the 4 function calls required to write the data-dict to the TFRecords file. In the original code-example from the Google Developers, these 4 function calls were actually nested. 
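For comparison, here is roughly what that nested version looks like; it is simply the same four steps from the convert function above collapsed into one statement, using the same data dict and writer:

# equivalent, but nested: wrap the data, serialize it, and write it in one statement
writer.write(tf.train.Example(features=tf.train.Features(feature=data)).SerializeToString())
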
The design-philosophy for TensorFlow generally seems to be: If one function call is good, then 4 function calls are 4 times as good, and if they are nested then it is exponential goodness!\nOf course, this is quite poor API design because the last function writer.write() should just be able to take the data-dict directly and then call the 3 other functions internally.\nConvert the training-set to a TFRecords-file. Note how we use the integer class-numbers as the labels instead of the One-Hot encoded arrays.", "convert(image_paths=image_paths_train,\n labels=cls_train,\n out_path=path_tfrecords_train)", "Convert the test-set to a TFRecords-file:", "convert(image_paths=image_paths_test,\n labels=cls_test,\n out_path=path_tfrecords_test)", "Input Functions for the Estimator\nThe TFRecords files contain the data in a serialized binary format which needs to be converted back to images and labels of the correct data-type. We use a helper-function for this parsing:", "def parse(serialized):\n # Define a dict with the data-names and types we expect to\n # find in the TFRecords file.\n # It is a bit awkward that this needs to be specified again,\n # because it could have been written in the header of the\n # TFRecords file instead.\n features = \\\n {\n 'image': tf.FixedLenFeature([], tf.string),\n 'label': tf.FixedLenFeature([], tf.int64)\n }\n\n # Parse the serialized data so we get a dict with our data.\n parsed_example = tf.parse_single_example(serialized=serialized,\n features=features)\n\n # Get the image as raw bytes.\n image_raw = parsed_example['image']\n\n # Decode the raw bytes so it becomes a tensor with type.\n image = tf.decode_raw(image_raw, tf.uint8)\n \n # The type is now uint8 but we need it to be float.\n image = tf.cast(image, tf.float32)\n\n # Get the label associated with the image.\n label = parsed_example['label']\n\n # The image and label are now correct TensorFlow types.\n return image, label", "Helper-function for creating an input-function that reads from TFRecords files for use with the Estimator API.", "def input_fn(filenames, train, batch_size=32, buffer_size=2048):\n # Args:\n # filenames: Filenames for the TFRecords files.\n # train: Boolean whether training (True) or testing (False).\n # batch_size: Return batches of this size.\n # buffer_size: Read buffers of this size. 
The random shuffling\n # is done on the buffer, so it must be big enough.\n\n # Create a TensorFlow Dataset-object which has functionality\n # for reading and shuffling data from TFRecords files.\n dataset = tf.data.TFRecordDataset(filenames=filenames)\n\n # Parse the serialized data in the TFRecords files.\n # This returns TensorFlow tensors for the image and labels.\n dataset = dataset.map(parse)\n\n if train:\n # If training then read a buffer of the given size and\n # randomly shuffle it.\n dataset = dataset.shuffle(buffer_size=buffer_size)\n\n # Allow infinite reading of the data.\n num_repeat = None\n else:\n # If testing then don't shuffle the data.\n \n # Only go through the data once.\n num_repeat = 1\n\n # Repeat the dataset the given number of times.\n dataset = dataset.repeat(num_repeat)\n \n # Get a batch of data with the given size.\n dataset = dataset.batch(batch_size)\n\n # Create an iterator for the dataset and the above modifications.\n iterator = dataset.make_one_shot_iterator()\n\n # Get the next batch of images and labels.\n images_batch, labels_batch = iterator.get_next()\n\n # The input-function must return a dict wrapping the images.\n x = {'image': images_batch}\n y = labels_batch\n\n return x, y", "This is the input-function for the training-set for use with the Estimator API:", "def train_input_fn():\n return input_fn(filenames=path_tfrecords_train, train=True)", "This is the input-function for the test-set for use with the Estimator API:", "def test_input_fn():\n return input_fn(filenames=path_tfrecords_test, train=False)", "Input Function for Predicting on New Images\nAn input-function is also needed for predicting the class of new data. As an example we just use a few images from the test-set.\nYou could load any images you want here. Make sure they are the same dimensions as expected by the TensorFlow model, otherwise you need to resize the images.", "some_images = load_images(image_paths=image_paths_test[0:9])", "These images are now stored as numpy arrays in memory, so we can use the standard input-function for the Estimator API. Note that the images are loaded as uint8 data but it must be input to the TensorFlow graph as floats so we do a type-cast.", "predict_input_fn = tf.estimator.inputs.numpy_input_fn(\n x={\"image\": some_images.astype(np.float32)},\n num_epochs=1,\n shuffle=False)", "The class-numbers are actually not used in the input-function as it is not needed for prediction. However, the true class-number is needed when we plot the images further below.", "some_images_cls = cls_test[0:9]", "Pre-Made / Canned Estimator\nWhen using a pre-made Estimator, we need to specify the input features for the data. In this case we want to input images from our data-set which are numeric arrays of the given shape.", "feature_image = tf.feature_column.numeric_column(\"image\",\n shape=img_shape)", "You can have several input features which would then be combined in a list:", "feature_columns = [feature_image]", "In this example we want to use a 3-layer DNN with 512, 256 and 128 units respectively.", "num_hidden_units = [512, 256, 128]", "The DNNClassifier then constructs the neural network for us. We can also specify the activation function and various other parameters (see the docs). 
Here we just specify the number of classes and the directory where the checkpoints will be saved.", "model = tf.estimator.DNNClassifier(feature_columns=feature_columns,\n hidden_units=num_hidden_units,\n activation_fn=tf.nn.relu,\n n_classes=num_classes,\n model_dir=\"./checkpoints_tutorial18-1/\")", "Training\nWe can now train the model for a given number of iterations. This automatically loads and saves checkpoints so we can continue the training later.", "model.train(input_fn=train_input_fn, steps=200)", "Evaluation\nOnce the model has been trained, we can evaluate its performance on the test-set.", "result = model.evaluate(input_fn=test_input_fn)\n\nresult\n\nprint(\"Classification accuracy: {0:.2%}\".format(result[\"accuracy\"]))", "Predictions\nThe trained model can also be used to make predictions on new data.\nNote that the TensorFlow graph is recreated and the checkpoint is reloaded every time we make predictions on new data. If the model is very large then this could add a significant overhead.\nIt is unclear why the Estimator is designed this way, possibly because it will always use the latest checkpoint and it can also be distributed easily for use on multiple computers.", "predictions = model.predict(input_fn=predict_input_fn)\n\ncls = [p['classes'] for p in predictions]\n\ncls_pred = np.array(cls, dtype='int').squeeze()\ncls_pred\n\nplot_images(images=some_images,\n cls_true=some_images_cls,\n cls_pred=cls_pred)", "Predictions for the Entire Test-Set\nIt appears that the model maybe classifies all images as 'spoony'. So let us see the predictions for the entire test-set. We can do this simply by using its input-function:", "predictions = model.predict(input_fn=test_input_fn)\n\ncls = [p['classes'] for p in predictions]\n\ncls_pred = np.array(cls, dtype='int').squeeze()", "The test-set contains 530 images in total and they have all been predicted as class 2 (spoony). So this model does not work at all for classifying the Knifey-Spoony dataset.", "np.sum(cls_pred == 2)", "New Estimator\nIf you cannot use one of the built-in Estimators, then you can create an arbitrary TensorFlow model yourself. To do this, you first need to create a function which defines the following:\n\nThe TensorFlow model, e.g. a Convolutional Neural Network.\nThe output of the model.\nThe loss-function used to improve the model during optimization.\nThe optimization method.\nPerformance metrics.\n\nThe Estimator can be run in three modes: Training, Evaluation, or Prediction. The code is mostly the same, but in Prediction-mode we do not need to setup the loss-function and optimizer.\nThis is another aspect of the Estimator API that is poorly designed and resembles how we did ANSI C programming using structs in the old days. It would probably have been more elegant to split this into several functions and sub-classed the Estimator-class.", "def model_fn(features, labels, mode, params):\n # Args:\n #\n # features: This is the x-arg from the input_fn.\n # labels: This is the y-arg from the input_fn.\n # mode: Either TRAIN, EVAL, or PREDICT\n # params: User-defined hyper-parameters, e.g. 
learning-rate.\n \n # Reference to the tensor named \"image\" in the input-function.\n x = features[\"image\"]\n\n # The convolutional layers expect 4-rank tensors\n # but x is a 2-rank tensor, so reshape it.\n net = tf.reshape(x, [-1, img_size, img_size, num_channels]) \n\n # First convolutional layer.\n net = tf.layers.conv2d(inputs=net, name='layer_conv1',\n filters=32, kernel_size=3,\n padding='same', activation=tf.nn.relu)\n net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)\n\n # Second convolutional layer.\n net = tf.layers.conv2d(inputs=net, name='layer_conv2',\n filters=32, kernel_size=3,\n padding='same', activation=tf.nn.relu)\n net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2) \n\n # Flatten to a 2-rank tensor.\n net = tf.contrib.layers.flatten(net)\n # Eventually this should be replaced with:\n # net = tf.layers.flatten(net)\n\n # First fully-connected / dense layer.\n # This uses the ReLU activation function.\n net = tf.layers.dense(inputs=net, name='layer_fc1',\n units=128, activation=tf.nn.relu) \n\n # Second fully-connected / dense layer.\n # This is the last layer so it does not use an activation function.\n net = tf.layers.dense(inputs=net, name='layer_fc_2',\n units=num_classes)\n\n # Logits output of the neural network.\n logits = net\n\n # Softmax output of the neural network.\n y_pred = tf.nn.softmax(logits=logits)\n \n # Classification output of the neural network.\n y_pred_cls = tf.argmax(y_pred, axis=1)\n\n if mode == tf.estimator.ModeKeys.PREDICT:\n # If the estimator is supposed to be in prediction-mode\n # then use the predicted class-number that is output by\n # the neural network. Optimization etc. is not needed.\n spec = tf.estimator.EstimatorSpec(mode=mode,\n predictions=y_pred_cls)\n else:\n # Otherwise the estimator is supposed to be in either\n # training or evaluation-mode. Note that the loss-function\n # is also required in Evaluation mode.\n \n # Define the loss-function to be optimized, by first\n # calculating the cross-entropy between the output of\n # the neural network and the true labels for the input data.\n # This gives the cross-entropy for each image in the batch.\n cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,\n logits=logits)\n\n # Reduce the cross-entropy batch-tensor to a single number\n # which can be used in optimization of the neural network.\n loss = tf.reduce_mean(cross_entropy)\n\n # Define the optimizer for improving the neural network.\n optimizer = tf.train.AdamOptimizer(learning_rate=params[\"learning_rate\"])\n\n # Get the TensorFlow op for doing a single optimization step.\n train_op = optimizer.minimize(\n loss=loss, global_step=tf.train.get_global_step())\n\n # Define the evaluation metrics,\n # in this case the classification accuracy.\n metrics = \\\n {\n \"accuracy\": tf.metrics.accuracy(labels, y_pred_cls)\n }\n\n # Wrap all of this in an EstimatorSpec.\n spec = tf.estimator.EstimatorSpec(\n mode=mode,\n loss=loss,\n train_op=train_op,\n eval_metric_ops=metrics)\n \n return spec", "Create an Instance of the Estimator\nWe can specify hyper-parameters e.g. 
for the learning-rate of the optimizer.", "params = {\"learning_rate\": 1e-4}", "We can then create an instance of the new Estimator.\nNote that we don't provide feature-columns here as it is inferred automatically from the data-functions when model_fn() is called.\nIt is unclear from the TensorFlow documentation why it is necessary to specify the feature-columns when using DNNClassifier in the example above, when it is not needed here.", "model = tf.estimator.Estimator(model_fn=model_fn,\n params=params,\n model_dir=\"./checkpoints_tutorial18-2/\")", "Training\nNow that our new Estimator has been created, we can train it.", "model.train(input_fn=train_input_fn, steps=200)", "Evaluation\nOnce the model has been trained, we can evaluate its performance on the test-set.", "result = model.evaluate(input_fn=test_input_fn)\n\nresult\n\nprint(\"Classification accuracy: {0:.2%}\".format(result[\"accuracy\"]))", "Predictions\nThe model can also be used to make predictions on new data.", "predictions = model.predict(input_fn=predict_input_fn)\n\ncls_pred = np.array(list(predictions))\ncls_pred\n\nplot_images(images=some_images,\n cls_true=some_images_cls,\n cls_pred=cls_pred)", "Predictions for the Entire Test-Set\nTo get the predicted classes for the entire test-set, we just use its input-function:", "predictions = model.predict(input_fn=test_input_fn)\n\ncls_pred = np.array(list(predictions))\ncls_pred", "The Convolutional Neural Network predicts different classes for the images, although most have just been classified as 0 (forky), so the accuracy is horrible.", "np.sum(cls_pred == 0)\n\nnp.sum(cls_pred == 1)\n\nnp.sum(cls_pred == 2)", "Conclusion\nThis tutorial showed how to use TensorFlow's binary file-format TFRecords with the Dataset and Estimator APIs. This should simplify the process of training models with very large datasets while getting high usage of the GPU. However, the API could have been simpler in many ways.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nTrain the Convolutional Neural Network for much longer. Does it get any better at classifying the Knifey-Spoony dataset?\nSave the One-Hot-encoded label instead of the class-integer in the TFRecord and modify the rest of the code to use it.\nMake shards so you save multiple TFRecord files instead of just one.\nSave jpeg-files in the TFRecord instead of the decoded image. You will then need to decode the jpeg-image in the parse() function. What are the pro's and con's of doing this?\nTry using another dataset.\nUse a dataset where the images are different sizes. Would you resize before or after converting to the TFRecords file? Why?\nTry and use numpy input-functions instead of TFRecords for the Estimator API. 
What is the performance difference?\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016-2017 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
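The tutorial above notes that a very large dataset should be split into several TFRecords shards for better shuffling, and exercise 3 asks for exactly that, but no code is given. The sketch below is one possible way to do it, re-using the convert() helper defined earlier; the convert_sharded name, the shard count and the file-naming pattern are illustrative choices and not part of the original tutorial.

```python
import os
import numpy as np

def convert_sharded(image_paths, labels, out_dir, prefix, num_shards=4):
    # Split the paths and labels into roughly equal shards and write one
    # TFRecords file per shard, re-using the convert() helper defined earlier.
    path_shards = np.array_split(np.asarray(image_paths), num_shards)
    label_shards = np.array_split(np.asarray(labels), num_shards)

    out_paths = []
    for i, (paths, lbls) in enumerate(zip(path_shards, label_shards)):
        out_path = os.path.join(out_dir, "{0}_{1:03d}.tfrecords".format(prefix, i))
        convert(image_paths=list(paths), labels=list(lbls), out_path=out_path)
        out_paths.append(out_path)

    return out_paths

# Hypothetical call: write the training-set as 4 shard files.
# train_shards = convert_sharded(image_paths_train, cls_train,
#                                out_dir=knifey.data_dir, prefix="train",
#                                num_shards=4)
```

Because tf.data.TFRecordDataset accepts a list of file-names, the returned list can be passed straight to input_fn(filenames=train_shards, ...).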
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
analog-rl/Easy21
Joe #2 Monte-Carlo Control in Easy21/easy21 tests.ipynb
mit
[ "from easy21 import *", "from 1st module\nTest; Each draw from the deck results in a value between 1 and 10 (uniformly distributed)", "import matplotlib.pyplot as plt\n%matplotlib notebook\nplt.figure(1)\n\nvalues = []\nfor i in xrange(0,100000):\n values.append(Card().absolute_value)\n # values.append(random.randint(1,10))\n\nplt.title('Test; Each draw from the deck results in a value between 1 and 10 (uniformly distributed)')\nplt.hist(values)\n \n # , c='g', s=20, alpha=0.25, label='true positive')\nplt.show()\nplt.savefig(\"#1-test1.png\")", "Each draw from the deck results in a colour of red (probability 1/3) or black (probability 2/3).", "import matplotlib.pyplot as plt\n%matplotlib notebook\nplt.figure(2)\n\nvalues = []\nfor i in xrange(0,100000):\n if (Card().is_black):\n values.append(0.6666666)\n else:\n values.append(0.3333333)\n\nplt.title('Test; red (probability 1/3) or black (probability 2/3)')\nplt.hist(values)\n \n # , c='g', s=20, alpha=0.25, label='true positive')\nplt.show()\nplt.savefig(\"#1-test2.png\")", "Test: If the player’s sum exceeds 21, or becomes less than 1, then she “goes bust” and loses the game (reward -1)", "def play_test_player_bust():\n s = State(Card(True),Card(True))\n a = Actions.hit\n e = Environment()\n \n while not s.term:\n s, r = e.step(s, a)\n # print (\"state = %s, %s, %s\" % (s.player, s.dealer, s.term))\n return s, r\n\nimport matplotlib.pyplot as plt\n%matplotlib notebook\nplt.figure(3)\n\nvalues = []\nfor i in xrange(0,100000):\n s, r = play_test_player_bust()\n if s.player > 21:\n values.append(1)\n elif s.player < 1:\n values.append(1)\n else:\n values.append(-1)\n print \"error!!!!\"\n \n\nplt.title('Test; player busts > 21 or <1')\nplt.hist(values)\n \n # , c='g', s=20, alpha=0.25, label='true positive')\nplt.show()\nplt.savefig(\"#1-test3.png\")", "Test: If the player sticks then the dealer starts taking turns. The dealer always sticks on any sum of 17 or greater, and hits otherwise. If the dealer goes bust, then the player wins; otherwise, the outcome – win (reward +1), lose (reward -1), or draw (reward 0) – is the player with the largest sum.", "def play_test_player_stick():\n s = State(Card(True),Card(True))\n a = Actions.hit\n e = Environment()\n \n a = Actions.stick\n while not s.term:\n s, r = e.step(s, a)\n # print (\"state = %s, %s, %s\" % (s.player, s.dealer, s.term))\n return s, r\n\nimport matplotlib.pyplot as plt\n%matplotlib notebook\nplt.figure(4)\n\nvalues = []\nfor i in xrange(0,100000):\n s, r = play_test_player_stick()\n if s.dealer > 21 or s.dealer < 1:\n if r == 1:\n values.append(1)\n else:\n print \"error, player should have won\"\n print (\"state = %s, %s, %s. result = %s\" % (s.player, s.dealer, s.term, r))\n values.append(-1)\n elif s.player == s.dealer:\n if r == 0:\n values.append(1)\n else:\n print \"error, player should have drawn\"\n print (\"state = %s, %s, %s. result = %s\" % (s.player, s.dealer, s.term, r))\n values.append(-2)\n elif s.player > s.dealer:\n if r == 1:\n values.append(1)\n else:\n print \"error, player should have won\"\n print (\"state = %s, %s, %s. result = %s\" % (s.player, s.dealer, s.term, r))\n values.append(-3)\n elif s.player < s.dealer:\n if r == -1:\n values.append(1)\n else:\n print \"error, player should have lost\"\n print (\"state = %s, %s, %s. result = %s\" % (s.player, s.dealer, s.term, r))\n values.append(-4)\n else:\n print \"all cases should have been dealt with\"\n print (\"state = %s, %s, %s. 
result = %s\" % (s.player, s.dealer, s.term, r))\n values.append(-5)\n\n \n\nplt.title('Test; player sticks')\nplt.hist(values)\nplt.show()\nplt.savefig(\"#1-test4.png\")" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
numerical-mooc/assignment-bank-2015
cdigangi8/Managing_Epidemics_Model.ipynb
mit
[ "Copyright (c)2015 DiGangi, C.\nManaging Epidemics Through Mathematical Modeling\nThis lesson will examine the spread of an epidemic over time using Euler's method. The model is a system of non-linear ODEs which is based on the classic Susceptible, Infected, Recovered (SIR) model. This model introduces a new parameter to include vaccinations. We will examine the various paremeters of the model and define conditions necessary to erradicate the epidemic.\nIn this module we will also introduce ipywigets, an IPython library that allows you to add widgets to your notebooks and make them interactive! We will be using widgets to vary our parameters and see how changing different parameters affects the results of the model. This is a great technique for making quick and easy comparisons because you don't have to re-run your cell for the widget to make changes to the graph. \nIntroducing Model Parameters\nThe most important part of understanding any model is understanding the nomenclature that is associated with it. Please review the below terms carefully and make sure you understand what each parameter represents.\n$S$: Susceptible Individuals\n$V$: Vaccinated Individuals\n$I$: Infected Individuals\n$R$: Recovered Individuals with Immunity (Cannot get infected again)\n$p$: Fraction of individuals who are vaccinated at birth \n$e$: Fraction of the vaccinated individuals that are successfully vaccinated\n$\\mu$: Average Death Rate\n$\\beta$: Contact Rate (Rate at which Susceptibles come into contact with Infected)\n$\\gamma$: Recovery Rate\n$R_0$: Basic Reporoduction Number\n$N$: Total Population ($S + V + I + R$)\nBasic SVIR Model\nModel Assumptions\nThe model will make the following assumptions:\n\nThe population N is held constant\nThe birth rate and death rate are equal\nThe death rate is the same across all individuals (Infected do not have higher death rate)\nA susceptible individual that comes in contact with an infected automatically becomes infected\nOnce an individual has recovered they are forever immune and not reintroduced into the susceptible population\nVaccination does not wear off (vaccinated cannot become infected)\n\nSusceptible Equation\nLet's examine the model by component. First we will breakdown the equation for susceptible individuals. 
In order to find the rate of change of susceptible individuals we must calculate the number of newborns that are not vaccinated:\n$$(1-ep) \\mu N$$\nThe number of Susceptible Individuals that become infected:\n$$ \\beta IS_{infections}$$\nand finally the number of Susceptibles that die:\n$$ \\mu S_{deaths}$$\nTherefore the change in Susceptible Indivduals becomes:\n$$\\frac{dS}{dt} = (1-ep) \\mu N - \\beta IS - \\mu S$$\nVaccinated Equation\nNow examining the vaccinated individuals we start with the newborns that are vaccinated:\n$$ep \\mu N$$\nAnd the number of vaccinated individuals that die:\n$$\\mu V$$\nThe change in vaccinated individuals becomes:\n$$\\frac{dV}{dt} = ep \\mu N - \\mu V$$\nInfected Equation\nFor the infected individuals we start with the number of Susceptible individuals that are exposed and become infected:\n$$\\beta IS_{infections}$$\nNext we need the number of Infected individuals that recovered:\n$$\\gamma I_{recoveries}$$\nFinally we examine the infected who die:\n$$\\mu I_{deaths}$$\nPutting this all together we get the following equation:\n$$\\frac{dI}{dt} = \\beta IS - \\gamma I - \\mu I$$\nRecovered Equation\nThe number of recovered individuals first relies on the infected who recover:\n$$\\gamma I$$\nNext it depeds on the recovered individuals who die:\n$$\\mu R$$\nPutting this together yeilds the equation:\n$$\\frac{dR}{dt} = \\gamma I - \\mu R$$\nModel Summary\nThe complete model is as follows:\n$$\\frac{dS}{dt} = (1-ep) \\mu N - \\beta IS - \\mu S$$\n$$\\frac{dV}{dt} = ep \\mu N - \\mu V$$\n$$\\frac{dI}{dt} = \\beta IS - \\gamma I - \\mu I$$\n$$\\frac{dR}{dt} = \\gamma I - \\mu R$$\nThis is a very simplified model because of the complexities of infectious diseases. \nImplementing Numerical Solution with Euler!\nFor the numerical solution we will be using Euler's method since we are only dealing with time derivatives. Just to review, for Euler's method we replace the time derivative by the following:\n$$\\frac{dS}{dt} = \\frac{S^{n+1} - S^n}{\\Delta t}$$\nwhere n represents the discretized time.\nTherefore after we discretize our model we have:\n$$\\frac{S^{n+1} - S^n}{\\Delta t} = (1-ep) \\mu N - \\beta IS^n - \\mu S^n$$\n$$\\frac{V^{n+1} - V^n}{\\Delta t} = ep \\mu N - \\mu V^n$$\n$$\\frac{I^{n+1} - I^n}{\\Delta t} = \\beta I^nS^n - \\gamma I^n - \\mu I^n$$\n$$\\frac{R^{n+1} - R^n}{\\Delta t} = \\gamma I^n - \\mu R^n$$\nAnd now solving for the value at the next time step yeilds:\n$$S^{n+1} = S^n + \\Delta t \\left((1-ep) \\mu N - \\beta IS^n - \\mu S^n \\right)$$\n$$V^{n+1} = V^n + \\Delta t ( ep \\mu N - \\mu V^n)$$\n$$I^{n+1} = I^n + \\Delta t (\\beta I^nS^n - \\gamma I^n - \\mu I^n)$$\n$$R^{n+1} = R^n + \\Delta t ( \\gamma I^n - \\mu R^n)$$\nIf we want to implement this into our code we can build arrays to hold our system of equations. Assuming u is our solution matrix and f(u) is our right hand side:\n\\begin{align}\nu & = \\begin{pmatrix} S \\ V \\ I \\ R \\end{pmatrix} & f(u) & = \\begin{pmatrix} S^n + \\Delta t \\left((1-ep) \\mu N - \\beta IS^n - \\mu S^n \\right) \\ V^n + \\Delta t ( ep \\mu N - \\mu V^n) \\ I^n + \\Delta t (\\beta I^nS^n - \\gamma I^n - \\mu I^n) \\ R^n + \\Delta t ( \\gamma I^n - \\mu R^n) \\end{pmatrix}.\n\\end{align}\nSolve!\nNow we will implement this solution below. 
First we will import the necessary python libraries", "%matplotlib inline\nimport numpy \nfrom matplotlib import pyplot\nfrom matplotlib import rcParams\nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16", "Let us first define our function $f(u)$ that will calculate the right hand side of our model. We will pass in the array $u$ which contains our different populations and set them individually in the function:", "def f(u):\n \"\"\"Returns the right-hand side of the epidemic model equations.\n \n Parameters\n ----------\n u : array of float\n array containing the solution at time n.\n u is passed in and distributed to the different components by calling the individual value in u[i]\n \n Returns\n -------\n du/dt : array of float\n array containing the RHS given u.\n \"\"\"\n \n S = u[0]\n V = u[1]\n I = u[2]\n R = u[3]\n \n return numpy.array([(1-e*p)*mu*N - beta*I*S - mu*S,\n e*p*mu*N - mu*V,\n beta*I*S - gamma*I - mu*I,\n gamma*I - mu*R])", "Next we will define the euler solution as a function so that we can call it as we iterate through time.", "def euler_step(u, f, dt):\n \n \"\"\"Returns the solution at the next time-step using Euler's method.\n \n Parameters\n ----------\n u : array of float\n solution at the previous time-step.\n f : function\n function to compute the right hand-side of the system of equation.\n dt : float\n time-increment.\n \n Returns\n -------\n approximate solution at the next time step.\n \"\"\"\n \n return u + dt * f(u)", "Now we are ready to set up our initial conditions and solve! We will use a simplified population to start with.", "e = .1 #vaccination success rate\np = .75 # newborn vaccination rate\nmu = .02 # death rate\nbeta = .002 # contact rate\ngamma = .5 # Recovery rate\n\nS0 = 100 # Initial Susceptibles\nV0 = 50 # Initial Vaccinated\nI0 = 75 # Initial Infected\nR0 = 10 # Initial Recovered\n\nN = S0 + I0 + R0 + V0 #Total population (remains constant)", "Now we will implement our discretization using a for loop to iterate over time. We create a numpy array $u$ that will hold all of our values at each time step for each component (SVIR). We will use dt of 1 to represent 1 day and iterate over 365 days.", "T = 365 # Iterate over 1 year\ndt = 1 # 1 day\nN = int(T/dt)+1 # Total number of iterations\nt = numpy.linspace(0, T, N) # Time discretization\n\nu = numpy.zeros((N,4)) # Initialize the solution array with zero values\nu[0] = [S0, V0, I0, R0] # Set the initial conditions in the solution array\n\nfor n in range(N-1): # Loop through time steps\n u[n+1] = euler_step(u[n], f, dt) # Get the value for the next time step using our euler_step function\n\n", "Now we use python's pyplot library to plot all of our results on the same graph:", "pyplot.figure(figsize=(15,5))\npyplot.grid(True)\npyplot.xlabel(r'time', fontsize=18)\npyplot.ylabel(r'population', fontsize=18)\npyplot.xlim(0, 500)\npyplot.title('Population of SVIR model over time', fontsize=18)\npyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');\npyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');\npyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');\npyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');\npyplot.legend();", "The graph is interesting because it exhibits some oscillating behavior. You can see that under the given parameters, the number of infected people drops within the first few days. Notice that the susceptible individuals grow until about 180 days. 
The return of infection is a result of too many susceptible people in the population. The number of infected looks like it goes to zero but it never quite reaches zero. Therfore, when we have $\\beta IS$, when $S$ gets large enough the infection will start to be reintroduced into the population.\nIf we want to examine how the population changes under new conditions, we could re-run the below cell with new parameters:", "#Changing the following parameters\ne = .5 #vaccination success rate\ngamma = .1 # Recovery rate\n\nS0 = 100 # Initial Susceptibles\nV0 = 50 # Initial Vaccinated\nI0 = 75 # Initial Infected\nR0 = 10 # Initial Recovered\n\nN = S0 + I0 + R0 + V0 #Total population (remains constant)\n\nT = 365 # Iterate over 1 year\ndt = 1 # 1 day\nN = int(T/dt)+1 # Total number of iterations\nt = numpy.linspace(0, T, N) # Time discretization\n\nu = numpy.zeros((N,4)) # Initialize the solution array with zero values\nu[0] = [S0, V0, I0, R0] # Set the initial conditions in the solution array\n\nfor n in range(N-1): # Loop through time steps\n u[n+1] = euler_step(u[n], f, dt) # Get the value for the next time step using our euler_step function\n \npyplot.figure(figsize=(15,5))\npyplot.grid(True)\npyplot.xlabel(r'time', fontsize=18)\npyplot.ylabel(r'population', fontsize=18)\npyplot.xlim(0, 500)\npyplot.title('Population of SVIR model over time', fontsize=18)\npyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');\npyplot.plot(t,u[:,1], color='green', lw=2, label = 'Vaccinated');\npyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');\npyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');\npyplot.legend();", "However, every time we want to examine new parameters we have to go back and change the values within the cell and re run our code. This is very cumbersome if we want to examine how different parameters affect our outcome. If only there were some solution we could implement that would allow us to change parameters on the fly without having to re-run our code...\nipywidgets!\nWell there is a solution we can implement! Using a python library called ipywidgets we can build interactive widgets into our notebook that allow for user interaction. If you do not have ipywidets installed, you can install it using conda by simply going to the terminal and typing:\nconda install ipywidgets\nNow we will import our desired libraries", "from ipywidgets import interact, HTML, FloatSlider\nfrom IPython.display import clear_output, display", "The below cell is a quick view of a few different interactive widgets that are available. Notice that we must define a function (in this case $z$) where we call the function $z$ and parameter $x$, where $x$ is passed into the function $z$.", "def z(x):\n print(x)\ninteract(z, x=True) # Checkbox\ninteract(z, x=10) # Slider \ninteract(z, x='text') # Text entry\n\n\n", "Redefining the Model to Accept Parameters\nIn order to use ipywidgets and pass parameters in our functions we have to slightly redefine our functions to accept these changing parameters. This will ensure that we don't have to re-run any code and our graph will update as we change parameters!\nWe will start with our function $f$. This function uses our initial parameters $p$, $e$, $\\mu$, $\\beta$, and $\\gamma$. Previously, we used the global definition of these variables so we didn't include them inside the function. 
Now we will be passing in both our array $u$ (which holds the different populations) and a new array called $init$ (which holds our initial parameters).", "def f(u, init):\n \n \"\"\"Returns the right-hand side of the epidemic model equations.\n \n Parameters\n ----------\n u : array of float\n array containing the solution at time n.\n u is passed in and distributed to the different components by calling the individual value in u[i]\n init : array of float\n array containing the parameters for the model\n \n Returns\n -------\n du/dt : array of float\n array containing the RHS given u.\n \"\"\"\n \n S = u[0]\n V = u[1]\n I = u[2]\n R = u[3]\n \n p = init[0]\n e = init[1]\n mu = init[2]\n beta = init[3]\n gamma = init[4]\n \n return numpy.array([(1-e*p)*mu*N - beta*I*S - mu*S,\n e*p*mu*N - mu*V,\n beta*I*S - gamma*I - mu*I,\n gamma*I - mu*R])", "Now we will change our $euler step$ function which calls our function $f$ to include the new $init$ array that we are passing.", "def euler_step(u, f, dt, init):\n return u + dt * f(u, init)", "In order to make changes to our parameters, we will use slider widgets. Now that we have our functions set up, we will build another function which we will use to update the graph as we move our slider parameters. First we must build the sliders for each parameter. Using the FloatSlider method from ipywidgets, we can specify the min and max for our sliders and a step to increment.\nNext we build the update function which will take in the values of the sliders as they change and re-plot the graph. The function follows the same logic as before with the only difference being the changing parameters.\nFinally we specify the behavior of the sliders as they change values and call our update function.", "#Build slider for each parameter desired\n\npSlider = FloatSlider(description='p', min=0, max=1, step=0.1)\neSlider = FloatSlider(description='e', min=0, max=1, step=0.1)\nmuSlider = FloatSlider(description='mu', min=0, max=1, step=0.005)\nbetaSlider = FloatSlider(description='beta', min=0, max=.01, step=0.0005)\ngammaSlider = FloatSlider(description='gamma', min=0, max=1, step=0.05)\n\n#Update function will update the plotted graph every time a slider is changed\n\ndef update():\n \n \"\"\"Returns a graph of the new results for a given slider parameter change.\n \n Parameters\n ----------\n p : float value of slider widget\n e : float value of slider widget\n mu : float value of slider widget\n beta : float value of slider widget\n gamma : float value of slider widget\n \n Returns\n -------\n Graph representing new populations\n \"\"\"\n \n #the following parameters use slider.value to get the value of the given slider\n p = pSlider.value\n e = eSlider.value \n mu = muSlider.value\n beta = betaSlider.value\n gamma = gammaSlider.value\n \n #inital population\n S0 = 100\n V0 = 50\n I0 = 75\n R0 = 10\n\n N = S0 + I0 + R0 + V0\n\n #Iteration parameters\n T = 365\n dt = 1\n N = int(T/dt)+1\n t = numpy.linspace(0, T, N)\n\n u = numpy.zeros((N,4))\n u[0] = [S0, V0, I0, R0]\n \n #Array of parameters\n init = numpy.array([p,e,mu,beta,gamma])\n\n for n in range(N-1):\n u[n+1] = euler_step(u[n], f, dt, init)\n \n #Plot of population with gicen slider parameters\n pyplot.figure(figsize=(15,5))\n pyplot.grid(True)\n pyplot.xlabel(r'time', fontsize=18)\n pyplot.ylabel(r'population', fontsize=18)\n pyplot.xlim(0, 500)\n pyplot.title('Population of SVIR model over time', fontsize=18)\n pyplot.plot(t,u[:,0], color= 'red', lw=2, label = 'Susceptible');\n pyplot.plot(t,u[:,1], 
color='green', lw=2, label = 'Vaccinated');\n pyplot.plot(t,u[:,2], color='black', lw=2, label = 'Infected');\n pyplot.plot(t,u[:,3], color='blue', lw=2, label = 'Recovered');\n pyplot.legend();\n \n #Clear the output otherwise it will create a new graph every time so you will end up with multiple graphs\n clear_output(True) #This ensures it recreates the data on the initial graph\n \n#Run the update function on slider values change\npSlider.on_trait_change(update, 'value')\neSlider.on_trait_change(update, 'value')\nmuSlider.on_trait_change(update, 'value')\nbetaSlider.on_trait_change(update, 'value')\ngammaSlider.on_trait_change(update, 'value')\n\n \ndisplay(pSlider, eSlider, muSlider, betaSlider, gammaSlider) #Display sliders\nupdate() # Run initial function", "Notice that the graph starts with all parameters equal to zero. Unfortunately we cannot set the initial value of the slider. We can work around this using conditional statements to see if the slider values are equal to zero, then use different parameters.\nNotice that as you change the parameters the graph starts to come alive! This allows you to quickly compare how different parameters affect the results of our model!\nDig deeper?\nUsing the ipywidget library, create a new function that allows for user input. Using the python array of objects below, which contains various diseases and their initial parameters, have the user type in one of the disease names and return the graph corresponding to that disease! You can use the ipywidget text box to take in the value from the user and then pass that value to a function that will call out that disease from the object below!", "Disease = [{'name': \"Ebola\", 'p': 0, 'e': 0, 'mu': .04, 'beta': .005, 'gamma': 0}, \\\n {'name': \"Measles\", 'p': .9, 'e': .9, 'mu': .02, 'beta': .002, 'gamma': .9}, \\\n {'name': \"Tuberculosis\", 'p': .5, 'e': .2, 'mu': .06, 'beta': .001, 'gamma': .3}]\n\n#Example\n\ndef z(x):\n print(x)\ninteract(z, x = 'Text')", "References\n\n\nScherer, A. and McLean, A. \"Mathematical Models of Vaccination\", British Medical Bulletin Volume 62 Issue 1, 2015 Oxford University Press. Online\n\n\nBarba, L., \"Practical Numerical Methods with Python\" George Washington University \n\n\nFor a good explanation of some of the simpler models and overview of parameters, visit this Wiki Page\n\n\nSlider tutorial posted on github", "from IPython.core.display import HTML\ncss_file = 'numericalmoocstyle.css'\nHTML(open(css_file, \"r\").read())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nick-youngblut/SIPSim
ipynb/example/2_simulation-shotgun.ipynb
mit
[ "Description\n\nTime to make a simple SIP data simulation with the dataset that you alreadly created\n\n\nMake sure you have created the dataset before trying to run this notebook\n\nSetting variables\n\n\"workDir\" is the path to the working directory for this analysis (where the files will be download to)\nNOTE: MAKE SURE to modify this path to the directory where YOU want to run the example.\n\"nprocs\" is the number of processors to use (3 by default, since only 3 genomes). Change this if needed.", "workDir = '../../t/SIPSim_example/'\nnprocs = 3", "Init", "import os\n\n# Note: you will need to install `rpy2.ipython` and the necessary R packages (see next cell)\n%load_ext rpy2.ipython\n\n%%R\nlibrary(ggplot2)\nlibrary(dplyr)\nlibrary(tidyr)\n\nworkDir = os.path.abspath(workDir)\nif not os.path.isdir(workDir):\n os.makedirs(workDir)\n%cd $workDir \n\ngenomeDir = os.path.join(workDir, 'genomes_rn')", "Experimental design\n\nHow many gradients?\nWhich are labeled treatments & which are controls?\nFor this tutorial, we'll keep things simple and just simulate one control & one treatment\nFor the labeled treatment, 34% of the taxa (1 of 3) will incorporate 50% isotope\n\nThe script below (\"SIPSim incorpConfigExample\") is helpful for making simple experimental designs", "%%bash\nsource activate SIPSim\n\n# creating example config\nSIPSim incorp_config_example \\\n --percTaxa 34 \\\n --percIncorpUnif 50 \\\n --n_reps 1 \\\n > incorp.config\n\n!cat incorp.config", "Pre-fractionation communities\n\nWhat is the relative abundance of taxa in the pre-fractionation samples?", "%%bash\nsource activate SIPSim\n\nSIPSim communities \\\n --config incorp.config \\\n ./genomes_rn/genome_index.txt \\\n > comm.txt \n\n!cat comm.txt", "Note: \"library\" = gradient\nSimulating gradient fractions\n\nBD size ranges for each fraction (& start/end of the total BD range)", "%%bash \nsource activate SIPSim\n\nSIPSim gradient_fractions \\\n --BD_min 1.67323 \\\n --BD_max 1.7744 \\\n comm.txt \\\n > fracs.txt \n\n!head -n 6 fracs.txt", "Simulating fragments\n\nSimulating shotgun-fragments\nFragment length distribution: skewed-normal\n\nPrimer sequences (wait... 
what?)\n\nIf you were to simulate amplicons, instead of shotgun fragments, you can use something like the following:", "# primers = \"\"\">515F\n# GTGCCAGCMGCCGCGGTAA\n# >806R\n# GGACTACHVGGGTWTCTAAT\n# \"\"\"\n\n# F = os.path.join(workDir, '515F-806R.fna')\n# with open(F, 'wb') as oFH:\n# oFH.write(primers)\n \n# print 'File written: {}'.format(F)", "Simulation", "%%bash -s $genomeDir\nsource activate SIPSim \n\n# skewed-normal\nSIPSim fragments \\\n $1/genome_index.txt \\\n --fp $1 \\\n --fld skewed-normal,9000,2500,-5 \\\n --flr None,None \\\n --nf 1000 \\\n --debug \\\n --tbl \\\n > shotFrags.txt \n\n!head -n 5 shotFrags.txt\n!tail -n 5 shotFrags.txt", "Plotting fragments", "%%R -w 700 -h 350\n\ndf = read.delim('shotFrags.txt')\n\np = ggplot(df, aes(fragGC, fragLength, color=taxon_name)) +\n geom_density2d() +\n scale_color_discrete('Taxon') +\n labs(x='Fragment G+C', y='Fragment length (bp)') +\n theme_bw() +\n theme(\n text = element_text(size=16)\n )\nplot(p)", "Note: for information on what's going on in this config file, use the command: SIPSim isotope_incorp -h\nConverting fragments to a 2d-KDE\n\nEstimating the joint-probabilty for fragment G+C & length", "%%bash \nsource activate SIPSim\n\nSIPSim fragment_KDE \\\n shotFrags.txt \\\n > shotFrags_kde.pkl\n\n!ls -thlc shotFrags_kde.pkl", "Note: The generated list of KDEs (1 per taxon per gradient) are in a binary file format\nTo get a table of length/G+C values, use the command: SIPSim KDE_sample\n\nAdding diffusion\n\nSimulating the BD distribution of fragments as Gaussian distributions. \nOne Gaussian distribution per homogeneous set of DNA molecules (same G+C and length)\n\n\nSee the README if you get MKL errors with the next step and re-run the fragment KDE generation step", "%%bash \nsource activate SIPSim\n\nSIPSim diffusion \\\n shotFrags_kde.pkl \\\n --np 3 \\\n > shotFrags_kde_dif.pkl \n\n!ls -thlc shotFrags_kde_dif.pkl ", "Plotting fragment distribution w/ and w/out diffusion\nMaking a table of fragment values from KDEs", "n = 100000\n\n%%bash -s $n\nsource activate SIPSim\n\nSIPSim KDE_sample -n $1 shotFrags_kde.pkl > shotFrags_kde.txt\nSIPSim KDE_sample -n $1 shotFrags_kde_dif.pkl > shotFrags_kde_dif.txt\n\nls -thlc shotFrags_kde*.txt", "Plotting\n\nplotting KDE with or without diffusion added", "%%R\ndf1 = read.delim('shotFrags_kde.txt', sep='\\t')\ndf2 = read.delim('shotFrags_kde_dif.txt', sep='\\t')\n\ndf1$data = 'no diffusion'\ndf2$data = 'diffusion'\ndf = rbind(df1, df2) %>%\n gather(Taxon, BD, Clostridium_ljungdahlii_DSM_13528, \n Escherichia_coli_1303, Streptomyces_pratensis_ATCC_33331) %>%\n mutate(Taxon = gsub('_(ATCC|DSM)', '\\n\\\\1', Taxon))\n\ndf %>% head(n=3)\n\n%%R -w 800 -h 300\n\np = ggplot(df, aes(BD, fill=data)) +\n geom_density(alpha=0.25) +\n facet_wrap( ~ Taxon) + \n scale_fill_discrete('') +\n theme_bw() +\n theme(\n text=element_text(size=16),\n axis.title.y = element_text(vjust=1),\n axis.text.x = element_text(angle=50, hjust=1)\n )\n\nplot(p)", "Adding diffusive boundary layer (DBL) effects\n\n'smearing' effects", "%%bash \nsource activate SIPSim\n\nSIPSim DBL \\\n shotFrags_kde_dif.pkl \\\n --np 3 \\\n > shotFrags_kde_dif_DBL.pkl\n\n# viewing DBL logs\n!ls -thlc *pkl", "Adding isotope incorporation\n\nUsing the config file produced in the Experimental Design section", "%%bash\nsource activate SIPSim\n\nSIPSim isotope_incorp \\\n --comm comm.txt \\\n --np 3 \\\n shotFrags_kde_dif_DBL.pkl \\\n incorp.config \\\n > shotFrags_KDE_dif_DBL_inc.pkl\n\n!ls -thlc *.pkl", "Note: statistics on how much 
isotope was incorporated by each taxon are listed in \"BD-shift_stats.txt\"", "%%R\ndf = read.delim('BD-shift_stats.txt', sep='\\t')\ndf ", "Making an OTU table\n\nNumber of amplicon-fragment in each fraction in each gradient\nAssuming a total pre-fractionation community size of 1e7", "%%bash\nsource activate SIPSim\n\nSIPSim OTU_table \\\n --abs 1e7 \\\n --np 3 \\\n shotFrags_KDE_dif_DBL_inc.pkl \\\n comm.txt \\\n fracs.txt \\\n > OTU.txt\n\n!head -n 7 OTU.txt", "Plotting fragment count distributions", "%%R -h 350 -w 750\n\ndf = read.delim('OTU.txt', sep='\\t')\n\np = ggplot(df, aes(BD_mid, count, fill=taxon)) +\n geom_area(stat='identity', position='dodge', alpha=0.5) +\n scale_x_continuous(expand=c(0,0)) +\n labs(x='Buoyant density') +\n labs(y='Shotgun fragment counts') +\n facet_grid(library ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16),\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank()\n )\nplot(p)", "Notes: \n\nThis plot represents the theoretical number of amplicon-fragments at each BD across each gradient. \nDerived from subsampling the fragment BD proability distributions generated in earlier steps.\nThe fragment BD distribution of one of the 3 taxa should have shifted in Gradient 2 (the treatment gradient).\nThe fragment BD distributions of the other 2 taxa should be approx. the same between the two gradients.\n\nViewing fragment counts as relative quantities", "%%R -h 350 -w 750\n\np = ggplot(df, aes(BD_mid, count, fill=taxon)) +\n geom_area(stat='identity', position='fill') +\n scale_x_continuous(expand=c(0,0)) +\n scale_y_continuous(expand=c(0,0)) +\n labs(x='Buoyant density') +\n labs(y='Shotgun fragment counts') +\n facet_grid(library ~ .) +\n theme_bw() +\n theme( \n text = element_text(size=16),\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank()\n )\nplot(p)", "Adding effects of PCR\n\nThis will alter the fragment counts based on the PCR kinetic model of:\nSuzuki MT, Giovannoni SJ. (1996). Bias caused by template annealing in the\n amplification of mixtures of 16S rRNA genes by PCR. Appl Environ Microbiol\n 62:625-630.", "%%bash\nsource activate SIPSim\n\nSIPSim OTU_PCR OTU.txt > OTU_PCR.txt \n\n!head -n 5 OTU_PCR.txt\n!tail -n 5 OTU_PCR.txt", "Notes\n\nThe table is in the same format as with the original OTU table, but the counts and relative abundances should be altered.\n\nSimulating sequencing\n\nSampling from the OTU table", "%%bash\nsource activate SIPSim\n\nSIPSim OTU_subsample OTU_PCR.txt > OTU_PCR_sub.txt\n\n!head -n 5 OTU_PCR_sub.txt", "Notes\n\nThe table is in the same format as with the original OTU table, but the counts and relative abundances should be altered.\n\nPlotting", "%%R -h 350 -w 750\n\ndf = read.delim('OTU_PCR_sub.txt', sep='\\t')\n\n\np = ggplot(df, aes(BD_mid, rel_abund, fill=taxon)) +\n geom_area(stat='identity', position='fill') +\n scale_x_continuous(expand=c(0,0)) +\n scale_y_continuous(expand=c(0,0)) +\n labs(x='Buoyant density') +\n labs(y='Taxon relative abundances') +\n facet_grid(library ~ .) 
+\n theme_bw() +\n theme( \n text = element_text(size=16),\n axis.title.y = element_text(vjust=1), \n axis.title.x = element_blank()\n )\nplot(p)", "Misc\nA 'wide' OTU table\n\nIf you want to reformat the OTU table to a more standard 'wide' format (as used in Mothur or QIIME):", "%%bash\nsource activate SIPSim\n\nSIPSim OTU_wide_long -w \\\n OTU_PCR_sub.txt \\\n > OTU_PCR_sub_wide.txt\n\n!head -n 4 OTU_PCR_sub_wide.txt", "SIP metadata\n\nIf you want to make a table of SIP sample metadata", "%%bash\nsource activate SIPSim\n\nSIPSim OTU_sample_data \\\n OTU_PCR_sub.txt \\\n > OTU_PCR_sub_meta.txt\n\n!head OTU_PCR_sub_meta.txt", "Other SIPSim commands\nSIPSim -l will list all available SIPSim commands", "%%bash\nsource activate SIPSim\n\nSIPSim -l" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zerothi/ts-tbt-sisl-tutorial
TB_06/run.ipynb
gpl-3.0
[ "import sisl\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom functools import partial\n%matplotlib inline", "TBtrans is capable of calculating transport in $N\\ge 1$ electrode systems. In this example we will explore a 4-terminal graphene GNR cross-bar (one zGNR, the other aGNR) system.", "graphene = sisl.geom.graphene(orthogonal=True)\n\nR = [0.1, 1.43]\nhop = [0., -2.7]", "Create the two electrodes in $x$ and $y$ directions. We will force the systems to be nano-ribbons, i.e. only periodic along the ribbon. In sisl there are two ways of accomplishing this.\n\nExplicitly set number of auxiliary supercells\nAdd vacuum beyond the orbital interaction ranges\n\nThe below code uses the first method. \nPlease see if you can change the creation of elec_x by adding vacuum.\nHINT: Look at the documentation for the sisl.Geometry and search for vacuum. To know the orbital distance look up maxR in the geometry class as well.", "elec_y = graphene.tile(3, axis=0)\nelec_y.set_nsc([1, 3, 1])\nelec_y.write('elec_y.xyz')\nelec_x = graphene.tile(5, axis=1)\nelec_x.set_nsc([3, 1, 1])\nelec_x.write('elec_x.xyz')", "Subsequently we create the electronic structure.", "H_y = sisl.Hamiltonian(elec_y)\nH_y.construct((R, hop))\nH_y.write('ELEC_Y.nc') \nH_x = sisl.Hamiltonian(elec_x)\nH_x.construct((R, hop))\nH_x.write('ELEC_X.nc')", "Now we have created the electronic structure for the electrodes. All that is needed is the electronic structure of the device region, i.e. the crossing nano-ribbons.", "dev_y = elec_y.tile(30, axis=1)\ndev_y = dev_y.translate( -dev_y.center(what='xyz') )\ndev_x = elec_x.tile(18, axis=0)\ndev_x = dev_x.translate( -dev_x.center(what='xyz') )", "Remove any atoms that are duplicated, i.e. when we overlay these two geometries some atoms are the same.", "device = dev_y.add(dev_x)\ndevice.set_nsc([1,1,1])\nduplicates = []\nfor ia in dev_y:\n idx = device.close(ia, 0.1)\n if len(idx) > 1:\n duplicates.append(idx[1])\ndevice = device.remove(duplicates)", "Can you explain why set_nsc([1, 1, 1]) is called? And if so, is it necessary to do this step?\n\nEnsure the lattice vectors are big enough for plotting.\nTry and convince your-self that the lattice vectors are unimportant for tbtrans in this example.\nHINT: what is the periodicity?", "device = device.add_vacuum(70, 0).add_vacuum(20, 1)\ndevice = device.translate( device.center(what='cell') - device.center(what='xyz') )\ndevice.write('device.xyz')", "Since this system has 4 electrodes we need to tell tbtrans where the 4 electrodes are in the device. The following lines prints out the fdf-lines that are appropriate for each of the electrodes (RUN.fdf is already filled correctly):", "print('elec-Y-1: semi-inf -A2: {}'.format(1))\nprint('elec-Y-2: semi-inf +A2: end {}'.format(len(dev_y)))\nprint('elec-X-1: semi-inf -A1: {}'.format(len(dev_y) + 1))\nprint('elec-X-2: semi-inf +A1: end {}'.format(-1))\n\nH = sisl.Hamiltonian(device)\nH.construct([R, hop])\nH.write('DEVICE.nc')", "Exercises\nIn this example we have more than 1 transmission path. Before you run the below code which plots all relevant transmissions ($T_{ij}$ for $j>i$), consider if there are any symmetries, and if so, determine how many different transmission spectra you should expect? Please plot the geometry using your favourite geometry viewer (molden, Jmol, ...). 
The answer is not so trivial.", "tbt = sisl.get_sile('siesta.TBT.nc')", "Make easy function calls for plotting energy resolved quantites:", "E = tbt.E\nEplot = partial(plt.plot, E)\n\n# Make a shorthand version for the function (simplifies the below line)\nT = tbt.transmission\nt12, t13, t14, t23, t24, t34 = T(0, 1), T(0, 2), T(0, 3), T(1, 2), T(1, 3), T(2, 3)\nEplot(t12, label=r'$T_{12}$'); Eplot(t13, label=r'$T_{13}$'); Eplot(t14, label=r'$T_{14}$');\nEplot(t23, label=r'$T_{23}$'); Eplot(t24, label=r'$T_{24}$'); \nEplot(t34, label=r'$T_{34}$');\nplt.ylabel('Transmission'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); plt.legend();", "In RUN.fdf we have added the flag TBT.T.All which tells tbtrans to calculate all transmissions, i.e. between all $i\\to j$ for all $i,j \\in {1,2,3,4}$. This flag is by default False, why?\nCreate 3 plots each with $T_{1j}$ and $T_{j1}$ for all $j\\neq1$.", "# Insert plot of T12 and T21\n\n# Insert plot of T13 and T31\n\n# Insert plot of T14 and T41", "Considering symmetries, try to figure out which transmissions ($T_{ij}$) are unique?\nPlot the bulk DOS for the 2 differing electrodes.\nPlot the spectral DOS injected by all 4 electrodes.", "# Helper routines, this makes BDOS(...) == tbt.BDOS(..., norm='atom')\nBDOS = partial(tbt.BDOS, norm='atom')\nADOS = partial(tbt.ADOS, norm='atom')", "Bulk density of states:", "Eplot(..., label=r'$BDOS_1$');\nEplot(..., label=r'$BDOS_2$');\nplt.ylabel('DOS [1/eV/N]'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); plt.legend();", "Spectral density of states for all electrodes:\n- As a final exercise you can explore the details of the density of states for single atoms. Take for instance atom 205 (204 in Python index) which is in both GNRs at the crossing.\nFeel free to play around with different atoms, subset of atoms (pass a list) etc.", "Eplot(..., label=r'$ADOS_1$');\n...\nplt.ylabel('DOS [1/eV/N]'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); plt.legend();", "For 2D structures one can easily plot the DOS per atom via a scatter plot in matplotlib, here is the skeleton code for that, you should select an energy point and figure out how to extract the atom resolved DOS (you will need to look-up the documentation for the ADOS method to figure out which flag to use.", "Eidx = tbt.Eindex(...)\nADOS = [tbt.ADOS(i, ....) for i in range(4)]\nf, axs = plt.subplots(2, 2, figsize=(10, 10))\na_xy = tbt.geometry.xyz[tbt.a_dev, :2]\nfor i in range(4):\n A = ADOS[i]\n A *= 100 / A.max() # normalize to maximum 100 (simply for plotting)\n axs[i // 2][i % 2].scatter(a_xy[:, 0], a_xy[:, 1], A, c=\"bgrk\"[i], alpha=.5);\nplt.xlabel('x [Ang]'); plt.ylabel('y [Ang]'); plt.axis('equal');" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
lcharleux/numerical_analysis
doc/Traitement_donnees/.ipynb_checkpoints/TD Traitement de Données-checkpoint.ipynb
gpl-2.0
[ "%load_ext autoreload\n%matplotlib nbagg\n%autoreload 2\nimport copy\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n", "MECA653: Traitement de donnée - Analyse de la base de donnée de la sécurité routière\nL'objectif ici est d'analyser les données fournies par le ministère de l'intérieure sur les accidents de la route resencés en 2016.\nLe Module Panda sera largement utilisé.\nSources\nLien vers data.gouv.fr :\nhttps://www.data.gouv.fr/fr/datasets/base-de-donnees-accidents-corporels-de-la-circulation/#_\nDocumentation de la base de donnée : \nDATA/Description_des_bases_de_donnees_ONISR_-Annees_2005_a_2016.pdf\n1 - Charger les bases de donnée", "dfc = pd.read_csv('./DATA/caracteristiques_2016.csv') \ndfu = pd.read_csv('./DATA/usagers_2016.csv')\ndfl = pd.read_csv('./DATA/lieux_2016.csv')\ndf = pd.concat([dfu, dfc, dfl], axis=1)\n\ndfc.tail()\n\ndfu.head()\n\ndfl.tail()\n\ndf.head()\n\ndf = pd.concat([df, dfl], axis=1)\ndf.head()", "2 - Quelle est la poportion Homme/Femme impliquée dans les accidents ? Représenter le résultat sous forme graphique.", "# methode pas propre\n(h,c)=df[df.sexe==1].shape\n(f,c)=df[df.sexe==2].shape\n\n(t,c)=df.shape\n\nprint('h/t=', h/t)\nprint('f/t=', f/t)\n\n# methode panda\ndf[\"sexe\"].value_counts(normalize=True)\n\nfig = plt.figure()\ndf[df.grav==2].sexe.value_counts(normalize=True).plot.pie(labels=['Homme', 'Femme'], colors= ['r', 'g'], autopct='%.2f')", "2 - Quelle est la poportion des accidents ayant eu lieu le jour, la nuit ou a l'aube/crépuscule? Représenter le résultat sous forme graphique.", "dlum = df[\"lum\"].value_counts(normalize=True)\ndlum = dlum.sort_index()\n\ndlum\n\ndlum[3] = dlum[3:5].sum()\nfig = plt.figure()\ndlum[1:3].plot.pie(labels=['Jour','Aube/crépuscule', 'Nuit'], colors= ['y', 'g' , 'b'], autopct='%.2f')", "3- Position géographique", "df.lat=df.lat/100000\ndf.long=df.long/100000\ndfp = df[df.gps=='M']\ndfp = dfp[['lat','long']]\ndfp = dfp[(dfp.long!=0.0) & (dfp.lat!=0.0)]\ndfp.head()\n\n#fig = plt.figure()\ndfp.plot.scatter(x='long', y='lat',s=1);\n\ndf[(df.long!=0.0) & (df.lat!=0.0) & (df.gps=='M')].plot.scatter(x='long', y='lat',s=.5);" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
surfer1-dev/Stochastic_estimation_of_equipment_costs
costs.ipynb
mit
[ "Stochastic estimation of equipment costs\nby A. zayer\nPurpose:\nEstimation of equipment costs that are part of a project bid.\nFind a realistic price not too high to get the project and not too low to keep a healthy margin profit.", "%matplotlib inline\nimport numpy as np\nimport pymc3 as pm\nimport pandas as pd\nimport seaborn as sns\nsns.set(color_codes=True)\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport theano\nfrom pymc3 import Model, Normal, HalfNormal\nfrom pymc3 import traceplot\nfrom pymc3 import summary", "The total cost is made up of the following elements:\nEquipment cost: \nFrom 30,000 up to 50,000\nSpare parts cost for 5 years:\nFrom 16000 to 18000\nEach year is sampled separately from the normal distribution.\nMaintenance Charges for 5 years:\nAnnual rate of 12% of the equipment price (not including the spares).", "n_years = 5\nmaint_rate = 0.12\n", "Total Cost = Equipment + Spares + Maintenance\nWhere\nSpares = Spares for Year 1 + Spares for Year 2 + Spares for Year 3 + Spares for Year 4 + Spares for Year 5\nMaintenance = Equipment * Maintenance Rate * Number of Years\nThe objective of this simulation is to vary the price of the equipment and spares by drawing samples from the normal distribution", "model = Model()\n\nwith model:\n \n # Priors for unknown model parameters\n equip = Normal('equip', mu=40000, sd=4)\n spare1 = Normal('spare1', mu=17000, sd=500)\n spare2 = Normal('spare2', mu=17000, sd=500)\n spare3 = Normal('spare3', mu=17500, sd=500)\n spare4 = Normal('spare4', mu=17500, sd=500)\n spare5 = Normal('spare5', mu=18000, sd=500)\n main = pm.Deterministic('Maintenance', equip*maint_rate*n_years)\n Cost = pm.Deterministic('Total Cost', main+equip+(spare1+spare2+spare3+spare4+spare5))\n\nwith model:\n trace = pm.sample(2000)\n\ntraceplot(trace);\n\nsummary(trace)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
archman/phantasy
docs/source/src/notebooks/phantasy_element.ipynb
bsd-3-clause
[ "Element\nImport modules/packages", "import phantasy", "Model Machine", "mp = phantasy.MachinePortal(machine='FRIB_FE', segment='LEBT')", "Get Element by type", "mp.get_all_types()", "Example: Electric static Quadrupole\nGet the first equad", "equads = mp.get_elements(type='EQUAD')\n\nequads\n\n# first equad\nequad0 = equads[0]\nequad0", "Investigate the equad", "print(\"Index : %d\" % equad0.index)\nprint(\"Name : %s\" % equad0.name)\nprint(\"Family : %s\" % equad0.family)\nprint(\"Location : (begin) %f (end) %f\" % (equad0.sb, equad0.se))\nprint(\"Length : %f\" % equad0.length)\nprint(\"Groups : %s\" % equad0.group)\nprint(\"PVs : %s\" % equad0.pv())\nprint(\"Tags : %s\" % equad0.tags)\nprint(\"Fields : %s\" % equad0.fields)", "Dynamic field: V\nAll available dynamic fields could be retrieved by equad0.fields (for equad0 here, there is only one field, i.e. V).", "equad0.V", "Get values\nIf only readback value is of interest, Approach 1 is recommanded and most natural.", "# Approach 1: dynamic field feature (readback PV)\nprint(\"Readback: %f\" % equad0.V)\n\n# Approach 2: caget(pv_name)\npv_rdbk = equad0.pv(field='V', handle='readback')\nprint(\"Readback: %s\" % phantasy.caget(pv_rdbk))\n\n# Approach 3: CaField\nv_field = equad0.get_field('V')\nprint(\"Readback: %f\" % v_field.get(handle='readback'))\nprint(\"Setpoint: %f\" % v_field.get(handle='setpoint'))\nprint(\"Readset : %f\" % v_field.get(handle='readset'))", "Set values\nAlways Approach 1 is recommanded.", "# Save orignal set value for 'V' field\nv0 = equad0.get_field('V').get(handle='setpoint')\n\n# Approach 1: dynamic field feature (setpoint PV)\nequad0.V = 2000\n\n# Approach 2: caput(pv_name)\npv_cset = equad0.pv(field='V', handle='setpoint')\nphantasy.caput(pv_cset, 1000)\n\n# Approach 3: CaField\nv_field = equad0.get_field('V')\nv_field.set(handle='setpoint', value=1500)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tgsmith61591/skutil
doc/examples/decomposition/SelectivePCA weighting.ipynb
bsd-3-clause
[ "Leveraging SelectivePCA's weight=True capability\nSome algorithms intrinsically treat each feature with the same amount of importance. For many such algorithms, i.e., clustering algorithms, this is a fallacy and can cause inappropriate results. The following notebook demonstrates skutil's weighting capability via SelectivePCA", "from __future__ import print_function\nimport numpy as np\nimport pandas as pd\nimport sklearn\nfrom sklearn.datasets import load_iris\nsklearn.__version__", "Preparing the data for modeling", "iris = load_iris()\nX, y = iris.data, iris.target # this is unsupervised; we aren't going to split", "Basic k-Means, no weighting:\nHere, we'll run a basic k-Means (k=3) preceded by a default SelectivePCA (no weighting)", "from sklearn.metrics import accuracy_score\nfrom skutil.decomposition import SelectivePCA\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.cluster import KMeans\n\n# define our default pipe\npca = SelectivePCA(n_components=0.99)\npipe = Pipeline([\n ('pca', pca),\n ('model', KMeans(3))\n ])\n\n# fit the pipe\npipe.fit(X, y)\n\n# predict and score\nprint('Train accuracy: %.5f' % accuracy_score(y, pipe.predict(X)))", "This is a nice accuracy, but not a stellar one... Surely we can improve this, right? Part of the problem is that clustering (distance metrics) treats all the features equally. Since PCA intrinsically orders features based on importance, we can weight them according to the variability they each explain. Thus, the most important features will be up weighted, and the least important features will be down weighted.\nHere is the explained_variance_ratio_ vector:", "pca.pca_.explained_variance_ratio_", "And here's what our weighting vector will ultimately look like:", "weights = pca.pca_.explained_variance_ratio_\nweights -= np.median(weights)\nweights += 1\nweights", "k-Means with weighting:", "# define our weighted pipe\npca = SelectivePCA(n_components=0.99, weight=True)\npipe = Pipeline([\n ('pca', pca),\n ('model', KMeans(3))\n ])\n\n# fit the pipe\npipe.fit(X, y)\n\n# predict and score\nprint('Train accuracy (with weighting): %.5f' % accuracy_score(y, pipe.predict(X)))", "Note that this is not limited just to KMeans or even to clustering tasks. Any algorithm that does not intrinsically perform any kind of regularization or other feature selection may be subject to this trap, and SelectivePCA's weighting can help!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/workshops
tfx_labs/Lab_2_Intro_to_Apache_Beam.ipynb
apache-2.0
[ "Copyright &copy; 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TFX – Introduction to Apache Beam\nTFX is designed to be scalable to very large datasets which require substantial resources. Distributed pipeline frameworks such as Apache Beam offer the ability to distribute processing across compute clusters and apply the resources required. Many of the standard TFX components use Apache Beam, and custom components that you may write may also benefit from using Apache Beam for distibuted processing.\nThis notebook introduces the concepts and code patterns for developing with the Apache Beam Python API.\nSetup\nFirst, we install the necessary packages, download data, import modules and set up paths.\nInstall TensorFlow and Apache Beam\n\nNote\nBecause of some of the updates to packages you must use the button at the bottom of the output of this cell to restart the runtime. Following restart, you should rerun this cell.", "!pip install -q -U \\\n tensorflow==2.0.0 \\\n apache-beam", "Import packages\nWe import necessary packages, including Beam.", "from datetime import datetime\nimport os\nimport pprint\nimport tempfile\nimport urllib\npp = pprint.PrettyPrinter()\n\nimport tensorflow as tf\n\nimport apache_beam as beam\nfrom apache_beam import pvalue\nfrom apache_beam.runners.interactive.display import pipeline_graph\nimport graphviz\n\nprint('TensorFlow version: {}'.format(tf.__version__))\nprint('Beam version: {}'.format(beam.__version__))", "Create a Beam Pipeline\nCreate a pipeline, including a simple PCollection and a ParDo() transform.\n\nA PCollection&lt;T&gt; is an immutable collection of values of type T. A PCollection can contain either a bounded or unbounded number of elements. 
Bounded and unbounded PCollections are produced as the output of PTransforms (including root PTransforms like Read and Create), and can be passed as the inputs of other PTransforms.\nParDo is the core element-wise transform in Apache Beam, invoking a user-specified function on each of the elements of the input PCollection to produce zero or more output elements, all of which are collected into the output PCollection.\n\nFirst, use the .run() method.", "first_pipeline = beam.Pipeline()\n\nlines = (first_pipeline\n | \"Create\" >> beam.Create([\"Hello\", \"World\", \"!!!\"]) # PCollection\n | \"Print\" >> beam.ParDo(print)) # ParDo transform\n\nresult = first_pipeline.run()\nresult.state", "Display the structure of this pipeline.", "def display_pipeline(pipeline):\n graph = pipeline_graph.PipelineGraph(pipeline)\n return graphviz.Source(graph.get_dot())\n\ndisplay_pipeline(first_pipeline)", "Next, invoke run inside a with block.", "with beam.Pipeline() as with_pipeline:\n lines = (with_pipeline\n | \"Create\" >> beam.Create([\"Hello\", \"World\", \"!!!\"])\n | \"Print\" >> beam.ParDo(print))\n\ndisplay_pipeline(with_pipeline)", "Exercise 1 — Creating and Running Your Beam Pipeline\n\nBuild a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.\nAdd a step in the pipeline to square each item.\nDisplay the pipeline.\n\nWarning: the ParDo() method must either return None or a list.\n\nSolution:", "with beam.Pipeline() as with_pipeline:\n lines = (with_pipeline\n | \"Create\" >> beam.Create(range(10 + 1))\n | \"Square\" >> beam.ParDo(lambda x: [x ** 2])\n | \"Print\" >> beam.ParDo(print))\n\ndisplay_pipeline(with_pipeline)", "Core Transforms\n\nBeam has a set of core transforms on data that is contained in PCollections. In the cells that follow, explore several core transforms and observe the results in order to develop some understanding and intuition for what each transform does.\nMap\nThe Map transform applies a simple 1-to-1 mapping function over each element in the collection. Map accepts a function that returns a single element for every input element in the PCollection. You can pass functions with multiple arguments to Map. 
They are passed as additional positional arguments or keyword arguments to the function.\nFirst, compare the results of a ParDo transform and a Map transform.", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([1, 2, 3])\n | \"Multiply\" >> beam.ParDo(lambda number: [number * 2]) # ParDo with integers\n | \"Print\" >> beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([1, 2, 3])\n | \"Multiply\" >> beam.Map(lambda number: number * 2) # Map with integers\n | \"Print\" >> beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([\"Hello Beam\", \"This is cool\"])\n | \"Split\" >> beam.ParDo(lambda sentence: sentence.split()) # ParDo with strings\n | \"Print\" >> beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([\"Hello Beam\", \"This is cool\"])\n | \"Split\" >> beam.Map(lambda sentence: sentence.split()) # Map with strings\n | \"Print\" >> beam.ParDo(print))\n\nclass BreakIntoWordsDoFn(beam.DoFn):\n def process(self, element):\n return element.split()\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([\"Hello Beam\", \"This is cool\"])\n | \"Split\" >> beam.ParDo(BreakIntoWordsDoFn()) # Apply a DoFn with a process method\n | \"Print\" >> beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([\"Hello Beam\", \"This is cool\"])\n | \"Split\" >> beam.FlatMap(lambda sentence: sentence.split()) # Compare to a FlatMap\n | \"Print\" >> beam.ParDo(print))", "GroupByKey\nGroupByKey takes a keyed collection of elements and produces a collection where each element consists of a key and all values associated with that key.\nGroupByKey is a transform for processing collections of key/value pairs. It’s a parallel reduction operation, analogous to the Shuffle phase of a Map/Shuffle/Reduce-style algorithm. The input to GroupByKey is a collection of key/value pairs that represents a multimap, where the collection contains multiple pairs that have the same key, but different values. Given such a collection, you use GroupByKey to collect all of the values associated with each unique key.\nGroupByKey is a good way to aggregate data that has something in common. 
For example, if you have a collection that stores records of customer orders, you might want to group together all the orders from the same postal code (wherein the “key” of the key/value pair is the postal code field, and the “value” is the remainder of the record).", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(['apple', 'ball', 'car', 'bear', 'cheetah', 'ant'])\n | beam.Map(lambda word: (word[0], word))\n | beam.GroupByKey()\n | beam.ParDo(print))", "Exercise 2 — Group Items by Key\n\nBuild a Beam pipeline that creates a PCollection containing integers 0 to 10 and prints them.\nAdd a step in the pipeline to add a key to each item that will indicate whether it is even or odd.\nUse GroupByKey to group even items together and odd items together.\n\n\nSolution:", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(10 + 1))\n | beam.Map(lambda x: (\"odd\" if x % 2 else \"even\", x))\n | beam.GroupByKey()\n | beam.ParDo(print))", "CoGroupByKey can combine multiple PCollections, assuming every element is a tuple whose first item is the key to join on.", "pipeline = beam.Pipeline()\n\nfruits = pipeline | 'Fruits' >> beam.Create(['apple',\n 'banana',\n 'cherry'])\ncountries = pipeline | 'Countries' >> beam.Create(['australia',\n 'brazil',\n 'belgium',\n 'canada'])\n\ndef add_key(word):\n return (word[0], word)\n\nfruits_with_keys = fruits | \"fruits_with_keys\" >> beam.Map(add_key)\ncountries_with_keys = countries | \"countries_with_keys\" >> beam.Map(add_key)\n\n({\"fruits\": fruits_with_keys, \"countries\": countries_with_keys}\n | beam.CoGroupByKey()\n | beam.Map(print))\n\npipeline.run()", "Combine\nCombine is a transform for combining collections of elements or values. Combine has variants that work on entire PCollections, and some that combine the values for each key in PCollections of key/value pairs.\nTo apply a Combine transform, you must provide the function that contains the logic for combining the elements or values. The combining function should be commutative and associative, as the function is not necessarily invoked exactly once on all values with a given key. Because the input data (including the value collection) may be distributed across multiple workers, the combining function might be called multiple times to perform partial combining on subsets of the value collection. The Beam SDK also provides some pre-built combine functions for common numeric combination operations such as sum, min, and max.\nSimple combine operations, such as sums, can usually be implemented as a simple function. 
More complex combination operations might require you to create a subclass of CombineFn that has an accumulation type distinct from the input/output type.", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create([1, 2, 3, 4, 5])\n | beam.CombineGlobally(sum)\n | beam.Map(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create([1, 2, 3, 4, 5])\n | beam.combiners.Mean.Globally()\n | beam.Map(print))\n\nclass AverageFn(beam.CombineFn):\n def create_accumulator(self):\n return (0.0, 0)\n\n def add_input(self, accumulator, input_):\n total, count = accumulator\n total += input_\n count += 1\n return (total, count)\n\n def merge_accumulators(self, accumulators):\n totals, counts = zip(*accumulators)\n return sum(totals), sum(counts)\n\n def extract_output(self, accumulator):\n total, count = accumulator\n return total / count if count else float(\"NaN\")\n\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create([1, 2, 3, 4, 5])\n | beam.CombineGlobally(AverageFn())\n | beam.Map(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(['bob', 'alice', 'alice', 'bob', 'charlie', 'alice'])\n | beam.combiners.Count.PerElement()\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(['bob', 'alice', 'alice', 'bob', 'charlie', 'alice'])\n | beam.Map(lambda word: (word, 1))\n | beam.CombinePerKey(sum)\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(['bob', 'alice', 'alice', 'bob', 'charlie', 'alice'])\n | beam.combiners.Count.Globally()\n | beam.ParDo(print))", "Exercise 3 — Combine Items\n\nStart with Beam pipeline you built in the previous exercise: it creates a PCollection containing integers 0 to 10, groups them by their parity, and prints the groups.\nAdd a step that computes the mean of each group (i.e., the mean of all odd numbers between 0 and 10, and the mean of all even numbers between 0 and 10).\nAdd another step to make the pipeline compute the mean of the squares of the numbers in each group.\n\n\nSolution:", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(10 + 1))\n | beam.Map(lambda x: (\"odd\" if x % 2 else \"even\", x))\n | beam.Map(lambda x: (x[0], x[1] ** 2))\n | beam.CombinePerKey(AverageFn())\n | beam.ParDo(print))", "Flatten\nFlatten is a transform for PCollection objects that store the same data type. Flatten merges multiple PCollection objects into a single logical PCollection.\nData encoding in merged collections\nBy default, the coder for the output PCollection is the same as the coder for the first PCollection in the input PCollectionList. However, the input PCollection objects can each use different coders, as long as they all contain the same data type in your chosen language.\nMerging windowed collections\nWhen using Flatten to merge PCollection objects that have a windowing strategy applied, all of the PCollection objects you want to merge must use a compatible windowing strategy and window sizing. 
For example, all the collections you're merging must all use (hypothetically) identical 5-minute fixed windows or 4-minute sliding windows starting every 30 seconds.\nIf your pipeline attempts to use Flatten to merge PCollection objects with incompatible windows, Beam generates an IllegalStateException error when your pipeline is constructed.", "pipeline = beam.Pipeline()\n\nwordsStartingWithA = (pipeline\n | 'Words starting with A' >> beam.Create(['apple', 'ant', 'arrow']))\n\nwordsStartingWithB = (pipeline\n | 'Words starting with B' >> beam.Create(['ball', 'book', 'bow']))\n\n((wordsStartingWithA, wordsStartingWithB)\n | beam.Flatten()\n | beam.ParDo(print))\n\npipeline.run()", "Partition\nPartition is a transform for PCollection objects that store the same data type. Partition splits a single PCollection into a fixed number of smaller collections.\nPartition divides the elements of a PCollection according to a partitioning function that you provide. The partitioning function contains the logic that determines how to split up the elements of the input PCollection into each resulting partition PCollection. The number of partitions must be determined at graph construction time. You can, for example, pass the number of partitions as a command-line option at runtime (which will then be used to build your pipeline graph), but you cannot determine the number of partitions in mid-pipeline (based on data calculated after your pipeline graph is constructed, for instance).", "def partition_fn(number, num_partitions):\n partition = number // 100\n return min(partition, num_partitions - 1)\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create([1, 110, 2, 350, 4, 5, 100, 150, 3])\n | beam.Partition(partition_fn, 3))\n\n lines[0] | '< 100' >> beam.ParDo(print, \"Small\")\n lines[1] | '[100, 200)' >> beam.ParDo(print, \"Medium\")\n lines[2] | '> 200' >> beam.ParDo(print, \"Big\")", "Side Inputs\nIn addition to the main input PCollection, you can provide additional inputs to a ParDo transform in the form of side inputs. A side input is an additional input that your DoFn can access each time it processes an element in the input PCollection. When you specify a side input, you create a view of some other data that can be read from within the ParDo transform’s DoFn while processing each element.\nSide inputs are useful if your ParDo needs to inject additional data when processing each element in the input PCollection, but the additional data needs to be determined at runtime (and not hard-coded). Such values might be determined by the input data, or depend on a different branch of your pipeline.", "def increment(number, inc=1):\n return number + inc\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([1, 2, 3, 4, 5])\n | \"Increment\" >> beam.Map(increment)\n | \"Print\" >> beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | \"Create\" >> beam.Create([1, 2, 3, 4, 5])\n | \"Increment\" >> beam.Map(increment, 10) # Pass a side input of 10\n | \"Print\" >> beam.ParDo(print))", "Additional Outputs\nWhile ParDo always produces a main output PCollection (as the return value from apply), you can also have your ParDo produce any number of additional output PCollections. 
If you choose to have multiple outputs, your ParDo returns all of the output PCollections (including the main output) bundled together.\nTo emit elements to multiple output PCollections, invoke with_outputs() on the ParDo, and specify the expected tags for the outputs. with_outputs() returns a DoOutputsTuple object. Tags specified in with_outputs are attributes on the returned DoOutputsTuple object. The tags give access to the corresponding output PCollections.", "def compute(number):\n if number % 2 == 0:\n yield number\n else:\n yield pvalue.TaggedOutput(\"odd\", number + 10)\n\nwith beam.Pipeline() as pipeline:\n even, odd = (pipeline\n | \"Create\" >> beam.Create([1, 2, 3, 4, 5, 6, 7])\n | \"Increment\" >> beam.ParDo(compute).with_outputs(\"odd\",\n main=\"even\"))\n even | \"Even\" >> beam.ParDo(print, \"even\")\n odd | \"Odd\" >> beam.ParDo(print, \"odd\")", "Branching\nA transform does not consume or otherwise alter the input collection – remember that a PCollection is immutable by definition. This means that you can apply multiple transforms to the same input PCollection to create a branching pipeline.", "with beam.Pipeline() as branching_pipeline:\n numbers = (branching_pipeline | beam.Create([1, 2, 3, 4, 5]))\n\n mult5_results = numbers | beam.Map(lambda num: num * 5)\n mult10_results = numbers | beam.Map(lambda num: num * 10)\n\n mult5_results | 'Log multiply 5' >> beam.ParDo(print, 'Mult 5')\n mult10_results | 'Log multiply 10' >> beam.ParDo(print, 'Mult 10')\n\ndisplay_pipeline(branching_pipeline)", "Composite Transforms\nTransforms can have a nested structure, where a complex transform performs multiple simpler transforms (such as more than one ParDo, Combine, GroupByKey, or even other composite transforms). These transforms are called composite transforms. Nesting multiple transforms inside a single composite transform can make your code more modular and easier to understand.\nYour composite transform's parameters and return value must match the initial input type and final return type for the entire transform, even if the transform's intermediate data changes type multiple times.\nTo create a composite transform, create a subclass of the PTransform class and override the expand method to specify the actual processing logic. Then use this transform just as you would a built-in transform.", "class ExtractAndMultiplyNumbers(beam.PTransform):\n def expand(self, pcollection):\n return (pcollection\n | beam.FlatMap(lambda line: line.split(\",\"))\n | beam.Map(lambda num: int(num) * 10))\n\nwith beam.Pipeline() as composite_pipeline:\n lines = (composite_pipeline\n | beam.Create(['1,2,3,4,5', '6,7,8,9,10'])\n | ExtractAndMultiplyNumbers()\n | beam.ParDo(print))\n\ndisplay_pipeline(composite_pipeline)", "Filter\nFilter, given a predicate, filters out all elements that don't satisfy that predicate. Filter may also be used to filter based on an inequality with a given value based on the comparison ordering of the element. You can pass functions with multiple arguments to Filter. They are passed as additional positional arguments or keyword arguments to the function. If the PCollection has a single value, such as the average from another computation, passing the PCollection as a singleton accesses that value. If the PCollection has multiple values, pass the PCollection as an iterator. 
This accesses elements lazily as they are needed, so it is possible to iterate over large PCollections that won't fit into memory.\n\nNote: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection), but this requires that all the elements fit into memory.\n\nIf a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary. Each element must be a (key, value) pair. Note that all the elements of the PCollection must fit into memory. If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.", "class FilterOddNumbers(beam.DoFn):\n def process(self, element, *args, **kwargs):\n if element % 2 == 1:\n yield element\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.ParDo(FilterOddNumbers())\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.Filter(lambda num: num % 2 == 1)\n | beam.ParDo(print))", "Aggregation\nBeam uses windowing to divide a continuously updating unbounded PCollection into logical windows of finite size. These logical windows are determined by some characteristic associated with a data element, such as a timestamp. Aggregation transforms (such as GroupByKey and Combine) work on a per-window basis — as the data set is generated, they process each PCollection as a succession of these finite windows.\nA related concept, called triggers, determines when to emit the results of aggregation as unbounded data arrives. You can use triggers to refine the windowing strategy for your PCollection. Triggers allow you to deal with late-arriving data, or to provide early results.", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.combiners.Count.Globally() # Count\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.CombineGlobally(sum) # CombineGlobally sum\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.combiners.Mean.Globally() # Mean\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.combiners.Top.Smallest(1) # Top Smallest\n | beam.ParDo(print))\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.Create(range(1, 11))\n | beam.combiners.Top.Largest(1) # Top Largest\n | beam.ParDo(print))", "Pipeline I/O\nWhen you create a pipeline, you often need to read data from some external source, such as a file or a database. Likewise, you may want your pipeline to output its result data to an external storage system. Beam provides read and write transforms for a number of common data storage types. 
If you want your pipeline to read from or write to a data storage format that isn’t supported by the built-in transforms, you can implement your own read and write transforms.\nDownload example data\nDownload the sample dataset for use with the cells below.", "DATA_PATH = 'https://raw.githubusercontent.com/ageron/open-datasets/master/' \\\n 'online_news_popularity_for_course/online_news_popularity_for_course.csv'\n_data_root = tempfile.mkdtemp(prefix='tfx-data')\n_data_filepath = os.path.join(_data_root, \"data.csv\")\nurllib.request.urlretrieve(DATA_PATH, _data_filepath)\n\n!head {_data_filepath}\n\nwith beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.io.ReadFromText(_data_filepath)\n | beam.Filter(lambda line: line.startswith(\"2013-01-07,0,World\"))\n | beam.ParDo(print))", "Putting Everything Together\nUse several of the concepts, classes, and methods discussed above in a concrete example.\nExercise 4 — Reading, Filtering, Parsing, Grouping and Averaging\nWrite a Beam pipeline that reads the dataset, computes the mean label (the numbers in the last column) for each article category (the third column) and prints out the results.\nHints:\n* Use the code above to read the dataset and change the filtering logic to keep only the year 2013.\n* Add a Map step to split each row on the commas.\n* Add another Map step to add a key equal to the category, and a GroupByKey step to group the articles by their category.\n* Add a step to convert the last column (i.e., the label) to a float, and another step to compute the mean of that column for each category, using beam.combiners.Mean.PerKey.\n* Finally, add a ParDo step to print out the results.\n\nSolution:", "with beam.Pipeline() as pipeline:\n lines = (pipeline\n | beam.io.ReadFromText(_data_filepath)\n | beam.Filter(lambda line: line < \"2014-01-01\")\n | beam.Map(lambda line: line.split(\",\")) # CSV parser?\n | beam.Map(lambda cols: (cols[2], float(cols[-1])))\n | beam.combiners.Mean.PerKey()\n | beam.ParDo(print))\n\nwith tf.io.TFRecordWriter(\"test.tfrecord\") as tfrecord_file:\n for index in range(10):\n tfrecord_file.write(\"Record {}\".format(index).encode(\"utf-8\"))\n\ndataset = tf.data.TFRecordDataset('test.tfrecord')\nfor record in dataset:\n print(record.numpy())\n\nwith beam.Pipeline() as rw_pipeline:\n lines = (rw_pipeline\n | beam.io.ReadFromTFRecord(\"test.tfrecord\")\n | beam.Map(lambda line: line + b' processed')\n | beam.io.WriteToTFRecord(\"test_processed.tfrecord\")\n | beam.ParDo(print))\n\ndisplay_pipeline(rw_pipeline)\n\nwith beam.Pipeline() as utf_pipeline:\n lines = (utf_pipeline\n | \"Read\" >> beam.io.ReadFromTFRecord(\"test_processed.tfrecord-00000-of-00001\")\n | \"Decode\" >> beam.Map(lambda line: line.decode('utf-8'))\n | \"Print\" >> beam.ParDo(print))\n\ndisplay_pipeline(utf_pipeline)", "Note that there are many other built-in I/O transforms.\nWindowing\nAs discussed above, windowing subdivides a PCollection according to the timestamps of its individual elements.\nSome Beam transforms, such as GroupByKey and Combine, group multiple elements by a common key. Ordinarily, that grouping operation groups all of the elements that have the same key within the entire data set. With an unbounded data set, it is impossible to collect all of the elements, since new elements are constantly being added and may be infinitely many (e.g. streaming data). 
If you are working with unbounded PCollections, windowing is especially useful.\nIn the Beam model, any PCollection (including unbounded PCollections) can be subdivided into logical windows. Each element in a PCollection is assigned to one or more windows according to the PCollection's windowing function, and each individual window contains a finite number of elements. Grouping transforms then consider each PCollection's elements on a per-window basis. GroupByKey, for example, implicitly groups the elements of a PCollection by key and window.\nAdditional information on Beam Windowing is available in the Beam Programming Guide.", "DAYS = 24 * 60 * 60\n\nclass AssignTimestamps(beam.DoFn):\n def process(self, element):\n date = datetime.strptime(element[0], \"%Y-%m-%d\")\n yield beam.window.TimestampedValue(element, date.timestamp())\n\nwith beam.Pipeline() as window_pipeline:\n lines = (window_pipeline\n | beam.io.ReadFromText(_data_filepath)\n | beam.Filter(lambda line: line < \"2014-01-01\")\n | beam.Map(lambda line: line.split(\",\")) # CSV parser?\n | beam.ParDo(AssignTimestamps())\n | beam.WindowInto(beam.window.FixedWindows(7*DAYS))\n | beam.Map(lambda cols: (cols[2], float(cols[-1])))\n | beam.combiners.Mean.PerKey()\n | beam.ParDo(print))\n\ndisplay_pipeline(window_pipeline)\n\nclass AssignTimestamps(beam.DoFn):\n def process(self, element):\n date = datetime.strptime(element[0], \"%Y-%m-%d\")\n yield beam.window.TimestampedValue(element, date.timestamp())\n\nclass PrintWithTimestamp(beam.DoFn):\n def process(self, element, timestamp=beam.DoFn.TimestampParam):\n print(timestamp.to_rfc3339()[:10], element)\n\nwith beam.Pipeline() as ts_pipeline:\n lines = (ts_pipeline\n | beam.io.ReadFromText(_data_filepath)\n | beam.Filter(lambda line: line < \"2014-01-01\")\n | beam.Map(lambda line: line.split(\",\")) # CSV parser?\n | beam.ParDo(AssignTimestamps())\n | beam.WindowInto(beam.window.FixedWindows(7 * DAYS))\n | beam.Map(lambda cols: (cols[2], float(cols[-1])))\n | beam.combiners.Mean.PerKey()\n | beam.ParDo(PrintWithTimestamp()))\n\ndisplay_pipeline(ts_pipeline)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.13/_downloads/plot_ems_filtering.ipynb
bsd-3-clause
[ "%matplotlib inline", "==============================================\nCompute effect-matched-spatial filtering (EMS)\n==============================================\nThis example computes the EMS to reconstruct the time course of\nthe experimental effect as described in:\nAaron Schurger, Sebastien Marti, and Stanislas Dehaene, \"Reducing multi-sensor\ndata to a single time course that reveals experimental effects\",\nBMC Neuroscience 2013, 14:122\nThis technique is used to create spatial filters based on the\ndifference between two conditions. By projecting the trial onto the\ncorresponding spatial filters, surrogate single trials are created\nin which multi-sensor activity is reduced to one time series which\nexposes experimental effects, if present.\nWe will first plot a trials x times image of the single trials and order the\ntrials by condition. A second plot shows the average time series for each\ncondition. Finally a topographic plot is created which exhibits the\ntemporal evolution of the spatial filters.", "# Author: Denis Engemann <denis.engemann@gmail.com>\n# Jean-Remi King <jeanremi.king@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io, EvokedArray\nfrom mne.datasets import sample\nfrom mne.decoding import EMS, compute_ems\nfrom sklearn.cross_validation import StratifiedKFold\n\nprint(__doc__)\n\ndata_path = sample.data_path()\n\n# Preprocess the data\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_ids = {'AudL': 1, 'VisL': 3}\n\n# Read data and create epochs\nraw = io.read_raw_fif(raw_fname, preload=True)\nraw.filter(0.5, 45, l_trans_bandwidth='auto', h_trans_bandwidth='auto',\n filter_length='auto', phase='zero')\nevents = mne.read_events(event_fname)\n\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n exclude='bads')\n\nepochs = mne.Epochs(raw, events, event_ids, tmin=-0.2, tmax=0.5, picks=picks,\n baseline=None, reject=dict(grad=4000e-13, eog=150e-6),\n preload=True)\nepochs.drop_bad()\nepochs.pick_types(meg='grad')\n\n# Setup the data to use it a scikit-learn way:\nX = epochs.get_data() # The MEG data\ny = epochs.events[:, 2] # The conditions indices\nn_epochs, n_channels, n_times = X.shape\n\n# Initialize EMS transformer\nems = EMS()\n\n# Initialize the variables of interest\nX_transform = np.zeros((n_epochs, n_times)) # Data after EMS transformation\nfilters = list() # Spatial filters at each time point\n\n# In the original paper, the cross-validation is a leave-one-out. 
However,\n# we recommend using a Stratified KFold, because leave-one-out tends\n# to overfit and cannot be used to estimate the variance of the\n# prediction within a given fold.\n\nfor train, test in StratifiedKFold(y):\n # In the original paper, the z-scoring is applied outside the CV.\n # However, we recommend to apply this preprocessing inside the CV.\n # Note that such scaling should be done separately for each channels if the\n # data contains multiple channel types.\n X_scaled = X / np.std(X[train])\n\n # Fit and store the spatial filters\n ems.fit(X_scaled[train], y[train])\n\n # Store filters for future plotting\n filters.append(ems.filters_)\n\n # Generate the transformed data\n X_transform[test] = ems.transform(X_scaled[test])\n\n# Average the spatial filters across folds\nfilters = np.mean(filters, axis=0)\n\n# Plot individual trials\nplt.figure()\nplt.title('single trial surrogates')\nplt.imshow(X_transform[y.argsort()], origin='lower', aspect='auto',\n extent=[epochs.times[0], epochs.times[-1], 1, len(X_transform)],\n cmap='RdBu_r')\nplt.xlabel('Time (ms)')\nplt.ylabel('Trials (reordered by condition)')\n\n# Plot average response\nplt.figure()\nplt.title('Average EMS signal')\nmappings = [(key, value) for key, value in event_ids.items()]\nfor key, value in mappings:\n ems_ave = X_transform[y == value]\n plt.plot(epochs.times, ems_ave.mean(0), label=key)\nplt.xlabel('Time (ms)')\nplt.ylabel('a.u.')\nplt.legend(loc='best')\nplt.show()\n\n# Visualize spatial filters across time\nevoked = EvokedArray(filters, epochs.info, tmin=epochs.tmin)\nevoked.plot_topomap()", "Note that a similar transformation can be applied with compute_ems\nHowever, this function replicates Schurger et al's original paper, and thus\napplies the normalization outside a leave-one-out cross-validation, which we\nrecommend not to do.", "epochs.equalize_event_counts(event_ids)\nX_transform, filters, classes = compute_ems(epochs)" ]
[ "code", "markdown", "code", "markdown", "code" ]
FordyceLab/AcqPack
notebooks/ExperimentTemplate.ipynb
mit
[ "SETUP", "import time\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Autosipper", "# config directory must have \"__init__.py\" file\n# from the 'config' directory, import the following classes:\nfrom config import Motor, ASI_Controller, Autosipper\nfrom config import utils as ut\n\nautosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml'))\nautosipper.coord_frames\n\nfrom config import gui\ngui.stage_control(autosipper.XY, autosipper.Z)\n\n# add/determine deck info\nautosipper.coord_frames.deck.position_table = ut.read_delim_pd('config/position_tables/deck')\n\n# check deck alignment\n# CLEAR DECK OF OBSTRUCTIONS!!\nautosipper.go_to('deck', ['name'],'align')\n\n# add plate", "Manifold", "from config import Manifold\n\nmanifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512)\nmanifold.valvemap[manifold.valvemap.name>0]\n\nfor i in [2,0,14,8]:\n status = 'x'\n if manifold.read_valve(i):\n status = 'o'\n print status, manifold.valvemap.name.iloc[i]\n\nfor i in range(16):\n status = 'x'\n if manifold.read_valve(i):\n status = 'o'\n print i, status, manifold.valvemap.name.iloc[i]\n\nname = 'inlet_in'\nv = manifold.valvemap['valve'][manifold.valvemap.name==name]\n\nv=7\n\nmanifold.depressurize(v)\n\nmanifold.pressurize(v)\n\nmanifold.exit()\n\nfor j in range(5):\n for i in range(9,14):\n manifold.depressurize(i)\n time.sleep(1)\n manifold.pressurize(i)\n ", "Micromanager", "# !!!! Also must have MM folder on system PATH\n# mm_version = 'C:\\Micro-Manager-1.4'\n# cfg = 'C:\\Micro-Manager-1.4\\SetupNumber2_05102016.cfg'\nmm_version = 'C:\\Program Files\\Micro-Manager-2.0beta'\ncfg = 'C:\\Program Files\\Micro-Manager-2.0beta\\Setup2_20170413.cfg'\n\nimport sys\nsys.path.insert(0, mm_version) # make it so python can find MMCorePy\nimport MMCorePy\n\nfrom PIL import Image\n\ncore = MMCorePy.CMMCore()\ncore.loadSystemConfiguration(cfg)\ncore.setProperty(\"Spectra\", \"White_Enable\", \"1\")\ncore.waitForDevice(\"Spectra\")\n\ncore.setProperty(\"Cam Andor_Zyla4.2\", \"Sensitivity/DynamicRange\", \"16-bit (low noise & high well capacity)\") # NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)\n\n# core.initializeCircularBuffer()\n# core.setCircularBufferMemoryFootprint(4096) # MiB ", "Preset: 1_PBP \nConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP\nPreset: 2_BF \nConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF\nPreset: 3_DAPI \nConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI\nPreset: 4_eGFP \nConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP\nPreset: 5_Cy5 \nConfigGroup,Channel,5_Cy5,TIFilterBlock1,Label,5-Cy5\nPreset: 6_AttoPhos \nConfigGroup,Channel,6_AttoPhos,TIFilterBlock1,Label,6-AttoPhos\nACQUISITION", "core.setConfig('Channel','2_BF')\n\ncore.setProperty(core.getCameraDevice(), \"Exposure\", 300)\n\ncore.snapImage()\nimg = core.getImage()\nplt.imshow(img,cmap='gray')\nimage = Image.fromarray(img)\n# image.save('TESTIMAGE.tif')\n\nposition_list = ut.load_mm_positionlist(\"C:/Users/fordycelab/Desktop/D1_cjm.pos\")\nposition_list\n\ndef acquire():\n for i in xrange(len(position_list)):\n si = str(i)\n x,y = position_list[['x','y']].iloc[i]\n core.setXYPosition(x,y)\n core.waitForDevice(core.getXYStageDevice())\n logadd(log, 'moved '+si)\n \n core.snapImage()\n# core.waitForDevice(core.getCameraDevice())\n logadd(log, 'snapped '+si)\n \n img = core.getImage()\n logadd(log, 'got image '+si)\n \n image = Image.fromarray(img)\n image.save('images/images_{}.tif'.format(i))\n logadd(log, 'saved 
image '+si)\n \n x,y = position_list[['x','y']].iloc[0]\n core.setXYPosition(x,y)\n core.waitForDevice(core.getXYStageDevice())\n logadd(log, 'moved '+ str(0))\n \ndef logadd(log,st):\n log.append([time.ctime(time.time()), st])\n print log[-1]\n\n# Trial 1: crapped out after 10 min wait (stage moved to first position, shutter open, then froze)\n# for some reason, did not log this (updated print statement)\nlog = []\nfor i in xrange(10):\n acquire()\n sleep = 5*(i+1)*60\n print('SLEEP', sleep)\n time.sleep(5*(i+1)*60)\n\n# Trial 2: Went to completion; max wait 9 min\nlog = []\nfor i in xrange(10):\n logadd(log, 'ACQ STARTED')\n acquire()\n sleep = (5 + 0.5*i)*60\n print 'SLEEP', sleep/60, 'min'\n time.sleep(sleep)\n\n# Trial 3: No errors\nlog = []\nfor i in xrange(10):\n sleep = (9 + 0.25*i)*60\n logadd(log, 'STRT SLEEP '+ str(sleep/60) + ' min')\n time.sleep(sleep)\n \n logadd(log, 'ACQ STARTED')\n acquire()\n\n# Trial 4: No problems through 45 min wait; terminated kernel\nlog = []\nfor i in xrange(10):\n sleep = (10 + 5*i)*60\n logadd(log, 'STRT SLEEP '+ str(sleep/60) + ' min')\n time.sleep(sleep)\n \n logadd(log, 'ACQ STARTED '+str(i))\n acquire()\n\n# Trial 5: returning stage to home at end of acquire\nlog = []\nfor i in xrange(15):\n sleep = (10 + 10*i)*60\n logadd(log, 'STRT SLEEP '+ str(sleep/60) + ' min')\n time.sleep(sleep)\n \n logadd(log, 'ACQ STARTED '+str(i))\n acquire()\n\n# Auto\ncore.setAutoShutter(True) # default\ncore.snapImage()\n\n# Manual\ncore.setAutoShutter(False) # disable auto shutter\ncore.setProperty(\"Shutter\", \"State\", \"1\")\ncore.waitForDevice(\"Shutter\")\ncore.snapImage()\ncore.setProperty(\"Shutter\", \"State\", \"0\")", "MM Get info", "core.getFocusDevice()\ncore.getCameraDevice()\ncore.XYStageDevice()\ncore.getDevicePropertyNames(core.getCameraDevice())", "Video", "# cv2.startWindowThread()\ncv2.namedWindow('Video')\ncv2.imshow('Video',img)\ncv2.waitKey(0)\n\ncv2.destroyAllWindows()\ncore.stopSequenceAcquisition()\n\nimport cv2\ncv2.namedWindow('Video')\ncore.startContinuousSequenceAcquisition(1)\nwhile True:\n img = core.getLastImage()\n if core.getRemainingImageCount() > 0:\n# img = core.popNextImage()\n img = core.getLastImage()\n cv2.imshow('Video', img)\n cv2.waitkey(0)\n else:\n print('No frame')\n if cv2.waitKey(20) >= 0:\n break\ncv2.destroyAllWindows()\ncore.stopSequenceAcquisition()\n# core.reset()", "EXIT", "autosipper.exit()\nmanifold.exit()\ncore.unloadAllDevices()\ncore.reset()\nprint 'closed'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ThinkBayes2
examples/hyrax_soln.ipynb
mit
[ "The Rock Hyrax Problem\nAllen Downey\nThis notebook contains a solution to a problem I posed in my Bayesian statistics class:\n\nSuppose I capture and tag 10 rock hyraxes.  Some time later, I capture another 10 hyraxes and find that two of them are\nalready tagged.  How many hyraxes are there in this environment?\n\nThis is an example of a mark and recapture experiment, which you can read about on Wikipedia.  The Wikipedia page also includes the photo of a tagged hyrax shown above.\nAs always with problems like this, we have to make some modeling assumptions.\n1) For simplicity, you can assume that the environment is reasonably isolated, so the number of hyraxes does not change between observations.\n2) And you can assume that each hyrax is equally likely to be captured during each phase of the experiment, regardless of whether it has been tagged.  In reality, it is possible that tagged animals would avoid traps in the future, or possible that the same behavior that got them caught the first time makes them more likely to be caught again.  But let's start simple.\nMy solution uses the ThinkBayes2 framework, which is described in Think Bayes, and summarized in this notebook.\nI'll start by defining terms:\n$N$: total population of hyraxes\n$K$: number of hyraxes tagged in the first round\n$n$: number of hyraxes caught in the second round\n$k$: number of hyraxes in the second round that had been tagged\nSo $N$ is the hypothesis and $(K, n, k)$ make up the data. The probability of the data, given the hypothesis, is the probability of finding $k$ tagged hyraxes out of $n$ if (in the population) $K$ out of $N$ are tagged. There are two ways we can compute this:\n1) If you are familiar with the hypergeometric distribution, you might recognize this problem and use an implementation of the hypergeometric PMF, evaluated at $k$.\n2) Otherwise, you can figure it out using combinatorics.\nI'll do the second one first. Out of a population of $N$ hyraxes, we captured $n$; the total number of combinations is $N \\choose n$.\n$k$ of the ones we caught are tagged, so $n-k$ are not. The total number of combinations is ${K \\choose k}{N-K \\choose n-k}$. So the probability of the data is\n${K \\choose k}{N-K \\choose n-k}/{N \\choose n}$\nscipy.special provides binom(x, y), which computes the binomial coefficient, $x \\choose y$.\nSo let's see how that looks in code:", "# first a little house-keeping\nfrom __future__ import print_function, division\n% matplotlib inline\n\nimport thinkbayes2\nfrom scipy.special import binom\n\nclass Hyrax(thinkbayes2.Suite):\n \"\"\"Represents hypotheses about how many hyraxes there are.\"\"\"\n\n def Likelihood(self, data, hypo):\n \"\"\"Computes the likelihood of the data under the hypothesis.\n\n hypo: total population (N)\n data: # tagged (K), # caught (n), # of caught who were tagged (k)\n \"\"\"\n N = hypo\n K, n, k = data\n\n if hypo < K + (n - k):\n return 0\n\n like = binom(N-K, n-k) / binom(N, n)\n return like", "Again $N$ is the hypothesis and $(K, n, k)$ is the data. If we've tagged $K$ hyraxes and then caught another $n-k$, the total number of unique hyraxes we're seen is $K + (n - k)$. For any smaller value of N, the likelihood is 0.\nNotice that I didn't bother to compute $K \\choose k$; because it does not depend on $N$, it's the same for all hypotheses, so it gets cancelled out when we normalize the suite.\nNext I construct the prior and update it with the data. 
I use a uniform prior from 0 to 999.", "hypos = range(1, 1000)\nsuite = Hyrax(hypos)\n\ndata = 10, 10, 2\nsuite.Update(data)", "Here's what the posterior distribution looks like:", "import thinkplot\nthinkplot.Pdf(suite)\nthinkplot.Config(xlabel='Number of hyraxes', ylabel='PMF', legend=False)", "And here are some summaries of the posterior distribution:", "print('Posterior mean', suite.Mean())\nprint('Maximum a posteriori estimate', suite.MaximumLikelihood())\nprint('90% credible interval', suite.CredibleInterval(90))", "The combinatorial expression we computed is the PMF of the hypergeometric distribution, so we can also compute it using thinkbayes2.EvalHypergeomPmf, which uses scipy.stats.hypergeom.pmf.", "import thinkbayes2\n\nclass Hyrax2(thinkbayes2.Suite):\n \"\"\"Represents hypotheses about how many hyraxes there are.\"\"\"\n\n def Likelihood(self, data, hypo):\n \"\"\"Computes the likelihood of the data under the hypothesis.\n\n hypo: total population (N)\n data: # tagged (K), # caught (n), # of caught who were tagged (k)\n \"\"\"\n N = hypo\n K, n, k = data\n\n if hypo < K + (n - k):\n return 0\n\n like = thinkbayes2.EvalHypergeomPmf(k, N, K, n)\n return like\n", "And the result is the same:", "hypos = range(1, 1000)\nsuite = Hyrax2(hypos)\n\ndata = 10, 10, 2\nsuite.Update(data)\n\nthinkplot.Pdf(suite)\nthinkplot.Config(xlabel='Number of hyraxes', ylabel='PMF', legend=False)\n\nprint('Posterior mean', suite.Mean())\nprint('Maximum a posteriori estimate', suite.MaximumLikelihood())\nprint('90% credible interval', suite.CredibleInterval(90))", "If we run the analysis again with a different prior (running from 0 to 1999), the MAP is the same, but the posterior mean and credible interval are substantially different:", "hypos = range(1, 2000)\nsuite = Hyrax2(hypos)\ndata = 10, 10, 2\nsuite.Update(data)\n\nprint('Posterior mean', suite.Mean())\nprint('Maximum a posteriori estimate', suite.MaximumLikelihood())\nprint('90% credible interval', suite.CredibleInterval(90))", "This difference indicates that we don't have enough data to swamp the priors, so a more definitive answer would require either more data or a prior based on more background information." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
syednasar/talks
synthetic-dialog/script_generation-tech-summit.ipynb
mit
[ "Script Generation\nIn this session, we'll generate your own Simpsons TV scripts using RNNs. We'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network we'll build will generate a new TV script for a scene at Moe's Tavern.\nLet's get the Data\nWe will be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nGet the helper function. Make sure you look at the helper to see what it is doing\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n\n", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Here are two main preprocessing functions, see below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. 
In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n vocab = {word: None for word in text} \n empty = 0 # RNN mask of no data\n eos = 1 # end of sentence\n start_idx = eos+1 # first real word\n \n # Dictionary to go from the words to an id, we'll call vocab_to_int\n vocab_to_int = dict((word, idx+start_idx) for idx,word in enumerate(vocab))\n vocab_to_int['<empty>'] = empty\n vocab_to_int['<eos>'] = eos\n \n # Dictionary to go from the id to word, we'll call int_to_vocab\n int_to_vocab = dict((value,key) for key, value in vocab_to_int.items() )\n\n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "How does the vocab-to-int pairing look like?", "test_text = '''\n Moe_Szyslak Moe's Tavern Where the elite meet to drink\n '''\ntest_text = test_text.lower()\ntest_text = test_text.split()\n\nvocab_to_int, int_to_vocab = create_lookup_tables(test_text)\n\nprint(vocab_to_int['where'])\nprint(int_to_vocab[10])\n", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. 
Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n symbols = ['.', ',', '\"', ';', '!', '?', '(', ')', '--', '\\n']\n symbol_vals = ['Period', 'Comma', 'Quotation_Mark', 'Semicolon', 'Exclamation_mark', 'Question_mark', 'Left_Parentheses', 'Right_Parentheses', 'Dash', 'Return'] \n symbol_vals = [ ('||' + val + '||') for val in symbol_vals]\n \n merged = dict( zip(symbols, symbol_vals) )\n \n \n return merged\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_tokenize(token_lookup)\n\nwith tf.Graph().as_default():\n symbols = set(['.', ',', '\"', ';', '!', '?', '(', ')', '--', '\\n'])\n token_dict = token_lookup()\n\n print(token_dict)\n print(token_dict['\\n'])", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\n\nprint(int_text[:2])\n\nfor i in int_text[:4]:\n print(i, int_to_vocab[i])\n print(int_to_vocab[i], vocab_to_int[int_to_vocab[i]])", "Build the Neural Network\nWe'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\nprint(\"Device name: \", tf.test.gpu_device_name())\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nHere we implement the get_inputs() function to create TF Placeholders for the Neural Network. 
It creates the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following the tuple (Input, Targets, LearingRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n Input = tf.placeholder(tf.int32, [None, None], name='input')\n Targets = tf.placeholder(tf.int32, [None, None], name='Targets')\n LearingRate = tf.placeholder(tf.float32, name='LearingRate') \n return Input, Targets, LearingRate\n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # Set number of layers and dropout value\n lstm_layers = 3\n keep_prob = 0.5\n \n # Let us create the cells\n lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n cell = tf.contrib.rnn.MultiRNNCell([lstm])\n initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name=\"initial_state\")\n return cell, initial_state\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n \n embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n embed = tf.nn.embedding_lookup(embedding, input_data)\n return embed\n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nWe created a RNN Cell in the get_init_cell() function. 
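\nIf you later want to experiment with a deeper network, the same pattern extends to several stacked layers; this is only an optional sketch (num_layers and keep_prob are illustrative values, not something the project requires):\n\n```python\ndef get_stacked_cell(batch_size, rnn_size, num_layers=2, keep_prob=0.7):\n    # one LSTM cell per layer, each with dropout applied to its outputs\n    def make_cell():\n        lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n    cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])\n    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name='initial_state')\n    return cell, initial_state\n```\n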
Time to use the cell to create a RNN.\n- Here we build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n \n outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)\n final_state = tf.identity(state, name = 'final_state')\n return outputs, final_state \n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nHere we apply the functions we implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :return: Tuple (Logits, FinalState)\n \"\"\"\n \n embedding = get_embed(input_data, vocab_size, rnn_size)\n outputs, final_state = build_rnn(cell, embedding)\n logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)\n \n return (logits, final_state) \n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nWe implemented get_batches to create batches of input and targets using int_text. The batches are a Numpy array with the shape (number of batches, 2, batch size, sequence length). 
Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf we can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n \n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n xdata = np.array(int_text[: n_batches * batch_size * seq_length])\n ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n\n x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x_batches, y_batches)))\n\n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 50\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 100\n\n\"\"\"\nPoint to where to save?\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "input_text, targets, lr = get_inputs()\n\nprint(lr)\n\n\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
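\nA handy way to read the cross-entropy numbers printed during training is to convert them to perplexity, exp(loss), which is roughly how many words the model is still choosing between at each step (a small optional helper, not required by the project):\n\n```python\nimport numpy as np\n\ndef perplexity(cross_entropy_loss):\n    # perplexity is the exponential of the average cross-entropy loss\n    return np.exp(cross_entropy_loss)\n\nprint(perplexity(6.0))  # ~403, close to guessing among hundreds of words\nprint(perplexity(1.5))  # ~4.5, far more confident predictions\n```\n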
If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n \n print(loaded_graph)\n InputTensor = loaded_graph.get_tensor_by_name(\"input:0\")\n InitialStateTensor = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n FinalStateTensor = loaded_graph.get_tensor_by_name(\"final_state:0\")\n ProbsTensor = loaded_graph.get_tensor_by_name(\"probs:0\") \n return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n \n index = np.random.choice(len(int_to_vocab),p=probabilities)\n\n word = int_to_vocab[index]\n\n return word \n\n\n\n\"\"\"\nLet us quickly test it\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
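\nIf the output looks either too random or too repetitive, one optional tweak is temperature sampling; this is only a hedged sketch of a variation on the pick_word function above, not something the project asks for:\n\n```python\ndef pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.7):\n    # temperature < 1 sharpens the distribution, 1.0 leaves it unchanged\n    logits = np.log(probabilities + 1e-12) / temperature\n    scaled = np.exp(logits)\n    scaled = scaled / scaled.sum()\n    index = np.random.choice(len(int_to_vocab), p=scaled)\n    return int_to_vocab[index]\n```\n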
Set gen_length to the length of the TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n    # Load saved model\n    loader = tf.train.import_meta_graph(load_dir + '.meta')\n    loader.restore(sess, load_dir)\n\n    # Get Tensors from loaded model\n    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n    # Sentences generation setup\n    gen_sentences = [prime_word + ':']\n    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n    # Generate sentences\n    for n in range(gen_length):\n        # Dynamic Input\n        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n        dyn_seq_length = len(dyn_input[0])\n\n        # Get Prediction\n        probabilities, prev_state = sess.run(\n            [probs, final_state],\n            {input_text: dyn_input, initial_state: prev_state})\n\n        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n        gen_sentences.append(pred_word)\n\n    # Remove tokens\n    tv_script = ' '.join(gen_sentences)\n    for key, token in token_dict.items():\n        ending = ' ' if key in ['\\n', '(', '\"'] else ''\n        tv_script = tv_script.replace(' ' + token.lower(), key)\n    tv_script = tv_script.replace('\\n ', '\\n')\n    tv_script = tv_script.replace('( ', '(')\n\n    print(tv_script)", "The TV Script is Nonsensical\nIt's OK if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
amkatrutsa/MIPT-Opt
Spring2022/cg.ipynb
mit
[ "Beyond gradient descent\nМетод сопряжённых градиентов\nСистема линейных уравнений vs. задача безусловной минимизации\nРассмотрим задачу\n$$\n\\min_{x \\in \\mathbb{R}^n} \\frac{1}{2}x^{\\top}Ax - b^{\\top}x,\n$$\nгде $A \\in \\mathbb{S}^n_{++}$.\nИз необходимого условия экстремума имеем\n$$\nAx^* = b\n$$\nТакже обозначим $f'(x_k) = Ax_k - b = r_k$\nКак решить систему $Ax = b$?\n\nПрямые методы основаны на матричных разложениях:\nПлотная матрица $A$: для размерностей не больше нескольких тысяч\nРазреженная (sparse) матрица $A$: для размерностей порядка $10^4 - 10^5$\n\n\nИтерационные методы: хороши во многих случаях, единственный подход для задач с размерностью $ > 10^6$\n\nМетод сопряжённых направлений\nВ градиентном спуске направления убывания - анти-градиенты, но для функций с плохо обусловленным гессианом сходимость медленная.\nИдея: двигаться вдоль направлений, которые гарантируют сходимость за $n$ шагов.\nОпределение. Множество ненулевых векторов ${p_0, \\ldots, p_l}$ называется сопряжённым относительно матрицы $A \\in \\mathbb{S}^n_{++}$, если \n$$\np^{\\top}_iAp_j = 0, \\qquad i \\neq j\n$$\nУтверждение. Для любой $x_0 \\in \\mathbb{R}^n$ последовательность ${x_k}$, генерируемая методом сопряжённых направлений, сходится к решению системы $Ax = b$ максимум за $n$ шагов.\n```python\ndef ConjugateDirections(x0, A, b, p):\nx = x0\n\nr = A.dot(x) - b\n\nfor i in range(len(p)):\n\n alpha = - (r.dot(p[i])) / (p[i].dot(A.dot(p[i])))\n\n x = x + alpha * p[i]\n\n r = A.dot(x) - b\n\nreturn x\n\n```\nПримеры сопряжённых направлений\n\nСобственные векторы матрицы $A$\nДля любого набора из $n$ векторов можно провести аналог ортогонализации Грама-Шмидта и получить сопряжённые направления\n\nВопрос: что такое ортогонализация Грама-Шмидта? :)\nМетод сопряжённых градиентов\nИдея: новое направление $p_k$ ищется в виде $p_k = -r_k + \\beta_k p_{k-1}$, где $\\beta_k$ выбирается, исходя из требования сопряжённости $p_k$ и $p_{k-1}$:\n$$\n\\beta_k = \\dfrac{p^{\\top}{k-1}Ar_k}{p^{\\top}{k-1}Ap_{k-1}}\n$$\nТаким образом, для получения следующего сопряжённого направления $p_k$ необходимо хранить только сопряжённое направление $p_{k-1}$ и остаток $r_k$ с предыдущей итерации. \nВопрос: как находить размер шага $\\alpha_k$?\nСопряжённость сопряжённых градиентов\nТеорема\nПусть после $k$ итераций $x_k \\neq x^*$. Тогда \n\n$\\langle r_k, r_i \\rangle = 0, \\; i = 1, \\ldots k - 1$\n$\\mathtt{span}(r_0, \\ldots, r_k) = \\mathtt{span}(r_0, Ar_0, \\ldots, A^kr_0)$\n$\\mathtt{span}(p_0, \\ldots, p_k) = \\mathtt{span}(r_0, Ar_0, \\ldots, A^kr_0)$\n$p_k^{\\top}Ap_i = 0$, $i = 1,\\ldots,k-1$\n\nТеоремы сходимости\nТеорема 1. Если матрица $A$ имеет только $r$ различных собственных значений, то метод сопряжённых градиентов cойдётся за $r$ итераций.\nТеорема 2. Имеет место следующая оценка сходимости\n$$\n\\| x_{k} - x^ \\|_A \\leq 2\\left( \\dfrac{\\sqrt{\\kappa(A)} - 1}{\\sqrt{\\kappa(A)} + 1} \\right)^k \\|x_0 - x^\\|_A,\n$$\nгде $\\|x\\|^2_A = x^{\\top}Ax$ и $\\kappa(A) = \\frac{\\lambda_1(A)}{\\lambda_n(A)}$ - число обусловленности матрицы $A$, $\\lambda_1(A) \\geq ... 
\\geq \\lambda_n(A)$ - собственные значения матрицы $A$\nЗамечание: сравните коэффициент геометрической прогрессии с аналогом в градиентном спуске.\nИнтерпретации метода сопряжённых градиентов\n\nГрадиентный спуск в пространстве $y = Sx$, где $S = [p_0, \\ldots, p_n]$, в котором матрица $A$ становится диагональной (или единичной в случае ортонормированности сопряжённых направлений)\nПоиск оптимального решения в Крыловском подпространстве $\\mathcal{K}_k(A) = {b, Ab, A^2b, \\ldots A^{k-1}b}$\n\n$$\nx_k = \\arg\\min_{x \\in \\mathcal{K}_k} f(x)\n$$\n\nОднако естественный базис Крыловского пространства неортогональный и, более того, плохо обусловлен.\n\nУпражнение Проверьте численно, насколько быстро растёт обусловленность матрицы из векторов ${b, Ab, ... }$\n\nПоэтому его необходимо ортогонализовать, что и происходит в методе сопряжённых градиентов\n\nОсновное свойство\n$$ \nA^{-1}b \\in \\mathcal{K}_n(A)\n$$\nДоказательство\n\nТеорема Гамильтона-Кэли: $p(A) = 0$, где $p(\\lambda) = \\det(A - \\lambda I)$\n$p(A)b = A^nb + a_1A^{n-1}b + \\ldots + a_{n-1}Ab + a_n b = 0$\n$A^{-1}p(A)b = A^{n-1}b + a_1A^{n-2}b + \\ldots + a_{n-1}b + a_nA^{-1}b = 0$\n$A^{-1}b = -\\frac{1}{a_n}(A^{n-1}b + a_1A^{n-2}b + \\ldots + a_{n-1}b)$\n\nУлучшенная версия метода сопряжённых градиентов\nНа практике используются следующие формулы для шага $\\alpha_k$ и коэффициента $\\beta_{k}$:\n$$\n\\alpha_k = \\dfrac{r^{\\top}k r_k}{p^{\\top}{k}Ap_{k}} \\qquad \\beta_k = \\dfrac{r^{\\top}k r_k}{r^{\\top}{k-1} r_{k-1}}\n$$\nВопрос: чем они лучше базовой версии?\nПсевдокод метода сопряжённых градиентов\n```python\ndef ConjugateGradientQuadratic(x0, A, b, eps):\nr = A.dot(x0) - b\n\np = -r\n\nwhile np.linalg.norm(r) &gt; eps:\n\n alpha = r.dot(r) / p.dot(A.dot(p))\n\n x = x + alpha * p\n\n r_next = r + alpha * A.dot(p)\n\n beta = r_next.dot(r_next) / r.dot(r)\n\n p = -r_next + beta * p\n\n r = r_next\n\nreturn x\n\n```\nМетод сопряжённых градиентов для неквадратичной функции\nИдея: использовать градиенты $f'(x_k)$ неквадратичной функции вместо остатков $r_k$ и линейный поиск шага $\\alpha_k$ вместо аналитического вычисления. Получим метод Флетчера-Ривса.\n```python\ndef ConjugateGradientFR(f, gradf, x0, eps):\nx = x0\n\ngrad = gradf(x)\n\np = -grad\n\nwhile np.linalg.norm(gradf(x)) &gt; eps:\n\n alpha = StepSearch(x, f, gradf, **kwargs)\n\n x = x + alpha * p\n\n grad_next = gradf(x)\n\n beta = grad_next.dot(grad_next) / grad.dot(grad)\n\n p = -grad_next + beta * p\n\n grad = grad_next\n\n if restart_condition:\n\n p = -gradf(x)\n\nreturn x\n\n```\nТеорема сходимости\nТеорема. Пусть \n- множество уровней $\\mathcal{L}$ ограничено\n- существует $\\gamma > 0$: $\\| f'(x) \\|_2 \\leq \\gamma$ для $x \\in \\mathcal{L}$\nТогда\n$$\n\\lim_{j \\to \\infty} \\| f'(x_{k_j}) \\|_2 = 0\n$$\nПерезапуск (restart)\n\nДля ускорения метода сопряжённых градиентов используют технику перезапусков: удаление ранее накопленной истории и перезапуск метода с текущей точки, как будто это точка $x_0$\nСуществуют разные условия, сигнализирующие о том, что надо делать перезапуск, например\n$k = n$\n$\\dfrac{|\\langle f'(x_k), f'(x_{k-1}) \\rangle |}{\\| f'(x_k) \\|_2^2} \\geq \\nu \\approx 0.1$\n\n\nМожно показать (см. Nocedal, Wright Numerical Optimization, Ch. 5, p. 125), что запуск метода Флетчера-Ривза без использования перезапусков на некоторых итерациях может приводить к крайне медленной сходимости! 
\nМетод Полака-Рибьера и его модификации лишены подобного недостатка.\n\nКомментарии\n\nЗамечательная методичка \"An Introduction to the Conjugate Gradient Method Without the Agonizing Pain\" размещена тут\nПомимо метода Флетчера-Ривса существуют другие способы вычисления $\\beta_k$: \nметод Полака-Рибьера $\\beta_k = \\frac{f'(x_k)^{\\top}(f'(x_k) - f'(x_{k-1}))}{\\| f'(x_{k-1})\\|^2_2}$ \nметод Хестенса-Штифеля $\\beta_k = \\frac{f'(x_k)^{\\top}(f'(x_k) - f'(x_{k-1}))}{-p_{k-1}^{\\top}(f'(x_k) - f'(x_{k-1}))}$\n\n\nДля метода сопряжённых градиентов требуется 4 вектора: каких?\nСамой дорогой операцией является умножение матрицы на вектор\n\nЭксперименты\nКвадратичная целевая функция", "import numpy as np\nn = 100\n# Random\n# A = np.random.randn(n, n)\n# A = A.T.dot(A)\n# Clustered eigenvalues\nA = np.diagflat([np.ones(n//4), 10 * np.ones(n//4), 100*np.ones(n//4), 1000* np.ones(n//4)])\nU = np.random.rand(n, n)\nQ, _ = np.linalg.qr(U)\nA = Q.dot(A).dot(Q.T)\nA = (A + A.T) * 0.5\nprint(\"A is normal matrix: ||AA* - A*A|| =\", np.linalg.norm(A.dot(A.T) - A.T.dot(A)))\nb = np.random.randn(n)\n# Hilbert matrix\n# A = np.array([[1.0 / (i+j - 1) for i in range(1, n+1)] for j in range(1, n+1)])\n# b = np.ones(n)\n\nf = lambda x: 0.5 * x.dot(A.dot(x)) - b.dot(x)\ngrad_f = lambda x: A.dot(x) - b\nx0 = np.zeros(n)", "Распределение собственных значений", "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rc(\"text\", usetex=True)\nplt.rc(\"font\", family='serif')\neigs = np.linalg.eigvalsh(A)\nplt.semilogy(np.unique(eigs))\nplt.ylabel(\"Eigenvalues\", fontsize=20)\nplt.xticks(fontsize=18)\n_ = plt.yticks(fontsize=18)", "Правильный ответ", "import scipy.optimize as scopt\n\ndef callback(x, array):\n array.append(x)\n\nscopt_cg_array = []\nscopt_cg_callback = lambda x: callback(x, scopt_cg_array)\nx = scopt.minimize(f, x0, method=\"CG\", jac=grad_f, callback=scopt_cg_callback)\nx = x.x\nprint(\"||f'(x*)|| =\", np.linalg.norm(A.dot(x) - b))\nprint(\"f* =\", f(x))", "Реализация метода сопряжённых градиентов", "def ConjugateGradientQuadratic(x0, A, b, tol=1e-8, callback=None):\n x = x0\n r = A.dot(x0) - b\n p = -r\n while np.linalg.norm(r) > tol:\n alpha = r.dot(r) / p.dot(A.dot(p))\n x = x + alpha * p\n if callback is not None:\n callback(x)\n r_next = r + alpha * A.dot(p)\n beta = r_next.dot(r_next) / r.dot(r)\n p = -r_next + beta * p\n r = r_next\n return x\n\nimport liboptpy.unconstr_solvers as methods\nimport liboptpy.step_size as ss\n\nmax_iter = 70\nprint(\"\\t CG quadratic\")\ncg_quad = methods.fo.ConjugateGradientQuad(A, b)\nx_cg = cg_quad.solve(x0, tol=1e-7, max_iter=max_iter, disp=True)\n\nprint(\"\\t Gradient Descent\")\ngd = methods.fo.GradientDescent(f, grad_f, ss.ExactLineSearch4Quad(A, b))\nx_gd = gd.solve(x0, tol=1e-7, max_iter=max_iter, disp=True)\n\nprint(\"Condition number of A =\", abs(max(eigs)) / abs(min(eigs)))", "График сходимости", "plt.figure(figsize=(8,6))\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_quad.get_convergence()], label=r\"$\\|f'(x_k)\\|^{CG}_2$\", linewidth=2)\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array[:max_iter]], label=r\"$\\|f'(x_k)\\|^{CG_{PR}}_2$\", linewidth=2)\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r\"$\\|f'(x_k)\\|^{G}_2$\", linewidth=2)\nplt.legend(loc=\"best\", fontsize=20)\nplt.xlabel(r\"Iteration number, $k$\", fontsize=20)\nplt.ylabel(\"Convergence rate\", fontsize=20)\nplt.xticks(fontsize=18)\n_ = plt.yticks(fontsize=18)\n\nprint([np.linalg.norm(grad_f(x)) for x in 
cg_quad.get_convergence()])\n\nplt.figure(figsize=(8,6))\nplt.plot([f(x) for x in cg_quad.get_convergence()], label=r\"$f(x^{CG}_k)$\", linewidth=2)\nplt.plot([f(x) for x in scopt_cg_array], label=r\"$f(x^{CG_{PR}}_k)$\", linewidth=2)\nplt.plot([f(x) for x in gd.get_convergence()], label=r\"$f(x^{G}_k)$\", linewidth=2)\nplt.legend(loc=\"best\", fontsize=20)\nplt.xlabel(r\"Iteration number, $k$\", fontsize=20)\nplt.ylabel(\"Function value\", fontsize=20)\nplt.xticks(fontsize=18)\n_ = plt.yticks(fontsize=18)", "Неквадратичная функция\n$$\nf(w) = \\frac12 \\|w\\|2^2 + C \\frac1m \\sum{i=1}^m \\log (1 + \\exp(- y_i \\langle x_i, w \\rangle)) \\to \\min_w\n$$", "import numpy as np\nimport sklearn.datasets as skldata\nimport scipy.special as scspec\nimport jax\nimport jax.numpy as jnp\nfrom jax.config import config\nconfig.update(\"jax_enable_x64\", True)\n\nn = 300\nm = 1000\n\nX, y = skldata.make_classification(n_classes=2, n_features=n, n_samples=m, n_informative=n//3, random_state=0)\nX = jnp.array(X)\ny = jnp.array(y)\nC = 1\n@jax.jit\ndef f(w):\n return jnp.linalg.norm(w)**2 / 2 + C * jnp.mean(jnp.logaddexp(jnp.zeros(X.shape[0]), -y * (X @ w)))\n\nautograd_f = jax.jit(jax.grad(f))\ndef grad_f(w):\n denom = scspec.expit(-y * X.dot(w))\n return w - C * X.T.dot(y * denom) / X.shape[0]\n\nx0 = jax.random.normal(jax.random.PRNGKey(0), (n,))\nprint(\"Initial function value = {}\".format(f(x0)))\nprint(\"Initial gradient norm = {}\".format(jnp.linalg.norm(autograd_f(x0))))\nprint(\"Initial gradient norm = {}\".format(jnp.linalg.norm(grad_f(x0))))", "Реализация метода Флетчера-Ривса", "def ConjugateGradientFR(f, gradf, x0, num_iter=100, tol=1e-8, callback=None, restart=False):\n x = x0\n grad = gradf(x)\n p = -grad\n it = 0\n while np.linalg.norm(gradf(x)) > tol and it < num_iter:\n alpha = utils.backtracking(x, p, method=\"Wolfe\", beta1=0.1, beta2=0.4, rho=0.5, f=f, grad_f=gradf)\n if alpha < 1e-18:\n break\n x = x + alpha * p\n if callback is not None:\n callback(x)\n grad_next = gradf(x)\n beta = grad_next.dot(grad_next) / grad.dot(grad)\n p = -grad_next + beta * p\n grad = grad_next.copy()\n it += 1\n if restart and it % restart == 0:\n grad = gradf(x)\n p = -grad\n return x", "График сходимости", "import scipy.optimize as scopt\nimport liboptpy.restarts as restarts\n\nn_restart = 60\ntol = 1e-5\nmax_iter = 600\n\nscopt_cg_array = []\nscopt_cg_callback = lambda x: callback(x, scopt_cg_array)\nx = scopt.minimize(f, x0, tol=tol, method=\"CG\", jac=autograd_f, callback=scopt_cg_callback, options={\"maxiter\": max_iter})\nx = x.x\nprint(\"\\t CG by Polak-Rebiere\")\nprint(\"Norm of garient = {}\".format(np.linalg.norm(autograd_f(x))))\nprint(\"Function value = {}\".format(f(x)))\n\nprint(\"\\t CG by Fletcher-Reeves\")\n# ss.Backtracking(\"Armijo\", rho=0.5, beta=0.001, init_alpha=1.)\ncg_fr = methods.fo.ConjugateGradientFR(f, autograd_f, ss.Backtracking(\"Wolfe\", rho=0.9, beta1=0.3, beta2=0.8, init_alpha=1.))\nx = cg_fr.solve(x0, tol=tol, max_iter=max_iter, disp=True)\n\nprint(\"\\t CG by Fletcher-Reeves with restart n\")\ncg_fr_rest = methods.fo.ConjugateGradientFR(f, autograd_f, ss.Backtracking(\"Wolfe\", rho=0.9, beta1=0.3, beta2=0.8, \n init_alpha=1.), restarts.Restart(n // n_restart))\nx = cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter, disp=True)\n\nprint(\"\\t Gradient Descent\")\ngd = methods.fo.GradientDescent(f, autograd_f, ss.Backtracking(\"Wolfe\", rho=0.9, beta1=0.1, beta2=0.8, init_alpha=1.))\nx = gd.solve(x0, max_iter=max_iter, tol=tol, disp=True)\n\nplt.figure(figsize=(8, 
6))\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr.get_convergence()], label=r\"$\\|f'(x_k)\\|^{CG_{FR}}_2$ no restart\", linewidth=2)\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in cg_fr_rest.get_convergence()], label=r\"$\\|f'(x_k)\\|^{CG_{FR}}_2$ restart\", linewidth=2)\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in scopt_cg_array], label=r\"$\\|f'(x_k)\\|^{CG_{PR}}_2$\", linewidth=2)\n\nplt.semilogy([np.linalg.norm(grad_f(x)) for x in gd.get_convergence()], label=r\"$\\|f'(x_k)\\|^{G}_2$\", linewidth=2)\nplt.legend(loc=\"best\", fontsize=16)\nplt.xlabel(r\"Iteration number, $k$\", fontsize=20)\nplt.ylabel(\"Convergence rate\", fontsize=20)\nplt.xticks(fontsize=18)\n_ = plt.yticks(fontsize=18)", "Время выполнения", "%timeit scopt.minimize(f, x0, method=\"CG\", tol=tol, jac=grad_f, options={\"maxiter\": max_iter})\n%timeit cg_fr.solve(x0, tol=tol, max_iter=max_iter)\n%timeit cg_fr_rest.solve(x0, tol=tol, max_iter=max_iter)\n%timeit gd.solve(x0, tol=tol, max_iter=max_iter)", "Резюме\n\nСопряжённые направления\nМетод сопряжённых градиентов\nСходимость\nЭксперименты" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/probability
tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Probability Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "TensorFlow Distributions: A Gentle Introduction\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/TensorFlow_Distributions_Tutorial\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/TensorFlow_Distributions_Tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples before rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out Understanding TensorFlow Distributions Shapes. If you have any questions about the material here, don't hesitate to contact (or join) the TensorFlow Probability mailing list. We're happy to help.\nBefore we start, we need to import the appropriate libraries. Our overall library is tensorflow_probability. By convention, we generally refer to the distributions library as tfd.\nTensorflow Eager is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard \"graph\" mode, in which TF operations add nodes to a graph which is later executed. 
This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.", "import collections\n\nimport tensorflow as tf\nimport tensorflow_probability as tfp\ntfd = tfp.distributions\n\ntry:\n tf.compat.v1.enable_eager_execution()\nexcept ValueError:\n pass\n\nimport matplotlib.pyplot as plt", "Basic Univariate Distributions\nLet's dive right in and create a normal distribution:", "n = tfd.Normal(loc=0., scale=1.)\nn", "We can draw a sample from it:", "n.sample()", "We can draw multiple samples:", "n.sample(3)", "We can evaluate a log prob:", "n.log_prob(0.)", "We can evaluate multiple log probabilities:", "n.log_prob([0., 2., 4.])", "We have a wide range of distributions. Let's try a Bernoulli:", "b = tfd.Bernoulli(probs=0.7)\nb\n\nb.sample()\n\nb.sample(8)\n\nb.log_prob(1)\n\nb.log_prob([1, 0, 1, 0])", "Multivariate Distributions\nWe'll create a multivariate normal with a diagonal covariance:", "nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])\nnd", "Comparing this to the univariate normal we created earlier, what's different?", "tfd.Normal(loc=0., scale=1.)", "We see that the univariate normal has an event_shape of (), indicating it's a scalar distribution. The multivariate normal has an event_shape of 2, indicating the basic [event space](https://en.wikipedia.org/wiki/Event_(probability_theory&#41;) of this distribution is two-dimensional.\nSampling works just as before:", "nd.sample()\n\nnd.sample(5)\n\nnd.log_prob([0., 10])", "Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification, which we use here.", "nd = tfd.MultivariateNormalFullCovariance(\n loc = [0., 5], covariance_matrix = [[1., .7], [.7, 1.]])\ndata = nd.sample(200)\nplt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)\nplt.axis([-5, 5, 0, 10])\nplt.title(\"Data set\")\nplt.show()", "Multiple Distributions\nOur first Bernoulli distribution represented a flip of a single fair coin. We can also create a batch of independent Bernoulli distributions, each with their own parameters, in a single Distribution object:", "b3 = tfd.Bernoulli(probs=[.3, .5, .7])\nb3", "It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python Distribution object. The three distributions cannot be manipulated individually. Note how the batch_shape is (3,), indicating a batch of three distributions, and the event_shape is (), indicating the individual distributions have a univariate event space.\nIf we call sample, we get a sample from all three:", "b3.sample()\n\nb3.sample(6)", "If we call prob, (this has the same shape semantics as log_prob; we use prob with these small Bernoulli examples for clarity, although log_prob is usually preferred in applications) we can pass it a vector and evaluate the probability of each coin yielding that value:", "b3.prob([1, 1, 0])", "Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a for loop (at least in Eager mode, in TF graph mode you'd need a tf.while loop). 
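\nFor instance, the batched call above could be written as an explicit loop over scalar distributions (a sketch for illustration only; it yields the same three probabilities, just less efficiently):\n\n```python\n# the same three probabilities as b3.prob([1, 1, 0]), one scalar Bernoulli per coin\nseparate = [tfd.Bernoulli(probs=p) for p in [.3, .5, .7]]\n[d.prob(v) for d, v in zip(separate, [1, 1, 0])]\n```\n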
However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators.\nUsing Independent To Aggregate Batches to Events\nIn the previous section, we created b3, a single Distribution object that represented three coin flips. If we called b3.prob on a vector $v$, the $i$'th entry was the probability that the $i$th coin takes value $v[i]$.\nSuppose we'd instead like to specify a \"joint\" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, prob on a vector $v$ will return a single value representing the probability that the entire set of coins matches the vector $v$.\nHow do we accomplish this? We use a \"higher-order\" distribution called Independent, which takes a distribution and yields a new distribution with the batch shape moved to the event shape:", "b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1)\nb3_joint", "Compare the shape to that of the original b3:", "b3", "As promised, we see that that Independent has moved the batch shape into the event shape: b3_joint is a single distribution (batch_shape = ()) over a three-dimensional event space (event_shape = (3,)).\nLet's check the semantics:", "b3_joint.prob([1, 1, 0])", "An alternate way to get the same result would be to compute probabilities using b3 and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing):", "tf.reduce_prod(b3.prob([1, 1, 0]))", "Indpendent allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary.\nFun facts:\n\nb3.sample and b3_joint.sample have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using Independent shows up when computing probabilites, not when sampling.\nMultivariateNormalDiag could be trivially implemented using the scalar Normal and Independent distributions (it isn't actually implemented this way, but it could be).\n\nBatches of Multivariate Distirbutions\nLet's create a batch of three full-covariance two-dimensional multivariate normals:", "nd_batch = tfd.MultivariateNormalFullCovariance(\n loc = [[0., 0.], [1., 1.], [2., 2.]],\n covariance_matrix = [[[1., .1], [.1, 1.]], \n [[1., .3], [.3, 1.]],\n [[1., .5], [.5, 1.]]])\nnd_batch", "We see batch_shape = (3,), so there are three independent multivariate normals, and event_shape = (2,), so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements.\nSampling works:", "nd_batch.sample(4)", "Since batch_shape = (3,) and event_shape = (2,), we pass a tensor of shape (3, 2) to log_prob:", "nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]])", "Broadcasting, aka Why Is This So Confusing?\nAbstracting out what we've done so far, every distribution has an batch shape B and an event shape E. Let BE be the concatenation of the event shapes:\n\nFor the univariate scalar distributions n and b, BE = ()..\nFor the two-dimensional multivariate normals nd. 
BE = (2).\nFor both b3 and b3_joint, BE = (3).\nFor the batch of multivariate normals ndb, BE = (3, 2).\n\nThe \"evaluation rules\" we've been using so far are:\n\nSample with no argument returns a tensor with shape BE; sampling with a scalar n returns an \"n by BE\" tensor.\nprob and log_prob take a tensor of shape BE and return a result of shape B.\n\nThe actual \"evaluation rule\" for prob and log_prob is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that the argument to log_prob must be broadcastable against BE; any \"extra\" dimensions are preserved in the output. \nLet's explore the implications. For the univariate normal n, BE = (), so log_prob expects a scalar. If we pass log_prob a tensor with non-empty shape, those show up as batch dimensions in the output:", "n = tfd.Normal(loc=0., scale=1.)\nn\n\nn.log_prob(0.)\n\nn.log_prob([0.])\n\nn.log_prob([[0., 1.], [-1., 2.]])", "Let's turn to the two-dimensional multivariate normal nd (parameters changed for illustrative purposes):", "nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.])\nnd", "log_prob \"expects\" an argument with shape (2,), but it will accept any argument that broadcasts against this shape:", "nd.log_prob([0., 0.])", "But we can pass in \"more\" examples, and evaluate all their log_prob's at once:", "nd.log_prob([[0., 0.],\n [1., 1.],\n [2., 2.]])", "Perhaps less appealingly, we can broadcast over the event dimensions:", "nd.log_prob([0.])\n\nnd.log_prob([[0.], [1.], [2.]])", "Broadcasting this way is a consequence of our \"enable broadcasting whenever possible\" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP.\nNow let's look at the three coins example again:", "b3 = tfd.Bernoulli(probs=[.3, .5, .7])", "Here, using broadcasting to represent the probability that each coin comes up heads is quite intuitive:", "b3.prob([1])", "(Compare this to b3.prob([1., 1., 1.]), which we would have used back where b3 was introduced.)\nNow suppose we want to know, for each coin, the probability the coin comes up heads and the probability it comes up tails. We could imagine trying:\nb3.log_prob([0, 1])\nUnfortunately, this produces an error with a long and not-very-readable stack trace. b3 has BE = (3), so we must pass b3.prob something broadcastable against (3,). [0, 1] has shape (2), so it doesn't broadcast and creates an error. Instead, we have to say:", "b3.prob([[0], [1]])", "Why? [[0], [1]] has shape (2, 1), so it broadcasts against shape (3) to make a broadcast shape of (2, 3).\nBroadcasting is quite powerful: there are cases where it allows order-of-magnitude reduction in the amount of memory used, and it often makes user code shorter. However, it can be challenging to program with. If you call log_prob and get an error, a failure to broadcast is nearly always the problem.\nGoing Farther\nIn this tutorial, we've (hopefully) provided a simple introduction. A few pointers for going further:\n\nevent_shape, batch_shape and sample_shape can be arbitrary rank (in this tutorial they are always either scalar or rank 1). This increases the power but again can lead to programming challenges, especially when broadcasting is involved. For an additional deep dive into shape manipulation, see the Understanding TensorFlow Distributions Shapes. 
\nTFP includes a powerful abstraction known as Bijectors, which, in conjunction with TransformedDistribution, yields a flexible, compositional way to easily create new distributions that are invertible transformations of existing distributions. We'll try to write a tutorial on this soon, but in the meantime, check out the documentation." ]
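As a quick taste of what that looks like (an illustrative sketch only; see the Bijector documentation for the full API), a log-normal can be built by pushing a normal through an Exp bijector:\n\n```python\ntfb = tfp.bijectors\n\nlog_normal = tfd.TransformedDistribution(\n    distribution=tfd.Normal(loc=0., scale=1.),\n    bijector=tfb.Exp())\nlog_normal.sample(3)\n```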
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ocefpaf/folium
examples/Polygons_from_list_of_points.ipynb
mit
[ "Creating a polygon from a list of points\nFor many of those working with geo data it is a common task being asked to create a polygon from a list of points. More specific, to create a polygon that wraps around those points in a meaningful manner. So, there are several sources in the web explaining how to create the shape (see sources at end of document). This example notebook is the application of those solutions to folium maps.\nHelpers", "# Imports\nimport random\n\nimport folium\nfrom scipy.spatial import ConvexHull\n\n\n# Function to create a list of some random points\ndef randome_points(amount, LON_min, LON_max, LAT_min, LAT_max):\n\n points = []\n for _ in range(amount):\n points.append(\n (random.uniform(LON_min, LON_max), random.uniform(LAT_min, LAT_max))\n )\n\n return points\n\n\n# Function to draw points in the map\ndef draw_points(map_object, list_of_points, layer_name, line_color, fill_color, text):\n\n fg = folium.FeatureGroup(name=layer_name)\n\n for point in list_of_points:\n fg.add_child(\n folium.CircleMarker(\n point,\n radius=1,\n color=line_color,\n fill_color=fill_color,\n popup=(folium.Popup(text)),\n )\n )\n\n map_object.add_child(fg)", "Convex hull\nThe convex hull is probably the most common approach - its goal is to create the smallest polygon that contains all points from a given list. The scipy.spatial package provides this algorithm (https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.spatial.ConvexHull.html, accessed 29.12.2018).", "# Function that takes a map and a list of points (LON,LAT tupels) and\n# returns a map with the convex hull polygon from the points as a new layer\n\n\ndef create_convexhull_polygon(\n map_object, list_of_points, layer_name, line_color, fill_color, weight, text\n):\n\n # Since it is pointless to draw a convex hull polygon around less than 3 points check len of input\n if len(list_of_points) < 3:\n return\n\n # Create the convex hull using scipy.spatial\n form = [list_of_points[i] for i in ConvexHull(list_of_points).vertices]\n\n # Create feature group, add the polygon and add the feature group to the map\n fg = folium.FeatureGroup(name=layer_name)\n fg.add_child(\n folium.vector_layers.Polygon(\n locations=form,\n color=line_color,\n fill_color=fill_color,\n weight=weight,\n popup=(folium.Popup(text)),\n )\n )\n map_object.add_child(fg)\n\n return map_object\n\n# Initialize map\nmy_convexhull_map = folium.Map(location=[48.5, 9.5], zoom_start=8)\n\n# Create a convex hull polygon that contains some points\nlist_of_points = randome_points(\n amount=10, LON_min=48, LON_max=49, LAT_min=9, LAT_max=10\n)\n\ncreate_convexhull_polygon(\n my_convexhull_map,\n list_of_points,\n layer_name=\"Example convex hull\",\n line_color=\"lightblue\",\n fill_color=\"lightskyblue\",\n weight=5,\n text=\"Example convex hull\",\n)\n\ndraw_points(\n my_convexhull_map,\n list_of_points,\n layer_name=\"Example points for convex hull\",\n line_color=\"royalblue\",\n fill_color=\"royalblue\",\n text=\"Example point for convex hull\",\n)\n\n# Add layer control and show map\nfolium.LayerControl(collapsed=False).add_to(my_convexhull_map)\nmy_convexhull_map", "Envelope\nThe envelope is another interesting approach - its goal is to create a box that contains all points from a given list.", "def create_envelope_polygon(\n map_object, list_of_points, layer_name, line_color, fill_color, weight, text\n):\n\n # Since it is pointless to draw a box around less than 2 points check len of input\n if len(list_of_points) < 2:\n return\n\n # Find the edges of 
box\n from operator import itemgetter\n\n list_of_points = sorted(list_of_points, key=itemgetter(0))\n x_min = list_of_points[0]\n x_max = list_of_points[len(list_of_points) - 1]\n\n list_of_points = sorted(list_of_points, key=itemgetter(1))\n y_min = list_of_points[0]\n y_max = list_of_points[len(list_of_points) - 1]\n\n upper_left = (x_min[0], y_max[1])\n upper_right = (x_max[0], y_max[1])\n lower_right = (x_max[0], y_min[1])\n lower_left = (x_min[0], y_min[1])\n\n edges = [upper_left, upper_right, lower_right, lower_left]\n\n # Create feature group, add the polygon and add the feature group to the map\n fg = folium.FeatureGroup(name=layer_name)\n fg.add_child(\n folium.vector_layers.Polygon(\n locations=edges,\n color=line_color,\n fill_color=fill_color,\n weight=weight,\n popup=(folium.Popup(text)),\n )\n )\n map_object.add_child(fg)\n\n return map_object\n\n# Initialize map\nmy_envelope_map = folium.Map(location=[49.5, 8.5], zoom_start=8)\n\n# Create an envelope polygon that contains some points\nlist_of_points = randome_points(\n amount=10, LON_min=49.1, LON_max=50, LAT_min=8, LAT_max=9\n)\n\ncreate_envelope_polygon(\n my_envelope_map,\n list_of_points,\n layer_name=\"Example envelope\",\n line_color=\"indianred\",\n fill_color=\"red\",\n weight=5,\n text=\"Example envelope\",\n)\n\ndraw_points(\n my_envelope_map,\n list_of_points,\n layer_name=\"Example points for envelope\",\n line_color=\"darkred\",\n fill_color=\"darkred\",\n text=\"Example point for envelope\",\n)\n\n# Add layer control and show map\nfolium.LayerControl(collapsed=False).add_to(my_envelope_map)\nmy_envelope_map", "Concave hull (alpha shape)\nIn some cases the convex hull does not yield good results - this is when the shape of the polygon should be concave instead of convex. The solution is a concave hull that is also called alpha shape. 
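\nFor orientation, a rough sketch of the usual filtering idea (keep only the Delaunay triangles whose circumradius is below a cutoff; alpha here is a hypothetical tuning parameter, and turning the surviving edges into an ordered boundary still needs some extra bookkeeping) could look like this:\n\n```python\nimport numpy as np\nfrom scipy.spatial import Delaunay\n\ndef alpha_shape_edges(points, alpha):\n    # points: (n, 2) array of coordinates; alpha: hypothetical cutoff parameter\n    points = np.asarray(points)\n    tri = Delaunay(points)\n    edges = set()\n    for ia, ib, ic in tri.simplices:\n        pa, pb, pc = points[ia], points[ib], points[ic]\n        a = np.linalg.norm(pa - pb)\n        b = np.linalg.norm(pb - pc)\n        c = np.linalg.norm(pc - pa)\n        s = (a + b + c) / 2.0\n        area = max(s * (s - a) * (s - b) * (s - c), 1e-12) ** 0.5\n        circum_r = a * b * c / (4.0 * area)\n        if circum_r < 1.0 / alpha:  # keep only triangles that are small enough\n            edges.update([(ia, ib), (ib, ic), (ic, ia)])\n    return edges\n```\n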
Yet, there is no ready to go, off the shelve solution for this but there are great resources (see: http://blog.thehumangeo.com/2014/05/12/drawing-boundaries-in-python/, accessed 04.01.2019 or https://towardsdatascience.com/the-concave-hull-c649795c0f0f, accessed 29.12.2018).\nMain code\nJust putting it all together...", "# Initialize map\nmy_map_global = folium.Map(location=[48.2460683, 9.26764125], zoom_start=7)\n\n# Create a convex hull polygon that contains some points\nlist_of_points = randome_points(\n amount=10, LON_min=48, LON_max=49, LAT_min=9, LAT_max=10\n)\n\ncreate_convexhull_polygon(\n my_map_global,\n list_of_points,\n layer_name=\"Example convex hull\",\n line_color=\"lightblue\",\n fill_color=\"lightskyblue\",\n weight=5,\n text=\"Example convex hull\",\n)\n\ndraw_points(\n my_map_global,\n list_of_points,\n layer_name=\"Example points for convex hull\",\n line_color=\"royalblue\",\n fill_color=\"royalblue\",\n text=\"Example point for convex hull\",\n)\n\n# Create an envelope polygon that contains some points\nlist_of_points = randome_points(\n amount=10, LON_min=49.1, LON_max=50, LAT_min=8, LAT_max=9\n)\n\ncreate_envelope_polygon(\n my_map_global,\n list_of_points,\n layer_name=\"Example envelope\",\n line_color=\"indianred\",\n fill_color=\"red\",\n weight=5,\n text=\"Example envelope\",\n)\n\ndraw_points(\n my_map_global,\n list_of_points,\n layer_name=\"Example points for envelope\",\n line_color=\"darkred\",\n fill_color=\"darkred\",\n text=\"Example point for envelope\",\n)\n\n# Add layer control and show map\nfolium.LayerControl(collapsed=False).add_to(my_map_global)\nmy_map_global", "Sources:\n\n\nhttp://blog.yhat.com/posts/interactive-geospatial-analysis.html, accessed 28.12.2018\n\n\nhttps://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.spatial.ConvexHull.html, accessed 29.12.2018\n\n\nhttps://www.oreilly.com/ideas/an-elegant-solution-to-the-convex-hull-problem, accessed 29.12.2018\n\n\nhttps://medium.com/@vworri/simple-geospacial-mapping-with-geopandas-and-the-usual-suspects-77f46d40e807, accessed 29.12.2018\n\n\nhttps://towardsdatascience.com/the-concave-hull-c649795c0f0f, accessed 29.12.2018\n\n\nhttp://blog.thehumangeo.com/2014/05/12/drawing-boundaries-in-python/, accessed 04.01.2019" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
euler16/Deep-Learning
Tensorflow/.ipynb_checkpoints/TensorLearn2-checkpoint.ipynb
unlicense
[ "Variables\n\nVariables must be initialized by running an init Op after having launched the graph. - We first have to add the init Op to the graph.", "import tensorflow as tf\n\n# creating a variable : Note we gave an initialization value\nstate = tf.Variable(0,name = \"counter\")\none = tf.constant(1)\nincr = tf.add(state,one)\n\n# state = state + one produces error\n\nupdate = tf.assign(state,incr)\n\n# the initialization operation\ninit_op = tf.initialize_all_variables()\n\nwith tf.Session() as sess:\n # print(sess.run(state)) ERROR\n sess.run(init_op) # initialized my variables\n print(sess.run(state))\n sess.run(incr)\n print(incr)\n sess.run(update)\n print(update)\n print(sess.run(state))", "Custom Initialization\nThe convenience function tf.initialize_all_variables() adds an op to initialize all variables in the model. You can also pass it an explicit list of variables to initialize. See the Variables Documentation for more options, including checking if variables are initialized.\nInitialization from another Variable\nYou sometimes need to initialize a variable from the initial value of another variable. As the op added by tf.initialize_all_variables() initializes all variables in parallel you have to be careful when this is needed.\nTo initialize a new variable from the value of another variable use the other variable's initialized_value() property. You can use the initialized value directly as the initial value for the new variable, or you can use it as any other tensor to compute a value for the new variable.", "weights = tf.Variable(tf.random_normal(shape = (3,3),mean = 0,stddev = 1.0),name = \"weights\")\nbiases = tf.Variable(tf.random_uniform(shape = (3,1),minval = -1,maxval = 1),name = \"biases\")\n\nw2 = tf.Variable(weights.initialized_value(),name = \"w2\")\nb2 = tf.Variable(biases.initialized_value()*2,name = \"b2\")\n\n# init_op1 = tf.initialize_all_variables([weights])\n# init_op2 = tf.initialize_all_variables([biases]) DIDN'T WORK\n# init_op3 = tf.initialize_all_variables([w2,b2])\n\ninit = tf.initialize_all_variables()\n\nwith tf.Session() as sess:\n # sess.run(init_op1)\n # sess.run(inti_op2)\n # sess.run(init_op3)\n sess.run(init)\n \n print(sess.run(weights))\n print(sess.run(biases))\n print(sess.run(w2))\n print(sess.run(b2))\n\nx = tf.constant(35,name = 'x')\ny = tf.Variable(x+5,name = 'y')\n\nwith tf.Session() as sess:\n # sess.run(x) # NO NEED TO INITIALIZE OR RUN A CONSTANT\n sess.run(y.initializer)\n print(sess.run(y))\n\nx = tf.constant([1,2,3])\ny = tf.Variable(x+5)\n\nwith tf.Session() as sess:\n sess.run(y.initializer)\n print(sess.run(y))\n\nwith tf.Session() as sess:\n print(x.eval())", "Placeholders\n\ndon't use eval()\ndata need to be fed to them\nused for taking input and output, ie don't change during the course of learning\n\nFeeding\n\n\nTensorFlow's feed mechanism lets you inject data into any Tensor in a computation graph. A python computation can thus feed data directly into the graph.\n\n\nSupply feed data through the feed_dict argument to a run() or eval() call that initiates computation.\n\n\nwith tf.Session():\n input = tf.placeholder(tf.float32)\n classifier = ...\n print(classifier.eval(feed_dict={input: my_python_preprocessing_fn()}))\n\nWhile you can replace any Tensor with feed data, including variables and constants, the best practice is to use a placeholder op node. A placeholder exists solely to serve as the target of feeds. It is not initialized and contains no data. 
A placeholder generates an error if it is executed without a feed, so you won't forget to feed it.", "import numpy as np\n\nx = tf.placeholder(tf.float32,shape = (3,3),name = \"x\")\ny = tf.matmul(x,x)\n\nwith tf.Session() as sess:\n    rnd = np.random.rand(3,3)\n    result = sess.run(y,feed_dict = {x:rnd})\n    print(result)\n\n# giving partial shapes\n\nx = tf.placeholder(\"float\",[None,3]) # num_rows can be any number, but num_cols must be 3" ]
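To see the partial shape in action, the same [None, 3] placeholder accepts feeds with any number of rows as long as each row has 3 columns (a small illustrative continuation of the cell above):\n\n```python\ny = tf.reduce_sum(x, axis=1)\n\nwith tf.Session() as sess:\n    print(sess.run(y, feed_dict={x: np.random.rand(2, 3)}))  # 2 rows works\n    print(sess.run(y, feed_dict={x: np.random.rand(5, 3)}))  # so do 5 rows\n    # feeding an array of shape (4, 2) would raise an error: num_cols must be 3\n```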
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
jseabold/statsmodels
examples/notebooks/quasibinomial.ipynb
bsd-3-clause
[ "Quasi-binomial regression\nThis notebook demonstrates using custom variance functions and non-binary data\nwith the quasi-binomial GLM family to perform a regression analysis using\na dependent variable that is a proportion.\nThe notebook uses the barley leaf blotch data that has been discussed in\nseveral textbooks. See below for one reference:\nhttps://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_glimmix_sect016.htm", "import statsmodels.api as sm\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom io import StringIO", "The raw data, expressed as percentages. We will divide by 100\nto obtain proportions.", "raw = StringIO(\"\"\"0.05,0.00,1.25,2.50,5.50,1.00,5.00,5.00,17.50\n0.00,0.05,1.25,0.50,1.00,5.00,0.10,10.00,25.00\n0.00,0.05,2.50,0.01,6.00,5.00,5.00,5.00,42.50\n0.10,0.30,16.60,3.00,1.10,5.00,5.00,5.00,50.00\n0.25,0.75,2.50,2.50,2.50,5.00,50.00,25.00,37.50\n0.05,0.30,2.50,0.01,8.00,5.00,10.00,75.00,95.00\n0.50,3.00,0.00,25.00,16.50,10.00,50.00,50.00,62.50\n1.30,7.50,20.00,55.00,29.50,5.00,25.00,75.00,95.00\n1.50,1.00,37.50,5.00,20.00,50.00,50.00,75.00,95.00\n1.50,12.70,26.25,40.00,43.50,75.00,75.00,75.00,95.00\"\"\")", "The regression model is a two-way additive model with\nsite and variety effects. The data are a full unreplicated\ndesign with 10 rows (sites) and 9 columns (varieties).", "df = pd.read_csv(raw, header=None)\ndf = df.melt()\ndf[\"site\"] = 1 + np.floor(df.index / 10).astype(np.int)\ndf[\"variety\"] = 1 + (df.index % 10)\ndf = df.rename(columns={\"value\": \"blotch\"})\ndf = df.drop(\"variable\", axis=1)\ndf[\"blotch\"] /= 100", "Fit the quasi-binomial regression with the standard variance\nfunction.", "model1 = sm.GLM.from_formula(\"blotch ~ 0 + C(variety) + C(site)\",\n family=sm.families.Binomial(), data=df)\nresult1 = model1.fit(scale=\"X2\")\nprint(result1.summary())", "The plot below shows that the default variance function is\nnot capturing the variance structure very well. Also note\nthat the scale parameter estimate is quite small.", "plt.clf()\nplt.grid(True)\nplt.plot(result1.predict(linear=True), result1.resid_pearson, 'o')\nplt.xlabel(\"Linear predictor\")\nplt.ylabel(\"Residual\")", "An alternative variance function is mu^2 * (1 - mu)^2.", "class vf(sm.families.varfuncs.VarianceFunction):\n def __call__(self, mu):\n return mu**2 * (1 - mu)**2\n\n def deriv(self, mu):\n return 2*mu - 6*mu**2 + 4*mu**3", "Fit the quasi-binomial regression with the alternative variance\nfunction.", "bin = sm.families.Binomial()\nbin.variance = vf()\nmodel2 = sm.GLM.from_formula(\"blotch ~ 0 + C(variety) + C(site)\", family=bin, data=df)\nresult2 = model2.fit(scale=\"X2\")\nprint(result2.summary())", "With the alternative variance function, the mean/variance relationship\nseems to capture the data well, and the estimated scale parameter is\nclose to 1.", "plt.clf()\nplt.grid(True)\nplt.plot(result2.predict(linear=True), result2.resid_pearson, 'o')\nplt.xlabel(\"Linear predictor\")\nplt.ylabel(\"Residual\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SamLau95/nbinteract
docs/notebooks/recipes/recipes_layout.ipynb
bsd-3-clause
[ "# nbi:hide_in\nimport warnings\n# Ignore numpy dtype warnings. These warnings are caused by an interaction\n# between numpy and Cython and can be safely ignored.\n# Reference: https://stackoverflow.com/a/40846742\nwarnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\nwarnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\n%matplotlib inline\nimport ipywidgets as widgets\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport nbinteract as nbi\n\nnp.set_printoptions(threshold=20, precision=2, suppress=True)\npd.options.display.max_rows = 7\npd.options.display.max_columns = 8\npd.set_option('precision', 2)\n# This option stops scientific notation for pandas\n# pd.set_option('display.float_format', '{:.2f}'.format)\n\n# nbi:hide_in\ndef df_interact(df, nrows=7, ncols=7):\n '''\n Outputs sliders that show rows and columns of df\n '''\n def peek(row=0, col=0):\n return df.iloc[row:row + nrows, col:col + ncols]\n if len(df.columns) <= ncols:\n interact(peek, row=(0, len(df) - nrows, nrows), col=fixed(0))\n else:\n interact(peek,\n row=(0, len(df) - nrows, nrows),\n col=(0, len(df.columns) - ncols))\n print('({} rows, {} columns) total'.format(df.shape[0], df.shape[1]))\n\n# nbi:hide_in\nvideos = pd.read_csv('https://github.com/SamLau95/nbinteract/raw/master/notebooks/youtube_trending.csv',\n parse_dates=['publish_time'],\n index_col='publish_time')", "Page Layout / Dashboarding\nnbinteract gives basic page layout functionality using special comments in your code. Include one or more of these markers in a Python comment and nbinteract will add their corresponding CSS classes to the generated cells.\n| Marker | Description | CSS class added |\n| --------- | --------- | --------- |\n| nbi:left | Floats cell to the left | nbinteract-left |\n| nbi:right | Floats cell to the right | nbinteract-right |\n| nbi:hide_in | Hides cell input | nbinteract-hide_in |\n| nbi:hide_out | Hides cell output | nbinteract-hide_out |\nBy default, only the full template will automatically provide styling for these classes. For other templates, nbinteract assumes that the embedding page will use the CSS classes to style the cells.\nYou can use the layout markers to create simple dashboards. In this page, we create a dashboard using a dataset of trending videos on YouTube. We first create a dashboard showing the code used to generate the plots. 
Further down the page, we replicate the dashboard without showing the code.", "df_interact(videos)\n\n# nbi:left\noptions = {\n 'title': 'Views for Trending Videos',\n 'xlabel': 'Date Trending',\n 'ylabel': 'Views',\n 'animation_duration': 500,\n 'aspect_ratio': 1.0,\n}\n\ndef xs(channel):\n return videos.loc[videos['channel_title'] == channel].index\n\ndef ys(xs):\n return videos.loc[xs, 'views']\n\nnbi.scatter(xs, ys,\n channel=videos['channel_title'].unique()[9:15],\n options=options)\n\n# nbi:right\noptions={\n 'ylabel': 'Proportion per Unit',\n 'bins': 100,\n 'aspect_ratio': 1.0,\n}\n\n\ndef values(col):\n vals = videos[col]\n return vals[vals < vals.quantile(0.8)]\n\nnbi.hist(values, col=widgets.ToggleButtons(options=['views', 'likes', 'dislikes', 'comment_count']), options=options)", "Dashboard (without showing code)", "# nbi:hide_in\ndf_interact(videos)\n\n# nbi:hide_in\n# nbi:left\noptions = {\n 'title': 'Views for Trending Videos',\n 'xlabel': 'Date Trending',\n 'ylabel': 'Views',\n 'animation_duration': 500,\n 'aspect_ratio': 1.0,\n}\n\ndef xs(channel):\n return videos.loc[videos['channel_title'] == channel].index\n\ndef ys(xs):\n return videos.loc[xs, 'views']\n\nnbi.scatter(xs, ys,\n channel=videos['channel_title'].unique()[9:15],\n options=options)\n\n# nbi:hide_in\n# nbi:right\noptions={\n 'ylabel': 'Proportion per Unit',\n 'bins': 100,\n 'aspect_ratio': 1.0,\n}\n\n\ndef values(col):\n vals = videos[col]\n return vals[vals < vals.quantile(0.8)]\n\nnbi.hist(values, col=widgets.ToggleButtons(options=['views', 'likes', 'dislikes', 'comment_count']), options=options)" ]
[ "code", "markdown", "code", "markdown", "code" ]
QuLogic/python-future
docs/notebooks/Writing Python 2-3 compatible code.ipynb
mit
[ "Cheat Sheet: Writing Python 2-3 compatible code\n\nCopyright (c): 2013-2016 Python Charmers Pty Ltd, Australia.\nAuthor: Ed Schofield.\nLicence: Creative Commons Attribution.\n\nA PDF version is here: http://python-future.org/compatible_idioms.pdf\nThis notebook shows you idioms for writing future-proof code that is compatible with both versions of Python: 2 and 3. It accompanies Ed Schofield's talk at PyCon AU 2014, \"Writing 2/3 compatible code\". (The video is here: http://www.youtube.com/watch?v=KOqk8j11aAI&t=10m14s.)\nMinimum versions:\n\nPython 2: 2.6+\nPython 3: 3.3+\n\nSetup\nThe imports below refer to these pip-installable packages on PyPI:\nimport future # pip install future\nimport builtins # pip install future\nimport past # pip install future\nimport six # pip install six\n\nThe following scripts are also pip-installable:\nfuturize # pip install future\npasteurize # pip install future\n\nSee http://python-future.org and https://pythonhosted.org/six/ for more information.\nEssential syntax differences\nprint", "# Python 2 only:\nprint 'Hello'\n\n# Python 2 and 3:\nprint('Hello')", "To print multiple strings, import print_function to prevent Py2 from interpreting it as a tuple:", "# Python 2 only:\nprint 'Hello', 'Guido'\n\n# Python 2 and 3:\nfrom __future__ import print_function # (at top of module)\n\nprint('Hello', 'Guido')\n\n# Python 2 only:\nprint >> sys.stderr, 'Hello'\n\n# Python 2 and 3:\nfrom __future__ import print_function\n\nprint('Hello', file=sys.stderr)\n\n# Python 2 only:\nprint 'Hello',\n\n# Python 2 and 3:\nfrom __future__ import print_function\n\nprint('Hello', end='')", "Raising exceptions", "# Python 2 only:\nraise ValueError, \"dodgy value\"\n\n# Python 2 and 3:\nraise ValueError(\"dodgy value\")", "Raising exceptions with a traceback:", "# Python 2 only:\ntraceback = sys.exc_info()[2]\nraise ValueError, \"dodgy value\", traceback\n\n# Python 3 only:\nraise ValueError(\"dodgy value\").with_traceback()\n\n# Python 2 and 3: option 1\nfrom six import reraise as raise_\n# or\nfrom future.utils import raise_\n\ntraceback = sys.exc_info()[2]\nraise_(ValueError, \"dodgy value\", traceback)\n\n# Python 2 and 3: option 2\nfrom future.utils import raise_with_traceback\n\nraise_with_traceback(ValueError(\"dodgy value\"))", "Exception chaining (PEP 3134):", "# Setup:\nclass DatabaseError(Exception):\n pass\n\n# Python 3 only\nclass FileDatabase:\n def __init__(self, filename):\n try:\n self.file = open(filename)\n except IOError as exc:\n raise DatabaseError('failed to open') from exc\n\n# Python 2 and 3:\nfrom future.utils import raise_from\n\nclass FileDatabase:\n def __init__(self, filename):\n try:\n self.file = open(filename)\n except IOError as exc:\n raise_from(DatabaseError('failed to open'), exc)\n\n# Testing the above:\ntry:\n fd = FileDatabase('non_existent_file.txt')\nexcept Exception as e:\n assert isinstance(e.__cause__, IOError) # FileNotFoundError on Py3.3+ inherits from IOError", "Catching exceptions", "# Python 2 only:\ntry:\n ...\nexcept ValueError, e:\n ...\n\n# Python 2 and 3:\ntry:\n ...\nexcept ValueError as e:\n ...", "Division\nInteger division (rounding down):", "# Python 2 only:\nassert 2 / 3 == 0\n\n# Python 2 and 3:\nassert 2 // 3 == 0", "\"True division\" (float division):", "# Python 3 only:\nassert 3 / 2 == 1.5\n\n# Python 2 and 3:\nfrom __future__ import division # (at top of module)\n\nassert 3 / 2 == 1.5", "\"Old division\" (i.e. 
compatible with Py2 behaviour):", "# Python 2 only:\na = b / c # with any types\n\n# Python 2 and 3:\nfrom past.utils import old_div\n\na = old_div(b, c) # always same as / on Py2", "Long integers\nShort integers are gone in Python 3 and long has become int (without the trailing L in the repr).", "# Python 2 only\nk = 9223372036854775808L\n\n# Python 2 and 3:\nk = 9223372036854775808\n\n# Python 2 only\nbigint = 1L\n\n# Python 2 and 3\nfrom builtins import int\nbigint = int(1)", "To test whether a value is an integer (of any kind):", "# Python 2 only:\nif isinstance(x, (int, long)):\n ...\n\n# Python 3 only:\nif isinstance(x, int):\n ...\n\n# Python 2 and 3: option 1\nfrom builtins import int # subclass of long on Py2\n\nif isinstance(x, int): # matches both int and long on Py2\n ...\n\n# Python 2 and 3: option 2\nfrom past.builtins import long\n\nif isinstance(x, (int, long)):\n ...", "Octal constants", "0644 # Python 2 only\n\n0o644 # Python 2 and 3", "Backtick repr", "`x` # Python 2 only\n\nrepr(x) # Python 2 and 3", "Metaclasses", "class BaseForm(object):\n pass\n\nclass FormType(type):\n pass\n\n# Python 2 only:\nclass Form(BaseForm):\n __metaclass__ = FormType\n pass\n\n# Python 3 only:\nclass Form(BaseForm, metaclass=FormType):\n pass\n\n# Python 2 and 3:\nfrom six import with_metaclass\n# or\nfrom future.utils import with_metaclass\n\nclass Form(with_metaclass(FormType, BaseForm)):\n pass", "Strings and bytes\nUnicode (text) string literals\nIf you are upgrading an existing Python 2 codebase, it may be preferable to mark up all string literals as unicode explicitly with u prefixes:", "# Python 2 only\ns1 = 'The Zen of Python'\ns2 = u'きたないのよりきれいな方がいい\\n'\n\n# Python 2 and 3\ns1 = u'The Zen of Python'\ns2 = u'きたないのよりきれいな方がいい\\n'", "The futurize and python-modernize tools do not currently offer an option to do this automatically.\nIf you are writing code for a new project or new codebase, you can use this idiom to make all string literals in a module unicode strings:", "# Python 2 and 3\nfrom __future__ import unicode_literals # at top of module\n\ns1 = 'The Zen of Python'\ns2 = 'きたないのよりきれいな方がいい\\n'", "See http://python-future.org/unicode_literals.html for more discussion on which style to use.\nByte-string literals", "# Python 2 only\ns = 'This must be a byte-string'\n\n# Python 2 and 3\ns = b'This must be a byte-string'", "To loop over a byte-string with possible high-bit characters, obtaining each character as a byte-string of length 1:", "# Python 2 only:\nfor bytechar in 'byte-string with high-bit chars like \\xf9':\n ...\n\n# Python 3 only:\nfor myint in b'byte-string with high-bit chars like \\xf9':\n bytechar = bytes([myint])\n\n# Python 2 and 3:\nfrom builtins import bytes\nfor myint in bytes(b'byte-string with high-bit chars like \\xf9'):\n bytechar = bytes([myint])", "As an alternative, chr() and .encode('latin-1') can be used to convert an int into a 1-char byte string:", "# Python 3 only:\nfor myint in b'byte-string with high-bit chars like \\xf9':\n char = chr(myint) # returns a unicode string\n bytechar = char.encode('latin-1')\n\n# Python 2 and 3:\nfrom builtins import bytes, chr\nfor myint in bytes(b'byte-string with high-bit chars like \\xf9'):\n char = chr(myint) # returns a unicode string\n bytechar = char.encode('latin-1') # forces returning a byte str", "basestring", "# Python 2 only:\na = u'abc'\nb = 'def'\nassert (isinstance(a, basestring) and isinstance(b, basestring))\n\n# Python 2 and 3: alternative 1\nfrom past.builtins import basestring # pip install 
future\n\na = u'abc'\nb = b'def'\nassert (isinstance(a, basestring) and isinstance(b, basestring))\n\n# Python 2 and 3: alternative 2: refactor the code to avoid considering\n# byte-strings as strings.\n\nfrom builtins import str\na = u'abc'\nb = b'def'\nc = b.decode()\nassert isinstance(a, str) and isinstance(c, str)\n# ...", "unicode", "# Python 2 only:\ntemplates = [u\"blog/blog_post_detail_%s.html\" % unicode(slug)]\n\n# Python 2 and 3: alternative 1\nfrom builtins import str\ntemplates = [u\"blog/blog_post_detail_%s.html\" % str(slug)]\n\n# Python 2 and 3: alternative 2\nfrom builtins import str as text\ntemplates = [u\"blog/blog_post_detail_%s.html\" % text(slug)]", "StringIO", "# Python 2 only:\nfrom StringIO import StringIO\n# or:\nfrom cStringIO import StringIO\n\n# Python 2 and 3:\nfrom io import BytesIO # for handling byte strings\nfrom io import StringIO # for handling unicode strings", "Imports relative to a package\nSuppose the package is:\nmypackage/\n __init__.py\n submodule1.py\n submodule2.py\n\nand the code below is in submodule1.py:", "# Python 2 only: \nimport submodule2\n\n# Python 2 and 3:\nfrom . import submodule2\n\n# Python 2 and 3:\n# To make Py2 code safer (more like Py3) by preventing\n# implicit relative imports, you can also add this to the top:\nfrom __future__ import absolute_import", "Dictionaries", "heights = {'Fred': 175, 'Anne': 166, 'Joe': 192}", "Iterating through dict keys/values/items\nIterable dict keys:", "# Python 2 only:\nfor key in heights.iterkeys():\n ...\n\n# Python 2 and 3:\nfor key in heights:\n ...", "Iterable dict values:", "# Python 2 only:\nfor value in heights.itervalues():\n ...\n\n# Idiomatic Python 3\nfor value in heights.values(): # extra memory overhead on Py2\n ...\n\n# Python 2 and 3: option 1\nfrom builtins import dict\n\nheights = dict(Fred=175, Anne=166, Joe=192)\nfor key in heights.values(): # efficient on Py2 and Py3\n ...\n\n# Python 2 and 3: option 2\nfrom builtins import itervalues\n# or\nfrom six import itervalues\n\nfor key in itervalues(heights):\n ...", "Iterable dict items:", "# Python 2 only:\nfor (key, value) in heights.iteritems():\n ...\n\n# Python 2 and 3: option 1\nfor (key, value) in heights.items(): # inefficient on Py2 \n ...\n\n# Python 2 and 3: option 2\nfrom future.utils import viewitems\n\nfor (key, value) in viewitems(heights): # also behaves like a set\n ...\n\n# Python 2 and 3: option 3\nfrom future.utils import iteritems\n# or\nfrom six import iteritems\n\nfor (key, value) in iteritems(heights):\n ...", "dict keys/values/items as a list\ndict keys as a list:", "# Python 2 only:\nkeylist = heights.keys()\nassert isinstance(keylist, list)\n\n# Python 2 and 3:\nkeylist = list(heights)\nassert isinstance(keylist, list)", "dict values as a list:", "# Python 2 only:\nheights = {'Fred': 175, 'Anne': 166, 'Joe': 192}\nvaluelist = heights.values()\nassert isinstance(valuelist, list)\n\n# Python 2 and 3: option 1\nvaluelist = list(heights.values()) # inefficient on Py2\n\n# Python 2 and 3: option 2\nfrom builtins import dict\n\nheights = dict(Fred=175, Anne=166, Joe=192)\nvaluelist = list(heights.values())\n\n# Python 2 and 3: option 3\nfrom future.utils import listvalues\n\nvaluelist = listvalues(heights)\n\n# Python 2 and 3: option 4\nfrom future.utils import itervalues\n# or\nfrom six import itervalues\n\nvaluelist = list(itervalues(heights))", "dict items as a list:", "# Python 2 and 3: option 1\nitemlist = list(heights.items()) # inefficient on Py2\n\n# Python 2 and 3: option 2\nfrom future.utils import 
listitems\n\nitemlist = listitems(heights)\n\n# Python 2 and 3: option 3\nfrom future.utils import iteritems\n# or\nfrom six import iteritems\n\nitemlist = list(iteritems(heights))", "Custom class behaviour\nCustom iterators", "# Python 2 only\nclass Upper(object):\n def __init__(self, iterable):\n self._iter = iter(iterable)\n def next(self): # Py2-style\n return self._iter.next().upper()\n def __iter__(self):\n return self\n\nitr = Upper('hello')\nassert itr.next() == 'H' # Py2-style\nassert list(itr) == list('ELLO')\n\n# Python 2 and 3: option 1\nfrom builtins import object\n\nclass Upper(object):\n def __init__(self, iterable):\n self._iter = iter(iterable)\n def __next__(self): # Py3-style iterator interface\n return next(self._iter).upper() # builtin next() function calls\n def __iter__(self):\n return self\n\nitr = Upper('hello')\nassert next(itr) == 'H' # compatible style\nassert list(itr) == list('ELLO')\n\n# Python 2 and 3: option 2\nfrom future.utils import implements_iterator\n\n@implements_iterator\nclass Upper(object):\n def __init__(self, iterable):\n self._iter = iter(iterable)\n def __next__(self): # Py3-style iterator interface\n return next(self._iter).upper() # builtin next() function calls\n def __iter__(self):\n return self\n\nitr = Upper('hello')\nassert next(itr) == 'H'\nassert list(itr) == list('ELLO')", "Custom __str__ methods", "# Python 2 only:\nclass MyClass(object):\n def __unicode__(self):\n return 'Unicode string: \\u5b54\\u5b50'\n def __str__(self):\n return unicode(self).encode('utf-8')\n\na = MyClass()\nprint(a) # prints encoded string\n\n# Python 2 and 3:\nfrom future.utils import python_2_unicode_compatible\n\n@python_2_unicode_compatible\nclass MyClass(object):\n def __str__(self):\n return u'Unicode string: \\u5b54\\u5b50'\n\na = MyClass()\nprint(a) # prints string encoded as utf-8 on Py2", "Custom __nonzero__ vs __bool__ method:", "# Python 2 only:\nclass AllOrNothing(object):\n def __init__(self, l):\n self.l = l\n def __nonzero__(self):\n return all(self.l)\n\ncontainer = AllOrNothing([0, 100, 200])\nassert not bool(container)\n\n# Python 2 and 3:\nfrom builtins import object\n\nclass AllOrNothing(object):\n def __init__(self, l):\n self.l = l\n def __bool__(self):\n return all(self.l)\n\ncontainer = AllOrNothing([0, 100, 200])\nassert not bool(container)", "Lists versus iterators\nxrange", "# Python 2 only:\nfor i in xrange(10**8):\n ...\n\n# Python 2 and 3: forward-compatible\nfrom builtins import range\nfor i in range(10**8):\n ...\n\n# Python 2 and 3: backward-compatible\nfrom past.builtins import xrange\nfor i in xrange(10**8):\n ...", "range", "# Python 2 only\nmylist = range(5)\nassert mylist == [0, 1, 2, 3, 4]\n\n# Python 2 and 3: forward-compatible: option 1\nmylist = list(range(5)) # copies memory on Py2\nassert mylist == [0, 1, 2, 3, 4]\n\n# Python 2 and 3: forward-compatible: option 2\nfrom builtins import range\n\nmylist = list(range(5))\nassert mylist == [0, 1, 2, 3, 4]\n\n# Python 2 and 3: option 3\nfrom future.utils import lrange\n\nmylist = lrange(5)\nassert mylist == [0, 1, 2, 3, 4]\n\n# Python 2 and 3: backward compatible\nfrom past.builtins import range\n\nmylist = range(5)\nassert mylist == [0, 1, 2, 3, 4]", "map", "# Python 2 only:\nmynewlist = map(f, myoldlist)\nassert mynewlist == [f(x) for x in myoldlist]\n\n# Python 2 and 3: option 1\n# Idiomatic Py3, but inefficient on Py2\nmynewlist = list(map(f, myoldlist))\nassert mynewlist == [f(x) for x in myoldlist]\n\n# Python 2 and 3: option 2\nfrom builtins import 
map\n\nmynewlist = list(map(f, myoldlist))\nassert mynewlist == [f(x) for x in myoldlist]\n\n# Python 2 and 3: option 3\ntry:\n import itertools.imap as map\nexcept ImportError:\n pass\n\nmynewlist = list(map(f, myoldlist)) # inefficient on Py2\nassert mynewlist == [f(x) for x in myoldlist]\n\n# Python 2 and 3: option 4\nfrom future.utils import lmap\n\nmynewlist = lmap(f, myoldlist)\nassert mynewlist == [f(x) for x in myoldlist]\n\n# Python 2 and 3: option 5\nfrom past.builtins import map\n\nmynewlist = map(f, myoldlist)\nassert mynewlist == [f(x) for x in myoldlist]", "imap", "# Python 2 only:\nfrom itertools import imap\n\nmyiter = imap(func, myoldlist)\nassert isinstance(myiter, iter)\n\n# Python 3 only:\nmyiter = map(func, myoldlist)\nassert isinstance(myiter, iter)\n\n# Python 2 and 3: option 1\nfrom builtins import map\n\nmyiter = map(func, myoldlist)\nassert isinstance(myiter, iter)\n\n# Python 2 and 3: option 2\ntry:\n import itertools.imap as map\nexcept ImportError:\n pass\n\nmyiter = map(func, myoldlist)\nassert isinstance(myiter, iter)", "zip, izip\nAs above with zip and itertools.izip.\nfilter, ifilter\nAs above with filter and itertools.ifilter too.\nOther builtins\nFile IO with open()", "# Python 2 only\nf = open('myfile.txt')\ndata = f.read() # as a byte string\ntext = data.decode('utf-8')\n\n# Python 2 and 3: alternative 1\nfrom io import open\nf = open('myfile.txt', 'rb')\ndata = f.read() # as bytes\ntext = data.decode('utf-8') # unicode, not bytes\n\n# Python 2 and 3: alternative 2\nfrom io import open\nf = open('myfile.txt', encoding='utf-8')\ntext = f.read() # unicode, not bytes", "reduce()", "# Python 2 only:\nassert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5\n\n# Python 2 and 3:\nfrom functools import reduce\n\nassert reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) == 1+2+3+4+5", "raw_input()", "# Python 2 only:\nname = raw_input('What is your name? ')\nassert isinstance(name, str) # native str\n\n# Python 2 and 3:\nfrom builtins import input\n\nname = input('What is your name? ')\nassert isinstance(name, str) # native str on Py2 and Py3", "input()", "# Python 2 only:\ninput(\"Type something safe please: \")\n\n# Python 2 and 3\nfrom builtins import input\neval(input(\"Type something safe please: \"))", "Warning: using either of these is unsafe with untrusted input.\nfile()", "# Python 2 only:\nf = file(pathname)\n\n# Python 2 and 3:\nf = open(pathname)\n\n# But preferably, use this:\nfrom io import open\nf = open(pathname, 'rb') # if f.read() should return bytes\n# or\nf = open(pathname, 'rt') # if f.read() should return unicode text", "exec", "# Python 2 only:\nexec 'x = 10'\n\n# Python 2 and 3:\nexec('x = 10')\n\n# Python 2 only:\ng = globals()\nexec 'x = 10' in g\n\n# Python 2 and 3:\ng = globals()\nexec('x = 10', g)\n\n# Python 2 only:\nl = locals()\nexec 'x = 10' in g, l\n\n# Python 2 and 3:\nexec('x = 10', g, l)", "But note that Py3's exec() is less powerful (and less dangerous) than Py2's exec statement.\nexecfile()", "# Python 2 only:\nexecfile('myfile.py')\n\n# Python 2 and 3: alternative 1\nfrom past.builtins import execfile\n\nexecfile('myfile.py')\n\n# Python 2 and 3: alternative 2\nexec(compile(open('myfile.py').read()))\n\n# This can sometimes cause this:\n# SyntaxError: function ... 
uses import * and bare exec ...\n# See https://github.com/PythonCharmers/python-future/issues/37", "unichr()", "# Python 2 only:\nassert unichr(8364) == '€'\n\n# Python 3 only:\nassert chr(8364) == '€'\n\n# Python 2 and 3:\nfrom builtins import chr\nassert chr(8364) == '€'", "intern()", "# Python 2 only:\nintern('mystring')\n\n# Python 3 only:\nfrom sys import intern\nintern('mystring')\n\n# Python 2 and 3: alternative 1\nfrom past.builtins import intern\nintern('mystring')\n\n# Python 2 and 3: alternative 2\nfrom six.moves import intern\nintern('mystring')\n\n# Python 2 and 3: alternative 3\nfrom future.standard_library import install_aliases\ninstall_aliases()\nfrom sys import intern\nintern('mystring')\n\n# Python 2 and 3: alternative 2\ntry:\n from sys import intern\nexcept ImportError:\n pass\nintern('mystring')", "apply()", "args = ('a', 'b')\nkwargs = {'kwarg1': True}\n\n# Python 2 only:\napply(f, args, kwargs)\n\n# Python 2 and 3: alternative 1\nf(*args, **kwargs)\n\n# Python 2 and 3: alternative 2\nfrom past.builtins import apply\napply(f, args, kwargs)", "chr()", "# Python 2 only:\nassert chr(64) == b'@'\nassert chr(200) == b'\\xc8'\n\n# Python 3 only: option 1\nassert chr(64).encode('latin-1') == b'@'\nassert chr(0xc8).encode('latin-1') == b'\\xc8'\n\n# Python 2 and 3: option 1\nfrom builtins import chr\n\nassert chr(64).encode('latin-1') == b'@'\nassert chr(0xc8).encode('latin-1') == b'\\xc8'\n\n# Python 3 only: option 2\nassert bytes([64]) == b'@'\nassert bytes([0xc8]) == b'\\xc8'\n\n# Python 2 and 3: option 2\nfrom builtins import bytes\n\nassert bytes([64]) == b'@'\nassert bytes([0xc8]) == b'\\xc8'", "cmp()", "# Python 2 only:\nassert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0\n\n# Python 2 and 3: alternative 1\nfrom past.builtins import cmp\nassert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0\n\n# Python 2 and 3: alternative 2\ncmp = lambda(x, y): (x > y) - (x < y)\nassert cmp('a', 'b') < 0 and cmp('b', 'a') > 0 and cmp('c', 'c') == 0", "reload()", "# Python 2 only:\nreload(mymodule)\n\n# Python 2 and 3\nfrom imp import reload\nreload(mymodule)", "Standard library\ndbm modules", "# Python 2 only\nimport anydbm\nimport whichdb\nimport dbm\nimport dumbdbm\nimport gdbm\n\n# Python 2 and 3: alternative 1\nfrom future import standard_library\nstandard_library.install_aliases()\n\nimport dbm\nimport dbm.ndbm\nimport dbm.dumb\nimport dbm.gnu\n\n# Python 2 and 3: alternative 2\nfrom future.moves import dbm\nfrom future.moves.dbm import dumb\nfrom future.moves.dbm import ndbm\nfrom future.moves.dbm import gnu\n\n# Python 2 and 3: alternative 3\nfrom six.moves import dbm_gnu\n# (others not supported)", "commands / subprocess modules", "# Python 2 only\nfrom commands import getoutput, getstatusoutput\n\n# Python 2 and 3\nfrom future import standard_library\nstandard_library.install_aliases()\n\nfrom subprocess import getoutput, getstatusoutput", "subprocess.check_output()", "# Python 2.7 and above\nfrom subprocess import check_output\n\n# Python 2.6 and above: alternative 1\nfrom future.moves.subprocess import check_output\n\n# Python 2.6 and above: alternative 2\nfrom future import standard_library\nstandard_library.install_aliases()\n\nfrom subprocess import check_output", "collections: Counter, OrderedDict, ChainMap", "# Python 2.7 and above\nfrom collections import Counter, OrderedDict, ChainMap\n\n# Python 2.6 and above: alternative 1\nfrom future.backports import Counter, OrderedDict, ChainMap\n\n# Python 2.6 and above: alternative 2\nfrom 
future import standard_library\nstandard_library.install_aliases()\n\nfrom collections import Counter, OrderedDict, ChainMap", "StringIO module", "# Python 2 only\nfrom StringIO import StringIO\nfrom cStringIO import StringIO\n\n# Python 2 and 3\nfrom io import BytesIO\n# and refactor StringIO() calls to BytesIO() if passing byte-strings", "http module", "# Python 2 only:\nimport httplib\nimport Cookie\nimport cookielib\nimport BaseHTTPServer\nimport SimpleHTTPServer\nimport CGIHttpServer\n\n# Python 2 and 3 (after ``pip install future``):\nimport http.client\nimport http.cookies\nimport http.cookiejar\nimport http.server", "xmlrpc module", "# Python 2 only:\nimport DocXMLRPCServer\nimport SimpleXMLRPCServer\n\n# Python 2 and 3 (after ``pip install future``):\nimport xmlrpc.server\n\n# Python 2 only:\nimport xmlrpclib\n\n# Python 2 and 3 (after ``pip install future``):\nimport xmlrpc.client", "html escaping and entities", "# Python 2 and 3:\nfrom cgi import escape\n\n# Safer (Python 2 and 3, after ``pip install future``):\nfrom html import escape\n\n# Python 2 only:\nfrom htmlentitydefs import codepoint2name, entitydefs, name2codepoint\n\n# Python 2 and 3 (after ``pip install future``):\nfrom html.entities import codepoint2name, entitydefs, name2codepoint", "html parsing", "# Python 2 only:\nfrom HTMLParser import HTMLParser\n\n# Python 2 and 3 (after ``pip install future``)\nfrom html.parser import HTMLParser\n\n# Python 2 and 3 (alternative 2):\nfrom future.moves.html.parser import HTMLParser", "urllib module\nurllib is the hardest module to use from Python 2/3 compatible code. You may like to use Requests (http://python-requests.org) instead.", "# Python 2 only:\nfrom urlparse import urlparse\nfrom urllib import urlencode\nfrom urllib2 import urlopen, Request, HTTPError\n\n# Python 3 only:\nfrom urllib.parse import urlparse, urlencode\nfrom urllib.request import urlopen, Request\nfrom urllib.error import HTTPError\n\n# Python 2 and 3: easiest option\nfrom future.standard_library import install_aliases\ninstall_aliases()\n\nfrom urllib.parse import urlparse, urlencode\nfrom urllib.request import urlopen, Request\nfrom urllib.error import HTTPError\n\n# Python 2 and 3: alternative 2\nfrom future.standard_library import hooks\n\nwith hooks():\n from urllib.parse import urlparse, urlencode\n from urllib.request import urlopen, Request\n from urllib.error import HTTPError\n\n# Python 2 and 3: alternative 3\nfrom future.moves.urllib.parse import urlparse, urlencode\nfrom future.moves.urllib.request import urlopen, Request\nfrom future.moves.urllib.error import HTTPError\n# or\nfrom six.moves.urllib.parse import urlparse, urlencode\nfrom six.moves.urllib.request import urlopen\nfrom six.moves.urllib.error import HTTPError\n\n# Python 2 and 3: alternative 4\ntry:\n from urllib.parse import urlparse, urlencode\n from urllib.request import urlopen, Request\n from urllib.error import HTTPError\nexcept ImportError:\n from urlparse import urlparse\n from urllib import urlencode\n from urllib2 import urlopen, Request, HTTPError", "Tkinter", "# Python 2 only:\nimport Tkinter\nimport Dialog\nimport FileDialog\nimport ScrolledText\nimport SimpleDialog\nimport Tix \nimport Tkconstants\nimport Tkdnd \nimport tkColorChooser\nimport tkCommonDialog\nimport tkFileDialog\nimport tkFont\nimport tkMessageBox\nimport tkSimpleDialog\nimport ttk\n\n# Python 2 and 3 (after ``pip install future``):\nimport tkinter\nimport tkinter.dialog\nimport tkinter.filedialog\nimport tkinter.scrolledtext\nimport 
tkinter.simpledialog\nimport tkinter.tix\nimport tkinter.constants\nimport tkinter.dnd\nimport tkinter.colorchooser\nimport tkinter.commondialog\nimport tkinter.filedialog\nimport tkinter.font\nimport tkinter.messagebox\nimport tkinter.simpledialog\nimport tkinter.ttk", "socketserver", "# Python 2 only:\nimport SocketServer\n\n# Python 2 and 3 (after ``pip install future``):\nimport socketserver", "copy_reg, copyreg", "# Python 2 only:\nimport copy_reg\n\n# Python 2 and 3 (after ``pip install future``):\nimport copyreg", "configparser", "# Python 2 only:\nfrom ConfigParser import ConfigParser\n\n# Python 2 and 3 (after ``pip install future``):\nfrom configparser import ConfigParser", "queue", "# Python 2 only:\nfrom Queue import Queue, heapq, deque\n\n# Python 2 and 3 (after ``pip install future``):\nfrom queue import Queue, heapq, deque", "repr, reprlib", "# Python 2 only:\nfrom repr import aRepr, repr\n\n# Python 2 and 3 (after ``pip install future``):\nfrom reprlib import aRepr, repr", "UserDict, UserList, UserString", "# Python 2 only:\nfrom UserDict import UserDict\nfrom UserList import UserList\nfrom UserString import UserString\n\n# Python 3 only:\nfrom collections import UserDict, UserList, UserString\n\n# Python 2 and 3: alternative 1\nfrom future.moves.collections import UserDict, UserList, UserString\n\n# Python 2 and 3: alternative 2\nfrom six.moves import UserDict, UserList, UserString\n\n# Python 2 and 3: alternative 3\nfrom future.standard_library import install_aliases\ninstall_aliases()\nfrom collections import UserDict, UserList, UserString", "itertools: filterfalse, zip_longest", "# Python 2 only:\nfrom itertools import ifilterfalse, izip_longest\n\n# Python 3 only:\nfrom itertools import filterfalse, zip_longest\n\n# Python 2 and 3: alternative 1\nfrom future.moves.itertools import filterfalse, zip_longest\n\n# Python 2 and 3: alternative 2\nfrom six.moves import filterfalse, zip_longest\n\n# Python 2 and 3: alternative 3\nfrom future.standard_library import install_aliases\ninstall_aliases()\nfrom itertools import filterfalse, zip_longest" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
taliamo/Final_Project
organ_pitch/Scripts/upload_env_data.ipynb
mit
[ "T. Martz-Oberlander, 2015-11-12, CO2 and Speed of Sound\nFormatting ENVIRONMENTAL CONDITIONS pipe organ data for Python operations\nNOTE: Here, pitch and frequency are used interchangeably to signify the speed of sound from organ pipes.\nThe entire script looks for mathematical relationships between CO2 concentration changes and pitch changes from a pipe organ. This script uploads, cleans data and organizes new dataframes, creates figures, and performs statistical tests on the relationships between variable CO2 and frequency of sound from a note played on a pipe organ.\nThis uploader script:\n1) Uploads CO2, temp, and RH data files;\n2) Munges it (creates a Date Time column for the time stamps), establishes column contents as floats;\n3) Calculates expected frequency, as per Cramer's equation;\n4) Imports output from pitch_data.py script, the dataframe with measured frequency;\n5) Plots expected frequency curve, CO2 (ppm) curve, and measured pitch points in a figure.\n[ Here I pursue data analysis route 1 (as mentionted in my organ_pitch/notebook.md file), which involves comparing one pitch dataframe with one dataframe of environmental characteristics taken at one sensor location. Both dataframes are compared by the time of data recorded. ]", "# I import useful libraries (with functions) so I can visualize my data\n# I use Pandas because this dataset has word/string column titles and I like the readability features of commands and finish visual products that Pandas offers\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport re\nimport numpy as np\n\n%matplotlib inline\n\n#I want to be able to easily scroll through this notebook so I limit the length of the appearance of my dataframes \nfrom pandas import set_option\nset_option('display.max_rows', 10)", "Uploaded RH and temp data into Python¶\nFirst I upload my data set(s). I am working with environmental data from different locations in the church at differnet dates. Files include: environmental characteristics (CO2, temperature (deg C), and relative humidity (RH) (%) measurements). \nI can discard the CO2_2 column values since they are false measurements logged from an empty input jack in the CO2 HOBOWare ^(r) device.", "#I import a temp and RH data file\nenv=pd.read_table('../Data/CO2May.csv', sep=',')\n\n#assigning columns names\nenv.columns=[['test', 'time','temp C', 'RH %', 'CO2_1', 'CO2_2']]\n\n#I display my dataframe\nenv\n\n#Plot CO2 \n\nplt.plot(env['CO2_1'], color='navy')\n\nplt.title('CO2 Concentration Spikes with Chapel Population')\nplt.ylabel('CO2 (ppm)', color='navy')\nplt.xlabel('Sample number (2 mins)')\n\n#Save the figure \nplt.savefig('CO2_figure2.pdf')\n#return(fig)\n\n\n#change data time variable to actual values of time. \nenv['time']= pd.to_datetime(env['time'])\n\n#print the new table and the type of data. \nprint(env)\n\nenv.dtypes", "Next\n1. Create a function for expected pitch (frequency of sound waves) from CO2 data\n2. Add expected_frequency to dataframe\nCalculated pitch from CO2 levels\nHere I use Cramer's equation for frequency of sound from CO2 concentration (1992). \nfreq = a0 + a1(T) + ... + (a9 +...) +... + a14(xc^2)\nwhere xc is the mole fraction of CO2 and T is temperature. Full derivation of these equations can be found in the \"Doc\" directory.\nI will later plot measured pitch (frequency) data points from my \"pitch\" data frame on top of these calculated frequency values for comparison.", "#Here I am trying to create a function for the above equation. 
\n#I want to plug in each CO2_ave value for a time stamp (row) from the \"env\" data frame above. \n\n#define coefficients (Cramer, 1992)\na0 = 331.5024\n#a1 = 0.603055\n#a2 = -0.000528\na9 = -(-85.20931) #need to account for negative values\n#a10 = -0.228525\na14 = 29.179762\n\n#xc = CO2 values from dataframe\n\n\n#test function\ndef test_cramer():\n assert a0 + ((a9)*400)/100 + a14*((400/1000000)**2) == 672.33964466, 'Equation failure'\n return()\n\ntest_cramer()\n\n#This function also converts ppm to mole fraction (just quantity as a proportion of total)\ndef cramer(data):\n '''Calculate pitch from CO2_1 concentration'''\n \n calc_freq = a0 + ((a9)*data)/100 + a14*((data/1000000)**2)\n \n return(calc_freq)\n\n#run the cramer values for the calculated frequency \n#calc_freq = cramer(env['calc_freq'])\n\n#define the new column as the output of the cramer function\n#env['calc_freq'] = calc_freq\n\n#Run the function for the input column (CO2 values)\nenv['calc_freq'] = cramer(env['CO2_1'])\n\ncramer(env['CO2_1'])\n\n#check the dataframe\n#calculated frequency values seem reasonable based on changes in CO2\nenv\n\n#Now I call in my measured pitch data, \n#to be able to visually compare calculated and measured\n\n#Import the measured pitch values--the output of pitch_data.py script\nmeasured_freq = pd.read_table('../Data/pitches.csv', sep=',')\n\n#change data time variable to actual values of time. \nenv['time']= pd.to_datetime(env['time'])\n\n#I test to make sure I'm importing the correct data\nmeasured_freq", "Visualizing the expected pitch values by time\n1. Plot calculated frequency, CO2 (ppm), and measured frequency values", "print(calc_freq)\n\n#define variables from dataframe columns\nCO2_1 = env[['CO2_1']]\n\ncalc_freq=env[['calc_freq']]\n\n#measured_pitch = output_from_'pitch_data.py'\n\n\n#want to set x-axis as date_time\n#how do I format the ax2 y axis scale\n\ndef make_plot(variable_1, variable_2):\n '''Make a three variable plot with two axes'''\n\n#plot title\n plt.title('CO2 and Calculated Pitch', fontsize='14')\n\n#twinx layering\n ax1=plt.subplot()\n ax2=ax1.twinx()\n #ax3=ax1.twinx()\n\n#call data for the plot\n ax1.plot(CO2_1, color='g', linewidth=1)\n ax2.plot(calc_freq, color= 'm', linewidth=1) \n #ax3.plot(measured_freq, color = 'b', marker= 'x')\n\n#axis labeling\n ax1.yaxis.set_tick_params(labelcolor='grey')\n ax1.set_xlabel('Sample Number')\n ax1.set_ylabel('CO2 (ppm)', fontsize=12, color = 'g')\n ax2.set_ylabel('Calculated Pitch (Hz)', fontsize=12, color='m') \n #ax3.set_ylabel('Measured Pitch')\n\n#axis limits\n ax1.set_ylim([400,1300])\n ax2.set_ylim([600, 1500])\n\n #plt.savefig('../Figures/fig1.pdf')\n\n#Close function\n return()#'../Figures/fig1.pdf')\n\n\n#Call my function to test it \nmake_plot(CO2_1, calc_freq)\n", "Here we see the relationship between CO2 concentration in parts per million and the expected changes in pitch. Measured pitch values did not match the time of sampling with that of CO2, so therefore could not be plotted. 
Measured pitch data would have been \"ax3\".\nEnd of script", "#def make_fig(datasets, variable_1, variable_2, savename):\n\n#twinx layering\nax1=plt.subplot()\nax2=ax1.twinx()\n\n#plot 2 variables in predertermined plot above\nax1.plot(dataset.index, variable_1, 'k-', linewidth=2)\nax2.plot(dataset.index, variable_2, )\n\n#moving plots lines\nvariable_2_spine=ax2.spines['right']\nvariable_2_spine.set_position(('axes', 1.2))\n\nax1.yaxi.set_tick_params(labelcolor='k')\nax1.set_ylabel(variable_1.name, fontsize=13, colour = 'k')\nax2.sey_ylabel(variable_2.name + '($^o$C)', fontsize=13, color='grey')\n\n#plt.savefig(savename)\nreturn(savename)\n\n\nfig = plt.figure(figsize=(11,14))\nplt.suptitle('')\n\nax1.plot(colum1, colum2, 'k-', linewidth=2)\n\" \"\n\nax1.set_ylim([0,1])\nax2.set_ylim([0,1])\n\nax1.set_xlabel('name', fontsize=14, y=0)\nax1.set_ylabel\nax2.set_ylabel\n\n#'float' object not callable--the data in \"CO2_1\" are objects and cannot be called into the equation\n#cramer(env.CO2_ave) \n\nenv.dtypes\n\nenv.CO2_1.dtypes\n\nnew = pd.Series([env.CO2_1], name = 'CO2_1')\n\nCO2_1 = new.tolist()\n\nCO2_array = np.array(CO2_1)\n\n#Test type of data in \"CO2_1\" column\nenv.CO2_1.dtypes\n\ncramer(CO2_array)\n\ntype(CO2_array)\n\n# To choose which CO2 value to use, I first visualize which seems normal \n\n#Create CO2-only dataframs\nCO2 = env[['CO2_1', 'CO2_2']]\n\n#Make a plot\nCO2_fig = plt.plot(CO2)\n\nplt.ylabel('CO2 (ppm)')\nplt.xlabel('Sample number')\nplt.title('Two CO2 sensors, same time and place')\n\n#plt.savefig('CO2_fig.pdf')\n\ninput_file = env\n\n\n\n#Upload environmental data file\nenv = pd.read_table('', sep=',')\n\n\n\n#assigning columns names\nenv.columns=[['test', 'date_time','temp C', 'RH %', 'CO2_1', 'CO2_2']]\n\n#change data time variable to actual values of time.\nenv['date_time']= pd.to_datetime(env['date_time'])\n\n#test function\n #def test_cramer():\n #assert a0 + ((a9)*400)/100 + a14*((400/1000000)**2) == 672.339644669, 'Equation failure, math-mess-up'\n #return()\n\n#Call the test function\n #test_cramer()\n\n#pitch calculator function from Cramer equation\ndef cramer(data):\n '''Calculate pitch from CO2_1 concentration'''\n calc_freq = a0 + ((a9*data)/100) + a14*((data)**2)\n return(calc_freq)\n\n#Run the function for the input column (CO2 values) to get a new column of calculated_frequency\nenv['calc_freq'] = cramer(env['CO2_1'])\n\n#Import the measured pitch values--the output of pitch_data.py script\nmeasured_freq = pd.read_table('../organ_pitch/Data/munged_pitch.csv', sep=',')\n\n#change data time variable to actual values of time.\nenv['time']= pd.to_datetime(env['time'])\n\n#Function to make and save a plot\n\n\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dominikgrimm/ridge_and_svm
RidgeRegression.ipynb
mit
[ "Ridge Regression\nRückblick Lineare Regression\nLineare Regression: $\\mathbf{y}=𝑏+w_1\\mathbf{x}$\n$\\mathbf{x} \\in \\mathbb{R}^n$: Einflussgröße (Feature)\n$\\mathbf{y} \\in \\mathbb{R}^n$: Zielvariable (Target)\n$n$: Anzahl der Trainingsinstanzen\n$b,w_1 \\in \\mathbb{R}$: Gewichte/Parameter \nLinear Regression Straffunktion (Loss) ist definiert als: \n$\\mathcal{L}(\\mathbf{w})= \\sum_{i=1}^n \\left[y_i - (b - \\mathbf{w}^T \\mathbf{x}_i) \\right]^2$ \nZum lernen der unbekannten Gewichte $\\mathbf{w}$ muss man die Straffunktion $\\mathcal{L}$ minimieren. \nSimuliere und Plotte Daten", "#Importiere Python Libraries\n%matplotlib inline\nimport pylab as pl\nimport seaborn as sns\nsns.set(font_scale=1.7)\n\nfrom plotly.offline import init_notebook_mode, iplot\nfrom plotly.graph_objs import *\nimport plotly.tools as tls\n#Set to True\ninit_notebook_mode(connected=True)\n\nimport scipy as sp\nfrom sklearn.preprocessing import PolynomialFeatures, StandardScaler\nfrom sklearn.linear_model import LinearRegression, Ridge\nfrom sklearn.pipeline import Pipeline\nfrom ipywidgets import *\nfrom IPython.display import display\n\n#Funktion zum Plotten der Daten\ndef plot_data(X,y,model=None,interactive=False):\n fig = pl.figure(figsize=(10,6))\n pl.plot(X,y,'o',markersize=10)\n pl.xlabel(\"x\")\n pl.ylabel(\"y\")\n pl.title(\"\")\n pl.ylim([-1.1,1.1])\n pl.xlim([-3.1,3.1])\n pl.xticks([-3,-2,-1,0,1,2,3],[\"50\",\"60\",\"70\",\"80\",\"90\",\"100\",\"110\"])\n pl.yticks([-1,-0.5,0,0.5,1],[\"200k\",\"400k\",\"600k\",\"800k\",\"1M\"])\n if not model==None:\n X_new=sp.linspace(-3, 3, 100).reshape(100, 1)\n y_new = model.predict(X_new)\n pl.plot(X_new,y_new,\"r-\",linewidth=4,label=\"Learned Regression Fit\")\n pl.legend()\n if interactive:\n plotly_fig = tls.mpl_to_plotly(fig)\n iplot(plotly_fig, show_link=False)\n\n\n#Funktion um Beispieldaten zu simulieren\ndef generate_data():\n sp.random.seed(42)\n X = sp.arange(-3,3,1.0/20.0).reshape(-1,1)\n y = sp.sin(0.2*sp.pi*X+0.1*sp.random.randn(X.shape[0],1))\n return X,y\n\ndef generate_polynomial_features(X,degree=1,return_transformer=True):\n transformer = PolynomialFeatures(degree=degree, include_bias=False)\n X_poly = transformer.fit_transform(X)\n if return_transformer:\n return X_poly, transformer\n else:\n return X_poly\n\n#Generiere Daten\nX,y = generate_data()\nprint X.shape\n#Plotte Daten\nplot_data(X,y,interactive=True);", "Lerne Lineare Regression auf Daten", "#Lerne Lineare Regression\nprint \"Anzahl der Trainingsinstanzen:\\t%d\"%(X.shape[0])\nprint \"Anzahl der Features:\\t\\t%d\"%(X.shape[1])\nmodel = LinearRegression()\nmodel.fit(X,y)\n#Plotte Daten und die gelernte Funktion\nplot_data(X,y,model,interactive=True);", "<span style=\"color:orange\">Model beschreibt die zugrundeliegenden Daten nur schlecht -> Model ist Unterangepasst!</span>\nPolynomiale Regression\nPolynomiale Regression durch hinzufügen von Features höherer Ordnung, z.B. Polynom des 100. 
Grades: \n$\\mathbf{y} = b + w_1 \\mathbf{x}_1 + w_2 \\mathbf{x}_1^2 + w_3 \\mathbf{x}_1^3 + \\dots + + w_2 \\mathbf{x}_1^{100} $", "#Funktion um eine Polynomielle Regression unterschiedlichen Grades zu plotten\ndef render_polynomial_regression(degree=150):\n #Lerne Lineare Regression auf polynomiellen Features\n transformer = PolynomialFeatures(degree=degree, include_bias=False)\n scaler = StandardScaler()\n model = LinearRegression()\n\n #Polynomielle Regression mit Feature Scaling\n polynomial_regression = Pipeline((\n ('make_poly_features',transformer),\n (\"scale_features\",scaler),\n (\"run_linreg\",model),\n ))\n\n polynomial_regression.fit(X,y)\n #Plotte Daten und die gelernte Funktion\n plot_data(X,y,polynomial_regression)\n pl.show()\n\n#Render einen Interaktiven Plot\n#interact(render_polynomial_regression,degree=IntSlider(min=1,max=300,value=100,\n# description=\"Grad des Polynoms:\"));\nrender_polynomial_regression(degree=100)", "<span style=\"color:orange\">Model beschreibt die Daten zu gut --> Model ist Üeberangepasst und führt zu einer schlechten Generalisierung!</span>\nEinführung in Ridge Regression\nRidge Regression Loss ist definiert als: \n$\\mathcal{L}{Ridge}(\\mathbf{w})=\\frac{1}{n}\\sum{i=1}^n \\left[y_i - (b - \\mathbf{w}^T \\mathbf{x}i) \\right]^2 + \\underbrace{\\alpha \\Vert \\mathbf{w}\\Vert_2^2}{Strafterm}$ \nZum lernen der unbekannten Gewichte $\\mathbf{w}$ muss man die Straffunktion $\\mathcal{L}_{Ridge}$ minimieren.", "#Lerne Ridge Regression auf polynomiellen Features mit alpha=1.1\nridge_regression = Pipeline((\n ('make_poly_features',PolynomialFeatures(degree=100, include_bias=False)),\n (\"scale_features\",StandardScaler()),\n (\"run_ridgereg\",Ridge(alpha=1.1)),\n))\n\nridge_regression.fit(X,y)\n\nplot_data(X,y,ridge_regression,interactive=True)", "<span style=\"color:green\">Optimale Abwägung zwischen zu einfachem und zu komplexem Model durch L2-Regularisierung!\n</span>\nEffekt von $\\alpha$ auf die Gewichte", "#Funktion um den Effekt von alpha auf die Gewichte zu illustrieren\ndef plot_effect_of_alpha(interactive=False):\n coefs = []\n alphas = sp.logspace(5,-6,200)\n poly_feat = PolynomialFeatures(degree=10, include_bias=False)\n scaler = StandardScaler()\n for alpha in alphas:\n model = Ridge(alpha=alpha)\n ridge_regression = Pipeline((\n ('make_poly_features',poly_feat),\n (\"scale_features\",scaler),\n (\"run_ridgereg\",model),\n ))\n\n ridge_regression.fit(X,y)\n\n X_new=sp.linspace(-3, 3, 100).reshape(100, 1)\n y_new = ridge_regression.predict(X_new)\n coefs.append(model.coef_.flatten()[1:])\n fig = pl.figure(figsize=(10,6))\n ax = pl.gca()\n ax.plot(alphas, coefs,linewidth=3)\n ax.set_xscale('log')\n if interactive:\n pl.xlabel(\"alpha\")\n else:\n pl.xlabel('$\\\\alpha$')\n pl.ylabel('Gewichte')\n pl.axis('tight')\n if interactive:\n pl.xticks(fontsize=13)\n plotly_fig = tls.mpl_to_plotly(fig)\n iplot(plotly_fig, show_link=False)\n else:\n pl.show()\n#Plot Effect of Alpha\nplot_effect_of_alpha(interactive=True);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SeldonIO/seldon-server
python/examples/doc_similarity_reuters.ipynb
apache-2.0
[ "Creating a document similarity microservice for the Reuters-21578 dataset.\nFirst download the Reuters-21578 dataset in JSON format into the local folder:\nbash\ngit clone https://github.com/fergiemcdowall/reuters-21578-json\nThe first step will be to convert this into the default corpus format we use:", "import json\nimport codecs \nimport os\n\ndocs = []\nfor filename in os.listdir(\"reuters-21578-json/data/full\"):\n f = open(\"reuters-21578-json/data/full/\"+filename)\n js = json.load(f)\n for j in js:\n if 'topics' in j and 'body' in j:\n d = {}\n d[\"id\"] = j['id']\n d[\"text\"] = j['body'].replace(\"\\n\",\"\")\n d[\"title\"] = j['title']\n d[\"tags\"] = \",\".join(j['topics'])\n docs.append(d)\nprint \"loaded \",len(docs),\" documents\"", "Create a gensim LSI document similarity model", "from seldon.text import DocumentSimilarity,DefaultJsonCorpus\nimport logging\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\n\ncorpus = DefaultJsonCorpus(docs)\nds = DocumentSimilarity(model_type='gensim_lsi')\nds.fit(corpus)\nprint \"done\"\n", "Run accuracy tests\nRun a test over the document to compute average jaccard similarity to the 1-nearest neighbour for each document using the \"tags\" field of the meta data as the ground truth.", "ds.score()", "Run a test again but use the Annoy approximate nearest neighbour index that would have been built. Should be much faster.", "ds.score(approx=True)", "Run single nearest neighbour query\nRun a nearest neighbour query on a single document and print the title and tag meta data", "query_doc=6023\nprint \"Query doc: \",ds.get_meta(query_doc)['title'],\"Tagged:\",ds.get_meta(query_doc)['tags']\nneighbours = ds.nn(query_doc,k=5,translate_id=True,approx=True)\nprint neighbours\nfor (doc_id,_) in neighbours:\n j = ds.get_meta(doc_id)\n print \"Doc id\",doc_id,j['title'],\"Tagged:\",j['tags']", "Save recommender\nSave the recommender to the filesystem in reuters_recommender folder", "import seldon\nrw = seldon.Recommender_wrapper()\nrw.save_recommender(ds,\"reuters_recommender\")\nprint \"done\"", "Start a microservice to serve the recommender", "from seldon.microservice import Microservices\nm = Microservices()\napp = m.create_recommendation_microservice(\"reuters_recommender\")\napp.run(host=\"0.0.0.0\",port=5000,debug=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xMyrst/BigData
python/howto/001_Markdown y maquetación.ipynb
gpl-3.0
[ "# HowTo Python 3\n# By César Calderón\n# Maquetación mediante Markdown para la presentación de información en Python 3\n\n#Página Oficial de MARKDOWN by John Gruber\n# https://daringfireball.net/projects/markdown/", "Maquetación Python 3 (Markdown)", "# Los HEADERS o encabezados se definen mediante #, existen 6 niveles siendo un sólo # el de mayor tamaño y ###### el de menor\n# Ejemplo:", "Encabezado de tres \"###\"\nEncabezado de cinco \"#####\"", "# Emphasis o estilos del texto\n# *cursiva*\n# **negrita**\n# ~~tachado~~\n# Ejemplos:", "texto en cursiva\ntexto en negrita\nasteriscos y cursiva en una sola línea\n~~texto tachado~~", "# Como crear listas ordenadas o elementos no númerados\n# Ejemplos:", "Mediante número seguido de punto se define un elemento ordenado de una lista\nSegundo elemento del listado (2.)\n\nTercer elemento (3.)\n\nElemento no numerado (mediante \"*\", también son válidos \"-\" y \"+\")\nElemento de segundo orden con numeración\n\n\n\n\n\nElemento 1\n\nElemento 2\nElemento 3\nSub-elemento 1\nSub-elemento 2", "# En iPython se puede escribir código HTML para maquetar o presentar datos\n# por ejemplo para definir un encabezado se puede escribir <h4>h1 Heading</h4> o, como ya hemos visto, #### Heading 1\n# Ejemplo:", "Encabezado mediante MARKDOWN\n<h4>Encabezado mediante HTML</h4>\n\n'<!--\nSe puede comentar texto como si de JAVA se tratase\n-->'\n<pre>\n <code>\n// Comentarios\nlínea 1 de código\nlínea 2 de código\nlínea 3 de código\n </code>\n</pre>", "# Creación de párrafos mediante MARKDOWN\n# > especifica el primer nivel de párrafo, sucesivos > profundizan en la sangría de los mismos", "Creación de párrafos\n\nPárrafo 1\n\nPárrafo 2\n\nPárrafo 3", "# Mediante el uso de ''' ''' se puede mantener una estructura de un comentario, \n# por ejemplo cuando se escribe código para que se vea legible.\n", "Ejemplo de comentario de un código\njs\ngrunt.initConfig({\n assemble: {\n options: {\n assets: 'docs/assets',\n data: 'src/data/*.{json,yml}',\n helpers: 'src/custom-helpers.js',\n partials: ['src/partials/**/*.{hbs,md}']\n },\n pages: {\n options: {\n layout: 'default.hbs'\n },\n files: {\n './': ['src/templates/pages/index.hbs']\n }\n }\n }\n};", "# Creación de tablas en MARKDOWN\n# | Option | Description |\n# | ------ | ----------- |\n# Si se usa : en la linea anterior se alinea el texto a izquiera o derecha, con : a ambos lados se alinea centrado\n# | ------: | :----------- |\n# | data: | path to data files to supply the data that will be passed into templates. |\n# | engine | engine to be used for processing templates. Handlebars is the default. |\n# | ext | extension to be used for dest files. |\n", "| Opción | Descripción |\n| :----: | :---------- |\n| datos 1 | texto 1 |\n| datos 2 | texto 2 |\n| datos 3 | texto 3 |", "# Enlaces incrustados mediante MARKDOWN\n# [Texto](http://web \"comentario mouseover\")", "Enlace básico\nEnlace con información al realizar un mouseover", "# Incrustado de imágenes\n# ![Texto](http://imagen \"comentario mouseover\")", "" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jrieke/machine-intelligence-2
sheet04/4.ipynb
mit
[ "Machine Intelligence II (week 4) - Team MensaNord\n\nNikolai Zaki\nAlexander Moore\nJohannes Rieke\nGeorg Hoelger\nOliver Atanaszov", "from __future__ import division, print_function\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Exercise 1\na)", "std = 0.1\nmeans = np.array([[-0.5, -0.2], [0, 0.6], [0.5, 0]])\n\nnum_samples_per_mean = 30\nnum_samples = len(means) * num_samples_per_mean\n\nx = np.vstack([np.random.normal(mean, std, size=[num_samples_per_mean, 2]) for mean in means])\n\nplt.scatter(x[:, 0], x[:, 1], label='data')\nplt.scatter(means[:, 0], means[:, 1], c='r', label='means')\nplt.xlabel('x1')\nplt.ylabel('x2')\nplt.legend()", "b)", "def rbf_kernel(x_alpha, x_beta, sigma=1):\n return np.exp(-np.linalg.norm(x_alpha - x_beta)**2 / (2 * sigma**2))\n\nrbf_kernel(x[0], x[1]), rbf_kernel(x[0], x[-1])\n\nkernel_matrix = np.zeros((num_samples, num_samples))\n\nfor (i, j), value in np.ndenumerate(kernel_matrix):\n kernel_matrix[i, j] = rbf_kernel(x[i], x[j], sigma=0.5)\n \nplt.imshow(kernel_matrix, interpolation='none')\nplt.colorbar()\n\nnp.mean(kernel_matrix)\n\n# Normalize kernel matrix to zero mean.\nnormalized_kernel_matrix = np.zeros_like(kernel_matrix)\n\nfor (i, j), value in np.ndenumerate(kernel_matrix):\n normalized_kernel_matrix[i, j] = kernel_matrix[i, j] - 1 / num_samples * np.sum(kernel_matrix[i]) - 1 / num_samples * np.sum(kernel_matrix[:, j]) + 1 / num_samples**2 * np.sum(kernel_matrix)\n \nnp.mean(normalized_kernel_matrix)\n\n# Solve eigenvalue problem.\nfrom scipy.linalg import eig\nevals, evecs = eig(normalized_kernel_matrix)\nevecs = evecs.T # make each row one eigenvector\n\n# Normalize eigenvectors to unit length in feature space.\nnormalized_evecs = evecs.copy()\nfor evec, val in zip(normalized_evecs, evals):\n evec /= np.sqrt(num_samples * val) * np.linalg.norm(evec)", "c)", "grids_pc_values = [] # one grid for each PC, containing the projected values of the test points for this PC\n\ngrid_x = np.linspace(-0.8, 0.8, 10)\ngrid_y = np.linspace(-0.6, 1, 10)\n\nfor evec in evecs[:8]:\n grid = np.zeros((len(grid_x), len(grid_y)))\n\n for (i, j), _ in np.ndenumerate(grid):\n vec = np.array([grid_x[i], grid_y[j]])\n\n for beta in range(num_samples):\n grid[i, j] += evec[beta] * (rbf_kernel(x[beta], vec) - 1 / num_samples * np.sum(kernel_matrix[beta]) - 1 / num_samples * np.sum([rbf_kernel(x_vec, vec) for x_vec in x]) + 1 / num_samples**2 * np.sum(kernel_matrix))\n\n grids_pc_values.append(grid)\n\nfig, axes = plt.subplots(2, 4, figsize=(16, 7))\n\nfor ((i, j), ax), grid in zip(np.ndenumerate(axes), grids_pc_values):\n plt.sca(ax)\n plt.pcolor(grid_x, grid_y, grid)\n plt.scatter(x[:, 0], x[:, 1], c='gray')\n \n if i == 1:\n plt.xlabel('x1')\n \n if j == 0:\n plt.ylabel('x2')", "Each of the first 8 PCs (visualized in the 8 plots above) has a gradient-like structure in the input space. For example, the first PC (top left) seems like a linear gradient from bottom left to top right. \nd)\nKernel-PCA can be used in all cases where the data points in the original space are not distributed \"linearly\", i.e. the main variation is not along a line in the space. For example, if the data points are in the form of a parabola or circle, a Kernel PCA can help to transform the data into another vector space, where the principal components (i.e. the directions of variation) are easier to find.\nOne example use case of Kernel-PCA is image de-noising (http://citeseer.ist.psu.edu/old/mika99kernel.html)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
KnHuq/Dynamic-Tensorflow-Tutorial
Vhanilla_RNN/RNN.ipynb
mit
[ "<span style=\"color:green\"> VANILLA RNN ON 8*8 MNIST DATASET TO PREDICT TEN CLASS\n<span style=\"color:blue\">Its a dynamic sequence and batch vhanilla rnn. This is created with tensorflow scan and map higher ops!!!!\n<span style=\"color:blue\">This is a base rnn which can be used to create GRU, LSTM, Neural Stack Machine, Neural Turing Machine and RNN-EM and so on!\nImporting Libraries", "import numpy as np\nimport tensorflow as tf\nfrom sklearn.datasets import load_digits \nfrom sklearn.cross_validation import train_test_split\nimport pylab as pl\nfrom IPython import display\nimport sys\n%matplotlib inline", "Vhanilla RNN class and functions", "class RNN_cell(object):\n\n \"\"\"\n RNN cell object which takes 3 arguments for initialization.\n input_size = Input Vector size\n hidden_layer_size = Hidden layer size\n target_size = Output vector size\n\n \"\"\"\n\n def __init__(self, input_size, hidden_layer_size, target_size):\n\n # Initialization of given values\n self.input_size = input_size\n self.hidden_layer_size = hidden_layer_size\n self.target_size = target_size\n\n # Weights and Bias for input and hidden tensor\n self.Wx = tf.Variable(tf.zeros(\n [self.input_size, self.hidden_layer_size]))\n self.Wh = tf.Variable(tf.zeros(\n [self.hidden_layer_size, self.hidden_layer_size]))\n self.bi = tf.Variable(tf.zeros([self.hidden_layer_size]))\n\n # Weights for output layers\n self.Wo = tf.Variable(tf.truncated_normal(\n [self.hidden_layer_size, self.target_size],mean=0,stddev=.01))\n self.bo = tf.Variable(tf.truncated_normal([self.target_size],mean=0,stddev=.01))\n\n # Placeholder for input vector with shape[batch, seq, embeddings]\n self._inputs = tf.placeholder(tf.float32,\n shape=[None, None, self.input_size],\n name='inputs')\n\n # Processing inputs to work with scan function\n self.processed_input = process_batch_input_for_RNN(self._inputs)\n\n '''\n Initial hidden state's shape is [1,self.hidden_layer_size]\n In First time stamp, we are doing dot product with weights to\n get the shape of [batch_size, self.hidden_layer_size].\n For this dot product tensorflow use broadcasting. But during\n Back propagation a low level error occurs.\n So to solve the problem it was needed to initialize initial\n hiddden state of size [batch_size, self.hidden_layer_size].\n So here is a little hack !!!! 
Getting the same shaped\n initial hidden state of zeros.\n '''\n\n self.initial_hidden = self._inputs[:, 0, :]\n self.initial_hidden = tf.matmul(\n self.initial_hidden, tf.zeros([input_size, hidden_layer_size]))\n\n # Function for vhanilla RNN.\n def vanilla_rnn(self, previous_hidden_state, x):\n \"\"\"\n This function takes previous hidden state and input and\n outputs current hidden state.\n \"\"\"\n current_hidden_state = tf.tanh(\n tf.matmul(previous_hidden_state, self.Wh) +\n tf.matmul(x, self.Wx) + self.bi)\n\n return current_hidden_state\n\n # Function for getting all hidden state.\n def get_states(self):\n \"\"\"\n Iterates through time/ sequence to get all hidden state\n \"\"\"\n\n # Getting all hidden state throuh time\n all_hidden_states = tf.scan(self.vanilla_rnn,\n self.processed_input,\n initializer=self.initial_hidden,\n name='states')\n\n return all_hidden_states\n\n # Function to get output from a hidden layer\n def get_output(self, hidden_state):\n \"\"\"\n This function takes hidden state and returns output\n \"\"\"\n output = tf.nn.relu(tf.matmul(hidden_state, self.Wo) + self.bo)\n\n return output\n\n # Function for getting all output layers\n def get_outputs(self):\n \"\"\"\n Iterating through hidden states to get outputs for all timestamp\n \"\"\"\n all_hidden_states = self.get_states()\n\n all_outputs = tf.map_fn(self.get_output, all_hidden_states)\n\n return all_outputs\n\n\n# Function to convert batch input data to use scan ops of tensorflow.\ndef process_batch_input_for_RNN(batch_input):\n \"\"\"\n Process tensor of size [5,3,2] to [3,5,2]\n \"\"\"\n batch_input_ = tf.transpose(batch_input, perm=[2, 0, 1])\n X = tf.transpose(batch_input_)\n\n return X\n", "Placeholder and initializers", "hidden_layer_size = 110\ninput_size = 8\ntarget_size = 10\n\ny = tf.placeholder(tf.float32, shape=[None, target_size],name='inputs')", "Models", "#Initializing rnn object\nrnn=RNN_cell( input_size, hidden_layer_size, target_size)\n\n#Getting all outputs from rnn\noutputs = rnn.get_outputs()\n\n#Getting final output through indexing after reversing\nlast_output = outputs[-1]\n\n#As rnn model output the final layer through Relu activation softmax is used for final output.\noutput=tf.nn.softmax(last_output)\n\n#Computing the Cross Entropy loss \ncross_entropy = -tf.reduce_sum(y * tf.log(output))\n\n# Trainning with Adadelta Optimizer\ntrain_step = tf.train.AdamOptimizer().minimize(cross_entropy)\n\n#Calculatio of correct prediction and accuracy\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(output,1))\naccuracy = (tf.reduce_mean(tf.cast(correct_prediction, tf.float32)))*100", "Dataset Preparation", "sess=tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n\n#Using Sklearn MNIST dataset.\ndigits = load_digits()\nX=digits.images\nY_=digits.target\n\n# One hot encoding\nY = sess.run(tf.one_hot(indices=Y_, depth=target_size))\n\n#Getting Train and test Dataset\nX_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.22, random_state=42)\n\n#Cuttting for simple iteration\nX_train=X_train[:1400]\ny_train=y_train[:1400]\n\n#Iterations to do trainning\nfor epoch in range(120):\n \n start=0\n end=100\n for i in range(14):\n \n X=X_train[start:end]\n Y=y_train[start:end]\n start=end\n end=start+100\n sess.run(train_step,feed_dict={rnn._inputs:X, y:Y})\n\n Loss=str(sess.run(cross_entropy,feed_dict={rnn._inputs:X, y:Y}))\n Train_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_train, y:y_train}))\n 
Test_accuracy=str(sess.run(accuracy,feed_dict={rnn._inputs:X_test, y:y_test}))\n \n pl.plot([epoch],Loss,'b.',)\n pl.plot([epoch],Train_accuracy,'r*',)\n pl.plot([epoch],Test_accuracy,'g+')\n display.clear_output(wait=True)\n display.display(pl.gcf()) \n \n sys.stdout.flush()\n print(\"\\rIteration: %s Loss: %s Train Accuracy: %s Test Accuracy: %s\"%(epoch,Loss,Train_accuracy,Test_accuracy)),\n sys.stdout.flush()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
smharper/openmc
examples/jupyter/mgxs-part-ii.ipynb
mit
[ "This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:\n\nCreation of multi-group cross sections on a heterogeneous geometry\nCalculation of cross sections on a nuclide-by-nuclide basis\nThe use of tally precision triggers with multi-group cross sections\nBuilt-in features for energy condensation in downstream data processing\nThe use of the openmc.data module to plot continuous-energy vs. multi-group cross sections\nValidation of multi-group cross sections with OpenMOC\n\nNote: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system in order to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.\nGenerate Input Files", "import numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('seaborn-dark')\n\nimport openmoc\n\nimport openmc\nimport openmc.mgxs as mgxs\nimport openmc.data\nfrom openmc.openmoc_compatible import get_openmoc_geometry\n\n%matplotlib inline", "First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.", "# 1.6% enriched fuel\nfuel = openmc.Material(name='1.6% Fuel')\nfuel.set_density('g/cm3', 10.31341)\nfuel.add_nuclide('U235', 3.7503e-4)\nfuel.add_nuclide('U238', 2.2625e-2)\nfuel.add_nuclide('O16', 4.6007e-2)\n\n# borated water\nwater = openmc.Material(name='Borated Water')\nwater.set_density('g/cm3', 0.740582)\nwater.add_nuclide('H1', 4.9457e-2)\nwater.add_nuclide('O16', 2.4732e-2)\n\n# zircaloy\nzircaloy = openmc.Material(name='Zircaloy')\nzircaloy.set_density('g/cm3', 6.55)\nzircaloy.add_nuclide('Zr90', 7.2758e-3)", "With our materials, we can now create a Materials object that can be exported to an actual XML file.", "# Instantiate a Materials collection\nmaterials_file = openmc.Materials([fuel, water, zircaloy])\n\n# Export to \"materials.xml\"\nmaterials_file.export_to_xml()", "Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. 
The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.", "# Create cylinders for the fuel and clad\nfuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)\nclad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)\n\n# Create box to surround the geometry\nbox = openmc.model.rectangular_prism(1.26, 1.26, boundary_type='reflective')", "With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.", "# Create a Universe to encapsulate a fuel pin\npin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')\n\n# Create fuel Cell\nfuel_cell = openmc.Cell(name='1.6% Fuel')\nfuel_cell.fill = fuel\nfuel_cell.region = -fuel_outer_radius\npin_cell_universe.add_cell(fuel_cell)\n\n# Create a clad Cell\nclad_cell = openmc.Cell(name='1.6% Clad')\nclad_cell.fill = zircaloy\nclad_cell.region = +fuel_outer_radius & -clad_outer_radius\npin_cell_universe.add_cell(clad_cell)\n\n# Create a moderator Cell\nmoderator_cell = openmc.Cell(name='1.6% Moderator')\nmoderator_cell.fill = water\nmoderator_cell.region = +clad_outer_radius & box\npin_cell_universe.add_cell(moderator_cell)", "We now must create a geometry with the pin cell universe and export it to XML.", "# Create Geometry and set root Universe\nopenmc_geometry = openmc.Geometry(pin_cell_universe)\n\n# Export to \"geometry.xml\"\nopenmc_geometry.export_to_xml()", "Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.", "# OpenMC simulation parameters\nbatches = 50\ninactive = 10\nparticles = 10000\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True}\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.Source(space=uniform_dist)\n\n# Activate tally precision triggers\nsettings_file.trigger_active = True\nsettings_file.trigger_max_batches = settings_file.batches * 4\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define \"coarse\" 2-group and \"fine\" 8-group structures using the built-in EnergyGroups class.", "# Instantiate a \"coarse\" 2-group EnergyGroups object\ncoarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])\n\n# Instantiate a \"fine\" 8-group EnergyGroups object\nfine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,\n 0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])", "Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. 
In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.", "# Extract all Cells filled by Materials\nopenmc_cells = openmc_geometry.get_all_material_cells().values()\n\n# Create dictionary to store multi-group cross sections for all cells\nxs_library = {}\n\n# Instantiate 8-group cross sections for each cell\nfor cell in openmc_cells:\n xs_library[cell.id] = {}\n xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)\n xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)\n xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)\n xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)\n xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)", "Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.", "# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections\ntally_trigger = openmc.Trigger('std_dev', 1e-2)\n\n# Add the tally trigger to each of the multi-group cross section tallies\nfor cell in openmc_cells:\n for mgxs_type in xs_library[cell.id]:\n xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger", "Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.", "# Instantiate an empty Tallies object\ntallies_file = openmc.Tallies()\n\n# Iterate over all cells and cross section types\nfor cell in openmc_cells:\n for rxn_type in xs_library[cell.id]:\n\n # Set the cross sections domain to the cell\n xs_library[cell.id][rxn_type].domain = cell\n \n # Tally cross sections by nuclide\n xs_library[cell.id][rxn_type].by_nuclide = True\n \n # Add OpenMC tallies to the tallies file for XML generation\n for tally in xs_library[cell.id][rxn_type].tallies.values():\n tallies_file.append(tally, merge=True)\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()", "Now we a have a complete set of inputs, so we can go ahead and run our simulation.", "# Run OpenMC\nopenmc.run()", "Tally Data Processing\nOur simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.", "# Load the last statepoint file\nsp = openmc.StatePoint('statepoint.082.h5')", "The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.", "# Iterate over all cells and cross section types\nfor cell in openmc_cells:\n for rxn_type in xs_library[cell.id]:\n xs_library[cell.id][rxn_type].load_from_statepoint(sp)", "That's it! Our multi-group cross sections are now ready for the big spotlight. 
This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.\nExtracting and Storing MGXS Data\nLet's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.", "nufission = xs_library[fuel_cell.id]['nu-fission']\nnufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])", "Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.", "nufission = xs_library[fuel_cell.id]['nu-fission']\nnufission.print_xs(xs_type='macro', nuclides='sum')", "Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame .", "nuscatter = xs_library[moderator_cell.id]['nu-scatter']\ndf = nuscatter.get_pandas_dataframe(xs_type='micro')\ndf.head(10)", "Next, we illustate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.", "# Extract the 8-group transport cross section for the fuel\nfine_xs = xs_library[fuel_cell.id]['transport']\n\n# Condense to the 2-group structure\ncondensed_xs = fine_xs.get_condensed_xs(coarse_groups)", "Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.", "condensed_xs.print_xs()\n\ndf = condensed_xs.get_pandas_dataframe(xs_type='micro')\ndf", "Verification with OpenMOC\nNow, let's verify our cross sections using OpenMOC. 
First, we construct an equivalent OpenMOC geometry.", "# Create an OpenMOC Geometry from the OpenMC Geometry\nopenmoc_geometry = get_openmoc_geometry(sp.summary.geometry)", "Next, we we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.", "# Get all OpenMOC cells in the gometry\nopenmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()\n\n# Inject multi-group cross sections into OpenMOC Materials\nfor cell_id, cell in openmoc_cells.items():\n \n # Ignore the root cell\n if cell.getName() == 'root cell':\n continue\n \n # Get a reference to the Material filling this Cell\n openmoc_material = cell.getFillMaterial()\n \n # Set the number of energy groups for the Material\n openmoc_material.setNumEnergyGroups(fine_groups.num_groups)\n \n # Extract the appropriate cross section objects for this cell\n transport = xs_library[cell_id]['transport']\n nufission = xs_library[cell_id]['nu-fission']\n nuscatter = xs_library[cell_id]['nu-scatter']\n chi = xs_library[cell_id]['chi']\n \n # Inject NumPy arrays of cross section data into the Material\n # NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC\n openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())\n openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())\n openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())\n openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())", "We are now ready to run OpenMOC to verify our cross-sections from OpenMC.", "# Generate tracks for OpenMOC\ntrack_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)\ntrack_generator.generateTracks()\n\n# Run OpenMOC\nsolver = openmoc.CPUSolver(track_generator)\nsolver.computeEigenvalue()", "We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.", "# Print report of keff and bias with OpenMC\nopenmoc_keff = solver.getKeff()\nopenmc_keff = sp.k_combined.n\nbias = (openmoc_keff - openmc_keff) * 1e5\n\nprint('openmc keff = {0:1.6f}'.format(openmc_keff))\nprint('openmoc keff = {0:1.6f}'.format(openmoc_keff))\nprint('bias [pcm]: {0:1.1f}'.format(bias))", "As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.", "openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)\nopenmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()\n\n# Inject multi-group cross sections into OpenMOC Materials\nfor cell_id, cell in openmoc_cells.items():\n \n # Ignore the root cell\n if cell.getName() == 'root cell':\n continue\n \n openmoc_material = cell.getFillMaterial()\n openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)\n \n # Extract the appropriate cross section objects for this cell\n transport = xs_library[cell_id]['transport']\n nufission = xs_library[cell_id]['nu-fission']\n nuscatter = xs_library[cell_id]['nu-scatter']\n chi = xs_library[cell_id]['chi']\n \n # Perform group condensation\n transport = transport.get_condensed_xs(coarse_groups)\n nufission = nufission.get_condensed_xs(coarse_groups)\n nuscatter = nuscatter.get_condensed_xs(coarse_groups)\n chi = chi.get_condensed_xs(coarse_groups)\n \n # Inject NumPy arrays of cross section data into the Material\n openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())\n openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())\n openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())\n 
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())\n\n# Generate tracks for OpenMOC\ntrack_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)\ntrack_generator.generateTracks()\n\n# Run OpenMOC\nsolver = openmoc.CPUSolver(track_generator)\nsolver.computeEigenvalue()\n\n# Print report of keff and bias with OpenMC\nopenmoc_keff = solver.getKeff()\nopenmc_keff = sp.k_combined.n\nbias = (openmoc_keff - openmc_keff) * 1e5\n\nprint('openmc keff = {0:1.6f}'.format(openmc_keff))\nprint('openmoc keff = {0:1.6f}'.format(openmoc_keff))\nprint('bias [pcm]: {0:1.1f}'.format(bias))", "There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:\n\nAppropriate transport-corrected cross sections\nSpatial discretization of OpenMOC's mesh\nConstant-in-angle multi-group cross sections\n\nVisualizing MGXS Data\nIt is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.\nOne particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.plotter module to plot continuous-energy cross sections from the openly available cross section library distributed by NNDC.\nThe MGXS data can also be plotted using the openmc.plot_xs command, however we will do this manually here to show how the openmc.Mgxs.get_xs method can be used to obtain data.", "# Create a figure of the U-235 continuous-energy fission cross section \nfig = openmc.plot_xs('U235', ['fission'])\n\n# Get the axis to use for plotting the MGXS\nax = fig.gca()\n\n# Extract energy group bounds and MGXS values to plot\nfission = xs_library[fuel_cell.id]['fission']\nenergy_groups = fission.energy_groups\nx = energy_groups.group_edges\ny = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')\ny = np.squeeze(y)\n\n# Fix low energy bound\nx[0] = 1.e-5\n\n# Extend the mgxs values array for matplotlib's step plot\ny = np.insert(y, 0, y[0])\n\n# Create a step plot for the MGXS\nax.plot(x, y, drawstyle='steps', color='r', linewidth=3)\n\nax.set_title('U-235 Fission Cross Section')\nax.legend(['Continuous', 'Multi-Group'])\nax.set_xlim((x.min(), x.max()))", "Another useful type of illustration is scattering matrix sparsity structures. 
First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.", "# Construct a Pandas DataFrame for the microscopic nu-scattering matrix\nnuscatter = xs_library[moderator_cell.id]['nu-scatter']\ndf = nuscatter.get_pandas_dataframe(xs_type='micro')\n\n# Slice DataFrame in two for each nuclide's mean values\nh1 = df[df['nuclide'] == 'H1']['mean']\no16 = df[df['nuclide'] == 'O16']['mean']\n\n# Cast DataFrames as NumPy arrays\nh1 = h1.values\no16 = o16.values\n\n# Reshape arrays to 2D matrix for plotting\nh1.shape = (fine_groups.num_groups, fine_groups.num_groups)\no16.shape = (fine_groups.num_groups, fine_groups.num_groups)", "Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.", "# Create plot of the H-1 scattering matrix\nfig = plt.subplot(121)\nfig.imshow(h1, interpolation='nearest', cmap='jet')\nplt.title('H-1 Scattering Matrix')\nplt.xlabel('Group Out')\nplt.ylabel('Group In')\n\n# Create plot of the O-16 scattering matrix\nfig2 = plt.subplot(122)\nfig2.imshow(o16, interpolation='nearest', cmap='jet')\nplt.title('O-16 Scattering Matrix')\nplt.xlabel('Group Out')\nplt.ylabel('Group In')\n\n# Show the plot on screen\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yingchi/fastai-notes
deeplearning1/nbs/char-rnn.ipynb
apache-2.0
[ "from theano.sandbox import cuda\ncuda.use('gpu2')\n\n%matplotlib inline\nimport utils; reload(utils)\nfrom utils import *\nfrom __future__ import division, print_function\n\nfrom keras.layers import TimeDistributed, Activation\nfrom numpy.random import choice", "Setup\nWe haven't really looked into the detail of how this works yet - so this is provided for self-study for those who are interested. We'll look at it closely next week.", "path = get_file('nietzsche.txt', origin=\"https://s3.amazonaws.com/text-datasets/nietzsche.txt\")\ntext = open(path).read().lower()\nprint('corpus length:', len(text))\n\n!tail {path} -n25\n\n#path = 'data/wiki/'\n#text = open(path+'small.txt').read().lower()\n#print('corpus length:', len(text))\n\n#text = text[0:1000000]\n\nchars = sorted(list(set(text)))\nvocab_size = len(chars)+1\nprint('total chars:', vocab_size)\n\nchars.insert(0, \"\\0\")\n\n''.join(chars[1:-6])\n\nchar_indices = dict((c, i) for i, c in enumerate(chars))\nindices_char = dict((i, c) for i, c in enumerate(chars))\n\nidx = [char_indices[c] for c in text]\n\nidx[:10]\n\n''.join(indices_char[i] for i in idx[:70])", "Preprocess and create model", "maxlen = 40\nsentences = []\nnext_chars = []\nfor i in range(0, len(idx) - maxlen+1):\n sentences.append(idx[i: i + maxlen])\n next_chars.append(idx[i+1: i+maxlen+1])\nprint('nb sequences:', len(sentences))\n\nsentences = np.concatenate([[np.array(o)] for o in sentences[:-2]])\nnext_chars = np.concatenate([[np.array(o)] for o in next_chars[:-2]])\n\nsentences.shape, next_chars.shape\n\nn_fac = 24\n\nmodel=Sequential([\n Embedding(vocab_size, n_fac, input_length=maxlen),\n LSTM(512, input_dim=n_fac,return_sequences=True, dropout_U=0.2, dropout_W=0.2,\n consume_less='gpu'),\n Dropout(0.2),\n LSTM(512, return_sequences=True, dropout_U=0.2, dropout_W=0.2,\n consume_less='gpu'),\n Dropout(0.2),\n TimeDistributed(Dense(vocab_size)),\n Activation('softmax')\n ]) \n\nmodel.compile(loss='sparse_categorical_crossentropy', optimizer=Adam())", "Train", "def print_example():\n seed_string=\"ethics is a basic foundation of all that\"\n for i in range(320):\n x=np.array([char_indices[c] for c in seed_string[-40:]])[np.newaxis,:]\n preds = model.predict(x, verbose=0)[0][-1]\n preds = preds/np.sum(preds)\n next_char = choice(chars, p=preds)\n seed_string = seed_string + next_char\n print(seed_string)\n\nmodel.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)\n\nprint_example()\n\nmodel.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)\n\nprint_example()\n\nmodel.optimizer.lr=0.001\n\nmodel.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)\n\nprint_example()\n\nmodel.optimizer.lr=0.0001\n\nmodel.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)\n\nprint_example()\n\nmodel.save_weights('data/char_rnn.h5')\n\nmodel.optimizer.lr=0.00001\n\nmodel.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)\n\nprint_example()\n\nmodel.fit(sentences, np.expand_dims(next_chars,-1), batch_size=64, nb_epoch=1)\n\nprint_example()\n\nprint_example()\n\nmodel.save_weights('data/char_rnn.h5')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
flohorovicic/pynoddy
docs/notebooks/9-Topology.ipynb
gpl-2.0
[ "Simulation of a Noddy history and analysis of its voxel topology\nExample of how the module can be used to run Noddy simulations and analyse the output.", "from IPython.core.display import HTML\ncss_file = 'pynoddy.css'\nHTML(open(css_file, \"r\").read())\n\n# Basic settings\nimport sys, os\nimport subprocess\n\n# Now import pynoddy\nimport pynoddy\n%matplotlib inline\n\n# determine path of repository to set paths corretly below\n\nrepo_path = os.path.realpath('../..')", "Compute the model\nThe simplest way to perform the Noddy simulation through Python is simply to call the executable. One way that should be fairly platform independent is to use Python's own subprocess module:", "# Change to sandbox directory to store results\nos.chdir(os.path.join(repo_path, 'sandbox'))\n\n# Path to exmaple directory in this repository\nexample_directory = os.path.join(repo_path,'examples')\n# Compute noddy model for history file\nhistory_file = 'strike_slip.his'\nhistory = os.path.join(example_directory, history_file)\nnfiles = 1\nfiles = '_'+str(nfiles).zfill(4)\nprint \"files\", files\nroot_name = 'noddy_out'\noutput_name = root_name + files\nprint root_name\nprint output_name\n# call Noddy\n\n# NOTE: Make sure that the noddy executable is accessible in the system!!\nsys\nprint subprocess.Popen(['noddy.exe', history, output_name, 'TOPOLOGY'], \n shell=False, stderr=subprocess.PIPE, \n stdout=subprocess.PIPE).stdout.read()\n#\nsys\nprint subprocess.Popen(['topology.exe', root_name, files], \n shell=False, stderr=subprocess.PIPE, \n stdout=subprocess.PIPE).stdout.read()", "For convenience, the model computations are wrapped into a Python function in pynoddy:", "pynoddy.compute_model(history, output_name)\npynoddy.compute_topology(root_name, files)\n", "Note: The Noddy call from Python is, to date, calling Noddy through the subprocess function. In a future implementation, this call could be subsituted with a full wrapper for the C-functions written in Python. Therefore, using the member function compute_model is not only easier, but also the more \"future-proof\" way to compute the Noddy model.\nLoading Topology output files\nHere we load the binary adjacency matrix for one topology calculation and display it as an image", "from matplotlib import pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\n\nN1 = pynoddy.NoddyOutput(output_name)\nAM= pynoddy.NoddyTopology(output_name)\n\nam_name=root_name +'_uam.bin'\nprint am_name\nprint AM.maxlitho\n\nimage = np.empty((int(AM.maxlitho),int(AM.maxlitho)), np.uint8)\n\nimage.data[:] = open(am_name).read()\ncmap=plt.get_cmap('Paired')\ncmap.set_under('white') # Color for values less than vmin\n\nplt.imshow(image, interpolation=\"nearest\", vmin=1, cmap=cmap)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jbwhit/WSP-312-Tips-and-Tricks
notebooks/autoreload-example.ipynb
mit
[ "How to use autoreload\nI have been confused on how to use autoreload IPython extension for a long time. The documentation simply wasn't clear to me. Or, rather, it seemed clear, but then I was surprised by the behavior.\nAfter submitting a bug request w/ Daniel Margala (we were confused together), we got a response that helped to clear things up. \nI've updated this with my new understanding. Hopefully this helps anyone else who is confused.", "import os\nimport sys\nimport time\nsys.path.append(\"..\")\n%reload_ext autoreload", "Create a simple packe with a few simple modules that we will update.", "directory = \"../examplepackage/\"\nif not os.path.exists(directory):\n os.makedirs(directory)\n\n%%writefile ../examplepackage/neato.py\n\ndef torpedo():\n print('First module modification 0!')\n\n%%writefile ../examplepackage/neato2.py\n\ndef torpedo2():\n print('Second module modification 0!')\n\n%%writefile ../examplepackage/neato3.py\n\ndef torpedo3():\n print('Third module modification 0!')\n\n# when hitting 'run all' this needs a short delay (probable race condition).\ntime.sleep(1.5)", "%autoreload 1\nThe docs say:\n```\n%autoreload 1\nReload all modules imported with %aimport every time before executing the Python code typed.\n```", "import examplepackage.neato\nimport examplepackage.neato2\nimport examplepackage.neato3\n\n%autoreload 1\n%aimport examplepackage", "You might think that importing examplepackage would result in that package being auto-reloaded if you updated code inside of it. You'd be wrong. Follow along!", "examplepackage.neato.torpedo()\n\nexamplepackage.neato2.torpedo2()\n\nexamplepackage.neato3.torpedo3()\n\n%%writefile ../examplepackage/neato.py\n\ndef torpedo():\n print('First module modification 1')\n\n%%writefile ../examplepackage/neato2.py\n\ndef torpedo2():\n print('Second module modification 1')\n\n%%writefile ../examplepackage/neato3.py\n\ndef torpedo3():\n print('Third module modification 1!')\n\n# when hitting 'run all' this needs a short delay (probable race condition).\ntime.sleep(1.5)\n\nexamplepackage.neato.torpedo()\n\nexamplepackage.neato2.torpedo2()\n\nexamplepackage.neato3.torpedo3()", "Nothing is updated. You have to import the module explicitly like:", "%autoreload 1\n%aimport examplepackage.neato\n\nexamplepackage.neato.torpedo()\n\nexamplepackage.neato2.torpedo2()\n\nexamplepackage.neato3.torpedo3()", "%autoreload 2\nThe docs say: \n```\n%autoreload 2\nReload all modules (except those excluded by %aimport) every time before executing the Python code typed.\n```\nI read this as \"if you set %autoreload 2, then it will reload all modules except whatever you %aimport examplepackage.module\". This is not how it works. When using %aimport you also have to flag it with a -. See below.", "%autoreload 2\n%aimport examplepackage.neato\n%aimport -examplepackage.neato2\n\nexamplepackage.neato.torpedo()\n\nexamplepackage.neato2.torpedo2()\n\nexamplepackage.neato3.torpedo3()\n\n%%writefile ../examplepackage/neato.py\n\ndef torpedo():\n print('First module modification 2!')\n\n%%writefile ../examplepackage/neato2.py\n\ndef torpedo2():\n print('Second module modification 2!')\n\n%%writefile ../examplepackage/neato3.py\n\ndef torpedo3():\n print('Third module modification 2!')\n\n# when hitting 'run all' this needs a short delay (race condition).\ntime.sleep(1.5)\n\nexamplepackage.neato.torpedo()\n\nexamplepackage.neato2.torpedo2()\n\nexamplepackage.neato3.torpedo3()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kcyu1993/ML_course_kyu
labs/ex05/template/ex05.ipynb
mit
[ "# Useful starting lines\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n%load_ext autoreload\n%autoreload 2", "Logistic Regression\nClassification Using Linear Regression\nLoad your data.", "from helpers import sample_data, load_data, standardize\n\n# load data.\nheight, weight, gender = load_data()\n\n# build sampled x and y.\nseed = 1\ny = np.expand_dims(gender, axis=1)\nX = np.c_[height.reshape(-1), weight.reshape(-1)]\ny, X = sample_data(y, X, seed, size_samples=200)\nx, mean_x, std_x = standardize(X)\n", "Use least_squares to compute w, and visualize the results.", "from least_squares import least_squares\nfrom plots import visualization\n\ndef least_square_classification_demo(y, x):\n # ***************************************************\n # INSERT YOUR CODE HERE\n # classify the data by linear regression: TODO\n # ***************************************************\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n # w = least squares with respect to tx\n w = least_squares(y,tx)\n \n visualization(y, x, mean_x, std_x, w, \"classification_by_least_square\")\n \nleast_square_classification_demo(y, x)", "Logistic Regression\nCompute your cost by negative log likelihood.", "def sigmoid(t):\n \"\"\"apply sigmoid function on t.\"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # ?? Sigmoid or Logistic function\n # ***************************************************\n return 1/(1+np.exp(-t))\n\ndef calculate_loss(y, tx, w):\n \"\"\"compute the cost by negative log likelihood.\"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # \n # ***************************************************\n loss = 0\n for index in range(len(tx)):\n e = np.dot(np.transpose(tx[index,:]), w)\n loss += np.log(1 + np.exp(e)) - y[index]*e\n return loss\n\ndef calculate_gradient(y, tx, w):\n \"\"\"compute the gradient of loss.\"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # TODO\n # ***************************************************\n return np.dot(tx.T, sigmoid(np.dot(tx, w)) - y)", "Using Gradient Descent\nImplement your function to calculate the gradient for logistic regression.", "def learning_by_gradient_descent(y, tx, w, alpha):\n \"\"\"\n Do one step of gradient descen using logistic regression.\n Return the loss and the updated w.\n \"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # compute the cost: TODO\n # ***************************************************\n loss = calculate_loss(y, tx, w)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # compute the gradient: TODO\n # ***************************************************\n grad = calculate_gradient(y, tx, w)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # update w: TODO\n # ***************************************************\n w = w - alpha * grad\n return loss, w", "Demo!", "from helpers import de_standardize\n\ndef logistic_regression_gradient_descent_demo(y, x):\n # init parameters\n max_iter = 10000\n threshold = 1e-8\n alpha = 0.001\n losses = []\n\n # build tx\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n w = np.zeros((tx.shape[1], 1))\n\n # start the logistic regression\n for iter in range(max_iter):\n # get loss and update w.\n loss, w = learning_by_gradient_descent(y, tx, w, alpha)\n # log info\n if iter % 1000 == 0:\n print(\"Current iteration={i}, the loss={l}\".format(i=iter, l=loss))\n # 
converge criteria\n losses.append(loss)\n if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:\n break\n # visualization\n visualization(y, x, mean_x, std_x, w, \"classification_by_logistic_regression_gradient_descent\")\n print(\"The loss={l}\".format(l=calculate_loss(y, tx, w)))\n\nlogistic_regression_gradient_descent_demo(y, x)\n", "Calculate your hessian below", "def calculate_hessian(y, tx, w):\n \"\"\"return the hessian of the loss function.\"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # calculate hessian: TODO\n # ***************************************************\n S = np.zeros((len(tx),len(tx)))\n for i in range(len(tx)):\n S[i,i] = sigmoid(np.dot(np.transpose(tx[i,:]),w))*(1-sigmoid(np.dot(np.transpose(tx[i,:]),w))) \n H = np.dot(tx.T, np.dot(S, tx))\n return H", "Write a function below to return loss, gradient, and hessian.", "def logistic_regression(y, tx, w):\n \"\"\"return the loss, gradient, and hessian.\"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # return loss, gradient, and hessian: TODO\n # ***************************************************\n return calculate_loss(y, tx, w), calculate_gradient(y, tx, w), calculate_hessian(y, tx, w)", "Using Newton method\nUse Newton method for logistic regression.", "def learning_by_newton_method(y, tx, w, alpha):\n \"\"\"\n Do one step on Newton's method.\n return the loss and updated w.\n \"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # return loss, gradient and hessian: TODO\n # ***************************************************\n loss, grad, hess = logistic_regression(y, tx, w)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # update w: TODO\n # ***************************************************\n w = w - alpha* np.dot(np.linalg.inv(hess), grad)\n return loss, w", "demo", "def logistic_regression_newton_method_demo(y, x):\n # init parameters\n max_iter = 10000\n alpha = 0.01\n threshold = 1e-8\n lambda_ = 0.1\n losses = []\n\n # build tx\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n w = np.zeros((tx.shape[1], 1))\n\n # start the logistic regression\n for iter in range(max_iter):\n # get loss and update w.\n loss, w = learning_by_newton_method(y, tx, w, alpha)\n # log info\n if iter % 500 == 0:\n print(\"Current iteration={i}, the loss={l}\".format(i=iter, l=loss))\n # converge criteria\n losses.append(loss)\n if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:\n break\n # visualization\n visualization(y, x, mean_x, std_x, w, \"classification_by_logistic_regression_newton_method\")\n print(\"The loss={l}\".format(l=calculate_loss(y, tx, w)))\n\nlogistic_regression_newton_method_demo(y, x)", "Using penalized logistic regression\nFill in the function below.", "def penalized_logistic_regression(y, tx, w, lambda_):\n \"\"\"return the loss, gradient, and hessian.\"\"\"\n # ***************************************************\n # INSERT YOUR CODE HERE\n # return loss, gradient: TODO\n # ***************************************************\n loss = calculate_loss(y,tx,w) # origa\n grad = calculate_gradient(y, tx, w) + 2 * lambda_ * w\n # hess = calculate_hessian(y, tx, w) + 2 * lambda_\n return loss, grad\n\ndef learning_by_penalized_gradient(y, tx, w, alpha, lambda_):\n \"\"\"\n Do one step of gradient descent, using the penalized logistic regression.\n Return the loss and updated w.\n \"\"\"\n # 
***************************************************\n # INSERT YOUR CODE HERE\n # return loss, gradient and hessian: TODO\n # ***************************************************\n loss, grad = penalized_logistic_regression(y, tx, w, lambda_)\n # ***************************************************\n # INSERT YOUR CODE HERE\n # update w: TODO\n # ***************************************************\n w = w - alpha* grad\n return loss, w\n\ndef logistic_regression_penalized_gradient_descent_demo(y, x):\n # init parameters\n max_iter = 10000\n alpha = 0.01\n lambda_ = 0.1\n threshold = 1e-8\n losses = []\n\n # build tx\n tx = np.c_[np.ones((y.shape[0], 1)), x]\n w = np.zeros((tx.shape[1], 1))\n\n # start the logistic regression\n for iter in range(max_iter):\n # get loss and update w.\n loss, w = learning_by_penalized_gradient(y, tx, w, alpha, lambda_)\n # log info\n if iter % 500 == 0:\n print(\"Current iteration={i}, the loss={l}\".format(i=iter, l=loss))\n # converge criteria\n losses.append(loss)\n if len(losses) > 1 and np.abs(losses[-1] - losses[-2]) < threshold:\n break\n # visualization\n visualization(y, x, mean_x, std_x, w, \"classification_by_logistic_regression_penalized_gradient_descent\")\n print(\"The loss={l}\".format(l=calculate_loss(y, tx, w)))\n pred = sigmoid(np.dot(tx, w))\n pred[np.where(pred <= 0.5)] = 0\n pred[np.where(pred > 0.5)] = 1\n mis = np.count_nonzero(y - pred)\n print(\"Mis rate = {}\".format(mis/len(y)))\nlogistic_regression_penalized_gradient_descent_demo(y, x)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nilmtk/nilmtk
docs/manual/user_guide/elecmeter_and_metergroup.ipynb
apache-2.0
[ "MeterGroup, ElecMeter, selection and basic statistics\nAll NILM datasets consists of various groupings of electricity meters. We can group the meters by house. Or by the type of appliance they are directly connected to. Or by sample rate. Or by whether the meter is a whole-house \"site meter\" or an appliance-level submeter, or a circuit-level submeter.\nIn NILMTK, one of the key classes is MeterGroup which stores a list of meters and allows us to select a subset of meters, aggregate power from all meters and many other functions.\nWhen we first open a DataSet, NILMTK creates several MeterGroup objects. There's nilmtk.global_meter_group which holds every meter currently loaded (including from multiple datasets if you have opened more than one dataset). There is also one MeterGroup per building (which live in the Building.elec attribute). We also nested MeterGroups for aggregating together split-phase mains, 3-phase mains and dual-supply (240 volt) appliances in North American and Canadian datasets. For example, here is the MeterGroup for building 1 in REDD:\nNOTE: If you are on Windows, remember to escape the back-slashes, use forward-slashs, or use raw-strings when passing paths in Python, e.g. one of the following would work:\npython\nredd = DataSet('c:\\\\data\\\\redd.h5')\nredd = DataSet('c:/data/redd.h5')\nredd = DataSet(r'c:\\data\\redd.h5')", "%matplotlib inline\n\nfrom matplotlib import rcParams\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport nilmtk\nfrom nilmtk import DataSet, MeterGroup\n\nplt.style.use('ggplot')\nrcParams['figure.figsize'] = (13, 10)\n\nredd = DataSet('/data/redd.h5')\nelec = redd.buildings[1].elec\nelec", "Note that there are two nested MeterGroups: one for the electric oven, and one for the washer dryer (both of which are 240 volt appliances and have two meters per appliance):", "elec.nested_metergroups()", "Putting these meters into a MeterGroup allows us to easily sum together the power demand recorded by both meters to get the total power demand for the entire appliance (but it's also very easy to see the individual meter power demand too).\nWe can easily get a MeterGroup of either the submeters or the mains:", "elec.mains()", "We can easily get the power data for both mains meters summed together:", "elec.mains().power_series_all_data().head()\n\nelec.submeters()", "Stats for MeterGroups\nProportion of energy submetered\nLet's work out the proportion of energy submetered in REDD building 1:", "elec.proportion_of_energy_submetered()", "Note that NILMTK has raised a warning that Mains uses a different type of power measurement than all the submeters, so it's not an entirely accurate comparison. Which raises the question: which type of power measurements are used for the mains and submeters? 
Let's find out...\nActive, apparent and reactive power", "mains = elec.mains()\n\nmains.available_ac_types('power')\n\nelec.submeters().available_ac_types('power')\n\nnext(elec.load())", "Total Energy", "elec.mains().total_energy() # returns kWh", "Energy per submeter", "energy_per_meter = elec.submeters().energy_per_meter() # kWh, again\nenergy_per_meter", "column headings are the ElecMeter instance numbers.\nThe function fraction_per_meter does the same thing as energy_per_submeter but returns the fraction of energy per meter.\nSelect meters on the basis of their energy consumption\nLet's make a new MeterGroup which only contains the ElecMeters which used more than 20 kWh:", "# energy_per_meter is a DataFrame where each row is a \n# power type ('active', 'reactive' or 'apparent').\n# All appliance meters in REDD are record 'active' so just select\n# the 'active' row:\nenergy_per_meter = energy_per_meter.loc['active']\nmore_than_20 = energy_per_meter[energy_per_meter > 20]\nmore_than_20\n\ninstances = more_than_20.index\ninstances", "Plot fraction of energy consumption of each appliance", "fraction = elec.submeters().fraction_per_meter().dropna()\n\n# Create convenient labels\nlabels = elec.get_labels(fraction.index)\nplt.figure(figsize=(10,30))\nfraction.plot(kind='pie', labels=labels);", "Draw wiring diagram\nWe can get the wiring diagram for the MeterGroup:", "elec.draw_wiring_graph()", "It's not very pretty but it shows that meters (1,2) (the site meters) are upstream of all other meters.\nBuildings in REDD have only two levels in their meter hierarchy (mains and submeters). If there were more than two levels then it might be useful to get only the meters immediately downstream of mains:", "elec.meters_directly_downstream_of_mains()", "Plot appliances when they are in use", "#sns.set_palette(\"Set3\", n_colors=12)\n# Set a threshold to remove residual power noise when devices are off\nelec.plot_when_on(on_power_threshold = 40)", "Stats and info for individual meters\nThe ElecMeter class represents a single electricity meter. Each ElecMeter has a list of associated Appliance objects. ElecMeter has many of the same stats methods as MeterGroup such as total_energy and available_power_ac_types and power_series and power_series_all_data. We will now explore some more stats functions (many of which are also available on MeterGroup)...", "fridge_meter = elec['fridge']", "Get upstream meter", "fridge_meter.upstream_meter() # happens to be the mains meter group!", "Metadata about the class of meter", "fridge_meter.device", "Dominant appliance\nIf the metadata specifies that a meter has multiple meters connected to it then one of those can be specified as the 'dominant' appliance, and this appliance can be retrieved with this method:", "fridge_meter.dominant_appliance()", "Total energy", "fridge_meter.total_energy() # kWh", "Get good sections\nIf we plot the raw power data then we see there is one large gap where, supposedly, the metering system was not working. (if we were to zoom in then we'd see lots of smaller gaps too):", "fridge_meter.plot()", "We can automatically identify the 'good sections' (i.e. the sections where every pair of consecutive samples is less than max_sample_period specified in the dataset metadata):", "good_sections = fridge_meter.good_sections(full_results=True)\n# specifying full_results=False would give us a simple list of \n# TimeFrames. 
But we want the full GoodSectionsResults object so we can\n# plot the good sections...\n\ngood_sections.plot()", "The blue chunks show where the data is good. The white gap is the large gap seen in the raw power data. There are lots of smaller gaps that we cannot see at this zoom level.\nWe can also see the exact sections identified:", "good_sections.combined()", "Dropout rate\nAs well as large gaps appearing because the entire system is down, we also get frequent small gaps from wireless sensors dropping data. This is sometimes called 'dropout'. The dropout rate is a number between 0 and 1 which specifies the proportion of missing samples. A dropout rate of 0 means no samples are missing. A value of 1 would mean all samples are missing:", "fridge_meter.dropout_rate()", "Note that the dropout rate has gone down (which is good!) now that we are ignoring the gaps. This value is probably more representative of the performance of the wireless system.\nSelect subgroups of meters\nWe use ElecMeter.select_using_appliances() to select a new MeterGroup using an metadata field. For example, to get all the washer dryers in the whole of the REDD dataset:", "import nilmtk\nnilmtk.global_meter_group.select_using_appliances(type='washer dryer')", "Or select multiple appliance types:", "elec.select_using_appliances(type=['fridge', 'microwave'])", "Or all appliances in the 'heating' category:", "nilmtk.global_meter_group.select_using_appliances(category='heating')", "Or all appliances in building 1 with a single-phase induction motor(!):", "nilmtk.global_meter_group.select_using_appliances(building=1, category='single-phase induction motor')", "(NILMTK imports the 'common metadata' from the NILM Metadata project, which includes a wide range of different category taxonomies)", "nilmtk.global_meter_group.select_using_appliances(building=2, category='laundry appliances')", "Select a group of meters from properties of the meters (not the appliances)", "elec.select(device_model='REDD_whole_house')\n\nelec.select(sample_period=3)", "Select a single meter from a MeterGroup\nWe use [] to retrive a single ElecMeter from a MeterGroup.\nSearch for a meter using appliances connected to each meter", "elec['fridge']", "Appliances are uniquely identified within a building by a type (fridge, kettle, television, etc.) and an instance number. If we do not specify an instance number then ElecMeter retrieves instance 1 (instance numbering starts from 1). If you want a different instance then just do this:", "elec.select_using_appliances(type='fridge')\n\nelec['light', 2]", "To uniquely identify an appliance in nilmtk.global_meter_group then we must specify the dataset name, building instance number, appliance type and appliance instance in a dict:", "import nilmtk\nnilmtk.global_meter_group[{'dataset': 'REDD', 'building': 1, 'type': 'fridge', 'instance': 1}]", "Search for a meter using details of the ElecMeter\nget ElecMeter with instance = 1:", "elec[1]", "Instance numbering\nElecMeter and Appliance instance numbers uniquely identify the meter or appliance type within the building, not globally. To uniquely identify a meter globally, we need three keys:", "from nilmtk.elecmeter import ElecMeterID \n# ElecMeterID is a namedtuple for uniquely identifying each ElecMeter\n\nnilmtk.global_meter_group[ElecMeterID(instance=8, building=1, dataset='REDD')]", "Select nested MeterGroup\nWe can also select a single, existing nested MeterGroup. 
There are two ways to specify a nested MeterGroup:", "elec[[ElecMeterID(instance=3, building=1, dataset='REDD'), \n ElecMeterID(instance=4, building=1, dataset='REDD')]]\n\nelec[ElecMeterID(instance=(3,4), building=1, dataset='REDD')]", "We can also specify the mains by asking for meter instance 0:", "elec[ElecMeterID(instance=0, building=1, dataset='REDD')]", "which is equivalent to elec.mains():", "elec.mains() == elec[ElecMeterID(instance=0, building=1, dataset='REDD')]", "Plot sub-metered data for a single day", "redd.set_window(start='2011-04-21', end='2011-04-22')\nelec.plot();\nplt.xlabel(\"Time\");", "Autocorrelation Plot", "from pandas.plotting import autocorrelation_plot\n\nelec.mains().plot_autocorrelation();", "Daily energy consumption across fridges in the dataset", "fridges_restricted = nilmtk.global_meter_group.select_using_appliances(type='fridge')\ndaily_energy = pd.Series([meter.average_energy_per_period(offset_alias='D') \n for meter in fridges_restricted.meters])\n\n# daily_energy.plot(kind='hist');\n# plt.title('Histogram of daily fridge energy');\n# plt.xlabel('energy (kWh)');\n# plt.ylabel('occurences');\n# plt.legend().set_visible(False)\n\ndaily_energy", "Correlation dataframe of the appliances", "correlation_df = elec.pairwise_correlation()\ncorrelation_df" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sgrindy/Bayesian-estimation-of-relaxation-spectra
Double_Maxwell_Lognormal_prior.ipynb
mit
[ "from __future__ import print_function\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport pymc3 as pm\nimport numpy as np\nimport theano.tensor as tt", "First, we need to set up our test data. We'll use two relaxation modes that are themselves log-normally distributed.", "def H(tau):\n h1 = 1; tau1 = 0.03; sd1 = 0.5;\n h2 = 7; tau2 = 10; sd2 = 0.5;\n term1 = h1/np.sqrt(2*sd1**2*np.pi) * np.exp(-(np.log10(tau/tau1)**2)/(2*sd1**2))\n term2 = h2/np.sqrt(2*sd2**2*np.pi) * np.exp(-(np.log10(tau/tau2)**2)/(2*sd2**2))\n return term1 + term2\n\nNfreq = 50\nNmodes = 30\nw = np.logspace(-4,4,Nfreq).reshape((1,Nfreq))\ntau = np.logspace(-np.log10(w.max()),-np.log10(w.min()),Nmodes).reshape((Nmodes,1))\n\n# get equivalent discrete spectrum\ndelta_log_tau = np.log10(tau[1]/tau[0])\ng_true = (H(tau) * delta_log_tau).reshape((1,Nmodes))\n\nplt.loglog(tau,H(tau), label='Continuous spectrum')\nplt.plot(tau.ravel(),g_true.ravel(), 'or', label='Equivalent discrete spectrum')\nplt.legend(loc=4)\nplt.xlabel(r'$\\tau$')\nplt.ylabel(r'$H(\\tau)$ or $g$')\nplt.savefig('Original_relax_spec.png',dpi=500)", "Now, let's calculate the moduli. We'll have both a true version and a noisy version with some random noise added to simulate experimental variance.", "wt = tau*w\nKp = wt**2/(1+wt**2)\nKpp = wt/(1+wt**2)\nnoise_level = 0.02\nGp_true = np.dot(g_true,Kp)\nGp_noise = Gp_true + Gp_true*noise_level*np.random.randn(Nfreq)\nGpp_true = np.dot(g_true,Kpp)\nGpp_noise = Gpp_true + Gpp_true*noise_level*np.random.randn(Nfreq)\nplt.loglog(w.ravel(),Gp_true.ravel(),label=\"True G'\")\nplt.plot(w.ravel(),Gpp_true.ravel(), label='True G\"')\nplt.plot(w.ravel(),Gp_noise.ravel(),'xr',label=\"Noisy G'\")\nplt.plot(w.ravel(),Gpp_noise.ravel(),'+r',label='Noisy G\"')\nplt.xlabel(r'$\\omega$')\nplt.ylabel(\"Moduli\")\nplt.legend(loc=4)\nplt.savefig('Original_Moduli_spec.png',dpi=500)", "Now, we can build the model with PyMC3. 
I'll make 2: one with noise, and one without.", "noisyModel = pm.Model()\nwith noisyModel:\n g = pm.Lognormal('g', mu=0, tau=0.1, shape=g_true.shape)\n sd1 = pm.HalfNormal('sd1',tau=1)\n sd2 = pm.HalfNormal('sd2',tau=1)\n # we'll log-weight the moduli as in other fitting methods\n logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)), \n sd=sd1, observed=np.log(Gp_noise))\n logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)), \n sd=sd2, observed=np.log(Gpp_noise))\n\ntrueModel = pm.Model()\nwith trueModel:\n g = pm.Lognormal('g', mu=0, tau=0.1, shape=g_true.shape)\n sd1 = pm.HalfNormal('sd1',tau=1)\n sd2 = pm.HalfNormal('sd2',tau=1)\n # we'll log-weight the moduli as in other fitting methods\n logGp = pm.Normal('logGp',mu=np.log(tt.dot(g,Kp)), \n sd=sd1, observed=np.log(Gp_true))\n logGpp = pm.Normal('logGpp',mu=np.log(tt.dot(g,Kpp)), \n sd=sd2, observed=np.log(Gpp_true)) ", "Now we can sample the models to get our parameter distributions:", "Nsamples = 2000\ntrueMapEstimate = pm.find_MAP(model=trueModel)\nwith trueModel:\n trueTrace = pm.sample(Nsamples, start=trueMapEstimate)\npm.backends.text.dump('./Double_Maxwell_v3_true', trueTrace)\n\nnoisyMapEstimate = pm.find_MAP(model=noisyModel)\nwith noisyModel:\n noisyTrace = pm.sample(Nsamples, start=noisyMapEstimate)\npm.backends.text.dump('./Double_Maxwell_v3_noisy', noisyTrace)\n\nburn = 500\ntrueQ = pm.quantiles(trueTrace[burn:])\nnoisyQ = pm.quantiles(noisyTrace[burn:])", "Plotting the quantiles gives us a sense of the uncertainty in our estimation of $g_i$:", "def plot_quantiles(Q,ax):\n ax.fill_between(tau.ravel(), y1=Q['g'][2.5], y2=Q['g'][97.5], color='c',\n alpha=0.25)\n ax.fill_between(tau.ravel(), y1=Q['g'][25], y2=Q['g'][75], color='c',\n alpha=0.5)\n ax.plot(tau.ravel(), Q['g'][50], 'b-')\n # sampling localization lines:\n ax.axvline(x=np.exp(np.pi/2)/w.max(), color='k', linestyle='--')\n ax.axvline(x=(np.exp(np.pi/2)*w.min())**-1, color='k', linestyle='--')\n\nfig,ax = plt.subplots(nrows=2, sharex=True, \n subplot_kw={'xscale':'log','yscale':'log', \n 'ylabel':'$g_i$'})\nplot_quantiles(trueQ,ax[0])\nplot_quantiles(noisyQ,ax[1])\n# true spectrum\ntrueSpectrumline0 = ax[0].plot(tau.ravel(), g_true.ravel(),'xr', \n label='True Spectrum')\ntrueSpectrumline1 = ax[1].plot(tau.ravel(), g_true.ravel(),'xr', \n label='True Spectrum')\n\n\nax[0].legend(loc=4)\nax[0].set_title('Using True Moduli')\n\n\nax[1].set_xlabel(r'$\\tau$')\nax[1].legend(loc=4)\nax[1].set_title('Using Noisy Moduli')\n\nfig.set_size_inches(5,8)\nfig.savefig('True,Noisy_moduli.png',dpi=500)\n\nnoisySample = pm.sample_ppc(noisyTrace[burn:],model=noisyModel,samples=250)\n\nfig,ax = plt.subplots()\nfor logg1,logg2 in zip(noisySample['logGp'].reshape(250,50),\n noisySample['logGpp'].reshape(250,50)):\n ax.plot(w.ravel(),np.exp(logg1),'b-',alpha=0.05)\n ax.plot(w.ravel(),np.exp(logg2),'r-',alpha=0.01)\nax.set_xscale('log')\nax.set_yscale('log')\nax.set_xlabel(r'$\\omega$')\nax.set_ylabel('G\\',G\"')\nax.plot(w.ravel(), Gp_true.ravel(),'xk', label='True G\\'' )\nax.plot(w.ravel(), Gpp_true.ravel(), '+k', label='True G\"')\nax.set_title('Moduli estimated from noisy sample')\nplt.legend(loc=4)\nplt.savefig('Re-estimated_moduli.png',dpi=500)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
catalyst-cooperative/pudl
devtools/ferc1/ferc1-new-year.ipynb
mit
[ "Integrating a New Year of FERC Form 1\n\nEvery September / October we integrate a new year of FERC Form 1 data.\nThis notebook contains some tools to help with that somewhat manual process.\n\nBefore you start!\nYou will need:\n* An up-to-date FERC Form 1 database with all years of available data in it (including the new year).\n* An up-to-date PUDL database with all years of available EIA data in it (including the new year).\n* An up-to-date PUDL database with all years of available FERC Form 1 data in it (NOT including the new year).", "%load_ext autoreload\n%autoreload 2\n\nimport sys\nimport re\nimport pandas as pd\nimport sqlalchemy as sa\nimport pudl\nimport dbfread\nimport pathlib\nimport pudl.constants as pc\n\nimport logging\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\nhandler = logging.StreamHandler(stream=sys.stdout)\nformatter = logging.Formatter('%(message)s')\nhandler.setFormatter(formatter)\nlogger.handlers = [handler]\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\nsns.set()\n%matplotlib inline\n\nmpl.rcParams['figure.figsize'] = (10,4)\nmpl.rcParams['figure.dpi'] = 150\npd.options.display.max_columns = 100\npd.options.display.max_rows = 200\n\npudl_settings = pudl.workspace.setup.get_defaults()\nferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])\npudl_engine = sa.create_engine(pudl_settings['pudl_db'])\npudl_settings", "Generate new Row Maps\n\nThe FERC 1 Row Maps function similarly to the xlsx_maps that we use to track which columns contain what data across years in the EIA spreadsheets.\nIn many FERC 1 tables, a particular piece of reported data is associated not only with a named column in the database, but also what \"row\" the data showed up on.\nSo for instance, in the Plant in Service table, the column might contain \"additions\" to plant in service, while each numbered row corresponds to an individual FERC Account to which value was added.\nHowever, from year to year which row corresponds to which value (e.g. 
which FERC account) changes, as new rows are added, or obsolete rows are removed.\nTo keep all this straight, we look at the \"row literals\" -- the labels that are associated with each row number -- year by year.\nAny time the row literals change between years, we compare the tables for those two adjacent years to see if the row numbers associated with a given piece of data have actually changed.\nHowever, many tables are not organized this way, and in most tables that are organized this way, in most years, the rows don't change.\nThe row maps are stored in CSVs, under src/pudl/package_data/meta/ferc1_row_maps", "def get_row_literals(table_name, report_year, ferc1_engine):\n row_literals = (\n pd.read_sql(\"f1_row_lit_tbl\", ferc1_engine)\n .query(f\"sched_table_name=='{table_name}'\")\n .query(f\"report_year=={report_year}\")\n .sort_values(\"row_number\")\n )\n return row_literals\n\ndef compare_row_literals(table_name, old_year, new_year, ferc1_engine):\n idx_cols = [\"row_number\", \"row_seq\"]\n old_df = get_row_literals(table_name, old_year, ferc1_engine).drop(columns=[\"row_status\", \"sched_table_name\"])\n new_df = get_row_literals(table_name, new_year, ferc1_engine).drop(columns=[\"row_status\", \"sched_table_name\"])\n merged_df = (\n pd.merge(old_df, new_df, on=idx_cols, suffixes=[\"_old\", \"_new\"], how=\"outer\")\n .set_index(idx_cols)\n )\n merged_df = (\n merged_df.loc[:, merged_df.columns.sort_values()]\n .assign(match=lambda x: x.row_literal_new == x.row_literal_old)\n )\n return merged_df \n\ndef check_all_row_years(table_name, ferc1_engine):\n years = list(range(min(pc.WORKING_PARTITIONS[\"ferc1\"][\"years\"]), max(pc.WORKING_PARTITIONS[\"ferc1\"][\"years\"])))\n years.sort()\n for old_year in years:\n compared = compare_row_literals(table_name, old_year, old_year+1, ferc1_engine)\n if not compared.match.all():\n logger.error(f\" * CHECK: {old_year} vs. {old_year+1}\")\n\nrecent_year_comparison = compare_row_literals(\"f1_plant_in_srvce\", max(pc.WORKING_PARTITIONS[\"ferc1\"][\"years\"]) - 1, max(pc.WORKING_PARTITIONS[\"ferc1\"][\"years\"]), ferc1_engine)\n \nunmatched_recent_rows = recent_year_comparison[~recent_year_comparison.match]\nif len(unmatched_recent_rows) > 0:\n print(\"HEY!... check most recent row mappings!\")\n display(recent_year_comparison[~recent_year_comparison.match])\nelse:\n print(\"Recent row mappings look consistent. No need to change anything.\")\n\nrow_mapped_tables = [\n \"f1_dacs_epda\", # Depreciation data.\n # \"f1_edcfu_epda\", # Additional depreciation data. Not yet row mapped\n # \"f1_acb_epda\", # Additional depreciation data. Not yet row mapped\n \"f1_elc_op_mnt_expn\", # Electrical operating & maintenance expenses.\n \"f1_elctrc_oper_rev\", # Electrical operating revenues.\n # \"f1_elc_oper_rev_nb\", # Additional electric operating revenues. One-line table. Not yet row mapped.\n \"f1_income_stmnt\", # Utility income statements.\n # \"f1_incm_stmnt_2\", # Additional income statement info. 
Not yet row mapped.\n \"f1_plant_in_srvce\", # Utility plant in service, by FERC account number.\n \"f1_sales_by_sched\", # Electricity sales by rate schedule -- it's a mess.\n]\n\nrow_mapped_dfs = {t: pd.read_sql(t, ferc1_engine) for t in row_mapped_tables}\nfor tbl in row_mapped_tables:\n print(f\"{tbl}:\")\n check_all_row_years(tbl, ferc1_engine)\n print(\"\\n\", end=\"\")\n\ncompare_row_literals(\"f1_plant_in_srvce\", max(pc.DATA_YEARS[\"ferc1\"]) - 1, max(pc.DATA_YEARS[\"ferc1\"]), ferc1_engine)", "Identify Missing Respondents\n\nSome FERC 1 respondents appear in the data tables, but not in the f1_respondent_id table.\nDuring the database cloning process we create dummy entries for these respondents to ensure database integrity.\nSome of these missing respondents can be identified based on the data they report.\nFor instance, f1_respondent_id==519 reports two plants in the f1_steam table, named \"Kuester\" & \"Mihm\".\nSearching for those plant names in the EIA 860 data (and Google) reveals those plants are owned by Upper Michigan Energy Resources Company (utility_id_eia==61029).\nThese \"PUDL Determined\" respondent names are stored in the pudl.extract.ferc1.PUDL_RIDS dictionary, and used to populate the f1_respondent_id table when they're available.\nHowever, since many plants are owned by multiple utilities, we need to identify a utility that matches all of the reported plant names, hopefully uniquely.\nThe following functions help us identify that kind of utility.", "def get_util_from_plants(pudl_out, patterns, display=False):\n \"\"\"\n Find any utilities associated with a list patterns for matching plant names.\n \n Args:\n pudl_out (pudl.output.pudltabl.PudlTable): A PUDL Output Object.\n patterns (iterable of str): Collection of patterns with which to match\n the names of power plants in the EIA 860. E.g. 
\".*Craig.*\".\n display (bool): Whether or not to display matching records for\n debugging and refinement purposes.\n\n Returns:\n pandas.DataFrame: All records from the utilities_eia860 table\n pertaining to the utilities identified as being associated with\n plants that matched all of the patterns.\n\n \"\"\"\n own_eia860 = pudl_out.own_eia860()\n plants_eia860 = pudl_out.plants_eia860()\n \n util_ids = []\n for pat in patterns:\n owners_df = own_eia860[own_eia860.plant_name_eia.fillna(\"\").str.match(pat, case=False)]\n plants_df = plants_eia860[plants_eia860.plant_name_eia.fillna(\"\").str.match(pat, case=False)]\n if display:\n print(f\"Pattern: \\\"{pattern}\\\"\")\n display(owners_df)\n display(plants_df)\n util_ids.append(set.union(set(owners_df.owner_utility_id_eia), set(plants_df.utility_id_eia)))\n \n util_ids = set.intersection(*util_ids)\n utils_eia860 = pudl_out.utils_eia860()\n\n return utils_eia860[utils_eia860.utility_id_eia.isin(util_ids)]\n", "Missing Respondents\n\nThis will show all the as of yet unidentified respondents\nYou can then use these respondent IDs to search through other tables for identifying information", "f1_respondent_id = pd.read_sql(\"f1_respondent_id\", ferc1_engine)\nmissing_respondent_ids = f1_respondent_id[f1_respondent_id.respondent_name.str.contains(\"Missing Respondent\")].respondent_id.unique()\nmissing_respondent_ids", "Utility identification example using Plants\n\nLet's use respondent_id==529 which was identified as Tri-State Generation & Transmission in 2019\nSearching for that respondent_id in all of the plant-related tables we find the following plants:", "(\n pudl.glue.ferc1_eia.get_db_plants_ferc1(pudl_settings, years=pc.DATA_YEARS[\"ferc1\"])\n .query(\"utility_id_ferc1==529\")\n)", "Create a list of patterns based on plant names\n\nPretend this respondent hadn't already been identified\nGenerate a list of plant name patterns based on what we see here\nUse the above function get_utils_from_plants to identify candidate utilities involved with those plants, in the EIA data.\nNote that the list of patterns doesn't need to be exhaustive -- just enough to narrow down to a single utility.", "pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine=pudl_engine)\n\nget_util_from_plants(\n pudl_out,\n patterns=[\n \".*laramie.*\",\n \".*craig.*\",\n \".*escalante.*\",\n])", "Another example with respondent_id==519", "(\n pudl.glue.ferc1_eia.get_db_plants_ferc1(pudl_settings, years=pc.DATA_YEARS[\"ferc1\"])\n .query(\"utility_id_ferc1==519\")\n)\n\nget_util_from_plants(\n pudl_out,\n patterns=[\n \".*kuester.*\",\n \".*mihm.*\",\n])", "And again with respondent_id==531", "(\n pudl.glue.ferc1_eia.get_db_plants_ferc1(pudl_settings, years=pc.DATA_YEARS[\"ferc1\"])\n .query(\"utility_id_ferc1==531\")\n)\n\nget_util_from_plants(\n pudl_out,\n patterns = [\n \".*leland.*\",\n \".*antelope.*\",\n \".*dry fork.*\",\n \".*laramie.*\",\n])", "What about missing respondents in the Plant in Service table?\n\nThere are a couple of years worth of plant in service data associated with unidentified respondents.\nUnfortunately the plant in service table doesn't have a lot of identifying information.\nThe same is true of the f1_dacs_epda depreciation table", "f1_plant_in_srvce = pd.read_sql_table(\"f1_plant_in_srvce\", ferc1_engine)\nf1_plant_in_srvce[f1_plant_in_srvce.respondent_id.isin(missing_respondent_ids)]", "Identify new strings for cleaning\n\nSeveral FERC 1 fields contain freeform strings that should have a controlled vocabulary imposed on them.\nThis 
function helps identify new, unrecognized strings in those fields each year.\nUse regular expressions to identify collections of new, related strings, and add them to the appropriate string cleaning dictionary entry in pudl.transform.ferc1.\nThen re-run the cell with new search terms, until everything left is impossible to confidently categorize.", "clean_me = [\n    {\"table\": \"f1_fuel\", \"field\": \"fuel\", \"strdict\": pudl.transform.ferc1.FUEL_STRINGS},\n    {\"table\": \"f1_fuel\", \"field\": \"fuel_unit\", \"strdict\": pudl.transform.ferc1.FUEL_UNIT_STRINGS},\n    {\"table\": \"f1_steam\", \"field\": \"plant_kind\", \"strdict\": pudl.transform.ferc1.PLANT_KIND_STRINGS},\n    {\"table\": \"f1_steam\", \"field\": \"type_const\", \"strdict\": pudl.transform.ferc1.CONSTRUCTION_TYPE_STRINGS},\n]\n\nfor kwargs in clean_me:\n    unmapped_strings = pudl.helpers.find_new_ferc1_strings(ferc1_engine=ferc1_engine, **kwargs)\n    print(f\"{len(unmapped_strings)} unmapped {kwargs['field']} strings found.\")\n    if unmapped_strings:\n        display(unmapped_strings)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
maxis42/ML-DA-Coursera-Yandex-MIPT
5 Data analysis applications/Homework/1 test autocorrelation and stationarity/Test Autocorrelation and stationarity.ipynb
mit
[ "Quiz. Autocorrelation and stationarity", "from __future__ import division\n\nimport numpy as np\nimport pandas as pd\n\nimport statsmodels.api as sm\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\n\n#Reading milk data\nmilk = pd.read_csv('monthly-milk-production.csv', ';', index_col=['month'], parse_dates=['month'], dayfirst=True)\nmilk.info()\n\nmilk.head()\n\n_ = plt.plot(milk.index, milk.values)\n\nsm.tsa.stattools.adfuller(milk.values.flatten())", "Often, when you are dealing with quantities that represent the sum of an indicator over each day or each working day, it makes sense to divide the whole series by the number of days in the period before you start forecasting. For example, if you divide the series of milk production per cow by the number of days in the month, the resulting quantity will vary more smoothly, and it will be easier to build a forecasting model for it.\nThe number of days in a month can be determined correctly using the days_in_month property of the series index or the monthrange function from the calendar package. Use the number of days in the month to compute a new indicator: the average daily amount of milk produced per cow. Plot this series and make sure it has become smoother.", "milk['daily'] = milk.milk.values.flatten() / milk.index.days_in_month\n_ = plt.plot(milk.index, milk.daily)\n\nmilk.daily.values.sum()", "For the series of the average daily amount of milk per cow from the previous question, let's use the Dickey-Fuller test to select the order of differencing at which the series becomes stationary.\nDifferencing can be done like this:\nmilk.daily_diff1 = milk.daily - milk.daily.shift(1)\nTo apply seasonal differencing, change the value of the parameter of the shift function:\nmilk.daily_diff12 = milk.daily - milk.daily.shift(12)\nDifferencing shortens the series, so in some rows the values in the new column will be undefined (NaN). When passing the resulting columns to the Dickey-Fuller test, cut off the undefined values, otherwise you will get an undefined achieved significance level.", "milk.daily_diff1 = milk.daily - milk.daily.shift(1)\n_ = plt.plot(milk.index, milk.daily_diff1)\n\nsm.tsa.stattools.adfuller(milk.daily_diff1.dropna())\n\nmilk.daily_diff12 = milk.daily - milk.daily.shift(12)\n_ = plt.plot(milk.index, milk.daily_diff12)\n\nsm.tsa.stattools.adfuller(milk.daily_diff12.dropna())\n\nmilk.daily_diff12_1 = milk.daily_diff12 - milk.daily_diff12.shift(1)\n_ = plt.plot(milk.index, milk.daily_diff12_1)\n\nsm.tsa.stattools.adfuller(milk.daily_diff12_1.dropna())", "For the stationary series from the previous question, plot the autocorrelation function.", "sm.graphics.tsa.plot_acf(milk.daily_diff12_1.dropna().values.squeeze(), lags=50);\n\nsm.graphics.tsa.plot_pacf(milk.daily_diff12_1.dropna().values.squeeze(), lags=50);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
statsmodels/statsmodels.github.io
v0.13.2/examples/notebooks/generated/variance_components.ipynb
bsd-3-clause
[ "Variance Component Analysis\nThis notebook illustrates variance components analysis for two-level\nnested and crossed designs.", "import numpy as np\nimport statsmodels.api as sm\nfrom statsmodels.regression.mixed_linear_model import VCSpec\nimport pandas as pd", "Make the notebook reproducible", "np.random.seed(3123)", "Nested analysis\nIn our discussion below, \"Group 2\" is nested within \"Group 1\". As a\nconcrete example, \"Group 1\" might be school districts, with \"Group\n2\" being individual schools. The function below generates data from\nsuch a population. In a nested analysis, the group 2 labels that\nare nested within different group 1 labels are treated as\nindependent groups, even if they have the same label. For example,\ntwo schools labeled \"school 1\" that are in two different school\ndistricts are treated as independent schools, even though they have\nthe same label.", "def generate_nested(\n n_group1=200, n_group2=20, n_rep=10, group1_sd=2, group2_sd=3, unexplained_sd=4\n):\n\n # Group 1 indicators\n group1 = np.kron(np.arange(n_group1), np.ones(n_group2 * n_rep))\n\n # Group 1 effects\n u = group1_sd * np.random.normal(size=n_group1)\n effects1 = np.kron(u, np.ones(n_group2 * n_rep))\n\n # Group 2 indicators\n group2 = np.kron(np.ones(n_group1), np.kron(np.arange(n_group2), np.ones(n_rep)))\n\n # Group 2 effects\n u = group2_sd * np.random.normal(size=n_group1 * n_group2)\n effects2 = np.kron(u, np.ones(n_rep))\n\n e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)\n y = effects1 + effects2 + e\n\n df = pd.DataFrame({\"y\": y, \"group1\": group1, \"group2\": group2})\n\n return df", "Generate a data set to analyze.", "df = generate_nested()", "Using all the default arguments for generate_nested, the population\nvalues of \"group 1 Var\" and \"group 2 Var\" are 2^2=4 and 3^2=9,\nrespectively. The unexplained variance, listed as \"scale\" at the\ntop of the summary table, has population value 4^2=16.", "model1 = sm.MixedLM.from_formula(\n \"y ~ 1\",\n re_formula=\"1\",\n vc_formula={\"group2\": \"0 + C(group2)\"},\n groups=\"group1\",\n data=df,\n)\nresult1 = model1.fit()\nprint(result1.summary())", "If we wish to avoid the formula interface, we can fit the same model\nby building the design matrices manually.", "def f(x):\n n = x.shape[0]\n g2 = x.group2\n u = g2.unique()\n u.sort()\n uv = {v: k for k, v in enumerate(u)}\n mat = np.zeros((n, len(u)))\n for i in range(n):\n mat[i, uv[g2.iloc[i]]] = 1\n colnames = [\"%d\" % z for z in u]\n return mat, colnames", "Then we set up the variance components using the VCSpec class.", "vcm = df.groupby(\"group1\").apply(f).to_list()\nmats = [x[0] for x in vcm]\ncolnames = [x[1] for x in vcm]\nnames = [\"group2\"]\nvcs = VCSpec(names, [colnames], [mats])", "Finally we fit the model. It can be seen that the results of the\ntwo fits are identical.", "oo = np.ones(df.shape[0])\nmodel2 = sm.MixedLM(df.y, oo, exog_re=oo, groups=df.group1, exog_vc=vcs)\nresult2 = model2.fit()\nprint(result2.summary())", "Crossed analysis\nIn a crossed analysis, the levels of one group can occur in any\ncombination with the levels of the another group. The groups in\nStatsmodels MixedLM are always nested, but it is possible to fit a\ncrossed model by having only one group, and specifying all random\neffects as variance components. Many, but not all crossed models\ncan be fit in this way. 
The function below generates a crossed data\nset with two levels of random structure.", "def generate_crossed(\n n_group1=100, n_group2=100, n_rep=4, group1_sd=2, group2_sd=3, unexplained_sd=4\n):\n\n # Group 1 indicators\n group1 = np.kron(\n np.arange(n_group1, dtype=int), np.ones(n_group2 * n_rep, dtype=int)\n )\n group1 = group1[np.random.permutation(len(group1))]\n\n # Group 1 effects\n u = group1_sd * np.random.normal(size=n_group1)\n effects1 = u[group1]\n\n # Group 2 indicators\n group2 = np.kron(\n np.arange(n_group2, dtype=int), np.ones(n_group2 * n_rep, dtype=int)\n )\n group2 = group2[np.random.permutation(len(group2))]\n\n # Group 2 effects\n u = group2_sd * np.random.normal(size=n_group2)\n effects2 = u[group2]\n\n e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)\n y = effects1 + effects2 + e\n\n df = pd.DataFrame({\"y\": y, \"group1\": group1, \"group2\": group2})\n\n return df", "Generate a data set to analyze.", "df = generate_crossed()", "Next we fit the model, note that the groups vector is constant.\nUsing the default parameters for generate_crossed, the level 1\nvariance should be 2^2=4, the level 2 variance should be 3^2=9, and\nthe unexplained variance should be 4^2=16.", "vc = {\"g1\": \"0 + C(group1)\", \"g2\": \"0 + C(group2)\"}\noo = np.ones(df.shape[0])\nmodel3 = sm.MixedLM.from_formula(\"y ~ 1\", groups=oo, vc_formula=vc, data=df)\nresult3 = model3.fit()\nprint(result3.summary())", "If we wish to avoid the formula interface, we can fit the same model\nby building the design matrices manually.", "def f(g):\n n = len(g)\n u = g.unique()\n u.sort()\n uv = {v: k for k, v in enumerate(u)}\n mat = np.zeros((n, len(u)))\n for i in range(n):\n mat[i, uv[g[i]]] = 1\n colnames = [\"%d\" % z for z in u]\n return [mat], [colnames]\n\n\nvcm = [f(df.group1), f(df.group2)]\nmats = [x[0] for x in vcm]\ncolnames = [x[1] for x in vcm]\nnames = [\"group1\", \"group2\"]\nvcs = VCSpec(names, colnames, mats)", "Here we fit the model without using formulas, it is simple to check\nthat the results for models 3 and 4 are identical.", "oo = np.ones(df.shape[0])\nmodel4 = sm.MixedLM(df.y, oo[:, None], exog_re=None, groups=oo, exog_vc=vcs)\nresult4 = model4.fit()\nprint(result4.summary())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
superbock/parallel2015
parallel.ipynb
mit
[ "Parallel/efficient coding in Python\nPython (more presicely: the CPython interpreter) uses a Global interpreter lock (GIL) which prohibits simultaneous execution of Python code.\nOverview\nIn this tutorial, we'll cover these common problems/situations/solutions:\n\nIf your code is I/O bound, threads is a simple solution\nIf your code is CPU bound, multiprocessing is the way to go\nIf your code can be vectorised: use numpy\nIf your code is still too slow: write your own C extensions with cython\n\nThis list is by no means complete and is just a starting point for getting you into parallel/efficient coding with Python. There are plenty of other very nice solutions for all kinds of problems out there, but this is beyond the scope of this tutorial.\nA very short Python primer\nIf you are not familiar with Python, I'd suggest learning it the (not so) hard way. Or continue reading, it describes very briefly the needed basics.\nThis is how we define functions:", "def function(argument):\n \"\"\"\n This function just prints the given argument.\n \n :param argument: what to print\n :returns: also returns the argument\n \n \"\"\"\n print(\"Calling function with argument %s\" % argument)\n return argument\n\nfunction(42)", "This is how we define classes:", "class Class(object):\n \"\"\"A simple meaningless Class\"\"\"\n def __init__(self, attribute):\n \"\"\"\n This method get's called during object initialisation.\n \n :param attribute: attribute of the class to save\n \n \"\"\"\n self.attribute = attribute\n \n def tell(self):\n \"\"\"Tell the world about us.\"\"\"\n print(\"I'm a Class with a very nice attribute: %s\" % self.attribute)\n\nA = Class('smart')\nA.tell()", "Since all (our) functions and classes are well documented, we can always ask how to use them:", "function?\n\nClass?", "Python also has lists which can be accessed by index (indices always start at 0):", "x = [1, 2, 4.5, 'bla']\n\nx[1]\n\nx[3]\n\nx.index('bla')", "One of the more fancy stuff we can do in Python is list comprehensions.\nHere's a simple example of how to calculate the squares of some numbers:", "[x ** 2 for x in range(10)]", "Ok, that's it for now, let's speed things up :)\nThreads\nFrom the Python threading documentation (https://docs.python.org/2/library/threading.html):\n\nIn CPython, due to the Global Interpreter Lock, only one thread can execute Python code at once (even though certain performance-oriented libraries might overcome this limitation). If you want your application to make better use of the computational resources of multi-core machines, you are advised to use multiprocessing. However, threading is still an appropriate model if you want to run multiple I/O-bound tasks simultaneously.\n\nAll my examples are CPU bound, so let's move on to multiprocessing -- it also has a simpler interface ;)\nMultiprocessing\nThe multiprocessing module can start multiple Python interpreters and thus can run code in parallel.\nInside each of these interpreters, the GIL still applies, but they do not interfere with each other.", "import multiprocessing as mp", "A simple CPU bound example:\nWe consider a function which sums all primes below a given number. 
Source: http://www.parallelpython.com/content/view/17/31/#SUM_PRIMES", "import math\n\ndef isprime(n):\n \"\"\"Returns True if n is prime and False otherwise\"\"\"\n if not isinstance(n, int):\n raise TypeError(\"argument passed to is_prime is not of 'int' type\")\n if n < 2:\n return False\n if n == 2:\n return True\n max = int(math.ceil(math.sqrt(n)))\n i = 2\n while i <= max:\n if n % i == 0:\n return False\n i += 1\n return True\n\ndef sum_primes(n):\n \"\"\"Calculates sum of all primes below given integer n\"\"\"\n return sum([x for x in xrange(2,n) if isprime(x)])", "If we simply use the list comprehension without the sum, we get a list of primes smaller than the given one:", "[x for x in xrange(10) if isprime(x)]\n\nsum_primes(10)\n\n%timeit sum_primes(10)", "Not bad, why should we speed up something like this?\nLet's see what happens if we ask for the sum of primes below a larger number:", "%timeit sum_primes(10000)", "What if we do this for a bunch of numbers, e.g.:", "%timeit [sum_primes(n) for n in xrange(1000)]", "Ok, this is definitely too slow, but we should be able to run this in parallel easily, since we are asking for the same thing (summing all prime numbers below the given number) for a lot of of numbers.\nWe do this by calling this very same function a thousand times (inside the list comprehension).\nPython has a map function, which maps a list of arguments to a single function.", "map(sum_primes, xrange(10))", "This is basically the same as what we had before with our list comprehension.", "%timeit map(sum_primes, xrange(1000))", "multiprocessing offers the same map function in it's Pool class. Let's create a Pool with 2 simultaneous threads (i.e. processes).", "pool = mp.Pool(2)", "Now we can use the pool's map function to do the same as before but in parallel.", "pool.map(sum_primes, xrange(10))\n\n%timeit pool.map(sum_primes, xrange(1000))", "This is a speed-up of almost 2x, which is exactly what we expected (using 2 processes minus some overhead).\nVectorisation\nWe can solve a lot of stuff in almost no time if we avoid loops and vectorise the expressions instead.\nNumpy does an excellent job here and automatically does some things in parallel (e.g. matrix multiplication).\nIt uses highly optimised linear algebra packages to do things really fast.", "import numpy as np", "Some Numpy basics:\nWe can define arrays simple as that:", "x = np.array([1, 2, 5, 15])\n\nx", "As long as it can be casted to an array, we can use almost everything as input for an array:", "np.array(map(sum_primes, xrange(10)))", "Or we define arrays with some special functions:", "np.zeros(10)\n\nnp.arange(10.)", "Numpy supports indexing and slicing:", "x = np.arange(10)", "Get a single item of the array:", "x[3]", "Get a slice of the array:", "x[1:5]", "Get everything starting from index 4 to the end:", "x[4:]", "Negative indices are counted backwards from the end.\nGet everything before the last element:", "x[:-1]", "Let's define another problem: comb filters.\n\nIn signal processing, a comb filter adds a delayed version of a signal to itself, causing constructive and destructive interference [Wikipedia].\n\nThese filters can be either feed forward or backward, depending on wheter the signal itself or the output of the filter is delayed and added to the signal.\nFeed forward:\n$y[n] = x[n] + \\alpha * x[n - \\tau]$\nFeed backward:\n$y[n] = x[n] + \\alpha * y[n - \\tau]$\nAgain, we start with a Python only solution (it has some Numpy stuff in there already, but that's not the point. 
It uses a loop for adding the delayed portion of the signal to the output).", "def feed_forward_comb_filter_loop(signal, tau, alpha):\n \"\"\"\n Filter the signal with a feed forward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * x[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # init the output array \n y = np.zeros(len(signal))\n # iterate over the signal\n for i in range(len(signal)):\n # add the delayed version of the signal to itself\n if i < tau:\n y[i] = signal[i]\n else:\n y[i] = signal[i] + alpha * signal[i - tau]\n # return the filtered signal\n return y\n\nfeed_forward_comb_filter_loop(np.arange(10.), 4, 0.5)\n\n%timeit feed_forward_comb_filter_loop(np.arange(100000.), 4, 0.5)", "Let's vectorise this by removig the loop (the for i in range(len(signal)) stuff):", "def feed_forward_comb_filter(signal, tau, alpha):\n \"\"\"\n Filter the signal with a feed forward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * x[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # init the output array as a copy of the input signal, since\n # the output is the input signal plus a delayed version of it\n y = signal.copy()\n # add the delayed signal, starting at index tau\n y[tau:] += alpha * signal[:-tau]\n # return the filtered signal\n return y\n\n%timeit feed_forward_comb_filter(np.arange(100000.), 4, 0.5)", "This is a nice ~67x speed-up (523 µs vs. 35.1 ms). Continue with the feed backward example...\nThe feed backward variant comb filter function containing a loop:", "def feed_backward_comb_filter_loop(signal, tau, alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # init the output array as a copy of the input signal\n y = signal.copy()\n # loop over the signal, starting at tau\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] += alpha * y[n - tau]\n # return the filtered signal\n return y\n\n%timeit feed_backward_comb_filter_loop(np.arange(100000.), 4, 0.5)", "The backward variant has basically the same runtime as the forward (loop-)version, but unfortunately, we cannot speed this up further with a vectorised expression, since the output depends on the output of a previous step.\nAnd this is where Cython comes in.\nCython\nCython can be used to write C-extensions in Python.\nThe Cython compiler converts everything into C code, compiles it to a shared object which can be loaded/imported like any other python module.\nFirst we need to import Cython via cimport cython in normal Cython files or load the Cython extension (in IPython):", "%load_ext Cython", "To be able to use Cython code within IPython, we need to add the magic %%cython handler as a first line into a cell. 
Then we can start writing normal Python code.", "%%cython\n# magic cython handler for IPython (must be first line of a cell)\n\ndef sum_two_numbers(a, b):\n return a + b", "Cython then compiles and loads everything transparently.", "sum_two_numbers(10, 5)", "Cython gains most of its tremendous speed gains from static typing.\nLet's try with the isprime example from before again:", "%%cython\n\n# import some C functions to avoid calls to the Python interpreter\nfrom libc.math cimport sqrt, ceil\n\n# define type n as integer\ndef isprime_cython(int n):\n \"\"\"Returns True if n is prime and False otherwise\"\"\"\n # we can skip the instance check since we have strong typing now\n if n < 2:\n return False\n if n == 2:\n return True\n # define type max as integer\n cdef int max\n # use the C ceil & sqrt functions instead of Python's own\n max = int(ceil(sqrt(n)))\n # define type n as integer\n cdef int i = 2\n while i <= max:\n if n % i == 0:\n return False\n i += 1\n return True\n\ndef sum_primes_cython(n):\n \"\"\"Calculates sum of all primes below given integer n\"\"\"\n return sum([x for x in xrange(2,n) if isprime_cython(x)])", "Let's see if (and how much) it helps:", "%timeit sum_primes(10000)\n%timeit sum_primes_cython(10000)", "speed-up ~24x (686 µs vs. 16.5 ms)", "%timeit -n 1 [sum_primes(n) for n in xrange(10000)]\n%timeit -n 1 [sum_primes_cython(n) for n in xrange(10000)]", "speed-up ~25x (3.1 s vs. 76 s)\nWhat if we use this faster version of the function with multiple processes?", "%timeit mp.Pool(2).map(sum_primes_cython, (n for n in xrange(10000)))\n%timeit mp.Pool(4).map(sum_primes_cython, (n for n in xrange(10000)))", "Starting more threads than physical CPU cores gives some more performance, but does not scale as good because of hyper-threading. Total speed-up compared to the single-process pure Python variant is ~49x (1.56 s vs. 76 s)\nStill, the isprime_cython is a Python function, which adds some overhead. Since we call this function only from sum_primes_cython, we can make it a C-only function by using the cdef statement instead of def and also define the return type.", "%%cython\n\nfrom libc.math cimport sqrt, ceil\n\n# make this a C function\ncdef int isprime_cython_nogil(int n) nogil:\n \"\"\"Returns True if n is prime and False otherwise\"\"\"\n if n < 2:\n return 0\n if n == 2:\n return 1\n cdef int max\n max = int(ceil(sqrt(n)))\n cdef int i = 2\n while i <= max:\n if n % i == 0:\n return 0\n i += 1\n return 1\n\ndef sum_primes_cython_nogil(n):\n \"\"\"Calculates sum of all primes below given integer n\"\"\"\n return sum([x for x in xrange(2,n) if isprime_cython_nogil(x)])\n\n%timeit [sum_primes_cython_nogil(n) for n in xrange(10000)]\n%timeit mp.Pool(4).map(sum_primes_cython_nogil, (n for n in xrange(10000)))", "Again, a bit faster; total speed-up compared to the single-process pure Python variant is ~64x (1.18 s vs. 76 s)\nThese are the Cython basics. 
Let's apply them to the other example, the backward comb filter which we were not able to vectorise:", "%%cython\n\nimport numpy as np\n# we also want to load the C-bindings of numpy with cimport\ncimport numpy as np\n\n# statically type the obvious variables (tau, alpha, n)\ndef feed_backward_comb_filter(signal, unsigned int tau, float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n y = np.copy(signal)\n cdef unsigned int n\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] += alpha * y[n - tau]\n # return the filtered signal\n return y\n\n%timeit feed_backward_comb_filter(np.arange(100000.), 4, 0.5)", "A bit better (roughly half the time), but still far away from the feed forward variant.\nLet's see, what kills performance and fix it. Cython has this nice -a switch to highlight calls to Python in yellow.", "%%cython -a\n\nimport numpy as np\ncimport cython\ncimport numpy as np\n\ndef feed_backward_comb_filter(signal, unsigned int tau, float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n y = np.copy(signal)\n cdef unsigned int n\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] = signal[n] + alpha * y[n - tau]\n # return the filtered signal\n return y", "In line 25, we still have calls to Python within the loop (e.g. PyNumber_Multiply and PyNumber_Add).\nWe can get rid of these by statically typing the signal as well. 
Unfortunately, we lose the ability to call the filter function with a signal of arbitrary dimensions, since Cython needs to know the dimensions of the signal beforehand.", "%%cython\n\nimport numpy as np\ncimport cython\ncimport numpy as np\n\ndef feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal,\n unsigned int tau,\n float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n cdef np.ndarray[np.float_t, ndim=1] y = np.copy(signal)\n cdef unsigned int n\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] += alpha * y[n - tau]\n # return the filtered signal\n return y\n\n%timeit feed_backward_comb_filter_1d(np.arange(100000.), 4, 0.5)", "Much better, let's check again.", "%%cython -a\n\nimport numpy as np\ncimport cython\ncimport numpy as np\n\ndef feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal,\n unsigned int tau,\n float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n cdef np.ndarray[np.float_t, ndim=1] y = np.copy(signal)\n cdef unsigned int n\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] += alpha * y[n - tau]\n # return the filtered signal\n return y", "For the sake of completeness, let's get rid of these Pyx_RaiseBufferIndexError in line 27 as well. 
We tell Cython that it does not need to check for bounds by adding a @cython.boundscheck(False) decorator.", "%%cython\n\nimport numpy as np\ncimport cython\ncimport numpy as np\n\n@cython.boundscheck(False)\ndef feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal,\n unsigned int tau,\n float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n cdef np.ndarray[np.float_t, ndim=1] y = np.copy(signal)\n cdef unsigned int n\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] += alpha * y[n - tau]\n # return the filtered signal\n return y\n\n%timeit feed_backward_comb_filter_1d(np.arange(100000.), 4, 0.5)", "Ok, we are now on par with the feed forward Numpy variant -- or even a bit better :)\nTo get back the flexibility of the Python/Numpy solution to be able to handle signals of arbitrary dimension, we need to define a wrapper function (in pure Python):", "%%cython\n\nimport numpy as np\n\ncimport cython\ncimport numpy as np\n\ndef feed_backward_comb_filter(signal, tau, alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n if signal.ndim == 1:\n return feed_backward_comb_filter_1d(signal, tau, alpha)\n elif signal.ndim == 2:\n return feed_backward_comb_filter_2d(signal, tau, alpha)\n else:\n raise ValueError('signal must be 1d or 2d')\n\n@cython.boundscheck(False)\ndef feed_backward_comb_filter_1d(np.ndarray[np.float_t, ndim=1] signal,\n unsigned int tau,\n float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n cdef np.ndarray[np.float_t, ndim=1] y = np.copy(signal)\n cdef unsigned int n\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n] += alpha * y[n - tau]\n # return the filtered signal\n return y\n\n@cython.boundscheck(False)\ndef feed_backward_comb_filter_2d(np.ndarray[np.float_t, ndim=2] signal,\n unsigned int tau,\n float alpha):\n \"\"\"\n Filter the signal with a feed backward comb filter.\n\n :param signal: signal\n :param tau: delay length\n :param alpha: scaling factor\n :return: comb filtered signal\n\n \"\"\"\n # y[n] = x[n] + α * y[n - τ]\n if tau <= 0:\n raise ValueError('tau must be greater than 0')\n # type definitions\n cdef np.ndarray[np.float_t, ndim=2] y = np.copy(signal)\n cdef unsigned int d, n\n # loop over the dimensions\n for d in range(2):\n # loop over the complete signal\n for n in range(tau, len(signal)):\n # add a delayed version of the output signal\n y[n, d] += alpha * y[n - tau, d]\n # return\n return y\n\n\n%timeit feed_backward_comb_filter(np.arange(100000.), 4, 0.5)\n%timeit feed_backward_comb_filter(np.arange(100000.).reshape(-1, 2), 4, 0.5)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.23/_downloads/2455121b46e43615a45b660a36d0ad93/30_epochs_metadata.ipynb
bsd-3-clause
[ "%matplotlib inline", "Working with Epoch metadata\nThis tutorial shows how to add metadata to ~mne.Epochs objects, and\nhow to use Pandas query strings &lt;pandas:indexing.query&gt; to select and\nplot epochs based on metadata properties.\nFor this tutorial we'll use a different dataset than usual: the\nkiloword-dataset, which contains EEG data averaged across 75 subjects\nwho were performing a lexical decision (word/non-word) task. The data is in\n~mne.Epochs format, with each epoch representing the response to a\ndifferent stimulus (word). As usual we'll start by importing the modules we\nneed and loading the data:", "import os\nimport numpy as np\nimport pandas as pd\nimport mne\n\nkiloword_data_folder = mne.datasets.kiloword.data_path()\nkiloword_data_file = os.path.join(kiloword_data_folder,\n 'kword_metadata-epo.fif')\nepochs = mne.read_epochs(kiloword_data_file)", "Viewing Epochs metadata\n.. sidebar:: Restrictions on metadata DataFrames\nMetadata dataframes are less flexible than typical\n :class:Pandas DataFrames &lt;pandas.DataFrame&gt;. For example, the allowed\n data types are restricted to strings, floats, integers, or booleans;\n and the row labels are always integers corresponding to epoch numbers.\n Other capabilities of :class:DataFrames &lt;pandas.DataFrame&gt; such as\n :class:hierarchical indexing &lt;pandas.MultiIndex&gt; are possible while the\n ~mne.Epochs object is in memory, but will not survive saving and\n reloading the ~mne.Epochs object to/from disk.\nThe metadata attached to ~mne.Epochs objects is stored as a\n:class:pandas.DataFrame containing one row for each epoch. The columns of\nthis :class:~pandas.DataFrame can contain just about any information you\nwant to store about each epoch; in this case, the metadata encodes\ninformation about the stimulus seen on each trial, including properties of\nthe visual word form itself (e.g., NumberOfLetters, VisualComplexity)\nas well as properties of what the word means (e.g., its Concreteness) and\nits prominence in the English lexicon (e.g., WordFrequency). Here are all\nthe variables; note that in a Jupyter notebook, viewing a\n:class:pandas.DataFrame gets rendered as an HTML table instead of the\nnormal Python output block:", "epochs.metadata", "Viewing the metadata values for a given epoch and metadata variable is done\nusing any of the Pandas indexing &lt;pandas:/reference/indexing.rst&gt;\nmethods such as :obj:~pandas.DataFrame.loc,\n:obj:~pandas.DataFrame.iloc, :obj:~pandas.DataFrame.at,\nand :obj:~pandas.DataFrame.iat. Because the\nindex of the dataframe is the integer epoch number, the name- and index-based\nselection methods will work similarly for selecting rows, except that\nname-based selection (with :obj:~pandas.DataFrame.loc) is inclusive of the\nendpoint:", "print('Name-based selection with .loc')\nprint(epochs.metadata.loc[2:4])\n\nprint('\\nIndex-based selection with .iloc')\nprint(epochs.metadata.iloc[2:4])", "Modifying the metadata\nLike any :class:pandas.DataFrame, you can modify the data or add columns as\nneeded. 
Here we convert the NumberOfLetters column from :class:float to\n:class:integer &lt;int&gt; data type, and add a :class:boolean &lt;bool&gt; column\nthat arbitrarily divides the variable VisualComplexity into high and low\ngroups.", "epochs.metadata['NumberOfLetters'] = \\\n epochs.metadata['NumberOfLetters'].map(int)\n\nepochs.metadata['HighComplexity'] = epochs.metadata['VisualComplexity'] > 65\nepochs.metadata.head()", "Selecting epochs using metadata queries\nAll ~mne.Epochs objects can be subselected by event name, index, or\n:term:slice (see tut-section-subselect-epochs). But\n~mne.Epochs objects with metadata can also be queried using\nPandas query strings &lt;pandas:indexing.query&gt; by passing the query\nstring just as you would normally pass an event name. For example:", "print(epochs['WORD.str.startswith(\"dis\")'])", "This capability uses the :meth:pandas.DataFrame.query method under the\nhood, so you can check out the documentation of that method to learn how to\nformat query strings. Here's another example:", "print(epochs['Concreteness > 6 and WordFrequency < 1'])", "Note also that traditional epochs subselection by condition name still works;\nMNE-Python will try the traditional method first before falling back on rich\nmetadata querying.", "epochs['solenoid'].plot_psd()", "One use of the Pandas query string approach is to select specific words for\nplotting:", "words = ['typhoon', 'bungalow', 'colossus', 'drudgery', 'linguist', 'solenoid']\nepochs['WORD in {}'.format(words)].plot(n_channels=29)", "Notice that in this dataset, each \"condition\" (A.K.A., each word) occurs only\nonce, whereas with the sample-dataset dataset each condition (e.g.,\n\"auditory/left\", \"visual/right\", etc) occurred dozens of times. This makes\nthe Pandas querying methods especially useful when you want to aggregate\nepochs that have different condition names but that share similar stimulus\nproperties. For example, here we group epochs based on the number of letters\nin the stimulus word, and compare the average signal at electrode Pz for\neach group:", "evokeds = dict()\nquery = 'NumberOfLetters == {}'\nfor n_letters in epochs.metadata['NumberOfLetters'].unique():\n evokeds[str(n_letters)] = epochs[query.format(n_letters)].average()\n\nmne.viz.plot_compare_evokeds(evokeds, cmap=('word length', 'viridis'),\n picks='Pz')", "Metadata can also be useful for sorting the epochs in an image plot. For\nexample, here we order the epochs based on word frequency to see if there's a\npattern to the latency or intensity of the response:", "sort_order = np.argsort(epochs.metadata['WordFrequency'])\nepochs.plot_image(order=sort_order, picks='Pz')", "Although there's no obvious relationship in this case, such analyses may be\nuseful for metadata variables that more directly index the time course of\nstimulus processing (such as reaction time).\nAdding metadata to an Epochs object\nYou can add a metadata :class:~pandas.DataFrame to any\n~mne.Epochs object (or replace existing metadata) simply by\nassigning to the :attr:~mne.Epochs.metadata attribute:", "new_metadata = pd.DataFrame(data=['foo'] * len(epochs), columns=['bar'],\n index=range(len(epochs)))\nepochs.metadata = new_metadata\nepochs.metadata.head()", "You can remove metadata from an ~mne.Epochs object by setting its\nmetadata to None:", "epochs.metadata = None" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.24/_downloads/da444a4db06576d438b46fdb32d045cd/topo_compare_conditions.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compare evoked responses for different conditions\nIn this example, an Epochs object for visual and auditory responses is created.\nBoth conditions are then accessed by their respective names to create a sensor\nlayout plot of the related evoked responses.", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n# Alexandre Gramfort <alexandre.gramfort@inria.fr>\n\n# License: BSD-3-Clause\n\nimport matplotlib.pyplot as plt\nimport mne\n\nfrom mne.viz import plot_evoked_topo\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up amplitude-peak rejection values for MEG channels\nreject = dict(grad=4000e-13, mag=4e-12)\n\n# Create epochs including different events\nevent_id = {'audio/left': 1, 'audio/right': 2,\n 'visual/left': 3, 'visual/right': 4}\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n picks='meg', baseline=(None, 0), reject=reject)\n\n# Generate list of evoked objects from conditions names\nevokeds = [epochs[name].average() for name in ('left', 'right')]", "Show topography for two different conditions", "colors = 'blue', 'red'\ntitle = 'MNE sample data\\nleft vs right (A/V combined)'\n\nplot_evoked_topo(evokeds, color=colors, title=title, background_color='w')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
martinjrobins/hobo
examples/sampling/monomial-gamma-hmc.ipynb
bsd-3-clause
[ "Inference: Monomial-Gamma Hamiltonian Monte Carlo\nThis example shows you how to perform Bayesian inference on a Gaussian distribution and a time-series problem, using Monomial-Gamma HMC.\nFirst, we create a simple normal distribution", "import pints\nimport pints.toy\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Create log pdf\nlog_pdf = pints.toy.GaussianLogPDF([2, 4], [[1, 0], [0, 3]])\n\n# Contour plot of pdf\nlevels = np.linspace(-3,12,20)\nnum_points = 100\nx = np.linspace(-1, 5, num_points)\ny = np.linspace(-0, 8, num_points)\nX, Y = np.meshgrid(x, y)\nZ = np.zeros(X.shape)\nZ = np.exp([[log_pdf([i, j]) for i in x] for j in y])\nplt.contour(X, Y, Z)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()", "Now we set up and run a sampling routine using Monomial-Gamma HMC MCMC", "# Choose starting points for 3 mcmc chains\nxs = [\n [2, 1],\n [3, 3],\n [5, 4],\n]\n\n# Create mcmc routine\nsigma = [1, 1]\nmcmc = pints.MCMCController(log_pdf, 3, xs, method=pints.MonomialGammaHamiltonianMCMC, sigma0=sigma)\n\n# Add stopping criterion\nmcmc.set_max_iterations(1000)\n\n# Set up modest logging\nmcmc.set_log_to_screen(True)\nmcmc.set_log_interval(100)\n\n# change 'a' parameter in kinetic energy function used by individual samplers\nfor sampler in mcmc.samplers():\n sampler.set_a(0.5)\n\n# Run!\nprint('Running...')\nfull_chains = mcmc.run()\nprint('Done!')\n\n# Show traces and histograms\nimport pints.plot\npints.plot.trace(full_chains)\nplt.show()\n\n# Discard warm up\nchains = full_chains[:, 200:]\n\n# Check convergence and other properties of chains\nresults = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=['mean_x', 'mean_y'])\nprint(results)\n\n# Look at distribution in chain 0\npints.plot.pairwise(chains[0], kde=True)\nplt.show()\n\n# Check Kullback-Leibler divergence of chains\nprint(log_pdf.kl_divergence(chains[0]))\nprint(log_pdf.kl_divergence(chains[1]))\nprint(log_pdf.kl_divergence(chains[2]))", "Monomial-Gamma HMC on a time-series problem\nWe now try the same method on a time-series problem", "import pints\nimport pints.toy as toy\nimport pints.plot\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Load a forward model\nmodel = toy.LogisticModel()\n\n# Create some toy data\ntimes = np.linspace(0, 1000, 50)\nreal_parameters = np.array([0.015, 500])\norg_values = model.simulate(real_parameters, times)\n\n# Add noise\nnp.random.seed(1)\nnoise = 10\nvalues = org_values + np.random.normal(0, noise, org_values.shape)\n\n# Create an object with links to the model and time series\nproblem = pints.SingleOutputProblem(model, times, values)\n\n# Create a log-likelihood function\nlog_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, noise)\n\n# Create a uniform prior over the parameters\nlog_prior = pints.UniformLogPrior(\n [0.01, 400],\n [0.02, 600]\n)\n\n# Create a posterior log-likelihood (log(likelihood * prior))\nlog_posterior = pints.LogPosterior(log_likelihood, log_prior)\n\n# Choose starting points for 3 mcmc chains\nxs = [\n real_parameters * 1.01,\n real_parameters * 0.9,\n real_parameters * 1.1,\n]\n\n# Create mcmc routine\nmcmc = pints.MCMCController(log_posterior, len(xs), xs, method=pints.MonomialGammaHamiltonianMCMC)\n\n# Add stopping criterion\nmcmc.set_max_iterations(1000)\n\n# Set up modest logging\nmcmc.set_log_to_screen(True)\nmcmc.set_log_interval(100)\n\n# Run!\nprint('Running...')\nchains = mcmc.run()\nprint('Done!')", "The chains do not take long to reach equilibrium with this method.", "# Check convergence and other 
properties of chains\nresults = pints.MCMCSummary(chains=chains[:, 200:], time=mcmc.time(), parameter_names=['growth rate', 'capacity'])\nprint(results)\n\n# Show traces and histograms\npints.plot.trace(chains)\nplt.show()", "Chains have converged!\nExtract any divergent iterations -- looks fine as there were none.", "div_iterations = []\nfor sampler in mcmc.samplers():\n div_iterations.append(sampler.divergent_iterations())\nprint(\"There were \" + str(np.sum(div_iterations)) + \" divergent iterations.\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quantopian/research_public
notebooks/lectures/Introduction_to_Pandas/answers/notebook.ipynb
apache-2.0
[ "Exercises: Introduction to pandas - Answer Key\nBy Christopher van Hoecke, Maxwell Margenot\nLecture Link :\nhttps://www.quantopian.com/lectures/introduction-to-pandas\nIMPORTANT NOTE:\nThis lecture corresponds to the Introduction to Pandas lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public", "# Useful Functions\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Exercise 1\na. Series\nGiven an array of data, please create a pandas Series s with a datetime index starting 2016-01-01. The index should be daily frequency and should be the same length as the data.", "l = np.random.randint(1,100, size=1000)\ns = pd.Series(l)\n\nnew_index = pd.date_range(\"2016-01-01\", periods=len(s), freq=\"D\")\ns.index = new_index\nprint s", "b. Accessing Series Elements.\n\nPrint every other element of the first 50 elements of series s.\nFind the value associated with the index 2017-02-20.", "# Print every other element of the first 50 elements\ns.iloc[:50:2];\n# Values associated with the index 2017-02-20\ns.loc['2017-02-20']", "c. Boolean Indexing.\nIn the series s, print all the values between 1 and 3.", "# Print s between 1 and 3\ns.loc[(s>1) & (s<3)]", "Exercise 2 : Indexing and time series.\na. Display\nPrint the first and last 5 elements of the series s.", "# First 5 elements\ns.head(5)\n# Last 5 elements\ns.tail(5)", "b. Resampling\n\nUsing the resample method, upsample the daily data to monthly frequency. Use the median method so that each monthly value is the median price of all the days in that month.\nTake the daily data and fill in every day, including weekends and holidays, using forward-fills.", "symbol = \"CMG\"\nstart = \"2012-01-01\"\nend = \"2016-01-01\"\nprices = get_pricing(symbol, start_date=start, end_date=end, fields=\"price\")\n\n# Resample daily prices to get monthly prices using median. \nmonthly_prices = prices.resample('M').median()\nmonthly_prices.head(24)\n\n# Data for every day, (including weekends and holidays)\ncalendar_dates = pd.date_range(start=start, end=end, freq='D', tz='UTC')\ncalendar_prices = prices.reindex(calendar_dates, method='ffill')\ncalendar_prices.head(15)", "Exercise 3 : Missing Data\n\nReplace all instances of NaN using the forward fill method. \nInstead of filling, remove all instances of NaN from the data.", "# Fill missing data using Backwards fill method\nbfilled_prices = calendar_prices.fillna(method='bfill')\nbfilled_prices.head(10)\n\n# Drop instances of nan in the data\ndropped_prices = calendar_prices.dropna()\ndropped_prices.head(10)", "Exercise 4 : Time Series Analysis with pandas\na. General Information\nPrint the count, mean, standard deviation, minimum, 25th, 50th, and 75th percentiles, and the max of our series s.", "print \"Summary Statistics\"\nprint prices.describe()", "b. Series Operations\n\nGet the additive and multiplicative returns of this series. 
\nCalculate the rolling mean with a 60 day window.\nCalculate the standard deviation with a 60 day window.", "data = get_pricing('GE', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')\n\nmult_returns = data.pct_change()[1:] #Multiplicative returns \nadd_returns = data.diff()[1:] #Additive returns \n\n# Rolling mean\nrolling_mean = data.rolling(window=60).mean()\nrolling_mean.name = \"60-day rolling mean\"\n\n# Rolling Standard Deviation\nrolling_std = data.rolling(window=60).std()\nrolling_std.name = \"60-day rolling volatility\"", "Exercise 5 : DataFrames\na. Indexing\nForm a DataFrame out of dict_data with l as its index.", "l = ['First','Second', 'Third', 'Fourth', 'Fifth']\ndict_data = {'a' : [1, 2, 3, 4, 5], \n 'b' : ['L', 'K', 'J', 'M', 'Z'],\n 'c' : np.random.normal(0, 1, 5)\n }\n\n# Adding l as an index to dict_data\nframe_data = pd.DataFrame(dict_data, index=l)\nprint frame_data", "b. DataFrames Manipulation\n\nConcatenate the following two series to form a dataframe. \nRename the columns to Good Numbers and Bad Numbers. \nChange the index to be a datetime index starting on 2016-01-01.", "s1 = pd.Series([2, 3, 5, 7, 11, 13], name='prime')\ns2 = pd.Series([1, 4, 6, 8, 9, 10], name='other')\n\nnumbers = pd.concat([s1, s2], axis=1) # Concatenate the two series\nnumbers.columns = ['Useful Numbers', 'Not Useful Numbers'] # Rename the two columns\nnumbers.index = pd.date_range(\"2016-01-01\", periods=len(numbers)) # Index change\nprint numbers", "Exercise 6 : Accessing DataFrame elements.\na. Columns\n\nCheck the data type of one of the DataFrame's columns.\nPrint the values associated with time range 2013-01-01 to 2013-01-10.", "symbol = [\"XOM\", \"BP\", \"COP\", \"TOT\"]\nstart = \"2012-01-01\"\nend = \"2016-01-01\"\nprices = get_pricing(symbol, start_date=start, end_date=end, fields=\"price\")\nif isinstance(symbol, list):\n prices.columns = map(lambda x: x.symbol, prices.columns)\nelse:\n prices.name = symbol\n\n# Check Type of Data for these two. \nprices.XOM.head()\nprices.loc[:, 'XOM'].head()\n\n# Print data type\nprint type(prices.XOM)\nprint type(prices.loc[:, 'XOM'])\n\n# Print values associated with time range\nprices.loc['2013-01-01':'2013-01-10']", "Exercise 7 : Boolean Indexing\na. Filtering.\n\nFilter pricing data from the last question (stored in prices) to only print values where:\nBP > 30\nXOM < 100\nThe intersection of both above conditions (BP > 30 and XOM < 100)\nThe union of the previous composite condition along with TOT having no nan values ((BP > 30 and XOM < 100) or TOT is non-NaN).\n\n\nAdd a column for TSLA and drop the column for XOM.", "# Filter data \n# BP > 30\nprint prices.loc[prices.BP > 30].head()\n# XOM < 100\nprint prices.loc[prices.XOM < 100].head()\n# BP > 30 AND XOM < 100\nprint prices.loc[(prices.BP > 30) & (prices.XOM < 100)].head()\n# The union of (BP > 30 AND XOM < 100) with TOT being non-nan\nprint prices.loc[((prices.BP > 30) & (prices.XOM < 100)) | (~ prices.TOT.isnull())].head()\n\n# Adding TSLA \ns_1 = get_pricing('TSLA', start_date=start, end_date=end, fields='price')\nprices.loc[:, 'TSLA'] = s_1\n\n# Dropping XOM\nprices = prices.drop('XOM', axis=1)\nprices.head(5)", "b. 
DataFrame Manipulation (again)\n\nConcatenate these DataFrames.\nFill the missing data with 0s", "df_1 = get_pricing(['SPY', 'VXX'], start_date=start, end_date=end, fields='price')\ndf_2 = get_pricing(['MSFT', 'AAPL', 'GOOG'], start_date=start, end_date=end, fields='price')\n# Concatenate the dataframes\ndf_3 = pd.concat([df_1, df_2], axis=1)\ndf_3.head()\n\n# Fill GOOG missing data with 0s\nfilled0_df_3 = df_3.fillna(0)\nfilled0_df_3.head(5)", "Exercise 8 : Time Series Analysis\na. Summary\n\nPrint out a summary of the prices DataFrame from above.\nTake the log returns and print the first 10 values.\nPrint the multiplicative returns of each company.\nNormalize and plot the returns from 2014 to 2015.\nPlot a 60 day window rolling mean of the prices.\nPlot a 60 day window rolling standard deviation of the prices.", "# Summary\nprices.describe()\n\n# Natural Log of the returns and print out the first 10 values\nnp.log(prices).head(10)\n\n# Multiplicative returns\nmult_returns = prices.pct_change()[1:]\nmult_returns.head()\n\n# Normalizing the returns and plotting one year of data\nnorm_returns = (mult_returns - mult_returns.mean(axis=0))/mult_returns.std(axis=0)\nnorm_returns.loc['2014-01-01':'2015-01-01'].plot();\n\n# Rolling mean\nrolling_mean = prices.rolling(window=60).mean()\nrolling_mean.columns = prices.columns\n\n# Rolling standard deviation\nrolling_std = prices.rolling(window=60).std()\nrolling_std.columns = prices.columns\n\n# Plotting \nmean = rolling_mean.plot();\nplt.title(\"Rolling Mean of Prices\")\nplt.xlabel(\"Date\")\nplt.ylabel(\"Price\")\nplt.legend();\n\nstd = rolling_std.plot();\nplt.title(\"Rolling standard deviation of Prices\")\nplt.xlabel(\"Date\")\nplt.ylabel(\"Price\")\nplt.legend();", "Congratulations on completing the Introduction to pandas exercises!\nAs you learn more about writing trading algorithms and the Quantopian platform, be sure to check out the daily Quantopian Contest, in which you can compete for a cash prize every day.\nStart by going through the Writing a Contest Algorithm Tutorial.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
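The pandas answer key above leans on Quantopian's `get_pricing`, which only runs on their research platform. As a hedged, platform-independent aside, here is a minimal sketch of the resampling and forward-fill steps from Exercises 2 and 3 using a synthetic price series; the names `toy_prices`, `business_days`, and `calendar_days` are illustrative assumptions and are not part of the original notebook.

```python
# A minimal, self-contained sketch of the resample / forward-fill exercises above,
# using made-up data instead of Quantopian's get_pricing.
import numpy as np
import pandas as pd

# Synthetic daily "prices" on business days only (weekends/holidays are missing)
business_days = pd.date_range("2016-01-01", "2016-06-30", freq="B")
toy_prices = pd.Series(100 + np.random.randn(len(business_days)).cumsum(),
                       index=business_days)

# Exercise 2b: resample to monthly frequency, taking the median of each month
monthly_median = toy_prices.resample("M").median()

# Exercise 2b: reindex onto every calendar day, forward-filling weekends and holidays
calendar_days = pd.date_range(toy_prices.index[0], toy_prices.index[-1], freq="D")
calendar_prices = toy_prices.reindex(calendar_days, method="ffill")

# Exercise 3: the two ways of handling any remaining NaNs
filled = calendar_prices.fillna(method="ffill")   # propagate the last valid value forward
dropped = calendar_prices.dropna()                # or simply discard missing rows

print(monthly_median.head())
print(calendar_prices.head())
```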
cgpotts/cs224u
nli_02_models.ipynb
apache-2.0
[ "Natural language inference: models", "__author__ = \"Christopher Potts\"\n__version__ = \"CS224u, Stanford, Spring 2022\"", "Contents\n\nOverview\nSet-up\nSparse feature representations\nFeature representations\nModel wrapper for hyperparameter search\nAssessment\n\n\nHypothesis-only baselines\nSentence-encoding models\nDense representations\nSentence-encoding RNNs\nOther sentence-encoding model ideas\n\n\nChained models\nSimple RNN\nSeparate premise and hypothesis RNNs\n\n\nAttention mechanisms\nError analysis with the MultiNLI annotations\n\nOverview\nThis notebook defines and explores a number of models for NLI. The general plot is familiar from our work with the Stanford Sentiment Treebank:\n\nModels based on sparse feature representations\nLinear classifiers and feed-forward neural classifiers using dense feature representations\nRecurrent neural networks (and, briefly, tree-structured neural networks)\n\nThe twist here is that, while NLI is another classification problem, the inputs have important high-level structure: a premise and a hypothesis. This invites exploration of a host of neural designs:\n\n\nIn sentence-encoding models, the premise and hypothesis are analyzed separately, and combined only for the final classification step.\n\n\nIn chained models, the premise is processed first, then the hypotheses, giving a unified representation of the pair.\n\n\nNLI resembles sequence-to-sequence problems like machine translation and language modeling. The central modeling difference is that NLI doesn't produce an output sequence, but rather consumes two sequences to produce a label. Still, there are enough affinities that many ideas have been shared among these areas.\nSet-up\nSee the previous notebook for set-up instructions for this unit.", "from collections import Counter\nfrom itertools import product\nfrom nltk.tokenize.treebank import TreebankWordTokenizer\nimport numpy as np\nimport os\nimport pandas as pd\nfrom datasets import load_dataset\nimport warnings\n\nfrom sklearn.exceptions import ConvergenceWarning\nfrom sklearn.linear_model import LogisticRegression\n\nimport torch\nimport torch.nn as nn\nimport torch.utils.data\nfrom torch_model_base import TorchModelBase\nfrom torch_rnn_classifier import TorchRNNClassifier, TorchRNNModel\nfrom torch_shallow_neural_classifier import TorchShallowNeuralClassifier\n\nimport nli\nimport utils\n\nutils.fix_random_seeds()\n\nGLOVE_HOME = os.path.join('data', 'glove.6B')\n\nDATA_HOME = os.path.join(\"data\", \"nlidata\")\n\nANNOTATIONS_HOME = os.path.join(DATA_HOME, \"multinli_1.0_annotations\")\n\nsnli = load_dataset(\"snli\")", "Sparse feature representations\nWe begin by looking at models based in sparse, hand-built feature representations. 
As in earlier units of the course, we will see that these models are competitive: easy to design, fast to optimize, and highly effective.\nFeature representations\nThe guiding idea for NLI sparse features is that one wants to knit together the premise and hypothesis, so that the model can learn about their relationships rather than just about each part separately.\nWith word_overlap_phi, we just get the set of words that occur in both the premise and hypothesis.", "tokenizer = TreebankWordTokenizer()\n\ndef word_overlap_phi(ex):\n \"\"\"\n Basis for features for the words in both the premise and hypothesis.\n Downcases all words.\n\n Parameters\n ----------\n ex: NLIExample instance\n\n Returns\n -------\n defaultdict\n Maps each word in both `t1` and `t2` to 1.\n\n \"\"\"\n words1 = {w.lower() for w in tokenizer.tokenize(ex.premise)}\n words2 = {w.lower() for w in tokenizer.tokenize(ex.hypothesis)}\n return Counter(words1 & words2)", "With word_cross_product_phi, we count all the pairs $(w_{1}, w_{2})$ where $w_{1}$ is a word from the premise and $w_{2}$ is a word from the hypothesis. This creates a very large feature space. These models are very strong right out of the box, and they can be supplemented with more fine-grained features.", "def word_cross_product_phi(ex):\n \"\"\"\n Basis for cross-product features. Downcases all words.\n\n Parameters\n ----------\n ex: NLIExample instance\n\n Returns\n -------\n defaultdict\n Maps each (w1, w2) in the cross-product of `t1.leaves()` and\n `t2.leaves()` (both downcased) to its count. This is a\n multi-set cross-product (repetitions matter).\n\n \"\"\"\n words1 = [w.lower() for w in tokenizer.tokenize(ex.premise)]\n words2 = [w.lower() for w in tokenizer.tokenize(ex.hypothesis)]\n return Counter([(w1, w2) for w1, w2 in product(words1, words2)])", "Model wrapper for hyperparameter search\nOur experiment framework is basically the same as the one we used for the Stanford Sentiment Treebank. \nFor a full evaluation, we would like to search for the best hyperparameters. However, SNLI is very large, so each evaluation is very expensive. To try to keep this under control, we can set the optimizer to do just a few epochs of training during the search phase. The assumption here is that the best parameters actually emerge as best early in the process. This is by no means guaranteed, but it seems like a good way to balance doing serious hyperparameter search with the costs of doing dozens or even thousands of experiments. 
(See also the discussion of hyperparameter search in the evaluation methods notebook.)", "def fit_softmax_with_hyperparameter_search(X, y):\n \"\"\"\n A MaxEnt model of dataset with hyperparameter cross-validation.\n\n Parameters\n ----------\n X : 2d np.array\n The matrix of features, one example per row.\n\n y : list\n The list of labels for rows in `X`.\n\n Returns\n -------\n sklearn.linear_model.LogisticRegression\n A trained model instance, the best model found.\n\n \"\"\"\n\n mod = LogisticRegression(\n fit_intercept=True,\n max_iter=3, ## A small number of iterations.\n solver='liblinear',\n multi_class='ovr')\n\n param_grid = {\n 'C': [0.4, 0.6, 0.8, 1.0],\n 'penalty': ['l1','l2']}\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n bestmod = utils.fit_classifier_with_hyperparameter_search(\n X, y, mod, param_grid=param_grid, cv=3)\n\n return bestmod", "Assessment", "%%time\nword_cross_product_experiment_xval = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=word_cross_product_phi,\n train_func=fit_softmax_with_hyperparameter_search,\n assess_reader=None,\n verbose=False)\n\noptimized_word_cross_product_model = word_cross_product_experiment_xval['model']\n\n# `word_cross_product_experiment_xval` consumes a lot of memory, and we\n# won't make use of it outside of the model, so we can remove it now.\ndel word_cross_product_experiment_xval\n\ndef fit_optimized_word_cross_product(X, y):\n optimized_word_cross_product_model.max_iter = 1000 # To convergence in this phase!\n optimized_word_cross_product_model.fit(X, y)\n return optimized_word_cross_product_model\n\n%%time\n_ = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=word_cross_product_phi,\n train_func=fit_optimized_word_cross_product,\n assess_reader=nli.NLIReader(snli['validation']))", "As expected word_cross_product_phi is reasonably strong. This model is similar to (a simplified version of) the baseline \"Lexicalized Classifier\" in the original SNLI paper by Bowman et al..\nHypothesis-only baselines\nIn an outstanding project for this course in 2016, Leonid Keselman observed that one can do much better than chance on SNLI by processing only the hypothesis. This relates to observations we made in the word-level homework/bake-off about how certain terms will tend to appear more on the right in entailment pairs than on the left. In 2018, a number of groups independently (re-)discovered this fact and published analyses: Poliak et al. 2018, Tsuchiya 2018, Gururangan et al. 2018. Let's build on this insight by fitting a hypothesis-only model that seems comparable to the cross-product-based model we just looked at:", "def hypothesis_only_unigrams_phi(ex):\n return Counter(tokenizer.tokenize(ex.hypothesis))\n\ndef fit_softmax(X, y):\n mod = LogisticRegression(\n fit_intercept=True,\n solver='liblinear',\n multi_class='ovr')\n mod.fit(X, y)\n return mod\n\n%%time\n_ = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=hypothesis_only_unigrams_phi,\n train_func=fit_softmax,\n assess_reader=nli.NLIReader(snli['validation']))", "Chance performance on SNLI is 0.33 accuracy/F1. The above makes it clear that using chance as a baseline will overstate how much traction a model has actually gotten on the SNLI problem. The hypothesis-only baseline is better for this kind of calibration. \nIdeally, for each model one explores, one would fit a minimally different hypothesis-only model as a baseline. 
To avoid undue complexity, I won't do that here, but we will use the above results to provide informal context, and I will sketch reasonable hypothesis-only baselines for each model we consider.\nSentence-encoding models\nWe turn now to sentence-encoding models. The hallmark of these is that the premise and hypothesis get their own representation in some sense, and then those representations are combined to predict the label. Bowman et al. 2015 explore models of this form as part of introducing SNLI.\nDense representations\nPerhaps the simplest sentence-encoding model sums (or averages, etc.) the word representations for the premise, does the same for the hypothesis, and concatenates those two representations for use as the input to a linear classifier. \nHere's a diagram that is meant to suggest the full space of models of this form:\n<img src=\"fig/nli-softmax.png\" width=800 />\nHere's an implementation of this model where \n\nThe embedding is GloVe.\nThe word representations are summed.\nThe premise and hypothesis vectors are concatenated.\nA softmax classifier is used at the top.", "glove_lookup = utils.glove2dict(\n os.path.join(GLOVE_HOME, 'glove.6B.300d.txt'))\n\ndef glove_leaves_phi(ex, np_func=np.mean):\n \"\"\"\n Represent `ex` as a combination of the vector of their words,\n and concatenate these two combinator vectors.\n\n Parameters\n ----------\n ex : NLIExample\n\n np_func : function\n A numpy matrix operation that can be applied columnwise,\n like `np.mean`, `np.sum`, or `np.prod`. The requirement is that\n the function take `axis=0` as one of its arguments (to ensure\n columnwise combination) and that it return a vector of a\n fixed length, no matter what the size of the tree is.\n\n Returns\n -------\n np.array\n\n \"\"\"\n prem_vecs = _get_tree_vecs(ex.premise, glove_lookup, np_func)\n hyp_vecs = _get_tree_vecs(ex.hypothesis, glove_lookup, np_func)\n return np.concatenate((prem_vecs, hyp_vecs))\n\n\ndef _get_tree_vecs(text, lookup, np_func):\n tokens = tokenizer.tokenize(text) \n allvecs = np.array([lookup[w] for w in tokens if w in lookup])\n if len(allvecs) == 0:\n dim = len(next(iter(lookup.values())))\n feats = np.zeros(dim)\n else:\n feats = np_func(allvecs, axis=0)\n return feats\n\n%%time\n_ = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=glove_leaves_phi,\n train_func=fit_softmax_with_hyperparameter_search,\n assess_reader=nli.NLIReader(snli['validation']),\n vectorize=False) # Ask `experiment` not to featurize; we did it already.", "The hypothesis-only counterpart of this model is very clear: we would just encode ex.hypothesiss with GloVe, leaving ex.premise out entirely.\nAs an elaboration of this approach, it is worth considering the VecAvg model we studied in sst_03_neural_networks.ipynb, which updates the initial vector representations during learning.\nSentence-encoding RNNs\nA more sophisticated sentence-encoding model processes the premise and hypothesis with separate RNNs and uses the concatenation of their final states as the basis for the classification decision at the top:\n<img src=\"fig/nli-rnn-sentencerep.png\" width=800 />\nIt is relatively straightforward to extend torch_rnn_classifier so that it can handle this architecture:\nA sentence-encoding dataset\nWhereas torch_rnn_classifier.TorchRNNDataset creates batches that consist of (sequence, sequence_length, label) triples, the sentence encoding model requires us to double the first two components. 
The most important features of this is collate_fn, which determines what the batches look like:", "class TorchRNNSentenceEncoderDataset(torch.utils.data.Dataset):\n def __init__(self, prem_seqs, hyp_seqs, prem_lengths, hyp_lengths, y=None):\n self.prem_seqs = prem_seqs\n self.hyp_seqs = hyp_seqs\n self.prem_lengths = prem_lengths\n self.hyp_lengths = hyp_lengths\n self.y = y\n assert len(self.prem_seqs) == len(self.hyp_seqs)\n assert len(self.hyp_seqs) == len(self.prem_lengths)\n assert len(self.prem_lengths) == len(self.hyp_lengths)\n if self.y is not None:\n assert len(self.hyp_lengths) == len(self.y)\n\n @staticmethod\n def collate_fn(batch):\n batch = list(zip(*batch))\n X_prem = torch.nn.utils.rnn.pad_sequence(batch[0], batch_first=True)\n X_hyp = torch.nn.utils.rnn.pad_sequence(batch[1], batch_first=True)\n prem_lengths = torch.tensor(batch[2])\n hyp_lengths = torch.tensor(batch[3])\n if len(batch) == 5:\n y = torch.tensor(batch[4])\n return X_prem, X_hyp, prem_lengths, hyp_lengths, y\n else:\n return X_prem, X_hyp, prem_lengths, hyp_lengths\n\n def __len__(self):\n return len(self.prem_seqs)\n\n def __getitem__(self, idx):\n if self.y is None:\n return (self.prem_seqs[idx], self.hyp_seqs[idx],\n self.prem_lengths[idx], self.hyp_lengths[idx])\n else:\n return (self.prem_seqs[idx], self.hyp_seqs[idx],\n self.prem_lengths[idx], self.hyp_lengths[idx],\n self.y[idx])", "A sentence-encoding model\nWith TorchRNNSentenceEncoderClassifierModel, we create a new nn.Module that functions just like the existing torch_rnn_classifier.TorchRNNClassifierModel, except that it takes two RNN instances as arguments and combines their final output states to create the classifier input:", "class TorchRNNSentenceEncoderClassifierModel(nn.Module):\n def __init__(self, prem_rnn, hyp_rnn, output_dim):\n super().__init__()\n self.prem_rnn = prem_rnn\n self.hyp_rnn = hyp_rnn\n self.output_dim = output_dim\n self.bidirectional = self.prem_rnn.bidirectional\n # Doubled because we concatenate the final states of\n # the premise and hypothesis RNNs:\n self.classifier_dim = self.prem_rnn.hidden_dim * 2\n # Bidirectionality doubles it again:\n if self.bidirectional:\n self.classifier_dim *= 2\n self.classifier_layer = nn.Linear(\n self.classifier_dim, self.output_dim)\n\n def forward(self, X_prem, X_hyp, prem_lengths, hyp_lengths):\n # Premise:\n _, prem_state = self.prem_rnn(X_prem, prem_lengths)\n prem_state = self.get_batch_final_states(prem_state)\n # Hypothesis:\n _, hyp_state = self.hyp_rnn(X_hyp, hyp_lengths)\n hyp_state = self.get_batch_final_states(hyp_state)\n # Final combination:\n state = torch.cat((prem_state, hyp_state), dim=1)\n # Classifier layer:\n logits = self.classifier_layer(state)\n return logits\n\n def get_batch_final_states(self, state):\n if self.prem_rnn.rnn.__class__.__name__ == 'LSTM':\n state = state[0].squeeze(0)\n else:\n state = state.squeeze(0)\n if self.bidirectional:\n state = torch.cat((state[0], state[1]), dim=1)\n return state", "A sentence-encoding model interface\nFinally, we subclass TorchRNNClassifier. 
Here, just need to redefine three methods: build_dataset and build_graph to make use of the new components above:", "class TorchRNNSentenceEncoderClassifier(TorchRNNClassifier):\n\n def build_dataset(self, X, y=None):\n X_prem, X_hyp = zip(*X)\n X_prem, prem_lengths = self._prepare_sequences(X_prem)\n X_hyp, hyp_lengths = self._prepare_sequences(X_hyp)\n if y is None:\n return TorchRNNSentenceEncoderDataset(\n X_prem, X_hyp, prem_lengths, hyp_lengths)\n else:\n self.classes_ = sorted(set(y))\n self.n_classes_ = len(self.classes_)\n class2index = dict(zip(self.classes_, range(self.n_classes_)))\n y = [class2index[label] for label in y]\n return TorchRNNSentenceEncoderDataset(\n X_prem, X_hyp, prem_lengths, hyp_lengths, y)\n\n def build_graph(self):\n prem_rnn = TorchRNNModel(\n vocab_size=len(self.vocab),\n embedding=self.embedding,\n use_embedding=self.use_embedding,\n embed_dim=self.embed_dim,\n rnn_cell_class=self.rnn_cell_class,\n hidden_dim=self.hidden_dim,\n bidirectional=self.bidirectional,\n freeze_embedding=self.freeze_embedding)\n\n hyp_rnn = TorchRNNModel(\n vocab_size=len(self.vocab),\n embedding=prem_rnn.embedding, # Same embedding for both RNNs.\n use_embedding=self.use_embedding,\n embed_dim=self.embed_dim,\n rnn_cell_class=self.rnn_cell_class,\n hidden_dim=self.hidden_dim,\n bidirectional=self.bidirectional,\n freeze_embedding=self.freeze_embedding)\n\n model = TorchRNNSentenceEncoderClassifierModel(\n prem_rnn, hyp_rnn, output_dim=self.n_classes_)\n\n self.embed_dim = prem_rnn.embed_dim\n\n return model", "Simple example\nThis toy problem illustrates how this works in detail:", "def simple_example():\n vocab = ['a', 'b', '$UNK']\n\n # Reversals are good, and other pairs are bad:\n train = [\n [(list('ab'), list('ba')), 'good'],\n [(list('aab'), list('baa')), 'good'],\n [(list('abb'), list('bba')), 'good'],\n [(list('aabb'), list('bbaa')), 'good'],\n [(list('ba'), list('ba')), 'bad'],\n [(list('baa'), list('baa')), 'bad'],\n [(list('bba'), list('bab')), 'bad'],\n [(list('bbaa'), list('bbab')), 'bad'],\n [(list('aba'), list('bab')), 'bad']]\n\n test = [\n [(list('baaa'), list('aabb')), 'bad'],\n [(list('abaa'), list('baaa')), 'bad'],\n [(list('bbaa'), list('bbaa')), 'bad'],\n [(list('aaab'), list('baaa')), 'good'],\n [(list('aaabb'), list('bbaaa')), 'good']]\n\n mod = TorchRNNSentenceEncoderClassifier(\n vocab,\n max_iter=1000,\n embed_dim=10,\n bidirectional=True,\n hidden_dim=10)\n\n X, y = zip(*train)\n mod.fit(X, y)\n\n X_test, y_test = zip(*test)\n preds = mod.predict(X_test)\n\n print(\"\\nPredictions:\")\n for ex, pred, gold in zip(X_test, preds, y_test):\n score = \"correct\" if pred == gold else \"incorrect\"\n print(\"{0:>6} {1:>6} - predicted: {2:>4}; actual: {3:>4} - {4}\".format(\n \"\".join(ex[0]), \"\".join(ex[1]), pred, gold, score))\n\nsimple_example()", "Example SNLI run", "def sentence_encoding_rnn_phi(ex):\n \"\"\"Map `ex.premise` and `ex.hypothesis` to a pair of lists of leaf nodes.\"\"\"\n p = tuple(tokenizer.tokenize(ex.premise))\n h = tuple(tokenizer.tokenize(ex.hypothesis)) \n return (p, h)\n\ndef get_sentence_encoding_vocab(X, n_words=None, mincount=1):\n wc = Counter([w for pair in X for ex in pair for w in ex])\n wc = wc.most_common(n_words) if n_words else wc.items()\n if mincount > 1:\n wc = {(w, c) for w, c in wc if c >= mincount}\n vocab = {w for w, c in wc}\n vocab.add(\"$UNK\")\n return sorted(vocab)\n\ndef fit_simple_sentence_encoding_rnn_with_hyperparameter_search(X, y):\n vocab = get_sentence_encoding_vocab(X, mincount=2)\n\n mod = 
TorchRNNSentenceEncoderClassifier(\n vocab,\n hidden_dim=300,\n embed_dim=300,\n bidirectional=True,\n early_stopping=True,\n max_iter=1)\n\n param_grid = {\n 'batch_size': [32, 64, 128, 256],\n 'eta': [0.0001, 0.001, 0.01]}\n\n bestmod = utils.fit_classifier_with_hyperparameter_search(\n X, y, mod, cv=3, param_grid=param_grid)\n\n return bestmod\n\n%%time\nsentence_encoder_rnn_experiment_xval = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=sentence_encoding_rnn_phi,\n train_func=fit_simple_sentence_encoding_rnn_with_hyperparameter_search,\n assess_reader=None,\n vectorize=False)\n\noptimized_sentence_encoding_rnn = sentence_encoder_rnn_experiment_xval['model']\n\n# Remove unneeded experimental data:\ndel sentence_encoder_rnn_experiment_xval\n\ndef fit_optimized_sentence_encoding_rnn(X, y):\n optimized_sentence_encoding_rnn.max_iter = 1000 # Give early_stopping time!\n optimized_sentence_encoding_rnn.fit(X, y)\n return optimized_sentence_encoding_rnn\n\n%%time\n_ = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=sentence_encoding_rnn_phi,\n train_func=fit_optimized_sentence_encoding_rnn,\n assess_reader=nli.NLIReader(snli['validation']),\n vectorize=False)", "This is above our general hypothesis-only baseline ($\\approx$0.65), but it is below the simpler word cross-product model ($\\approx$0.75).\nA natural hypothesis-only baseline for this model be a simple TorchRNNClassifier that processed only the hypothesis.\nOther sentence-encoding model ideas\nGiven that we already explored tree-structured neural networks (TreeNNs), it's natural to consider these as the basis for sentence-encoding NLI models:\n<img src=\"fig/nli-treenn.png\" width=800 />\nAnd this is just the begnning: any model used to represent sentences is presumably a candidate for use in sentence-encoding NLI!\nChained models\nThe final major class of NLI designs we look at are those in which the premise and hypothesis are processed sequentially, as a pair. These don't deliver representations of the premise or hypothesis separately. They bear the strongest resemblance to classic sequence-to-sequence models.\nSimple RNN\nIn the simplest version of this model, we just concatenate the premise and hypothesis. The model itself is identical to the one we used for the Stanford Sentiment Treebank:\n<img src=\"fig/nli-rnn-chained.png\" width=800 />\nTo implement this, we can use TorchRNNClassifier out of the box. We just need to concatenate the leaves of the premise and hypothesis trees:", "def simple_chained_rep_rnn_phi(ex):\n \"\"\"Map `ex.premise` and `ex.hypothesis` to a single list of leaf nodes.\n\n A slight variant might insert a designated boundary symbol between\n the premise leaves and the hypothesis leaves. 
Be sure to add it to\n the vocab in that case, else it will be $UNK.\n \"\"\"\n p = tokenizer.tokenize(ex.premise)\n h = tokenizer.tokenize(ex.hypothesis)\n return p + h\n\ndef fit_simple_chained_rnn_with_hyperparameter_search(X, y):\n vocab = utils.get_vocab(X, mincount=2)\n\n mod = TorchRNNClassifier(\n vocab,\n hidden_dim=300,\n embed_dim=300,\n bidirectional=True,\n early_stopping=True,\n max_iter=1)\n\n param_grid = {\n 'batch_size': [32, 64, 128, 256],\n 'eta': [0.0001, 0.001, 0.01]}\n\n bestmod = utils.fit_classifier_with_hyperparameter_search(\n X, y, mod, cv=3, param_grid=param_grid)\n\n return bestmod\n\n%%time\nchained_rnn_experiment_xval = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=simple_chained_rep_rnn_phi,\n train_func=fit_simple_chained_rnn_with_hyperparameter_search,\n assess_reader=None,\n vectorize=False)\n\noptimized_chained_rnn = chained_rnn_experiment_xval['model']\n\ndel chained_rnn_experiment_xval\n\ndef fit_optimized_simple_chained_rnn(X, y):\n optimized_chained_rnn.max_iter = 1000\n optimized_chained_rnn.fit(X, y)\n return optimized_chained_rnn\n\n%%time\n_ = nli.experiment(\n train_reader=nli.NLIReader(snli['train']),\n phi=simple_chained_rep_rnn_phi,\n train_func=fit_optimized_simple_chained_rnn,\n assess_reader=nli.NLIReader(snli['validation']),\n vectorize=False)", "This model is close to the word cross-product baseline ($\\approx$0.75), but it's not better. Perhaps using a GloVe embedding would suffice to push it into the lead.\nThe hypothesis-only baseline for this model is very simple: we just use the same model, but we process only the hypothesis.\nSeparate premise and hypothesis RNNs\nA natural variation on the above is to give the premise and hypothesis each their own RNN:\n<img src=\"fig/nli-rnn-chained-separate.png\" width=800 />\nThis greatly increases the number of parameters, but it gives the model more chances to learn that appearing in the premise is different from appearing in the hypothesis. One could even push this idea further by giving the premise and hypothesis their own embeddings as well. This could take the form of a simple modification to the sentence-encoder version defined above.\nAttention mechanisms\nMany of the best-performing systems in the SNLI leaderboard use attention mechanisms to help the model learn important associations between words in the premise and words in the hypothesis. I believe Rocktäschel et al. (2015) were the first to explore such models for NLI.\nFor instance, if puppy appears in the premise and dog in the conclusion, then that might be a high-precision indicator that the correct relationship is entailment.\nThis diagram is a high-level schematic for adding attention mechanisms to a chained RNN model for NLI:\n<img src=\"fig/nli-rnn-attention.png\" width=800 />\nSince PyTorch will handle the details of backpropagation, implementing these models is largely reduced to figuring out how to wrangle the states of the model in the desired way.\nError analysis with the MultiNLI annotations\nThe annotations included with the MultiNLI corpus create some powerful yet easy opportunities for error analysis right out of the box. This section illustrates how to make use of them with models you've trained.\nFirst, we train a chained RNN model on a sample of the MultiNLI data, just for illustrative purposes. To save time, we'll carry over the optimal model we used above for SNLI. 
(For a real experiment, of course, we would want to conduct the hyperparameter search again, since MultiNLI is very different from SNLI.)", "mnli = load_dataset(\"multi_nli\")\n\nrnn_multinli_experiment = nli.experiment(\n train_reader=nli.NLIReader(mnli['train']),\n phi=simple_chained_rep_rnn_phi,\n train_func=fit_optimized_simple_chained_rnn,\n assess_reader=None,\n random_state=42,\n vectorize=False)", "The return value of nli.experiment contains the information we need to make predictions on new examples. \nNext, we load in the 'matched' condition annotations ('mismatched' would work as well):", "matched_ann_filename = os.path.join(\n ANNOTATIONS_HOME,\n \"multinli_1.0_matched_annotations.txt\")\n\nmatched_ann = nli.read_annotated_subset(\n matched_ann_filename, mnli['validation_matched'])", "The following function uses rnn_multinli_experiment to make predictions on annotated examples, and harvests some other information that is useful for error analysis:", "def predict_annotated_example(ann, experiment_results):\n model = experiment_results['model']\n phi = experiment_results['phi']\n ex = ann['example']\n feats = phi(ex)\n pred = model.predict([feats])[0]\n data = {cat: True for cat in ann['annotations']}\n data.update({'gold': ex.label, 'prediction': pred, 'correct': ex.label == pred})\n return data", "Finally, this function applies predict_annotated_example to a collection of annotated examples and puts the results in a pd.DataFrame for flexible analysis:", "def get_predictions_for_annotated_data(anns, experiment_results):\n data = []\n for ex_id, ann in anns.items():\n results = predict_annotated_example(ann, experiment_results)\n data.append(results)\n return pd.DataFrame(data)\n\nann_analysis_df = get_predictions_for_annotated_data(\n matched_ann, rnn_multinli_experiment)", "With ann_analysis_df, we can see how the model does on individual annotation categories:", "pd.crosstab(ann_analysis_df['correct'], ann_analysis_df['#MODAL'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
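The attention discussion in the NLI notebook above stays at the schematic level ("figuring out how to wrangle the states of the model"). As a rough illustration only — not the specific model of Rocktäschel et al. (2015) and not code from the course repo — here is a minimal sketch of dot-product attention between the premise's per-timestep RNN outputs and the hypothesis's final state; all tensor names and dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

# Assumed shapes (illustrative only):
#   prem_outputs: [batch, prem_len, hidden]  -- per-timestep premise states
#   hyp_final:    [batch, hidden]            -- final hypothesis state
def dot_product_attention(prem_outputs, hyp_final):
    # One attention score per premise timestep
    scores = torch.bmm(prem_outputs, hyp_final.unsqueeze(2)).squeeze(2)   # [batch, prem_len]
    weights = F.softmax(scores, dim=1)                                    # [batch, prem_len]
    # Weighted sum of premise states (a "context" vector)
    context = torch.bmm(weights.unsqueeze(1), prem_outputs).squeeze(1)    # [batch, hidden]
    # A classifier layer could then consume the concatenation [context; hyp_final]
    return torch.cat((context, hyp_final), dim=1)

# Toy check with random tensors
prem_outputs = torch.randn(4, 7, 50)
hyp_final = torch.randn(4, 50)
print(dot_product_attention(prem_outputs, hyp_final).shape)  # torch.Size([4, 100])
```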
metpy/MetPy
v1.1/_downloads/4211928bfede6cdca0afdb2d06bea2d1/Find_Natural_Neighbors_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Find Natural Neighbors Verification\nFinding natural neighbors in a triangulation\nA triangle is a natural neighbor of a point if that point is within a circumscribed\ncircle (\"circumcircle\") containing the triangle.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import Delaunay\n\nfrom metpy.interpolate.geometry import circumcircle_radius, find_natural_neighbors\n\n# Create test observations, test points, and plot the triangulation and points.\ngx, gy = np.meshgrid(np.arange(0, 20, 4), np.arange(0, 20, 4))\npts = np.vstack([gx.ravel(), gy.ravel()]).T\ntri = Delaunay(pts)\n\nfig, ax = plt.subplots(figsize=(15, 10))\nfor i, inds in enumerate(tri.simplices):\n pts = tri.points[inds]\n x, y = np.vstack((pts, pts[0])).T\n ax.plot(x, y)\n ax.annotate(i, xy=(np.mean(x), np.mean(y)))\n\ntest_points = np.array([[2, 2], [5, 10], [12, 13.4], [12, 8], [20, 20]])\n\nfor i, (x, y) in enumerate(test_points):\n ax.plot(x, y, 'k.', markersize=6)\n ax.annotate('test ' + str(i), xy=(x, y))", "Since finding natural neighbors already calculates circumcenters, return\nthat information for later use.\nThe key of the neighbors dictionary refers to the test point index, and the list of integers\nare the triangles that are natural neighbors of that particular test point.\nSince point 4 is far away from the triangulation, it has no natural neighbors.\nPoint 3 is at the confluence of several triangles so it has many natural neighbors.", "neighbors, circumcenters = find_natural_neighbors(tri, test_points)\nprint(neighbors)", "We can plot all of the triangles as well as the circles representing the circumcircles", "fig, ax = plt.subplots(figsize=(15, 10))\nfor i, inds in enumerate(tri.simplices):\n pts = tri.points[inds]\n x, y = np.vstack((pts, pts[0])).T\n ax.plot(x, y)\n ax.annotate(i, xy=(np.mean(x), np.mean(y)))\n\n# Using circumcenters and calculated circumradii, plot the circumcircles\nfor idx, cc in enumerate(circumcenters):\n ax.plot(cc[0], cc[1], 'k.', markersize=5)\n circ = plt.Circle(cc, circumcircle_radius(*tri.points[tri.simplices[idx]]),\n edgecolor='k', facecolor='none', transform=fig.axes[0].transData)\n ax.add_artist(circ)\n\nax.set_aspect('equal', 'datalim')\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Lstyle1/Deep_learning_projects
seq2seq/sequence_to_sequence_implementation.ipynb
mit
[ "Character Sequence to Sequence\nIn this notebook, we'll build a model that takes in a sequence of letters, and outputs a sorted version of that sequence. We'll do that using what we've learned so far about Sequence to Sequence models. This notebook was updated to work with TensorFlow 1.1 and builds on the work of Dave Currie. Check out Dave's post Text Summarization with Amazon Reviews.\n<img src=\"images/sequence-to-sequence.jpg\"/>\nDataset\nThe dataset lives in the /data/ folder. At the moment, it is made up of the following files:\n * letters_source.txt: The list of input letter sequences. Each sequence is its own line. \n * letters_target.txt: The list of target sequences we'll use in the training process. Each sequence here is a response to the input sequence in letters_source.txt with the same line number.", "import numpy as np\nimport time\n\nimport helper\n\nsource_path = 'data/letters_source.txt'\ntarget_path = 'data/letters_target.txt'\n\nsource_sentences = helper.load_data(source_path)\ntarget_sentences = helper.load_data(target_path)", "Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.", "source_sentences[:50].split('\\n')", "target_sentences contains the entire output sequence file as text delimited by newline symbols. Each line corresponds to the line from source_sentences. target_sentences contains the sorted characters of that line.", "target_sentences[:50].split('\\n')", "Preprocess\nTo do anything useful with it, we'll need to turn each string into a list of characters: \n<img src=\"images/source_and_target_arrays.png\"/>\nThen convert the characters to their int values as declared in our vocabulary:", "def extract_character_vocab(data):\n    special_words = ['<PAD>', '<UNK>', '<GO>', '<EOS>']\n\n    set_words = set([character for line in data.split('\\n') for character in line])\n    int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}\n    vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}\n\n    return int_to_vocab, vocab_to_int\n\n# Build int2letter and letter2int dicts\nsource_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)\ntarget_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)\n\n# Convert characters to ids\nsource_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<UNK>']) for letter in line] for line in source_sentences.split('\\n')]\ntarget_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<UNK>']) for letter in line] + [target_letter_to_int['<EOS>']] for line in target_sentences.split('\\n')] \n\nprint(\"Example source sequence\")\nprint(source_letter_ids[:3])\nprint(\"\\n\")\nprint(\"Example target sequence\")\nprint(target_letter_ids[:3])", "This is the final shape we need them to be in. 
We can now proceed to building the model.\nModel\nCheck the Version of TensorFlow\nThis will check to make sure you have the correct version of TensorFlow", "from distutils.version import LooseVersion\nimport tensorflow as tf\nfrom tensorflow.python.layers.core import Dense\n\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))", "Hyperparameters", "# Number of Epochs\nepochs = 60\n# Batch Size\nbatch_size = 128\n# RNN Size\nrnn_size = 50\n# Number of Layers\nnum_layers = 2\n# Embedding Size\nencoding_embedding_size = 15\ndecoding_embedding_size = 15\n# Learning Rate\nlearning_rate = 0.001", "Input", "def get_model_inputs():\n input_data = tf.placeholder(tf.int32, [None, None], name='input')\n targets = tf.placeholder(tf.int32, [None, None], name='targets')\n lr = tf.placeholder(tf.float32, name='learning_rate')\n\n target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')\n max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')\n source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')\n \n return input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length\n", "Sequence to Sequence Model\nWe can now start defining the functions that will build the seq2seq model. We are building it from the bottom up with the following components:\n2.1 Encoder\n - Embedding\n - Encoder cell\n2.2 Decoder\n 1- Process decoder inputs\n 2- Set up the decoder\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n2.3 Seq2seq model connecting the encoder and decoder\n2.4 Build the training graph hooking up the model with the \n optimizer\n\n2.1 Encoder\nThe first bit of the model we'll build is the encoder. Here, we'll embed the input data, construct our encoder, then pass the embedded data to the encoder.\n\n\nEmbed the input data using tf.contrib.layers.embed_sequence\n<img src=\"images/embed_sequence.png\" />\n\n\nPass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.\n<img src=\"images/encoder.png\" />", "def encoding_layer(input_data, rnn_size, num_layers,\n source_sequence_length, source_vocab_size, \n encoding_embedding_size):\n\n\n # Encoder embedding\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)\n\n # RNN cell\n def make_cell(rnn_size):\n enc_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return enc_cell\n\n enc_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n enc_output, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, sequence_length=source_sequence_length, dtype=tf.float32)\n \n return enc_output, enc_state", "2.2 Decoder\nThe decoder is probably the most involved part of this model. The following steps are needed to create it:\n1- Process decoder inputs\n2- Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\nProcess Decoder Input\nIn the training process, the target sequences will be used in two different places:\n\nUsing them to calculate the loss\nFeeding them to the decoder during training to make the model more robust.\n\nNow we need to address the second point. 
Let's assume our targets look like this in their letter/word form (we're doing this for readibility. At this point in the code, these sequences would be in int form):\n<img src=\"images/targets_1.png\"/>\nWe need to do a simple transformation on the tensor before feeding it to the decoder:\n1- We will feed an item of the sequence to the decoder at each time step. Think about the last timestep -- where the decoder outputs the final word in its output. The input to that step is the item before last from the target sequence. The decoder has no use for the last item in the target sequence in this scenario. So we'll need to remove the last item. \nWe do that using tensorflow's tf.strided_slice() method. We hand it the tensor, and the index of where to start and where to end the cutting.\n<img src=\"images/strided_slice_1.png\"/>\n2- The first item in each sequence we feed to the decoder has to be GO symbol. So We'll add that to the beginning.\n<img src=\"images/targets_add_go.png\"/>\nNow the tensor is ready to be fed to the decoder. It looks like this (if we convert from ints to letters/symbols):\n<img src=\"images/targets_after_processing_1.png\"/>", "# Process the input we'll feed to the decoder\ndef process_decoder_input(target_data, vocab_to_int, batch_size):\n '''Remove the last word id from each batch and concat the <GO> to the begining of each batch'''\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n dec_input = tf.concat([tf.fill([batch_size, 1], vocab_to_int['<GO>']), ending], 1)\n\n return dec_input", "Set up the decoder components\n - Embedding\n - Decoder cell\n - Dense output layer\n - Training decoder\n - Inference decoder\n\n1- Embedding\nNow that we have prepared the inputs to the training decoder, we need to embed them so they can be ready to be passed to the decoder. \nWe'll create an embedding matrix like the following then have tf.nn.embedding_lookup convert our input to its embedded equivalent:\n<img src=\"images/embeddings.png\" />\n2- Decoder Cell\nThen we declare our decoder cell. Just like the encoder, we'll use an tf.contrib.rnn.LSTMCell here as well.\nWe need to declare a decoder for the training process, and a decoder for the inference/prediction process. These two decoders will share their parameters (so that all the weights and biases that are set during the training phase can be used when we deploy the model).\nFirst, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM.\n3- Dense output layer\nBefore we move to declaring our decoders, we'll need to create the output layer, which will be a tensorflow.python.layers.core.Dense layer that translates the outputs of the decoder to logits that tell us which element of the decoder vocabulary the decoder is choosing to output at each time step.\n4- Training decoder\nEssentially, we'll be creating two decoders which share their parameters. One for training and one for inference. The two are similar in that both created using tf.contrib.seq2seq.BasicDecoder and tf.contrib.seq2seq.dynamic_decode. They differ, however, in that we feed the the target sequences as inputs to the training decoder at each time step to make it more robust.\nWe can think of the training decoder as looking like this (except that it works with sequences in batches):\n<img src=\"images/sequence-to-sequence-training-decoder.png\"/>\nThe training decoder does not feed the output of each time step to the next. 
Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).\n5- Inference decoder\nThe inference decoder is the one we'll use when we deploy our model to the wild.\n<img src=\"images/sequence-to-sequence-inference-decoder.png\"/>\nWe'll hand our encoder hidden state to both the training and inference decoders and have it process its output. TensorFlow handles most of the logic for us. We just have to use the appropriate methods from tf.contrib.seq2seq and supply them with the appropriate inputs.", "def decoding_layer(target_letter_to_int, decoding_embedding_size, num_layers, rnn_size,\n target_sequence_length, max_target_sequence_length, enc_state, dec_input):\n # 1. Decoder Embedding\n target_vocab_size = len(target_letter_to_int)\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n\n # 2. Construct the decoder cell\n def make_cell(rnn_size):\n dec_cell = tf.contrib.rnn.LSTMCell(rnn_size,\n initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))\n return dec_cell\n\n dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])\n \n # 3. Dense layer to translate the decoder's output at each time \n # step into a choice from the target vocabulary\n output_layer = Dense(target_vocab_size,\n kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))\n\n\n # 4. Set up a training decoder and an inference decoder\n # Training Decoder\n with tf.variable_scope(\"decode\"):\n\n # Helper for the training process. Used by BasicDecoder to read inputs.\n training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,\n sequence_length=target_sequence_length,\n time_major=False)\n \n \n # Basic decoder\n training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n training_helper,\n enc_state,\n output_layer) \n \n # Perform dynamic decoding using the decoder\n training_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(training_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n # 5. Inference Decoder\n # Reuses the same parameters trained by the training process\n with tf.variable_scope(\"decode\", reuse=True):\n start_tokens = tf.tile(tf.constant([target_letter_to_int['<GO>']], dtype=tf.int32), [batch_size], name='start_tokens')\n\n # Helper for the inference process.\n inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings,\n start_tokens,\n target_letter_to_int['<EOS>'])\n\n # Basic decoder\n inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,\n inference_helper,\n enc_state,\n output_layer)\n \n # Perform dynamic decoding using the decoder\n inference_decoder_output, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,\n impute_finished=True,\n maximum_iterations=max_target_sequence_length)\n \n\n \n return training_decoder_output, inference_decoder_output", "2.3 Seq2seq model\nLet's now go a step above, and hook up the encoder and decoder using the methods we just declared", "\ndef seq2seq_model(input_data, targets, lr, target_sequence_length, \n max_target_sequence_length, source_sequence_length,\n source_vocab_size, target_vocab_size,\n enc_embedding_size, dec_embedding_size, \n rnn_size, num_layers):\n \n # Pass the input data through the encoder. 
We'll ignore the encoder output, but use the state\n _, enc_state = encoding_layer(input_data, \n rnn_size, \n num_layers, \n source_sequence_length,\n source_vocab_size, \n encoding_embedding_size)\n \n \n # Prepare the target sequences we'll feed to the decoder in training mode\n dec_input = process_decoder_input(targets, target_letter_to_int, batch_size)\n \n # Pass encoder state and decoder inputs to the decoders\n training_decoder_output, inference_decoder_output = decoding_layer(target_letter_to_int, \n decoding_embedding_size, \n num_layers, \n rnn_size,\n target_sequence_length,\n max_target_sequence_length,\n enc_state, \n dec_input) \n \n return training_decoder_output, inference_decoder_output\n \n\n", "Model outputs training_decoder_output and inference_decoder_output both contain a 'rnn_output' logits tensor that looks like this:\n<img src=\"images/logits.png\"/>\nThe logits we get from the training tensor we'll pass to tf.contrib.seq2seq.sequence_loss() to calculate the loss and ultimately the gradient.", "# Build the graph\ntrain_graph = tf.Graph()\n# Set the graph to default to ensure that it is ready for training\nwith train_graph.as_default():\n \n # Load the model inputs \n input_data, targets, lr, target_sequence_length, max_target_sequence_length, source_sequence_length = get_model_inputs()\n \n # Create the training and inference logits\n training_decoder_output, inference_decoder_output = seq2seq_model(input_data, \n targets, \n lr, \n target_sequence_length, \n max_target_sequence_length, \n source_sequence_length,\n len(source_letter_to_int),\n len(target_letter_to_int),\n encoding_embedding_size, \n decoding_embedding_size, \n rnn_size, \n num_layers) \n \n # Create tensors for the training logits and inference logits\n training_logits = tf.identity(training_decoder_output.rnn_output, 'logits')\n inference_logits = tf.identity(inference_decoder_output.sample_id, name='predictions')\n \n # Create the weights for sequence_loss\n masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')\n\n with tf.name_scope(\"optimization\"):\n \n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n training_logits,\n targets,\n masks)\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -5., 5.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)\n", "Get Batches\nThere's little processing involved when we retreive the batches. 
This is a simple example assuming batch_size = 2\nSource sequences (it's actually in int form, we're showing the characters for clarity):\n<img src=\"images/source_batch.png\" />\nTarget sequences (also in int, but showing letters for clarity):\n<img src=\"images/target_batch.png\" />", "def pad_sentence_batch(sentence_batch, pad_int):\n \"\"\"Pad sentences with <PAD> so that each sentence of a batch has the same length\"\"\"\n max_sentence = max([len(sentence) for sentence in sentence_batch])\n return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]\n\ndef get_batches(targets, sources, batch_size, source_pad_int, target_pad_int):\n \"\"\"Batch targets, sources, and the lengths of their sentences together\"\"\"\n for batch_i in range(0, len(sources)//batch_size):\n start_i = batch_i * batch_size\n sources_batch = sources[start_i:start_i + batch_size]\n targets_batch = targets[start_i:start_i + batch_size]\n pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))\n pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))\n \n # Need the lengths for the _lengths parameters\n pad_targets_lengths = []\n for target in pad_targets_batch:\n pad_targets_lengths.append(len(target))\n \n pad_source_lengths = []\n for source in pad_sources_batch:\n pad_source_lengths.append(len(source))\n \n yield pad_targets_batch, pad_sources_batch, pad_targets_lengths, pad_source_lengths", "Train\nWe're now ready to train our model. If you run into OOM (out of memory) issues during training, try to decrease the batch_size.", "# Split data to training and validation sets\ntrain_source = source_letter_ids[batch_size:]\ntrain_target = target_letter_ids[batch_size:]\nvalid_source = source_letter_ids[:batch_size]\nvalid_target = target_letter_ids[:batch_size]\n(valid_targets_batch, valid_sources_batch, valid_targets_lengths, valid_sources_lengths) = next(get_batches(valid_target, valid_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>']))\n\ndisplay_step = 20 # Check training loss after every 20 batches\n\ncheckpoint = \"best_model.ckpt\" \nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n \n for epoch_i in range(1, epochs+1):\n for batch_i, (targets_batch, sources_batch, targets_lengths, sources_lengths) in enumerate(\n get_batches(train_target, train_source, batch_size,\n source_letter_to_int['<PAD>'],\n target_letter_to_int['<PAD>'])):\n \n # Training step\n _, loss = sess.run(\n [train_op, cost],\n {input_data: sources_batch,\n targets: targets_batch,\n lr: learning_rate,\n target_sequence_length: targets_lengths,\n source_sequence_length: sources_lengths})\n\n # Debug message updating us on the status of the training\n if batch_i % display_step == 0 and batch_i > 0:\n \n # Calculate validation cost\n validation_loss = sess.run(\n [cost],\n {input_data: valid_sources_batch,\n targets: valid_targets_batch,\n lr: learning_rate,\n target_sequence_length: valid_targets_lengths,\n source_sequence_length: valid_sources_lengths})\n \n print('Epoch {:>3}/{} Batch {:>4}/{} - Loss: {:>6.3f} - Validation loss: {:>6.3f}'\n .format(epoch_i,\n epochs, \n batch_i, \n len(train_source) // batch_size, \n loss, \n validation_loss[0]))\n\n \n \n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, checkpoint)\n print('Model Trained and Saved')", "Prediction", "def source_to_seq(text):\n '''Prepare the text for the model'''\n sequence_length = 7\n return 
[source_letter_to_int.get(word, source_letter_to_int['<UNK>']) for word in text]+ [source_letter_to_int['<PAD>']]*(sequence_length-len(text))\n\n\n\n\ninput_sentence = 'hello'\ntext = source_to_seq(input_sentence)\n\ncheckpoint = \"./best_model.ckpt\"\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(checkpoint + '.meta')\n loader.restore(sess, checkpoint)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('predictions:0')\n source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')\n target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')\n \n #Multiply by batch_size to match the model's input parameters\n answer_logits = sess.run(logits, {input_data: [text]*batch_size, \n target_sequence_length: [len(text)]*batch_size, \n source_sequence_length: [len(text)]*batch_size})[0] \n\n\npad = source_letter_to_int[\"<PAD>\"] \n\nprint('Original Text:', input_sentence)\n\nprint('\\nSource')\nprint(' Word Ids: {}'.format([i for i in text]))\nprint(' Input Words: {}'.format(\" \".join([source_int_to_letter[i] for i in text])))\n\nprint('\\nTarget')\nprint(' Word Ids: {}'.format([i for i in answer_logits if i != pad]))\nprint(' Response Words: {}'.format(\" \".join([target_int_to_letter[i] for i in answer_logits if i != pad])))" ]
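To make the padding behaviour described in the "Get Batches" cells above concrete, here is a minimal, self-contained sketch (not part of the original notebook) that re-states `pad_sentence_batch` and runs it on two made-up sequences of character ids, with the `<PAD>` id assumed to be 0:

```python
import numpy as np

def pad_sentence_batch(sentence_batch, pad_int):
    """Pad every sentence with pad_int so the whole batch shares the longest length."""
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

# Two toy "sentences" of character ids (hypothetical values, not from the dataset).
batch = [[5, 8, 2], [7, 3, 9, 4, 1]]
padded = np.array(pad_sentence_batch(batch, pad_int=0))
print(padded)
# [[5 8 2 0 0]
#  [7 3 9 4 1]]
print([len(s) for s in padded])  # the padded lengths handed to the *_sequence_length placeholders
```

Every sequence in a batch ends up the same length, and the length lists are what the `target_sequence_length` / `source_sequence_length` placeholders receive.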
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
twosigma/beakerx
doc/python/ChartingAPI.ipynb
apache-2.0
[ "Python API to BeakerX Interactive Plotting\nYou can access Beaker's native interactive plotting library from Python.\nPlot with simple properties\nPython plots has syntax very similar to Groovy plots. Property names are the same.", "from beakerx import *\nimport pandas as pd\n\ntableRows = pd.read_csv('../resources/data/interest-rates.csv')\n\nPlot(title=\"Title\",\n xLabel=\"Horizontal\",\n yLabel=\"Vertical\",\n initWidth=500,\n initHeight=200)", "Plot items\nLines, Bars, Points and Right yAxis", "x = [1, 4, 6, 8, 10]\ny = [3, 6, 4, 5, 9]\n\npp = Plot(title='Bars, Lines, Points and 2nd yAxis', \n xLabel=\"xLabel\", \n yLabel=\"yLabel\", \n legendLayout=LegendLayout.HORIZONTAL,\n legendPosition=LegendPosition.RIGHT,\n omitCheckboxes=True)\n\npp.add(YAxis(label=\"Right yAxis\"))\npp.add(Bars(displayName=\"Bar\", \n x=[1,3,5,7,10], \n y=[100, 120,90,100,80], \n width=1))\npp.add(Line(displayName=\"Line\", \n x=x, \n y=y, \n width=6, \n yAxis=\"Right yAxis\"))\npp.add(Points(x=x, \n y=y, \n size=10, \n shape=ShapeType.DIAMOND,\n yAxis=\"Right yAxis\"))\n\nplot = Plot(title= \"Setting line properties\")\nys = [0, 1, 6, 5, 2, 8]\nys2 = [0, 2, 7, 6, 3, 8]\nplot.add(Line(y= ys, width= 10, color= Color.red))\nplot.add(Line(y= ys, width= 3, color= Color.yellow))\nplot.add(Line(y= ys, width= 4, color= Color(33, 87, 141), style= StrokeType.DASH, interpolation= 0))\nplot.add(Line(y= ys2, width= 2, color= Color(212, 57, 59), style= StrokeType.DOT))\nplot.add(Line(y= [5, 0], x= [0, 5], style= StrokeType.LONGDASH))\nplot.add(Line(y= [4, 0], x= [0, 5], style= StrokeType.DASHDOT))\n\nplot = Plot(title= \"Changing Point Size, Color, Shape\")\ny1 = [6, 7, 12, 11, 8, 14]\ny2 = [4, 5, 10, 9, 6, 12]\ny3 = [2, 3, 8, 7, 4, 10]\ny4 = [0, 1, 6, 5, 2, 8]\nplot.add(Points(y= y1))\nplot.add(Points(y= y2, shape= ShapeType.CIRCLE))\nplot.add(Points(y= y3, size= 8.0, shape= ShapeType.DIAMOND))\nplot.add(Points(y= y4, size= 12.0, color= Color.orange, outlineColor= Color.red))\n\nplot = Plot(title= \"Changing point properties with list\")\ncs = [Color.black, Color.red, Color.orange, Color.green, Color.blue, Color.pink]\nss = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0]\nfs = [False, False, False, True, False, False]\nplot.add(Points(y= [5] * 6, size= 12.0, color= cs))\nplot.add(Points(y= [4] * 6, size= 12.0, color= Color.gray, outlineColor= cs))\nplot.add(Points(y= [3] * 6, size= ss, color= Color.red))\nplot.add(Points(y= [2] * 6, size= 12.0, color= Color.black, fill= fs, outlineColor= Color.black))\n\nplot = Plot()\ny1 = [1.5, 1, 6, 5, 2, 8]\ncs = [Color.black, Color.red, Color.gray, Color.green, Color.blue, Color.pink]\nss = [StrokeType.SOLID, StrokeType.SOLID, StrokeType.DASH, StrokeType.DOT, StrokeType.DASHDOT, StrokeType.LONGDASH]\nplot.add(Stems(y= y1, color= cs, style= ss, width= 5))\n\nplot = Plot(title= \"Setting the base of Stems\")\nys = [3, 5, 2, 3, 7]\ny2s = [2.5, -1.0, 3.5, 2.0, 3.0]\nplot.add(Stems(y= ys, width= 2, base= y2s))\nplot.add(Points(y= ys))\n\nplot = Plot(title= \"Bars\")\ncs = [Color(255, 0, 0, 128)] * 5 # transparent bars\ncs[3] = Color.red # set color of a single bar, solid colored bar\nplot.add(Bars(x= [1, 2, 3, 4, 5], y= [3, 5, 2, 3, 7], color= cs, outlineColor= Color.black, width= 0.3))", "Lines, Points with Pandas", "plot = Plot(title= \"Pandas line\")\nplot.add(Line(y= tableRows.y1, width= 2, color= Color(216, 154, 54)))\nplot.add(Line(y= tableRows.y10, width= 2, color= Color.lightGray))\n\nplot\n\nplot = Plot(title= \"Pandas Series\")\nplot.add(Line(y= pd.Series([0, 6, 1, 5, 2, 4, 3]), 
width=2))\n\nplot = Plot(title= \"Bars\")\ncs = [Color(255, 0, 0, 128)] * 7 # transparent bars\ncs[3] = Color.red # set color of a single bar, solid colored bar\nplot.add(Bars(pd.Series([0, 6, 1, 5, 2, 4, 3]), color= cs, outlineColor= Color.black, width= 0.3))", "Areas, Stems and Crosshair", "ch = Crosshair(color=Color.black, width=2, style=StrokeType.DOT)\nplot = Plot(crosshair=ch)\ny1 = [4, 8, 16, 20, 32]\nbase = [2, 4, 8, 10, 16]\ncs = [Color.black, Color.orange, Color.gray, Color.yellow, Color.pink]\nss = [StrokeType.SOLID, \n StrokeType.SOLID, \n StrokeType.DASH, \n StrokeType.DOT, \n StrokeType.DASHDOT, \n StrokeType.LONGDASH]\nplot.add(Area(y=y1, base=base, color=Color(255, 0, 0, 50)))\nplot.add(Stems(y=y1, base=base, color=cs, style=ss, width=5))\n\nplot = Plot()\ny = [3, 5, 2, 3]\nx0 = [0, 1, 2, 3]\nx1 = [3, 4, 5, 8]\nplot.add(Area(x= x0, y= y))\nplot.add(Area(x= x1, y= y, color= Color(128, 128, 128, 50), interpolation= 0))\n\np = Plot()\np.add(Line(y= [3, 6, 12, 24], displayName= \"Median\"))\np.add(Area(y= [4, 8, 16, 32], base= [2, 4, 8, 16],\n color= Color(255, 0, 0, 50), displayName= \"Q1 to Q3\"))\n\nch = Crosshair(color= Color(255, 128, 5), width= 2, style= StrokeType.DOT)\npp = Plot(crosshair= ch, omitCheckboxes= True,\n legendLayout= LegendLayout.HORIZONTAL, legendPosition= LegendPosition.TOP)\nx = [1, 4, 6, 8, 10]\ny = [3, 6, 4, 5, 9]\npp.add(Line(displayName= \"Line\", x= x, y= y, width= 3))\npp.add(Bars(displayName= \"Bar\", x= [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], y= [2, 2, 4, 4, 2, 2, 0, 2, 2, 4], width= 0.5))\npp.add(Points(x= x, y= y, size= 10))", "Constant Lines, Constant Bands", "p = Plot ()\np.add(Line(y=[-1, 1]))\np.add(ConstantLine(x=0.65, style=StrokeType.DOT, color=Color.blue))\np.add(ConstantLine(y=0.1, style=StrokeType.DASHDOT, color=Color.blue))\np.add(ConstantLine(x=0.3, y=0.4, color=Color.gray, width=5, showLabel=True))\n\nPlot().add(Line(y=[-3, 1, 3, 4, 5])).add(ConstantBand(x=[1, 2], y=[1, 3]))\n\np = Plot() \np.add(Line(x= [-3, 1, 2, 4, 5], y= [4, 2, 6, 1, 5]))\np.add(ConstantBand(x= ['-Infinity', 1], color= Color(128, 128, 128, 50)))\np.add(ConstantBand(x= [1, 2]))\np.add(ConstantBand(x= [4, 'Infinity']))\n\nfrom decimal import Decimal\npos_inf = Decimal('Infinity')\nneg_inf = Decimal('-Infinity')\nprint (pos_inf)\nprint (neg_inf)\n\n\nfrom beakerx.plot import Text as BeakerxText\nplot = Plot()\nxs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\nys = [8.6, 6.1, 7.4, 2.5, 0.4, 0.0, 0.5, 1.7, 8.4, 1]\ndef label(i):\n if ys[i] > ys[i+1] and ys[i] > ys[i-1]:\n return \"max\"\n if ys[i] < ys[i+1] and ys[i] < ys[i-1]:\n return \"min\"\n if ys[i] > ys[i-1]:\n return \"rising\"\n if ys[i] < ys[i-1]:\n return \"falling\"\n return \"\"\n\nfor i in xs:\n i = i - 1\n if i > 0 and i < len(xs)-1:\n plot.add(BeakerxText(x= xs[i], y= ys[i], text= label(i), pointerAngle= -i/3.0))\n\nplot.add(Line(x= xs, y= ys))\nplot.add(Points(x= xs, y= ys))\n\nplot = Plot(title= \"Setting 2nd Axis bounds\")\nys = [0, 2, 4, 6, 15, 10]\nys2 = [-40, 50, 6, 4, 2, 0]\nys3 = [3, 6, 3, 6, 70, 6]\nplot.add(YAxis(label=\"Spread\"))\nplot.add(Line(y= ys))\nplot.add(Line(y= ys2, yAxis=\"Spread\"))\nplot.setXBound([-2, 10])\n#plot.setYBound(1, 5)\nplot.getYAxes()[0].setBound(1,5)\nplot.getYAxes()[1].setBound(3,6)\n\n\nplot\n\nplot = Plot(title= \"Setting 2nd Axis bounds\")\nys = [0, 2, 4, 6, 15, 10]\nys2 = [-40, 50, 6, 4, 2, 0]\nys3 = [3, 6, 3, 6, 70, 6]\nplot.add(YAxis(label=\"Spread\"))\nplot.add(Line(y= ys))\nplot.add(Line(y= ys2, yAxis=\"Spread\"))\nplot.setXBound([-2, 10])\nplot.setYBound(1, 5)\n\nplot", 
"TimePlot", "import time\n\nmillis = current_milli_time()\n\nhour = round(1000 * 60 * 60)\nxs = []\nys = []\nfor i in range(11):\n xs.append(millis + hour * i)\n ys.append(i)\n\nplot = TimePlot(timeZone=\"America/New_York\")\n# list of milliseconds\nplot.add(Points(x=xs, y=ys, size=10, displayName=\"milliseconds\"))\n\nplot = TimePlot()\nplot.add(Line(x=tableRows['time'], y=tableRows['m3']))", "numpy datatime64", "y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [np.datetime64('2015-02-01'), \n np.datetime64('2015-02-02'), \n np.datetime64('2015-02-03'),\n np.datetime64('2015-02-04'),\n np.datetime64('2015-02-05'),\n np.datetime64('2015-02-06')]\nplot = TimePlot()\n\nplot.add(Line(x=dates, y=y))", "Timestamp", "y = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = pd.Series(['2015-02-01',\n '2015-02-02',\n '2015-02-03',\n '2015-02-04',\n '2015-02-05',\n '2015-02-06']\n , dtype='datetime64[ns]')\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))\n", "Datetime and date", "import datetime\n\ny = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [datetime.date(2015, 2, 1),\n datetime.date(2015, 2, 2),\n datetime.date(2015, 2, 3),\n datetime.date(2015, 2, 4),\n datetime.date(2015, 2, 5),\n datetime.date(2015, 2, 6)]\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))\n\n\nimport datetime\n\ny = pd.Series([7.5, 7.9, 7, 8.7, 8, 8.5])\ndates = [datetime.datetime(2015, 2, 1),\n datetime.datetime(2015, 2, 2),\n datetime.datetime(2015, 2, 3),\n datetime.datetime(2015, 2, 4),\n datetime.datetime(2015, 2, 5),\n datetime.datetime(2015, 2, 6)]\nplot = TimePlot()\nplot.add(Line(x=dates, y=y))", "NanoPlot", "millis = current_milli_time()\nnanos = millis * 1000 * 1000\nxs = []\nys = []\nfor i in range(11):\n xs.append(nanos + 7 * i)\n ys.append(i)\n\nnanoplot = NanoPlot()\nnanoplot.add(Points(x=xs, y=ys))", "Stacking", "y1 = [1,5,3,2,3]\ny2 = [7,2,4,1,3]\np = Plot(title='Plot with XYStacker', initHeight=200)\na1 = Area(y=y1, displayName='y1')\na2 = Area(y=y2, displayName='y2')\nstacker = XYStacker()\np.add(stacker.stack([a1, a2]))", "SimpleTime Plot", "SimpleTimePlot(tableRows, [\"y1\", \"y10\"], # column names\n timeColumn=\"time\", # time is default value for a timeColumn\n yLabel=\"Price\", \n displayNames=[\"1 Year\", \"10 Year\"],\n colors = [[216, 154, 54], Color.lightGray],\n displayLines=True, # no lines (true by default)\n displayPoints=False) # show points (false by default))\n\n#time column base on DataFrame index \ntableRows.index = tableRows['time']\n\nSimpleTimePlot(tableRows, ['m3'])\n\nrng = pd.date_range('1/1/2011', periods=72, freq='H')\nts = pd.Series(np.random.randn(len(rng)), index=rng)\ndf = pd.DataFrame(ts, columns=['y'])\nSimpleTimePlot(df, ['y'])\n", "Second Y Axis\nThe plot can have two y-axes. 
Just add a YAxis to the plot object, and specify its label.\nThen for data that should be scaled according to this second axis,\nspecify the property yAxis with a value that coincides with the label given.\nYou can use upperMargin and lowerMargin to restrict the range of the data leaving more white, perhaps for the data on the other axis.", "p = TimePlot(xLabel= \"Time\", yLabel= \"Interest Rates\")\np.add(YAxis(label= \"Spread\", upperMargin= 4))\np.add(Area(x= tableRows.time, y= tableRows.spread, displayName= \"Spread\",\n yAxis= \"Spread\", color= Color(180, 50, 50, 128)))\np.add(Line(x= tableRows.time, y= tableRows.m3, displayName= \"3 Month\"))\np.add(Line(x= tableRows.time, y= tableRows.y10, displayName= \"10 Year\"))", "Combined Plot", "import math\npoints = 100\nlogBase = 10\nexpys = []\nxs = []\nfor i in range(0, points):\n xs.append(i / 15.0)\n expys.append(math.exp(xs[i]))\n\n\ncplot = CombinedPlot(xLabel= \"Linear\")\nlogYPlot = Plot(title= \"Linear x, Log y\", yLabel= \"Log\", logY= True, yLogBase= logBase)\nlogYPlot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nlogYPlot.add(Line(x= xs, y= xs, displayName= \"g(x) = x\"))\ncplot.add(logYPlot, 4)\n\nlinearYPlot = Plot(title= \"Linear x, Linear y\", yLabel= \"Linear\")\nlinearYPlot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nlinearYPlot.add(Line(x= xs, y= xs, displayName= \"g(x) = x\"))\ncplot.add(linearYPlot,4)\n\ncplot\n\n\nplot = Plot(title= \"Log x, Log y\", xLabel= \"Log\", yLabel= \"Log\",\n logX= True, xLogBase= logBase, logY= True, yLogBase= logBase)\n\nplot.add(Line(x= xs, y= expys, displayName= \"f(x) = exp(x)\"))\nplot.add(Line(x= xs, y= xs, displayName= \"f(x) = x\"))\n\nplot" ]
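A stripped-down version of the second-y-axis pattern described above, using only constructs that already appear in this notebook (`Plot`, `YAxis`, and `Line` with `yAxis=`); the data and axis names here are invented for illustration:

```python
from beakerx import *

p = Plot(title="Two scales on one chart", xLabel="x", yLabel="Counts")
p.add(YAxis(label="Ratio", upperMargin=1))  # the label must match the yAxis= value used below
p.add(Line(y=[10, 40, 90, 160, 250], displayName="counts"))
p.add(Line(y=[0.1, 0.4, 0.3, 0.8, 0.5], displayName="ratio", yAxis="Ratio"))
p
```

Only the series tagged with `yAxis="Ratio"` is scaled against the right-hand axis; everything else stays on the default left axis.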
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jackbrucesimpson/Machine-Learning-Workshop
images_features.ipynb
mit
[ "Image and feature analysis\nLet's start by loading the libraries we'll need:", "import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n%matplotlib inline", "Extract Images\nIncluded in these workshop materials is a compressed file (\"data.tar.gz\") containg the images that we'll be classifying today. Once you extract this file, you should have a directory called \"data\" which contains the following directories:\nDirectory | Contents\n:-------------------------:|:-------------------------:\nI | Contains rectangle tag images\nO | Contains circle tag images\nQ | Contains blank tag images\nFeel free to have a look through these directories, and we'll show you how to load these images into Python using OpenCV next.\nReading Images\nWe're now going to be using OpenCV's \"imread\" command to load one of the images from each type of tag into Python and then use Matplotlib to plot the images:", "rect_image = cv2.imread('data/I/27.png', cv2.IMREAD_GRAYSCALE)\ncircle_image = cv2.imread('data/O/11527.png', cv2.IMREAD_GRAYSCALE)\nqueen_image = cv2.imread('data/Q/18027.png', cv2.IMREAD_GRAYSCALE)\n\nplt.figure(figsize = (10, 7))\nplt.title('Rectangle Tag')\nplt.axis('off')\nplt.imshow(rect_image, cmap = cm.Greys_r)\n\nplt.figure(figsize = (10, 7))\nplt.title('Circle Tag')\nplt.axis('off')\nplt.imshow(circle_image, cmap = cm.Greys_r)\n\nplt.figure(figsize = (10, 7))\nplt.title('Queen Tag')\nplt.axis('off')\nplt.imshow(queen_image, cmap = cm.Greys_r)", "Image Properties\nOne of the really useful things about using OpenCV to manipulate images in Python is that all images are treated as NumPy matrices. This means we can use NumPy's functions to manipulate and understand the data we're working with. To demonstrate this, we'll use use NumPy's \"shape\" and \"dtype\" commands to take a closer look at the rectangular tag image we just read in:", "print (rect_image.shape)\nprint (rect_image.dtype)", "This tells us that this image is 24x24 pixels in size, and that the datatype of the values it stores are unsigned 8 bit integers. While the explanation of this datatype isn't especially relevant to the lesson, the main point is that it is extremely important to double check the size and structure of your data. Let's do the same thing for the circular tag image too:", "print (circle_image.shape)\nprint (circle_image.dtype)", "This holds the same values, which is good. When you're working with your own datasets in the future, it would be highly beneficial to write your own little program to check the values and structure of your data to ensure that subtle bugs don't creep in to your analysis.\nCropping\nOne of the things you've probably noticed is that there's a dark area around the edges of the tags. As we're only interested in the pattern in the middle of the tags, we should try to crop this out. 
Have a little play with the code below and experiment with different pixel slices.", "cropped_rect_image = rect_image[4:20,4:20]\ncropped_circle_image = circle_image[4:20,4:20]\ncropped_queen_image = queen_image[4:20,4:20]\n\nplt.figure(figsize = (10, 7))\nplt.title('Rectangle Tag ' + str(cropped_rect_image.shape))\nplt.axis('off')\nplt.imshow(cropped_rect_image, cmap = cm.Greys_r)\n\nplt.figure(figsize = (10, 7))\nplt.title('Circle Tag ' + str(cropped_circle_image.shape))\nplt.axis('off')\nplt.imshow(cropped_circle_image, cmap = cm.Greys_r)\n\nplt.figure(figsize = (10, 7))\nplt.title('Queen Tag ' + str(cropped_queen_image.shape))\nplt.axis('off')\nplt.imshow(cropped_queen_image, cmap = cm.Greys_r)", "Feature Engineering\nWhen people think of machine learning, the first thing that comes to mind tends to be the fancy algorithms that will train the computer to solve your problem. Of course this is important, but the reality of the matter is that the way you process the data you'll eventually feed into the machine learning algorithm is often the thing you'll spend the most time doing and will have the biggest effect on the accuracy of your results.\nNow, when most people think of features in data, they think that this is what it is:", "plt.figure(figsize = (10, 7))\nplt.title('Rectangle Tag')\nplt.axis('off')\nplt.imshow(rect_image, cmap = cm.Greys_r)", "In fact this is not actually the case. In the case of this dataset, the features are actually the pixel values that make up the images - those are the values we'll be training the machine learning algorithm with:", "print(rect_image)", "So what can we do to manipulate the features in our dataset? We'll explore three methods to achieve this:\n\nImage smoothing\nModifying brightness\nModifying contrast\n\nTechniques like image smoothing can be useful when improving the features you train the machine learning algorithm on as you can eliminate some of the potential noise in the image that could confuse the program.\nSmoothing\nImage smoothing is another name for blurring the image. It involves passing a rectangular box (called a kernel) over the image and modifying pixels in the image based on the surrounding values.\nAs part of this exercise, we'll explore 3 different smoothing techniques:\nSmoothing Method | Explanation\n:-------------------------:|:-------------------------:\nMean | Replaces pixel with the mean value of the surrounding pixels\nMedian | Replaces pixel with the median value of the surrounding pixels\nGaussian | Replaces pixel by placing different weightings on surrounding pixels according to the Gaussian distribution", "mean_smoothed = cv2.blur(rect_image, (5, 5))\nmedian_smoothed = cv2.medianBlur(rect_image, 5)\ngaussian_smoothed = cv2.GaussianBlur(rect_image, (5, 5), 0)", "Feel free to have a play with the different parameters for these smoothing operations. 
We'll now write some code to place the original images next to their smoothed counterparts in order to compare them:", "mean_compare = np.hstack((rect_image, mean_smoothed))\nmedian_compare = np.hstack((rect_image, median_smoothed))\ngaussian_compare = np.hstack((rect_image, gaussian_smoothed))\n\nplt.figure(figsize = (15, 12))\nplt.title('Mean')\nplt.axis('off')\nplt.imshow(mean_compare, cmap = cm.Greys_r) \n\nplt.figure(figsize = (15, 12))\nplt.title('Median')\nplt.axis('off')\nplt.imshow(median_compare, cmap = cm.Greys_r)\n\nplt.figure(figsize = (15, 12))\nplt.title('Gaussian')\nplt.axis('off')\nplt.imshow(gaussian_compare, cmap = cm.Greys_r)", "Brightness and Contrast\nModifying the brightness and contrast of our images is a surprisingly simple task, but can have a big impact on the appearance of the image. Here is how you can increase and decrease these characteristics in an image:\nCharacteristic | Increase/Decrease | Action\n:-------------------------:|:-------------------------:|:-------------------------\nBrightness | Increase | Add an integer to every pixel\nBrightness | Decrease | Subtract an integer from every pixel\nContrast | Increase | Multiply every pixel by a number greater than 1\nContrast | Decrease | Multiply every pixel by a floating point number less than 1\nNow we can see how this affects our rectangular tag image. Again, feel free to experiment with different values in order to see the final effect.", "increase_brightness = rect_image + 30\ndecrease_brightness = rect_image - 30\nincrease_contrast = rect_image * 1.5\ndecrease_contrast = rect_image * 0.5\n\nbrightness_compare = np.hstack((increase_brightness, decrease_brightness))\ncontrast_compare = np.hstack((increase_contrast, decrease_contrast))\n\nplt.figure(figsize = (15, 12))\nplt.title('Brightness')\nplt.axis('off')\nplt.imshow(brightness_compare, cmap = cm.Greys_r) \n\nplt.figure(figsize = (15, 12))\nplt.title('Contrast')\nplt.axis('off')\nplt.imshow(contrast_compare, cmap = cm.Greys_r)", "Module Summary\nIn this section we have covered:\n\nReading images\nImage properties\nFeature engineering\nImage smoothing\nBrightness/contrast operations\n\nIn the next section of this workshop we'll cover how to put these skills together to train a machine learning algorithm to recognise these images." ]
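One caveat worth noting about the brightness code above: the images are `uint8`, so plain NumPy addition such as `rect_image + 30` can wrap around at 255 rather than saturate. A small sketch (my addition, not part of the workshop) of a clipping variant — OpenCV's `cv2.convertScaleAbs` or `cv2.add` would be alternative ways to get saturating arithmetic:

```python
import numpy as np

def adjust(img, brightness=0, contrast=1.0):
    """Scale then shift the pixel values, clipping back into the valid 0-255 range."""
    out = img.astype(np.float32) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

# A tiny hypothetical 2x2 grayscale patch.
patch = np.array([[250, 10], [128, 64]], dtype=np.uint8)
print(adjust(patch, brightness=30))   # 250 saturates at 255 instead of wrapping around
print(adjust(patch, contrast=1.5))    # values above ~170 saturate at 255
```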
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
peterwittek/ipython-notebooks
Comparing_DMRG_ED_and_SDP.ipynb
gpl-3.0
[ "Comparing the ground state energies obtained by density matrix renormalization group, exact diagonalization, and an SDP hierarchy\nWe would like to compare the ground state energy of the following spinless fermionic system [1]:\n$H_{\\mathrm{free}}=\\sum_{<rs>}\\left[c_{r}^{\\dagger} c_{s}+c_{s}^{\\dagger} c_{r}-\\gamma(c_{r}^{\\dagger} c_{s}^{\\dagger}+c_{s}c_{r} )\\right]-2\\lambda\\sum_{r}c_{r}^{\\dagger}c_{r},$\nwhere $<rs>$ goes through nearest neighbour pairs in a two-dimensional lattice. The fermionic operators are subject to the following constraints:\n${c_{r}, c_{s}^{\\dagger}}=\\delta_{rs}I_{r}$\n${c_r^\\dagger, c_s^\\dagger}=0,$\n${c_{r}, c_{s}}=0.$\nOur primary goal is to benchmark the SDP hierarchy of Reference [2]. The baseline methods are density matrix renormalization group (DMRG) and exact diagonalization (ED), both of which are included in Algorithms and Libraries for Physics Simulations (ALPS, [3]). The range of predefined Hamiltonians is limited, so we simplify the equation by setting $\\gamma=0$.\nPrerequisites\nTo run this notebook, ALPS, Sympy, Scipy, and SDPA must be installed. A recent version of Ncpol2sdpa is also necessary.\nCalculating the ground state energy with DMRG and ED\nDMRG and ED are included in ALPS. To start the calculations, we need to import the Python interface:", "import pyalps", "For now, we are only interested in relatively small systems, we will try lattice sizes between $2\\times 2$ and $5\\times 5$. With this, we set the parameters for DMRG and ED:", "lattice_range = [2, 3, 4, 5]\nparms = [{ \n 'LATTICE' : \"open square lattice\", # Set up the lattice\n 'MODEL' : \"spinless fermions\", # Select the model \n 'L' : L, # Lattice dimension\n 't' : -1 , # This and the following\n 'mu' : 2, # are parameters to the\n 'U' : 0 , # Hamiltonian.\n 'V' : 0,\n 'Nmax' : 2 , # These parameters are\n 'SWEEPS' : 20, # specific to the DMRG\n 'MAXSTATES' : 300, # solver.\n 'NUMBER_EIGENVALUES' : 1, \n 'MEASURE_ENERGY' : 1\n} for L in lattice_range ]", "We will need a helper function to extract the ground state energy from the solutions:", "def extract_ground_state_energies(data):\n E0 = []\n for Lsets in data:\n allE = []\n for q in pyalps.flatten(Lsets):\n allE.append(q.y[0])\n E0.append(allE[0])\n return sorted(E0, reverse=True)", "We invoke the solvers and extract the ground state energies from the solutions. First we use exact diagonalization, which, unfortunately does not scale beyond a lattice size of $4\\times 4$.", "prefix_sparse = 'comparison_sparse'\ninput_file_sparse = pyalps.writeInputFiles(prefix_sparse, parms[:-1])\n\nres = pyalps.runApplication('sparsediag', input_file_sparse)\nsparsediag_data = pyalps.loadEigenstateMeasurements(\n pyalps.getResultFiles(prefix=prefix_sparse)) \n\nsparsediag_ground_state_energy = extract_ground_state_energies(sparsediag_data)\nsparsediag_ground_state_energy.append(0)", "DMRG scales to all the lattice sizes we want:", "prefix_dmrg = 'comparison_dmrg'\ninput_file_dmrg = pyalps.writeInputFiles(prefix_dmrg, parms)\nres = pyalps.runApplication('dmrg',input_file_dmrg)\ndmrg_data = pyalps.loadEigenstateMeasurements(\n pyalps.getResultFiles(prefix=prefix_dmrg)) \ndmrg_ground_state_energy = extract_ground_state_energies(dmrg_data)\n", "Calculating the ground state energy with SDP\nThe ground state energy problem can be rephrased as a polynomial optimiziation problem of noncommuting variables. We use Ncpol2sdpa to translate this optimization problem to a sparse SDP relaxation [4]. 
The relaxation is solved with SDPA, a high-performance SDP solver that deals with sparse problems efficiently [5]. First we need to import a few more functions:", "from sympy.physics.quantum.dagger import Dagger\nfrom ncpol2sdpa import SdpRelaxation, generate_operators, \\\n fermionic_constraints, get_neighbors", "We set the additional parameters for this formulation, including the order of the relaxation:", "level = 1\ngam, lam = 0, 1", "Then we iterate over the lattice range, defining a new Hamiltonian and new constraints in each step:", "sdp_ground_state_energy = []\nfor lattice_dimension in lattice_range:\n n_vars = lattice_dimension * lattice_dimension\n C = generate_operators('C%s' % (lattice_dimension), n_vars)\n \n hamiltonian = 0\n for r in range(n_vars):\n hamiltonian -= 2*lam*Dagger(C[r])*C[r]\n for s in get_neighbors(r, lattice_dimension):\n hamiltonian += Dagger(C[r])*C[s] + Dagger(C[s])*C[r]\n hamiltonian -= gam*(Dagger(C[r])*Dagger(C[s]) + C[s]*C[r])\n \n substitutions = fermionic_constraints(C)\n \n sdpRelaxation = SdpRelaxation(C)\n sdpRelaxation.get_relaxation(level, objective=hamiltonian, substitutions=substitutions)\n sdpRelaxation.solve()\n sdp_ground_state_energy.append(sdpRelaxation.primal)", "Comparison\nThe level-one relaxation matches the ground state energy given by DMRG and ED.", "data = [dmrg_ground_state_energy,\\\n sparsediag_ground_state_energy,\\\n sdp_ground_state_energy]\nlabels = [\"DMRG\", \"ED\", \"SDP\"]\nprint (\"{:>4} {:>9} {:>10} {:>10} {:>10}\").format(\"\", *lattice_range)\nfor label, row in zip(labels, data):\n print (\"{:>4} {:>7.6f} {:>7.6f} {:>7.6f} {:>7.6f}\").format(label, *row)", "References\n[1] Corboz, P.; Evenbly, G.; Verstraete, F. & Vidal, G. Simulation of interacting fermions with entanglement renormalization. Physics Review A, 2010, 81, pp. 010303.\n[2] Pironio, S.; Navascués, M. & Acín, A. Convergent relaxations of polynomial optimization problems with noncommuting variables. SIAM Journal on Optimization, 2010, 20, pp. 2157-2180.\n[3] Bauer, B.; Carr, L.; Evertz, H.; Feiguin, A.; Freire, J.; Fuchs, S.; Gamper, L.; Gukelberger, J.; Gull, E.; Guertler, S. & others. The ALPS project release 2.0: Open source software for strongly correlated systems. Journal of Statistical Mechanics: Theory and Experiment, IOP Publishing, 2011, 2011, P05001.\n[4] Wittek, P. Ncpol2sdpa -- Sparse Semidefinite Programming Relaxations for Polynomial Optimization Problems of Noncommuting Variables. arXiv:1308.6029, 2013.\n[5] Yamashita, M.; Fujisawa, K. & Kojima, M. Implementation and evaluation of SDPA 6.0 (semidefinite programming algorithm 6.0). Optimization Methods and Software, 2003, 18, 491-505." ]
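Because the comparison above sets γ=0, the Hamiltonian is quadratic and there is a cheap independent cross-check: diagonalise the single-particle hopping matrix h = A − 2λI (A the adjacency matrix of the open square lattice) and sum its negative eigenvalues, i.e. fill every negative mode. The sketch below is my addition, not part of the original notebook; it assumes the same open-lattice and λ=1 conventions used above, and in principle its numbers should track the DMRG/ED/SDP values:

```python
import numpy as np

def free_fermion_ground_state_energy(L, lam=1.0):
    """Ground-state energy of the gamma=0 Hamiltonian on an open L x L square lattice."""
    n = L * L
    h = -2.0 * lam * np.eye(n)            # -2*lambda on-site term
    for r in range(n):
        x, y = divmod(r, L)
        if x + 1 < L:                     # vertical neighbour
            s = (x + 1) * L + y
            h[r, s] = h[s, r] = 1.0
        if y + 1 < L:                     # horizontal neighbour
            s = x * L + (y + 1)
            h[r, s] = h[s, r] = 1.0
    eigs = np.linalg.eigvalsh(h)
    return eigs[eigs < 0.0].sum()         # fill all negative single-particle modes

for L in [2, 3, 4, 5]:
    print(L, free_fermion_ground_state_energy(L))
```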
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hglanz/phys202-2015-work
assignments/assignment04/MatplotlibEx01.ipynb
mit
[ "Matplotlib Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport math", "Line plot of sunspot data\nDownload the .txt data for the \"Yearly mean total sunspot number [1700 - now]\" from the SILSO website. Upload the file to the same directory as this notebook.", "import os\nassert os.path.isfile('yearssn.dat')", "Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.", "data = np.array(np.loadtxt('yearssn.dat'))\nyear = data[:,0]\nssc = data[:,1]\n#raise NotImplementedError()\n\nassert len(year)==315\nassert year.dtype==np.dtype(float)\nassert len(ssc)==315\nassert ssc.dtype==np.dtype(float)", "Make a line plot showing the sunspot count as a function of year.\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.", "ticks = np.arange(year.min(), year.max(), 10)\nf = plt.figure(figsize=(30,1))\nplt.plot(year, ssc, 'b.-');\nplt.xlim(year.min() - 5, year.max() + 5);\nplt.xticks(ticks, [str(int(x)) for x in ticks]);\nplt.xlabel(\"Year\")\nplt.ylabel(\"Mean Total Sunspots\")\nplt.title(\"Mean Total Sunspots vs. Year\")\n#raise NotImplementedError()\n\nassert True # leave for grading", "Describe the choices you have made in building this visualization and how they make it effective.\nI haven't altered much. The aspect ratio to effect a maximum slope of 1 was hard to achieve. I could have taken the borders of the box off the top and right. I liked blue and decided to use points with lines so that it was clear how fine the temporal resolution is. I labeled every 10 years so that the x-axis labels were not crowded but also not sparse. All other labels describe the plot.\nNow make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above:\n\nCustomize your plot to follow Tufte's principles of visualizations.\nAdjust the aspect ratio/size so that the steepest slope in your plot is approximately 1.\nCustomize the box, grid, spines and ticks to match the requirements of this data.", "cents = np.array([int(x / 100) for x in year])\nucents = np.unique(cents)\n\nf, ax = plt.subplots(2, 2, sharey = True, figsize = (12, 1))\n\nfor i in range(2):\n for j in range(2):\n subyr = np.array([year[x] for x in range(len(year)) if cents[x] == ucents[2*i + j]])\n subspots = [ssc[x] for x in range(len(year)) if cents[x] == ucents[2*i + j]]\n subticks = np.arange(subyr.min(), subyr.max(), 10)\n \n plt.sca(ax[i,j])\n plt.plot(subyr, subspots, \"b.-\")\n plt.xlim(subyr.min() - 5, subyr.max() + 5);\n plt.xticks(subticks, [str(int(x)) for x in subticks]);\n plt.xlabel(\"Year\")\n plt.ylabel(\"Mean Total Sunspots\")\n plt.title(\"Mean Total Sunspots vs. Year\")\n\nplt.subplots_adjust(hspace = 1.5, bottom = 0.05, top = 2.25)\n#raise NotImplementedError()\n\nassert True # leave for grading" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
WMD-group/MacroDensity
tutorials/Porous/Porous.ipynb
mit
[ "Ionisation potential of a porous material\nIn this example we use MacroDensity with VASP to align the energy levels of a porous material.\nThe procedure involves one DFT calculaion, yielding different important values\n\n$\\epsilon_{vbm}$ - the valence band maximum\n$V_{vac}$ - the vacuum potential\n\nThe ionisation potential ($IP$) is then obtained from:\n$IP = V_{vac} - \\epsilon_{vbm}$\nThe difference to a bulk calculation is that here the material itself has a vacuum within. That means that we can sample the vacuum level from there.\nThe procedure was first outlined in a seminal JACS paper, read it here.\nOur system\nThe beautiful ZIF-8 is our porous system of choice for this demonstration.\n<img src=\"zif8.png\">\nProcedure\n\nOptimise the structre\n\nCalculate the electronic structure at your chosen level of theory Remember in your INCAR:\nLVHAR = .TRUE. # This generates a LOCPOT file with the potential \n\n\nLocate the centre of the largest pore - do this \"by eye\" first\n\nPlot the potential in that plane, so see if it plateaus\nPlot a profile of the potential across the pore, again to see the plateau\nSample the potential from the pore centre\n\nNB This whole procedure is probably better run in a notebook than by script, the reason being that you can read the file once, then do the manipulations later. The reading is the intensive and time consuming step.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\nimport sys\nsys.path.append('../../')\nimport imp\nimport macrodensity as md\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport os", "Read the potential", "if os.path.isfile('LOCPOT'):\n print('LOCPOT already exists')\nelse:\n os.system('bunzip2 LOCPOT.bz2')\ninput_file = 'LOCPOT'\n#=== No need to edit below\nvasp_pot, NGX, NGY, NGZ, Lattice = md.read_vasp_density(input_file)\nvector_a,vector_b,vector_c,av,bv,cv = md.matrix_2_abc(Lattice)\nresolution_x = vector_a/NGX\nresolution_y = vector_b/NGY\nresolution_z = vector_c/NGZ\ngrid_pot, electrons = md.density_2_grid(vasp_pot,NGX,NGY,NGZ)", "Look for pore centre points\n\n\nFor this we will use VESTA.\n\nOpen the LOCPOT in VESTA.\nExpand to 2x2x2 cell, by choosing the Boundary option on the left hand side.\nLook for a pore centre - I think that [1,1,1] is at a pore centre here.\nNow draw a lattice plane through that point.\nChoose Edit > Lattice Planes.\nClick New.\nPut in the Miller index (I choose 1,1,1).\nMove the plane up and down using the d parameter, until it passes through the point you think is the centre.\nIt should look like the picture below.\n<img src=\"111.png\">\n\n\n\n\n\nNow we look at a contour plot of this plane to see if we are at a plateau.\n\nUtiltiy > 2D Data Display.\nClick Slice and enter the same parameters as in the 3D view.\nNow choose contours to play with the settings\nZ(max) and Z(min) tell you the potentail max and min.\nSet contour max = Z(max) and contour min = 0\nSet the interval to 0.1\nWith some playing with the General settings, you can get something like this:\n\n\n\n<img src='plane.png'>\n\nWe can see the [1,1,1], at the centre of the picture is a maximum and is a plateau, so we can now use it for sampling.\n\nSampling the potential\n\nWe now set the point to sample at [1,1,1]\nWe must also set the travelled parameter, for this type of analysis it is always [0,0,0].", "cube_origin = [1,1,1]\ntravelled = [0,0,0]\n\nint(cube_origin[0]*NGX)", "We want to try a range of sampling area sizes.\nWe analyse how the potential is affects.\nWe also want low variance 
(plateau conditions).\nIdeally we should have as large an area as possible, with low (< 1e-5) variance.", "dim = [1,10,20,40,60,80,100]\nprint(\"Dimension Potential Variance\")\nprint(\"--------------------------------\")\nfor d in dim:\n cube = [d, d, d]\n cube_potential, cube_var = md.volume_average(cube_origin, cube,grid_pot, NGX, NGY, NGZ, travelled=travelled)\n print(\" %3i %10.4f %10.6f\"%(d, cube_potential, cube_var))", "From the OUTCAR the VBM is at -2.4396 eV", "print(\"IP: %3.4f eV\" % (2.3068 -- 2.4396 ))", "AFI\nNow try the procedure with the files in the AFI folder" ]
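The loop over cube sizes above is easy to automate once you decide on a variance cut-off. A small helper along those lines — my addition, assuming the variables from the cells above (`cube_origin`, `travelled`, `grid_pot`, `NGX`, `NGY`, `NGZ`, and the `md` import) are still in scope, and using the < 1e-5 variance criterion quoted in the text:

```python
def largest_plateau_cube(dims, threshold=1e-5):
    """Return (dimension, potential) for the biggest cube whose variance stays below threshold."""
    best = None
    for d in dims:  # dims are scanned in increasing order, so the last hit is the largest cube
        potential, variance = md.volume_average(cube_origin, [d, d, d], grid_pot,
                                                NGX, NGY, NGZ, travelled=travelled)
        if variance < threshold:
            best = (d, potential)
    return best

print(largest_plateau_cube([1, 10, 20, 40, 60, 80, 100]))
```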
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]